Exploring RoNIN
RoNIN uses smartphone IMU sensors to track human movement accurately indoors, underground, and in other GPS-denied locations.
Juxta
Juxta Team

Exploring RoNIN: Advancements in Robust Neural Inertial Navigation
Smartphones have become constant companions in our daily lives, and each one carries a small but powerful sensor suite called an inertial measurement unit (IMU). These sensors track motion and orientation, opening up possibilities for navigation that works anywhere, anytime.
RoNIN (Robust Neural Inertial Navigation) represents a significant leap forward in using this sensor data to accurately estimate human position and movement.
The Navigation Challenge
Traditional navigation systems each have their limitations.
GPS, while popular, fails indoors where satellite signals cannot penetrate. Visual odometry can produce precise motion tracking but requires an active camera, which doesn't work when your phone is in your pocket. Additionally, video recording and processing drain battery life rapidly.
IMU-based navigation offers a compelling alternative. It's energy efficient and works everywhere: indoors, outdoors, and even deep inside a pocket or bag. This makes inertial navigation potentially the ultimate solution for tracking relative locations continuously, regardless of how you carry your smartphone.
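To see why inertial navigation is hard despite these advantages, consider what happens if you simply integrate raw sensor readings. The following is a hypothetical illustration, not from the RoNIN paper: even a small constant accelerometer bias, double-integrated, produces position drift that grows quadratically with time, which is why learning velocity from data (as RoNIN does) is so attractive.

```python
import numpy as np

# Hypothetical illustration (not from the RoNIN paper): a tiny constant
# accelerometer bias, double-integrated, produces quadratic position drift.
dt = 0.005                     # assumed 200 Hz IMU, a typical smartphone rate
t = np.arange(0.0, 60.0, dt)   # one minute of samples
bias = 0.05                    # assumed 0.05 m/s^2 accelerometer bias
accel = np.full_like(t, bias)  # the subject is actually standing still

velocity = np.cumsum(accel) * dt     # first integration:  v(t) = bias * t
position = np.cumsum(velocity) * dt  # second integration: x(t) ~ 0.5 * bias * t^2

print(f"position drift after 60 s: {position[-1]:.1f} m")  # ~90 m
```

Sixty seconds of standing still yields roughly 90 meters of apparent travel, so naive dead reckoning is unusable on consumer hardware; this is the error that learned velocity estimation must suppress.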
What RoNIN Brings to the Table
RoNIN advances inertial navigation research through three key contributions:
- A comprehensive benchmark containing IMU sensor data and ground truth motion trajectories captured during natural human movements
- Novel neural network architectures specifically designed for inertial navigation
- Extensive qualitative and quantitative evaluations showing that RoNIN clearly outperforms competing methods
Building the Dataset
The research team developed an innovative data acquisition protocol using two devices:
1. Any standard smartphone as the IMU device
2. A Google Tango 3D-tracking phone mounted in a body harness for ground truth motion capture
This dual-device approach allows users to handle the IMU smartphone naturally, mimicking real day-to-day activities. The resulting dataset is impressive in both scale and diversity:
- Over 40 hours of natural motion data
- More than 100 kilometers of travel distance
- Four different smartphone brands
- 100 human subjects
- Data collected across three buildings
This makes RoNIN the largest and most diverse inertial navigation database available.
Methodology
RoNIN uses deep learning to estimate body velocity from historical IMU data, specifically angular velocities from the gyroscope and accelerations from the accelerometer.
The system features three families of neural architectures with two key innovations:
- A novel coordinate frame definition that specifies how input IMU data relates to output body velocities
- Robust velocity loss functions that increase the signal-to-noise ratio for more accurate predictions
The deep learning module integrates with traditional sensor fusion algorithms that compute device rotations, enabling the system to effectively estimate both relative body positions and orientations.
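The overall pipeline can be sketched in a few lines. This is a minimal, hypothetical illustration of the shape of the approach, not RoNIN's actual implementation: `predict_velocity` stands in for the paper's neural regressors (RoNIN uses ResNet, LSTM, and TCN backbones; a linear map is used here only as a placeholder), and all names, shapes, and rates are assumptions.

```python
import numpy as np

def predict_velocity(window, weights):
    """Stand-in for RoNIN's neural regressor: maps a flattened window of
    6-axis IMU samples (gyro + accel) to a 2D body velocity. A linear map
    replaces the real ResNet/LSTM/TCN backbone for illustration only."""
    return window.reshape(-1) @ weights              # -> (2,) velocity in m/s

def integrate_trajectory(imu_stream, weights, window_size=200, dt=1.0):
    """Slide a window over the IMU stream, predict one velocity per window,
    and accumulate the velocities into 2D positions (dead reckoning)."""
    positions = [np.zeros(2)]
    for start in range(0, len(imu_stream) - window_size, window_size):
        window = imu_stream[start:start + window_size]   # (window_size, 6)
        v = predict_velocity(window, weights)
        positions.append(positions[-1] + v * dt)
    return np.stack(positions)

# Toy run with random data and an untrained (random) model.
rng = np.random.default_rng(0)
imu = rng.normal(size=(2000, 6))               # 2000 samples of 6-axis IMU data
w = rng.normal(scale=0.01, size=(200 * 6, 2))  # placeholder "model" weights
traj = integrate_trajectory(imu, w)
print(traj.shape)                              # (10, 2): origin plus 9 steps
```

The key design point this sketch preserves is that the network predicts velocities rather than positions: small per-window velocity errors accumulate only linearly, instead of the quadratic drift of raw double integration.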
Performance in Real-World Scenarios
When evaluated against ground truth from visual-inertial odometry and compared with classical step counting methods and state-of-the-art data-driven approaches such as IONet and RIDI, RoNIN demonstrates superior performance across diverse conditions.
Variable Walking Speeds and Device Placement
In testing scenarios where subjects carried devices in the same position while walking at different speeds with intermittent stops, RoNIN and step counting both performed well. However, IONet failed when users changed walking direction after stopping. When device placement changed between hand and pocket during walking, RoNIN was the only method producing accurate estimations.
Complex Motion Patterns
RoNIN excels with complex movements including:
- Turning while walking
- Walking sideways
- Carrying the phone in various locations (hand, bag, pocket)
- Frequent device placement changes
The system does show some errors for motion types that are rare in practice and therefore underrepresented in the training data, such as sidestepping or carrying the phone inside a bag. Additional training data is expected to address these edge cases.
Quantitative Results
RoNIN's performance can be measured by average positional error within one-minute windows. In 80 percent of frames, the system achieves an average positional error of less than five meters over a one-minute window, outperforming existing methods by a substantial margin. Extensive quantitative evaluations against four competing methods across three benchmark datasets consistently demonstrate RoNIN's advantage. Among the tested methods, RoNIN is unique in producing both body orientations and positions rather than positions alone.
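A windowed positional error of this kind is straightforward to compute. The sketch below is a hedged interpretation of the metric described above, with an assumed 200 Hz sampling rate and an assumed alignment of each window to its starting point; the exact protocol in the paper may differ.

```python
import numpy as np

def window_position_error(pred, gt, window=60 * 200):
    """Sketch of a windowed positional error: mean distance between a
    predicted and a ground truth 2D trajectory, computed over fixed
    one-minute windows (60 s at an assumed 200 Hz), with each window
    re-aligned to start from the same point."""
    errors = []
    for start in range(0, len(gt) - window + 1, window):
        p = pred[start:start + window] - pred[start]   # align window start
        g = gt[start:start + window] - gt[start]
        errors.append(np.linalg.norm(p - g, axis=1).mean())
    return np.array(errors)

# Toy check: a prediction with a small constant velocity bias in x.
t = np.arange(5 * 60 * 200)   # five minutes of samples
gt = np.stack([0.001 * t, np.zeros_like(t, dtype=float)], axis=1)
pred = gt + np.stack([1e-5 * t, np.zeros_like(t, dtype=float)], axis=1)
errs = window_position_error(pred, gt)
print(round(float(errs[0]), 3))   # 0.06
```

Because each window is re-aligned at its start, this metric measures drift accumulated within a minute rather than over the whole sequence, which matches the "per one-minute window" framing above.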
Broader Applications
While RoNIN functions as a self-contained navigation system, inertial navigation serves as a critical modality across numerous domains:
- Virtual and augmented reality
- Autonomous vehicles
- Drone navigation
- Robotics applications
A Foundation for Future Research
RoNIN establishes a new foundation for data-driven inertial navigation research.
By providing a large, diverse benchmark dataset, novel neural architectures that handle challenging motion cases, and comprehensive evaluations of competing methods, the project enables the research community to build upon this work. As smartphones continue to evolve and IMU sensors become more sophisticated, systems like RoNIN will deliver increasingly accurate and reliable navigation that works anywhere, anytime, without draining your battery or requiring a clear view of the surroundings.
Integrity note: This post reviews and references publicly available work from third-party authors.