Innovation: A multi-sensor navigation system for outdoors and indoors
Getting the Best in Both Worlds
By Karsten Mueller, Jamal Atman, Nikolai Kronenwett and Gert F. Trommer
IT DOESN’T WORK EVERYWHERE. GPS, that is. Unlike many radio broadcasts and the transmissions from nearby cell-phone towers, the signals from GPS satellites are too weak to be reliably received indoors. They don’t make it into tunnels either. And even outdoors, the signals can be blocked by tall buildings and mountains. This is why the Japanese developed the Quasi-Zenith Satellite System — to provide supplementary signals when an insufficient number of GPS signals are available in the concrete canyons of Tokyo and other high-density cities. Even if a GPS signal can be received, it might be contaminated with multipath interference, resulting in a degraded position solution.
While GPS signals can be piped indoors from an antenna on the top of a building and reradiated, a GPS receiver will then report the position of the rooftop antenna rather than its own location inside the building. This might be useful for establishing the approximate whereabouts of the receiver when it’s on a bus in an underground terminal, for example, and it allows the receiver to continue collecting up-to-date navigation messages for a quick time-to-first-fix when it leaves the terminal. But it is far from satisfactory as a general indoor navigation solution.
While there are some improvements in signal reception in degraded environments with modernized signals from GPS and the other GNSS constellations, in many instances where we don’t have an unobstructed line-of-sight view of the satellites, GPS alone won’t cut it. Thankfully, other navigation sensors can be used to supplement or replace GNSS when the going gets tough. These include, among others, inertial measurement units, digital compasses, barometric pressure sensors, cameras and laser rangefinders.
But, even with these, is one better than another in all situations, or do they each have benefits and drawbacks just like GNSS? Would a system composed of multiple sensors be best? Such considerations are important if trying to develop a navigation system that can work well in most any environment both outdoors and indoors and transition gracefully when moving from one type of environment to another. This is the problem that confronted a team of researchers from Germany’s Karlsruhe Institute of Technology when designing a navigation system to allow a micro aerial vehicle to operate continuously and autonomously in almost any environment. In this issue’s “Innovation” column, we learn how they went about it and how well the system worked.
Today, micro aerial vehicles (MAVs) are widely used in outdoor environments. The navigation solution of commercially available products typically relies on the availability and accuracy of GNSS. To expand the field of application of MAVs to autonomous operation in indoor environments, an accurate navigation solution is necessary. Possible scenarios include the support of rescue forces, surveillance tasks and inspection missions. Different algorithms using camera or laser rangefinder measurements for indoor navigation can provide accurate results.
However, application of these algorithms is typically limited to indoor scenarios and will not provide accurate results in outdoor environments. Another drawback of these approaches is that absolute positioning is not achieved. Hence, we sought a single system that combines the beneficial properties of outdoor and indoor navigation systems. Such a navigation system should provide an accurate navigation solution both outdoors and indoors, as well as during transition phases from outdoor to indoor and vice versa.
THE PROBLEM
Several challenges arise when combining multiple sensors in a single navigation system due to specific sensor characteristics. While an accurate navigation solution is obtained by an inertial navigation system with GNSS aiding in open-sky environments, urban canyons and indoor environments degrade the quality of GNSS signals or lead to GNSS outages such that no accurate navigation solution is available.
On the other hand, laser rangefinder measurements allow for the generation of accurate relative measurements indoors. However, due to the limited range of the laser rangefinder, no or only a few measurements are available outdoors away from buildings. Obviously, it is best to exploit the complementary characteristics of both sensors. To avoid losing information, hard switching between two different navigation systems is undesirable. Hence, two main challenges arise:
- Accurate time synchronization is necessary when processing measurements from different sensors.
- A method must be developed to decide whether a measurement should be processed or rejected.
Moreover, for aerial vehicles, two more requirements must be met:
- Estimation of the 3D position and attitude instead of only the 2D position and heading as provided by 2D simultaneous localization and mapping (SLAM) approaches.
- Estimation of the vehicle’s velocity and inertial measurement unit (IMU) biases.
Our goal was to develop a navigation system that provides an accurate navigation solution for large-scale environments. The navigation system needed to provide a navigation solution at the update rate of the IMU with very short delays. The framework needed to seamlessly integrate GNSS and other sensors such as a laser rangefinder or cameras. Additionally, the approach could not be limited to a specific sensor setup, except for a mandatory GPS receiver, necessary for absolute positioning.
The results presented in the literature often do not include large-scale, realistic environments. Some investigators only consider short indoor sequences, while others ignore challenging GNSS conditions. In contrast, the focus of our approach is on rejecting outlier measurements in transition zones, such as urban-canyon environments, occurring between outdoor open-sky and indoor environments. The choice of the navigation system architecture depends on the requirements of a specific platform. In the case of a quadrotor helicopter (see FIGURE 1), a high update rate is necessary for vehicle guidance and control. Therefore, we chose a Kalman-filter-based approach, which has an advantage over pure SLAM approaches when a navigation solution at a high update rate is required.
SYSTEM OVERVIEW
We attached several sensors and two processing platforms to the quadrotor helicopter used in our work. A microcontroller sensor board reads the sensor values from the IMU, digital compass, air pressure sensor and a GPS-only GNSS module. Timestamps are generated for each sensor data type so that accurate synchronization is provided even when delays occur, such as when processing the sensor data. The IMU is mounted close to the center of the vehicle. The air pressure sensor is directly attached to the sensor board, while the three-axis digital compass is attached to the quadrotor’s landing skid to avoid interfering magnetic fields from power electronics. The GPS receiver provides pseudorange and Doppler measurements at a rate of 10 Hz. Moreover, ephemeris data for each satellite and Klobuchar ionospheric parameters are recorded to correct the measurements. Each second, a time pulse is generated by the receiver to precisely determine the time when GPS measurements were taken. Additionally, the time pulse is used to estimate the drift of the real-time clock (RTC) on the sensor board and, therefore, to provide more accurate timestamps.
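To make the synchronization scheme concrete, the following Python sketch shows one simple way the RTC drift could be estimated from the receiver’s one-pulse-per-second signal. It is an illustration only; the function names and the simulated 50-ppm drift are assumptions, not the code running on our sensor board.

```python
# Minimal sketch (not the sensor-board firmware): estimating the RTC drift
# from the receiver's one-pulse-per-second (1 PPS) signal. We assume the RTC
# timestamp of each pulse is logged; the drift is the slope of a least-squares
# line fitted to (GPS second, RTC timestamp) pairs.
import numpy as np

def estimate_rtc_drift(gps_seconds, rtc_timestamps):
    """Return (drift in s/s, offset in s) of the RTC relative to GPS time."""
    gps_seconds = np.asarray(gps_seconds, dtype=float)
    rtc_timestamps = np.asarray(rtc_timestamps, dtype=float)
    # Fit rtc = (1 + drift) * gps + offset with a first-order polynomial.
    slope, offset = np.polyfit(gps_seconds, rtc_timestamps, 1)
    return slope - 1.0, offset

def correct_timestamp(rtc_time, drift, offset):
    """Map an RTC timestamp back to (approximate) GPS time."""
    return (rtc_time - offset) / (1.0 + drift)

if __name__ == "__main__":
    # Simulated RTC running 50 ppm fast with a 0.3-second offset.
    t_gps = np.arange(0.0, 60.0, 1.0)
    t_rtc = (1.0 + 50e-6) * t_gps + 0.3
    drift, offset = estimate_rtc_drift(t_gps, t_rtc)
    print(f"estimated drift: {drift * 1e6:.1f} ppm")
    print(f"corrected 30-second pulse: {correct_timestamp(t_rtc[30], drift, offset):.6f} s")
```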
A two-dimensional laser rangefinder is mounted on top of the helicopter. Distance and angular information of objects within a scan angle of 270° is provided by this sensor. The maximum range is 30 meters. Time synchronization is achieved through a pulse registered by the microcontroller sensor board before every scan. The body of the laser rangefinder is shielded using copper foil to reduce interference with signals received by the GPS antenna. A trigger signal is sent to the camera mounted at the front of the helicopter to provide time synchronization. However, the camera was not used for the results presented in this article. An overview of the sensor setup and time synchronization is depicted in FIGURE 2.
The camera and laser rangefinder data is sent via USB to a powerful computing platform attached to the bottom carbon-fiber sheet. Time synchronization information and additional sensor data is sent from the microcontroller sensor board to the computer for processing the sensor data and calculating the navigation solution.
NAVIGATION SYSTEM
The navigation system presented in this article was developed to provide a navigation solution in both outdoor and indoor environments. Therefore, it must be possible to process GPS position and velocity estimates as well as relative position and heading-angle changes derived from the laser rangefinder scans. Challenges arise due to the different time delays as illustrated in FIGURE 3. IMU measurements are available at a high frequency. Messages with the trigger timestamps are sent from the sensor board to the computer to provide information about when a GPS or laser measurement was taken.
The corresponding measurements are available with significant delays. Since GPS pseudorange and Doppler measurements are not immediately available and processing requires additional time, the typical delay between the point in time when the measurement was taken by the receiver and the time when the estimated position and velocity are available to the navigation filter is between 70 and 90 milliseconds. Even longer delays occur when processing laser rangefinder data. After processing the laser scans, the horizontal position changes and yaw angle changes (in this article, denoted as two-dimensional pose change measurements) are available for analysis. However, these changes are relative to a point in time in the past. Moreover, due to the processing, additional delay occurs and synchronization with the correct laser rangefinder trigger signal is required. The requirement to process measurements with a temporal overlap causes additional complexity, such as having several GPS measurements that are taken in the time period covered by a pose change measurement.
Error-State Kalman Filter with Stochastic Cloning. An error-state Kalman filter with 16 states estimates the vehicle’s 3D position, 3D velocity, attitude, accelerometer and gyroscope biases, and the bias for the barometric altimeter. The prediction step of the filter consists of integrating the specific force and angular rate measurements of the IMU. Measurements of the air pressure sensor and the digital compass have negligible delays, so these measurements are processed in the Kalman filter update step without compensating for delays. As we mentioned, the assumption of insignificant delays does not hold for GPS measurements and pose change measurements. Thus, we implemented stochastic cloning to overcome errors that would be introduced by delays. The idea of stochastic cloning is to augment the state vector and covariance matrix by copies of the state and covariance estimates at a specific point in time. As the augmented covariance matrix contains cross-correlation terms between the state at a previous time instance and the current state, processing of delayed measurements corrects the current state and covariance estimations.
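The bookkeeping behind stochastic cloning can be summarized in a few lines of Python. The sketch below is a deliberately simplified, generic linear-filter illustration, not our 16-state error-state implementation; the class and method names are invented for the example.

```python
# Simplified illustration of stochastic cloning for delayed relative
# measurements (not the flight filter).
import numpy as np

class CloningFilter:
    def __init__(self, x0, P0):
        self.x = np.array(x0, dtype=float)   # current state estimate
        self.P = np.array(P0, dtype=float)   # current covariance
        self.clones = {}                     # id -> (x_clone, P_clone, P_cross)

    def clone(self, clone_id):
        """Copy the current state; at this instant the clone covariance and
        its cross-covariance with the current state both equal P."""
        self.clones[clone_id] = (self.x.copy(), self.P.copy(), self.P.copy())

    def predict(self, F, Q):
        """Propagate the current state; clones stay frozen, but their
        cross-covariance with the current state is propagated by F."""
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        self.clones = {k: (xc, Pc, F @ Px) for k, (xc, Pc, Px) in self.clones.items()}

    def update_relative(self, clone_id, z, H, R):
        """Process a delayed relative measurement z = H (x_now - x_clone) + v.
        The cross-covariance lets the correction reach the current state."""
        x_clone, P_cc, P_xc = self.clones.pop(clone_id)
        y = z - H @ (self.x - x_clone)                     # innovation
        S = H @ (self.P - P_xc - P_xc.T + P_cc) @ H.T + R  # innovation covariance
        K = (self.P - P_xc) @ H.T @ np.linalg.inv(S)       # gain for current state
        self.x = self.x + K @ y
        self.P = self.P - K @ H @ (self.P - P_xc.T)

if __name__ == "__main__":
    f = CloningFilter(np.zeros(2), np.eye(2))
    f.clone("keyframe")
    f.predict(np.eye(2), 0.01 * np.eye(2))
    f.update_relative("keyframe", z=np.array([1.0, 0.0]),
                      H=np.eye(2), R=0.1 * np.eye(2))
    print(f.x)  # nudged toward the measured 1-meter relative displacement
```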
Processing GPS Measurements. The first step when processing GPS measurements is to clone the current filter state. As outlined in the section “System Overview,” the time pulse generated by the receiver is used to determine the time when a measurement is taken. Once the pseudorange measurements are available, corrections are calculated. A weighted least-squares estimation is used to calculate position and velocity. The weight for each pseudorange measurement is the inverse of the estimated variance, which is calculated depending on the carrier-to-noise-density ratio. Weights for Doppler measurements are calculated similarly.
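As an illustration of this weighting, the short Python sketch below builds a carrier-to-noise-density-ratio-dependent weight matrix and performs one linearized weighted least-squares step. The variance model and its constants are assumptions chosen for the example, not the exact model used in our receiver processing.

```python
# Illustrative C/N0-based weighting for the pseudorange least-squares solution.
# The variance model sigma^2 = a + b * 10^(-C/N0 / 10) and its constants are
# assumed values for this sketch.
import numpy as np

def pseudorange_variance(cn0_dbhz, a=1.0, b=1.0e4):
    """Estimated pseudorange variance (m^2) as a function of C/N0 (dB-Hz)."""
    return a + b * 10.0 ** (-np.asarray(cn0_dbhz, dtype=float) / 10.0)

def wls_step(G, residuals, cn0_dbhz):
    """One linearized weighted least-squares step.
    G: (m x 4) geometry matrix (unit line-of-sight vectors plus clock column),
    residuals: (m,) observed-minus-predicted pseudoranges in meters.
    Returns corrections to the 3D position and the receiver clock bias."""
    W = np.diag(1.0 / pseudorange_variance(cn0_dbhz))
    return np.linalg.solve(G.T @ W @ G, G.T @ W @ np.asarray(residuals, dtype=float))
```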
To reduce the errors introduced by satellite signals of low quality, a minimum carrier-to-noise-density ratio of 33 dB-Hz and a minimum elevation angle of 15° are required for the satellite signals. In addition to position and velocity, valuable information is drawn from the estimation: The variance of the calculated position is chosen to be proportional to the weighted root mean square value of the residuals and the position dilution of precision (PDOP). The velocity variance is calculated similarly. In case only four satellites are used, the variance is only proportional to the PDOP as no residuals are available. The position and velocity estimates are processed by the Kalman filter using these estimated variances. Moreover, before the filter update step is executed, the Mahalanobis distance for each measurement is calculated and outliers removed.
Additionally, measurements are not processed if their variance is above a threshold. This typically occurs in the vicinity of buildings as non-line-of-sight signals are tracked by the receiver and, therefore, processing these measurements is not desired.
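The two rejection rules can be condensed into a small gating function. The sketch below is illustrative; the variance limit and the three-sigma gate are placeholder values, not our tuned thresholds.

```python
# Sketch of the measurement gating: reject a GPS position (or velocity)
# estimate if its estimated variance is too large (suspected non-line-of-sight
# tracking) or if its Mahalanobis distance to the filter prediction is too big.
import numpy as np

def accept_measurement(z, z_pred, S, est_var, var_max=25.0, gate_sigma=3.0):
    """z, z_pred: measurement and filter-predicted measurement (vectors),
    S: innovation covariance, est_var: variance from the least-squares step."""
    if est_var > var_max:
        return False
    y = np.asarray(z, dtype=float) - np.asarray(z_pred, dtype=float)  # innovation
    d2 = float(y @ np.linalg.solve(S, y))       # squared Mahalanobis distance
    return d2 <= gate_sigma ** 2                # a chi-square gate could also be used
```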
Laser Rangefinder Processing. As described in the previous section, stochastic cloning is used to treat delayed pose change measurements. To process a measurement, two cloned states are necessary.
A pose change measurement consists of a relative translation and a rotation, both given in coordinates of the body-stabilized frame, which is identical to the body frame but compensated for roll and pitch angles. Hence, the x and y axes of the body-stabilized frame are parallel to the ground. Several methods could be used for generating pose-change measurements, such as camera-based approaches, laser rangefinder approaches or hybrid approaches. In our work, Cartographer, a laser SLAM approach, is used to obtain horizontal position and yaw angle changes. However, the SLAM module could be easily replaced by other laser SLAM approaches.
As laser SLAM approaches build an incremental map, the laser’s pose is given with respect to the map frame. Therefore, the translational and rotational components of the pose-change measurement must be transformed from the map frame to the body-stabilized frame before being processed by the Kalman filter. Different options are possible when choosing the first point in time for a relative measurement (the second point in time is determined by the most recent laser measurement).
We decided to use a keyframe-based aiding technique. A keyframe is defined and the filter state is cloned accordingly. After the processing of a laser measurement by the SLAM algorithm, pose estimations given in map coordinates are transformed to pose change measurements relative to this keyframe. The keyframe is changed depending on the filter status information as outlined in the section “Using the Filter Status Information” of this article. Additionally, the keyframe is changed if the difference between consecutive pose estimations exceeds a threshold. This indicates an erroneous pose estimation by the SLAM module as only small pose changes are expected due to the high update rate of laser scans and the limited velocity of the vehicle. As a result, the influence of errors in the SLAM module on the navigation solution provided by the Kalman filter is reduced.
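A compact Python sketch of the keyframe-relative pose-change computation and the consistency check is shown below. The frame handling is reduced to two dimensions (position and yaw), and the jump thresholds are placeholder values rather than our tuned parameters.

```python
# Hypothetical sketch: turning the SLAM module's map-frame poses (x, y, yaw)
# into pose-change measurements relative to the current keyframe, plus a
# plausibility check on consecutive poses.
import numpy as np

def wrap(angle):
    """Wrap an angle to [-pi, pi)."""
    return (angle + np.pi) % (2.0 * np.pi) - np.pi

def pose_change(key_pose, cur_pose):
    """Translation expressed in the keyframe's yaw-aligned (body-stabilized)
    frame and the yaw change between keyframe and current pose."""
    kx, ky, kyaw = key_pose
    cx, cy, cyaw = cur_pose
    c, s = np.cos(-kyaw), np.sin(-kyaw)
    dx, dy = cx - kx, cy - ky
    return np.array([c * dx - s * dy, s * dx + c * dy]), wrap(cyaw - kyaw)

def slam_jump_detected(prev_pose, cur_pose, d_max=0.5, yaw_max=np.deg2rad(15.0)):
    """Only small changes are expected between consecutive scans at the laser
    update rate and the vehicle's limited speed; a larger jump triggers a
    keyframe change to limit the influence of an erroneous SLAM pose."""
    t, dyaw = pose_change(prev_pose, cur_pose)
    return np.hypot(t[0], t[1]) > d_max or abs(dyaw) > yaw_max
```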
FILTER STATUS
Above, we described how relative and absolute delayed measurements are processed in an error-state Kalman filter. However, simply processing all available measurements will not lead to the best performance of the filter. For example, the laser SLAM algorithm might not provide accurate and reliable results in open-sky environments free from human-made structures, as mainly vegetation is detected by the laser rangefinder.
To derive a metric for the decision on the necessity of integrating additional relative measurements, we provide a classification scheme based on GPS measurements. The advantage of using only GPS measurements for the filter status determination is the versatility of the approach: A GPS module will be available on almost every platform. The laser rangefinder, however, could be replaced by a camera without modifications in the classification scheme.
Clearly, processing GPS in indoor environments is not an option as no measurements are available. In contrast, in outdoor open-sky environments, a sensor setup comprising GPS, IMU, digital compass and air pressure sensor results in an accurate navigation solution. Therefore, the interaction of different sensors in transition phases and urban-canyon environments is the most critical part for an accurate navigation solution in large-scale environments. The following paragraphs introduce the classification of single GPS position measurements and the determination of filter status based on the GPS classification.
Classification of Single GPS Position Measurements. The first step for the filter status determination is the classification of single GPS position measurements. The categories for a measurement are very good, good, medium and poor. Two parameters are used for the classification: the number of satellites used for the position calculation and the estimated variance. For a very good measurement, at least six satellites are required; for a good measurement, at least five satellites are necessary. Moreover, thresholds for the estimated position variance are applied. As the variance is proportional to the PDOP and the root mean square of the weighted residuals, this means that a very good or good position measurement must offer a good satellite constellation and small residuals.
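A possible implementation of this per-measurement classification is sketched below. The satellite counts follow the description above; the variance limits and the handling of the medium and poor categories are assumptions made for the example.

```python
# Sketch of the single-measurement classification. Satellite-count thresholds
# follow the text; the variance limits and the medium/poor split are assumed.
def classify_gps_position(num_sats, est_var_m2, var_very_good=4.0, var_good=9.0):
    if num_sats >= 6 and est_var_m2 <= var_very_good:
        return "very_good"
    if num_sats >= 5 and est_var_m2 <= var_good:
        return "good"
    if num_sats >= 4:
        return "medium"
    return "poor"
```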
Filter Status Determination. The classification of GPS position measurements is used to calculate a filter status. First, a sum over a time interval of one second is computed. The number of positions classified as very good is multiplied by a factor of four, good positions count twice, and medium positions are added without a multiplicative factor. In our setup, 10 position measurements are available in one second. The final filter status is determined using two thresholds. If the sum is at least 20, the filter status is “Good GPS.” This means that five measurements classified as being very good or all 10 measurements classified as being good would be sufficient for this status.
The “Medium GPS” status is achieved with a sum between 10 and 20; a lower sum results in the “Poor GPS” status. If no valid GPS measurements have been available over the last five seconds, an additional indoor flag is set, and it is assumed that the vehicle is now indoors. As soon as GPS position measurements become available again, the filter status is re-calculated. The parameters for the filter status are determined empirically and provide robust results for a large variety of scenarios. However, minor changes of the parameter set to classify single measurements might be necessary in case a different GNSS hardware setup is used.
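The status computation itself then reduces to a weighted sum and two thresholds, as the following sketch shows. The weights and thresholds are taken from the description above; the surrounding bookkeeping (how classifications and outage times are collected) is assumed for the example.

```python
# Sketch of the one-second filter-status computation: very good counts four,
# good counts two, medium counts one. A sum of at least 20 gives "Good GPS",
# a sum between 10 and 20 gives "Medium GPS", anything lower "Poor GPS", and
# five seconds without valid GPS measurements sets the indoor flag.
def filter_status(classes_last_second, seconds_without_gps):
    if seconds_without_gps >= 5.0:
        return "Indoor"
    weights = {"very_good": 4, "good": 2, "medium": 1, "poor": 0}
    score = sum(weights[c] for c in classes_last_second)
    if score >= 20:
        return "Good GPS"
    if score >= 10:
        return "Medium GPS"
    return "Poor GPS"

# Five very-good fixes out of ten measurements already reach the threshold:
print(filter_status(["very_good"] * 5 + ["poor"] * 5, 0.0))   # "Good GPS"
```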
The resulting filter status for an example trajectory is shown in FIGURE 4. As expected, GPS is good in the western part of the trajectory, and the status quickly deteriorates to “Poor GPS” between the high-rise buildings. Just before entering the building, the status changes to “Indoor.” After leaving the building and moving north, the filter status changes mainly between “Good GPS” and “Medium GPS” as signals are blocked by buildings or attenuated by foliage. The end of the trajectory in the eastern part offers better GPS conditions since the surrounding buildings are smaller and the status changes to “Good GPS.”
Using the Filter Status Information. The filter status provides valuable information when combining GPS and relative measurements. As outlined in previous sections, the filter status “Good GPS” occurs in open-sky environments where processing of additional relative measurements does not improve the navigation solution. Since the laser SLAM solution might be corrupted in areas without a sufficient number of human-made structures, relative measurements are not processed while the filter status is “Good GPS.” Additionally, the keyframe is changed in short time intervals during this status. The reasoning behind this decision is that it is desired to have a good estimation of the absolute position and orientation with a low uncertainty at the time a keyframe is chosen.
During a period with “Good GPS” conditions, the position estimate typically becomes gradually better. Conversely, it is best to retain a keyframe for a long time when the filter status is “Poor GPS” or “Indoor.” In these scenarios the laser SLAM algorithm provides accurate results as the environment mostly consists of human-made structures. A drawback inside buildings is that the Earth’s magnetic field might become distorted, for example close to elevators. Hence, magnetometer measurements are not processed when the “Indoor” flag is set. If the status “Medium GPS” is set, GPS and relative measurements should be weighted equally. The keyframe is retained until a predefined maximum age is reached or inconsistencies in the SLAM solution are detected.
In contrast to the “Poor GPS” case, the integration of relative measurements is more pessimistic, and the variance is chosen in the range of the typical GPS accuracy. This takes into account that a very accurate laser SLAM solution is not assured. However, the processing of relative measurements improves position accuracy and avoids the growth of filter state covariance, which is beneficial for rejecting faulty measurements. Independent of the filter status, GPS measurements fulfilling the Mahalanobis distance threshold criterion are processed.
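The rules described in this section can be summarized as a small decision table, sketched below in Python. The keyframe ages and the relative-measurement variances are placeholder values; only the structure of the decisions follows the text.

```python
# Condensed sketch of how the filter status steers the fusion logic.
# Numeric values are placeholders, not our tuned parameters.
def fusion_policy(status):
    if status == "Good GPS":
        # Open sky: relative (laser SLAM) measurements are not processed,
        # but keyframes are refreshed often while the absolute solution is good.
        return dict(use_relative=False, use_magnetometer=True,
                    keyframe_max_age_s=1.0, relative_var_m2=None)
    if status == "Medium GPS":
        # GPS and relative measurements weighted roughly equally: the relative
        # measurement variance is set in the range of typical GPS accuracy.
        return dict(use_relative=True, use_magnetometer=True,
                    keyframe_max_age_s=10.0, relative_var_m2=4.0)
    if status == "Poor GPS":
        # Trust the laser SLAM solution more and keep keyframes for a long time.
        return dict(use_relative=True, use_magnetometer=True,
                    keyframe_max_age_s=60.0, relative_var_m2=0.1)
    # "Indoor": as for "Poor GPS", but the possibly distorted magnetic field
    # means magnetometer measurements are not processed.
    return dict(use_relative=True, use_magnetometer=False,
                keyframe_max_age_s=60.0, relative_var_m2=0.1)
```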
RESULTS
The results of three trajectories recorded on the campus of the Karlsruhe Institute of Technology are presented in this section. All trajectories cover outdoor environments with good GPS signal reception as well as urban-canyon and indoor sections. Since flying these challenging trajectories was not possible for legal reasons, and because of the small doorways that had to be passed through, the quadrotor helicopter was carried by hand.
The first trajectory shown in FIGURE 5 starts in an open-sky environment. At position 1, the footpath goes between two 40-meter buildings. Hence, GPS satellite signals are blocked and the receiver tracks non-line-of-sight signals that increasingly degrade GPS position and velocity accuracy. The indoor section starts at position 2. After 30 seconds of indoor navigation, the trajectory continues north on the sidewalk. On this section, numbered 4 in Figure 5, a six-story building on the left side and a nearby building on the right side cause medium to poor GPS conditions, as was shown in Figure 4. Despite the difficult conditions, the trajectory follows the footpath correctly. Of course, as no GPS correction service or satellite-based augmentation system is used, sub-meter-level accuracy is not achieved. At position 2, the trajectory passes along stairs.
Therefore, accuracy in the north direction is very good. In the east direction, however, the error is larger as the trajectory should be farther east within the building. This error remains throughout the indoor section until position 3, as no GPS position measurement is processed to correct for the error. After leaving the building, the error in the east direction becomes smaller by processing accurate GPS position measurements. After heading north on the sidewalk, the error is within the expected accuracy bounds specified by the GPS position accuracy. The smoothness of the trajectory after leaving the building shows that the rejection of GPS position outliers leads to a consistent navigation solution.
The second trajectory is the longest of the three, covering 400 meters in 9 minutes. The first difficult section is denoted by position 1 in FIGURE 6, where the vehicle moves between two buildings. The walls of the building on the right are covered by metal plates. The trajectory appears to pass very close to the edge of that building, but this is an artifact of the perspective view of the building in the georeferenced image. We passed below a canopy at position 2 and entered a building at position 3. An accurate position solution is available during the long indoor section with multiple turns. The total time spent indoors was 112 seconds. GPS position measurements becoming available after leaving the building at position 4 improve the accuracy of the navigation solution. However, due to the high accuracy of the position estimation before leaving the building, only small filter innovations occur. The trajectory ends on the sidewalk near the building identified as number 5.
Trajectory three, shown in FIGURE 7, is the most challenging, with position errors exceeding those of the previous two trajectories. Already at the start of the trajectory, only six GPS satellites can be used for calculating position and velocity estimates, and several meters pass before an accurate position estimate is available at position 1. Between positions 2 and 3, a section with buildings up to 56 meters tall results in no accurate GPS position fixes being available for more than 30 seconds. In this section, the computed trajectory clearly is several meters too far north. Additionally, at position 2 the estimated heading change is smaller than the actual 90-degree turn, which results in additional drift. Before entering the building at position 3, GPS position measurements become available and the position is corrected, reducing the error in the north direction. After 57 seconds indoors, we exited the building at position 4. The position solution is still too far north, but is corrected by additional measurements so that good accuracy is achieved when walking on the sidewalk. The trajectory ends at its start position.
CONCLUSION
The navigation system presented in this article fuses GPS measurements and relative pose change measurements to provide an accurate navigation solution in both outdoor and indoor scenarios. We have shown that position errors remain small even for challenging scenarios with tall buildings and poor GPS signal reception. Currently, the accuracy in outdoor environments is limited by GPS accuracy. Further improvements are expected by including additional GNSS constellations such as GLONASS or Galileo to obtain better satellite geometry, especially in urban-canyon scenarios.
MANUFACTURERS
We used a u-blox LEA-M8T GPS receiver, an Analog Devices ADIS 16448 IMU, a Freescale (now, NXP Semiconductors) MP3H6115A air pressure sensor, a Honeywell HMC5843 digital compass, a Hokuyo UTM-30LX laser rangefinder, an IDS UI-3260CP-C-HQ camera, and an Intel Next Unit of Computing (NUC) platform. We constructed the quadrotor helicopter ourselves. The motors, motor controllers and landing skid are from MikroKopter, while the carbon fiber sheets and the sensor board PCB are our own design. We used a Pixhawk 4 flight controller from Pixhawk.
ACKNOWLEDGMENTS
The authors acknowledge financial support from the Federal Ministry of Transport and Digital Infrastructure of Germany in the framework of mFUND. We also thank the City of Karlsruhe for providing the georeferenced orthophotos. The datasets used for the results presented in this article are available on our project website. This article is based on the paper “A Multi-Sensor Navigation System for Outdoor and Indoor Environments” presented at ION ITM 2020, the 2020 International Technical Meeting of The Institute of Navigation, San Diego, California, Jan. 21–24, 2020.
KARSTEN MUELLER received an M.Sc. from the Karlsruhe Institute of Technology (KIT), Germany, in 2015, after which he started research as a Ph.D. candidate in KIT’s Institute of Systems Optimization.
JAMAL ATMAN received an M.Sc. in electrical engineering and information technology from KIT in 2015. He is a research engineer in KIT’s Institute of Systems Optimization.
NIKOLAI KRONENWETT received an M.Sc. degree in electrical engineering and information technology from KIT in 2015. He is a Ph.D. candidate in KIT’s Institute of Systems Optimization.
GERT F. TROMMER received Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering from the Technical University of Munich, Germany. He is a professor in KIT’s Institute of Systems Optimization.
FURTHER READING
- Authors’ Conference Paper
“A Multi-Sensor Navigation System for Outdoor and Indoor Environments” by K. Mueller, J. Atman, N. Kronenwett and G.F. Trommer in Proceedings of ITM 2020, the 2020 International Technical Meeting of The Institute of Navigation, San Diego, California, Jan. 21–24, 2020, pp. 612–625. https://doi.org/10.33012/2020.17165.
- Camera and Laser Rangefinder Navigation
“Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles” by J. Atman, M. Popp, J. Ruppelt and G.F. Trommer in Sensors, Vol. 16, No. 9, 2016. https://doi.org/10.3390/s16091516.
“Vision-Based State Estimation and Trajectory Control Towards High-Speed Flight with a Quadrotor” by S. Shen, Y. Mulgaonkar, N. Michael and V. Kumar in Proceedings of Robotics: Science and Systems IX, Berlin, Germany, June 24–28, 2013. https://doi.org/10.15607/RSS.2013.IX.032.
“Laser Range Finder Aided Indoor Navigation for a Micro Aerial Vehicle” by P. Crocoll, J. Seibold, M. Popp and G.F. Trommer in European Journal of Navigation, Vol. 11, No. 1, pp. 4–14, 2013.
- Keyframe-Based Navigation
“Relative Navigation: A Keyframe-Based Approach for Observable GPS-Degraded Navigation” by D.O. Wheeler, D.P. Koch, J.S. Jackson, T.W. McLain and R.W. Beard in IEEE Control Systems Magazine, Vol. 38, No. 4, 2018, pp. 30–48. https://doi.org/10.1109/MCS.2018.2830079.
- Integrated Navigation
“3D Multi-Copter Navigation and Mapping Using GPS, Inertial, and LiDAR” by E.T. Dill and M. Uijt de Haag in NAVIGATION: Journal of The Institute of Navigation, Vol. 63, No. 2, Summer 2016, pp. 205–220. https://doi.org/10.1002/navi.134.
“INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm” by Y. Gao, S. Liu, M.M. Atia and A. Noureldin in Sensors, Vol. 15, No. 9, 2015, pp. 23286–23302. https://doi.org/10.3390/s150923286.
“Toward a Unified PNT — Part 1; Complexity and Context: Key Challenges of Multisensor Positioning” by P.D. Groves, L. Wang, D. Walter, H. Martin and K. Voutsis in GPS World, Vol. 25, No. 10, October 2014, pp. 18, 27–34, 49.
“Toward a Unified PNT — Part 2; Ambiguity and Environmental Data: Two Further Key Challenges of Multisensor Positioning” by P.D. Groves, L. Wang, D. Walter and Z. Jiang in GPS World, Vol. 25, No. 11, November 2014, pp. 18, 27–35.
Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, 2nd edition, by P.D. Groves. Published by Artech House, Boston, Massachusetts, 2013.
- Stochastic Cloning
“Stochastic Cloning: A Generalized Framework for Processing Relative State Measurements” by S.I. Roumeliotis and J. W. Burdick in Proceedings of 2002 IEEE International Conference on Robotics and Automation, Washington, DC, May 11–15, 2002, pp. 1788–1795. https://doi.org/10.1109/ROBOT.2002.1014801.