Road Navigation Using Multiple Dissimilar Environmental Features
New navigation paradigms combining GNSS and inertial with additional sensors can increase overall reliability and power robust road navigation. A feasibility study tests a barometer, a magnetometer and a camera looking at road signs, and concludes that such sensors examining environmental features can supply the necessary context for frequently traveled or shared routes.
By Debbie Walter, Paul D. Groves, Bob Mason, Joe Harrison, Joe Woodward and Paul Wright
Where a robust and reliable position solution is required, it is necessary to combine GNSS with other technologies. Dead-reckoning is only suitable for bridging short outages. For robustness against longer GNSS outages, alternative position fixing techniques are needed. Radio-based signals have been excluded from this study as they are either not yet mature or are, like GNSS, susceptible to jamming, though they may still play a part in the final navigation solution.
For land navigation in particular, a new approach is therefore needed. Environmental features provide a potential source of location information. These include buildings or parts thereof, signs, roads, rivers, terrain height, sounds, smells and even variations in the magnetic and gravitational fields. Visual navigation technologies are being developed and are likely to be complementary to the feature-matching discussed in this article; however, they will not be directly discussed. The environmental features will be integrated with dead reckoning to provide robust positioning.
The overall solution is to equip a fleet of vehicles with hardware comprising multiple sensors, including a GNSS receiver and sensors for dead reckoning. Road map matching could also be included. During normal usage, the GNSS receiver is used for positioning and a database is updated with the feature information from all the sensors, accompanied by location stamps from the GNSS-based position solution.
As the multiple vehicles travel around an area, the database is built up for these routes. In the event that the GNSS receiver does not receive sufficient signals to maintain an accurate position, the database is called upon for navigation by environmental feature matching. In this scenario, the sensors continue to take measurements and, by combining the knowledge of the last known location, dead reckoning and the sensor’s outputs, the positioning algorithm draws upon the database to estimate a positioning solution. This method is shown in Figure 1 and Figure 2.
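To make the idea concrete, a minimal sketch of such a geotagged feature database is shown below. The record fields, names and structure are assumptions for illustration only, not the project's actual schema.

```python
# Minimal sketch (assumed structure): each record pairs a GNSS location stamp
# with the sensor readings captured at that moment, so the database can later
# be queried by feature values when GNSS is unavailable.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FeatureRecord:
    timestamp: float            # GPS time of week, seconds (assumed convention)
    latitude: float             # degrees, from the GNSS position solution
    longitude: float            # degrees, from the GNSS position solution
    features: Dict[str, float]  # e.g. {"baro_height": 152.3, "mag_field": 48.1}

@dataclass
class RouteDatabase:
    records: List[FeatureRecord] = field(default_factory=list)

    def add(self, record: FeatureRecord) -> None:
        """Called while GNSS is available: store features with a location stamp."""
        self.records.append(record)

    def feature_trace(self, name: str) -> List[float]:
        """Return the stored values of one feature along the route."""
        return [r.features[name] for r in self.records]
```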
This navigation system relies upon the roads being travelled on a regular basis so that the “maps” created from the sensors’ outputs are kept up to date and therefore valid. The most likely users of this technology would be fleets of vehicles that can share the mapping information. To focus on a typical system, use in emergency vehicles was considered. Knowing your position is vital in an emergency vehicle, and a system that incorporates a back-up to GNSS would be advantageous. The motivation for maintaining a continuous positioning solution is that, when moving within a complex environment, it is necessary to maintain the integrity of the current position. In emergency situations, delays are not acceptable and integrity is vital: there will be no point in time when the vehicle can be delayed to obtain a position fix.
Although this system will be designed with emergency service vehicles such as ambulances and police cars in mind, it could also be used in wider applications such as fleet management and tracking devices. Ultimately, crowd sourcing or cooperative techniques could be used to pool information from different vehicles equipped with the system. With a very large number of vehicles maintaining the feature database, the system could adapt to changes in the environment very quickly.
To reliably achieve meter-level positioning across a range of different challenging environments, a paradigm shift is needed. We need to use as much information as we can cost-effectively obtain from many different sources in order to determine the best possible navigation solution in terms of both accuracy and reliability.
This new approach to navigation and real-time positioning in challenging environments requires many new lines of research to be pursued.
ROAD EXPERIMENT
A set of sensors and a GNSS receiver was attached to a car and driven in closed loops around Stoke-on-Trent. Loops on four road types were each driven three times per day, and the experiment was repeated over three consecutive days. The sensors used can be seen in Table 1.
The accelerometer, air quality sensor, barometer, dust sensor, light sensors and microphone interfaced with an Arduino microcontroller, which output the signals from the sensors to a laptop. The Arduino sensors had a data rate of 20 measurements per second. An individual accelerometer (attached to the axle of the vehicle) was used to identify road texture. Accelerometers also form part of the inertial measurement unit (IMU), and these were used for dead reckoning.
The onset of movement as recorded from the IMU was used to assist in identifying the beginning of each circuit. During the car journeys, there were two experimenters, one to drive the car and another to monitor the sensors. There were 5–10 minutes between each round; during this time, the sensors would be turned off and then restarted. The equipment was designed for the outputs of the sensors to be post-processed.
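A simple way this onset detection could be implemented is sketched below; the threshold and window values are illustrative assumptions, not figures from the experiment.

```python
import numpy as np

def movement_onset(accel_xyz: np.ndarray, rate_hz: float = 20.0,
                   threshold: float = 0.3, window_s: float = 1.0) -> float:
    """Return the time (s) at which the vehicle first starts moving.

    accel_xyz: N x 3 accelerometer samples (m/s^2) with gravity removed.
    Onset is taken as the first window whose RMS acceleration magnitude
    exceeds the threshold; both values are illustrative assumptions.
    """
    mag = np.linalg.norm(accel_xyz, axis=1)
    win = int(window_s * rate_hz)
    rms = np.sqrt(np.convolve(mag ** 2, np.ones(win) / win, mode="valid"))
    above = np.nonzero(rms > threshold)[0]
    if len(above) == 0:
        raise ValueError("no movement detected")
    return float(above[0] / rate_hz)
```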
The four classes of road were suburban, urban, rural and high-speed road. The route taken and a view from Google Street View showing the general type of landscape traveled through is shown in Figure 3.
During the road experiment, GPS receivers were used with the Arduino, the video camera and the IMU so that GPS time could serve as a common time reference for the various sensors.
WHOLE ROUTE ANALYSIS
The outputs from the sensors were evaluated initially for their cross correlation over the whole route. This process assessed whether the data from different runs over the same terrain were similar and thus had a high cross correlation. This is vital for this map-building method of navigation. This section deals only with sensors that produce continuous output. The next section discusses discrete features.
Cross-Correlation Coefficients. The correlation coefficient (see the online version of this article for the derivation equations) is used to calculate the cross-correlation of two rounds of sensor data. The cross-correlation coefficient is a normalized value: a signal correlated with itself at zero offset (autocorrelation) gives a value of 1, entirely uncorrelated data gives 0, and signals 180° out of phase give –1.
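For reference, a minimal sketch of this calculation is shown below, using the standard Pearson form (the article's own derivation appears only in the online version, so this is an assumed equivalent).

```python
import numpy as np

def correlation_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Normalized cross-correlation coefficient of two equal-length traces.

    Identical signals give 1, uncorrelated data gives 0 and signals 180
    degrees out of phase give -1, matching the convention in the text.
    """
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))
```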
The cross-correlation coefficients are shown in Table 2 for all of the sensors. It shows the coefficients for the four different road types using combinations of rounds from the same day (rounds 1 and 2, rounds 2 and 3, and rounds 1 and 3, for each of the three days), together with the average of the coefficients over all the combinations. The sensors with higher coefficients are discussed in more detail in the following subsections. Road signs do not have cross-correlation coefficients; they are treated differently as they are a discrete measurement.
Accelerometer. The magnitude of the acceleration from an accelerometer triad was used in this experiment as a method of measuring road texture. A zoomed-in section of the acceleration recorded from the accelerometer against the distance traveled can be seen in Figure 4.
It is difficult to see similarities in the output from the different rounds, although the accelerometer can show the transition from stationary to driving, and this was used to initialize the sensor outputs from the Xsens IMU. This is shown in Figure 5 at 44 s.
Barometer. The barometer measures height change of the vehicle. This sensor consistently produced the highest cross-correlation coefficient, shown in Figure 6.
Magnetometer. The magnetometer produced data with distinct spikes caused by various magnetic anomalies in the environment being travelled through. This can be seen in Figure 7 for the high-speed road.
Figure 8 is a zoomed-in section of the magnetometer data from the high-speed road in Figure 7. It shows correlation with an offset of approximately 44 m between round 1 and round 3. This is mostly due to synchronization errors between the magnetometer counter and the GNSS receiver clock, and is the reason a second run of the road experiment was completed.
Microphone. The microphone was able to pick up clear signals when the vehicle was stationary, and the signal seems to be dependent on the speed of the vehicle. Figure 9 shows the profile from the microphone.
It may be possible to combine this data with the accelerometer or odometer data to develop a clearer picture of what sound is resulting directly from the road surface and what is speed related, although this still may not result in a useful feature for this study.
Thermometer. Temperature can vary, particularly in a rural environment, as seen in Figure 10. Similarities are not consistent across environments, as seen from the cross-correlation values in Table 2, and are likely to change with the seasons and with weather conditions.
Light Sensor. Four light sensors were used in the experiment: upward, forward, left and right facing. Figure 11 shows the data from the upward-facing sensor on the high-speed road. There are distinct events where the light level drops. Many of these instances correspond to gantries (bridge-like structures spanning highways that display speed limits and other information). These features could be treated as discrete, whereby the sharp dips in light level would be treated as momentary events. Some information would be lost in treating the ambient light as discrete, but it would make the feature more robust against changing light levels due to shadowing or cloud cover.
If light is treated as a continuous feature, it can be seen in Table 2 that the cross correlation was inconsistent. This is partly due to the effects of changing light conditions. On the days with direct sunlight, the light sensor would reach its maximum intensity and be saturated. This can be seen in Figure 11, and this affects the cross-correlation coefficient calculated.
Feature Outcome. Thermometer data has been discounted: although it gave a cross-correlation coefficient of about 0.5 for the rural route, the other routes had lower values. Similarly, the microphone data had moderate success in the high-speed and rural environments but not on the other two routes, so it will not be taken forward to the next phase, although it could be used in the future if further processing were carried out on the data. As with the microphone, the light sensor had cross-correlation values greater than 0.5 in the rural and urban environments but lower values in the other two. The success of this sensor depends more on the weather conditions than on the environment type, so it will not be brought forward to the next stage at this point.
The accelerometer (used to measure road texture) showed little correlation, with cross-correlation coefficients between 0.0 and 0.3. It was useful for dead reckoning, but does not capture road texture reliably.
The magnetometer and barometer showed the greatest potential for positioning with the highest cross-correlation values consistent over all the environments. These sensors are taken forward into phase 3.
DISCRETE FEATURES
A discrete feature is one where there are environmental events that occur at one position but can repeat multiple times along a route. The discrete feature can either be Boolean (an event occurs or does not) or it can be descriptive (different possible events or the strength of the event). Examples of discrete features include lamp posts, speed humps and shop signage.
In this article, the discrete feature discussed will be street signs, although the techniques used are applicable to many discrete features. How the signs are identified will not be discussed in detail; instead, the focus will be on how a sequence of discrete features is used to show consistency across a route.
SCANNING METHOD
This section will look at scanning one round to find the region that best matches a region from a different round. Figure 12 illustrates this technique.
The test data is scanned along the reference data, and a cross-correlation coefficient is calculated at each offset. The aim is to locate the position of the test data using the reference data, for which the position is already known. The output of this exercise is a cross-correlation profile (cross-correlation as a function of position).
This profile can be treated similarly to a probability density distribution of position (although they are not the same), and so gives an indication of how likely each candidate position along the route is to be the location of the test data.
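A minimal sketch of this scanning step, assuming both traces have been resampled to a common distance grid, might look like the following; the function and parameter names are illustrative.

```python
import numpy as np

def correlation_profile(reference: np.ndarray, test: np.ndarray, step: int = 1):
    """Scan the short test segment along the longer reference trace and
    compute the correlation coefficient at each offset (the scanning method).

    Returns the candidate offsets (in samples) and the coefficient at each;
    the offset with the highest coefficient is the best match, and the whole
    array is the cross-correlation profile discussed in the text.
    """
    n = len(test)
    offsets = np.arange(0, len(reference) - n + 1, step)
    profile = np.array([np.corrcoef(reference[k:k + n], test)[0, 1]
                        for k in offsets])
    return offsets, profile
```

For example, a 125-m test window scanned against 4.5 km of reference data (converted to sample counts) yields a profile like those in Figures 13 and 14.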
Results. Two rounds from the suburban route are shown in this section as an example of the results achieved with the scanning method. Figure 13 and Figure 14 show the cross-correlation profiles for magnetic field and height, respectively, for day 3, rounds 1 and 2, on the suburban route. The test data region chosen is centered 1.6 km into the route. The test data region size was 125 m, scanned against 4.5 km of reference data.
It can be seen that the magnetic field has a number of peaks along the route. The peak with the highest cross-correlation coefficient is at the 1.6-km point (which is the correct position). For the height figure, there are many broad peaks at similar cross-correlation values approximately 700 m apart. The height peaks are broader than the magnetic peaks because the terrain height changes more slowly than the magnetic field.
Ambiguities, Dead Reckoning. The two graphs in Figure 13 and Figure 14 show that there are ambiguities present in both of the features. The majority of features will have some ambiguities along a route, so it is important to develop a technique that can mitigate them. One way ambiguities could be mitigated is by using the information available from dead reckoning. The dead-reckoning solution has a specific position error (which grows with time), and the ambiguities from the features can be reduced by only considering candidate positions within the dead-reckoning position uncertainty bounds.
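A sketch of this gating step is shown below, assuming the profile offsets have been converted to along-route distance; the function name and the 3-sigma bound are illustrative choices, not the study's implementation.

```python
import numpy as np

def gate_by_dead_reckoning(offsets_m: np.ndarray, profile: np.ndarray,
                           dr_position_m: float, dr_sigma_m: float,
                           n_sigma: float = 3.0) -> float:
    """Keep only candidate positions inside the dead-reckoning uncertainty
    window (+/- n_sigma standard deviations) and return the one with the
    highest correlation coefficient, mitigating ambiguous peaks elsewhere.
    """
    mask = np.abs(offsets_m - dr_position_m) <= n_sigma * dr_sigma_m
    if not mask.any():
        raise ValueError("no candidate inside the dead-reckoning window")
    best = int(np.argmax(np.where(mask, profile, -np.inf)))
    return float(offsets_m[best])
```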
COMBINING FEATURES
The quality of position information that can be extracted from a particular feature type varies with location. Thus, a better position solution can be obtained if higher weighting is attributed to higher quality features. Factors that will need to be considered include the precision of position that can be extracted from a feature, the level of ambiguity (Are there multiple candidate positions?) and the reliability (how much measurements vary unpredictably with time).
There are multiple ways to combine the scores from different features. Initially, there is the decision as to when in the position estimation process to combine the features. There are two ways to do this: Either combine the scores for each feature, or combine the position estimates for each sensor. The following subsections will describe a number of ways of combining the scores before estimating the position. It will be noted if these techniques could also be used to combine position estimates.
Equal Weighting. A simple combination technique is for each feature score to have equal weighting. The equal weighting used earlier took the two scores and found the average. This way, no single feature will dominate the navigation solution. As the feature scores are not probabilities, the values are not self-weighting, so it cannot be presumed that equal weighting would always provide an optimal position estimate.
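A minimal sketch of equal weighting over the per-feature correlation profiles (assuming all profiles are sampled at the same candidate positions):

```python
import numpy as np

def combine_equal_weighting(profiles: list) -> np.ndarray:
    """Average the cross-correlation profiles of several features at each
    candidate position, so that no single feature dominates the solution."""
    return np.mean(np.vstack(profiles), axis=0)
```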
Test Data Weighting. This method takes a set of experimental data and empirically determines the weighting coefficients based on the best position solution in this test dataset. The test data would be used to maximize the score of the combined features at the correct position. This has the benefit of using real data to determine the weighting, but its strength depends on how representative the test dataset is of the environments the car will travel in.
Environment Weighting. This would detect the environmental context and use it to select an appropriate set of weights. For example, the presence of many Wi-Fi sources would suggest a suburban or urban environment, while a vehicle speed of 31 m/s (70 mph, 113 km/h) would suggest that the vehicle is likely to be on a highway. Based on this knowledge, a specific weighting coefficient set developed for that environmental context could be used.
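An illustrative sketch of such context-based weight selection is given below; the context thresholds and weight values are assumptions for illustration, not results from the study.

```python
def select_weights(wifi_source_count: int, speed_ms: float) -> dict:
    """Choose a feature weighting set from simple environmental context cues.

    The cues and weight values here are illustrative assumptions only.
    """
    if speed_ms > 28.0:            # roughly 100 km/h: likely a high-speed road
        return {"magnetometer": 0.6, "barometer": 0.4}
    if wifi_source_count > 20:     # dense Wi-Fi: likely urban or suburban
        return {"magnetometer": 0.5, "barometer": 0.5}
    return {"magnetometer": 0.4, "barometer": 0.6}   # otherwise assume rural
```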
Cross-Correlation Weighting. This weights each feature according to the characteristics of the cross-correlation coefficient profile obtained using the scanning method described earlier. This enables the weighting to adapt to the quality of the data. Figure 15 shows traits of a set of peaks that affect the confidence in the highest peak being the correct position.
Taking the uncertainty in the current position, only peaks that, for example, fall within 3 standard deviations would be evaluated. The characteristics of the tallest peaks compared with the others would be used to determine a measure of confidence for that feature.
There will be more confidence in the tallest peak (h0) if there is a greater difference between its height and that of the other peaks within the uncertainty range (h1, h2, h3). In Table 3 this characteristic is labeled Height.
The next factor is the number of peaks within the uncertainty range (No. Peaks). The more peaks, the less confidence that the correct peak has been chosen as the position estimate.
The average cross-correlation coefficient within the uncertainty region (γ) would affect the confidence in the estimated position. If the average coefficient value (Av. CC) was similar to that of the highest peak, this suggests insufficient variation in the data being analyzed from that feature.
Finally, the standard deviation could be used: calculating how many standard deviations (Std Dev) the highest peak lies from the mean could provide a weighting value.
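The sketch below illustrates how these profile characteristics might be computed from a cross-correlation profile restricted to the uncertainty window; it is an assumed implementation for illustration, not the study's code.

```python
import numpy as np
from scipy.signal import find_peaks

def profile_confidence(profile: np.ndarray) -> dict:
    """Characteristics of a cross-correlation profile (within the dead-
    reckoning uncertainty window) used to weight a feature."""
    peaks, _ = find_peaks(profile)
    if len(peaks) == 0:                      # monotonic segment: use the maximum
        peaks = np.array([int(np.argmax(profile))])
    heights = np.sort(profile[peaks])[::-1]
    h0 = heights[0]                          # tallest peak
    margin = h0 - heights[1] if len(heights) > 1 else h0
    return {
        "height_margin": float(margin),              # h0 minus next-highest peak
        "num_peaks": int(len(peaks)),                # fewer peaks -> more confidence
        "mean_cc": float(profile.mean()),            # should sit well below h0
        "sigma_from_mean": float((h0 - profile.mean()) / profile.std()),
    }
```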
Each of these characteristics was looked at separately and compared against the benchmark of equal weighting using the scanning method comparing multiple pairs of rounds on different routes. It can be seen in Table 3 that the standard deviation from the mean provided the best weighting outcomes. To optimize the weighting algorithm, it may be that using a combination of these profile characteristics would provide the best position estimation.
Figure 16 and Figure 17 show examples of cross-correlation profiles with high and low confidence respectively. Figure 16 is the cross-correlation of data from day three, rounds two and three, on suburban roads. It has a few well-spaced peaks over the full profile, and one of the peaks is clearly higher than the others. Figure 17 is the cross-correlation of data from day two, round three, and day three, round three, on the high-speed road. It has many peaks of similar height, all around a value of 0.5.
CONCLUSION
Environmental features have sufficient variability spatially and stability temporally for a database of features to be developed to create a map of the environment. This supports the hypothesis that it is feasible to map a space and then create a feature-mapping and navigation algorithm using a combination of environmental feature sensors, a GNSS receiver and sensors for dead reckoning.
FUTURE WORK
The next step of the project is to develop a feature-matching, mapping and navigation algorithm that incorporates inputs from the multiple sensors, a GNSS receiver, map-matching and sensors for dead reckoning. The algorithm will run collecting sensor data while GNSS receiver data is available, and store this in a database along with location stamps until called upon in times of GNSS receiver signal disturbance. The data from the road experiments will be used for a test database in developing the navigation system.
ACKNOWLEDGMENTS
Debbie Walter is funded by the Engineering and Physical Sciences Research Council (EPSRC) and Terrafix Ltd.
The authors thank Paul Neesham for a method of manually recording street signs seen in video footage and Juliusz Romaniuk of Terrafix for advice and creating hardware that contained the sensors’ carrier frequencies.
DEBBIE WALTER is a Ph.D. student at University College London in the Engineering Faculty’s Space Geodesy and Navigation Laboratory, and a software engineer at u-blox.
PAUL GROVES is a lecturer at UCL, where he leads a program of research into robust positioning and navigation. He holds a Ph.D. in physics from the University of Oxford.
BOB MASON is chief scientific officer and director of Terrafix Limited, holding a Ph.D. in communications and neuroscience from Keele University.
JOE HARRISON is principal radio frequency design engineer at Terrafix Ltd.
JOE WOODWARD is a software design engineer at Terrafix Ltd.
PAUL WRIGHT is a development engineer with Terrafix Ltd., with doctoral degrees in physics and electronics.