

GNSS at the front end and back end of Intelligent Transportation

June 8, 2023
Image: Hexagon | NovAtel

It has been a wild decade, with so many players in the autonomous vehicle (AV) market all striving for a leg up. Until the dominant design of present AV stacks emerged, there was no small amount of experimentation, along with less-than-successful alternative approaches. For instance, one big-name player initially sought to create an AV solution without GNSS. Reality set in, and it soon embraced GNSS as an essential component.

Gordon Heidinger, segment manager, automotive and safety critical systems at Hexagon’s Autonomy and Positioning division, has had a front-row seat from which to observe, and contribute to, the evolution of AVs.

“I’ve been in the automotive industry for 20 years, all the way from OEMs like Chrysler to tier ones like Harman,” Heidinger said. “I’ve worked on the engineering side, on the project management side, and have now joined Hexagon | NovAtel to help further their involvement in the automotive industry. NovAtel was there for aviation 20 years ago, helping develop systems for planes to take off and land autonomously — we have a deep bench when it comes to applying such expertise for vehicular autonomy.”

NovAtel has long provided GNSS and IMU products and solutions, as well as real-time positioning services. Each is a key element of AV sensor stacks and overall autonomy solutions. Parent company Hexagon has multiple divisions contributing to intelligent transportation, on both the front end and back end.

The Front End

AV systems require highly reliable and smart sensor stacks that typically include cameras, radar, lidar and ultrasonic sensors; these provide the relative positioning for advanced driver assistance systems (ADAS), which are becoming commonplace in newer vehicles. There are also implementations that include GNSS/IMU for navigation and lane keeping.

“Lane keeping is possible to a limited degree with combinations of the other sensors; however, you need GNSS to let you know where you truly are for autonomous driving,” Heidinger said. “Are you on the right freeway lane in Ottawa, or is this an exit ramp? This is a big problem with today’s simple single-frequency solutions; a car can assume highway speeds on an exit ramp, not realizing it is an exit ramp.”

Only with the precise absolute positioning that GNSS provides, combined with a high-definition map, can level 4 autonomy, and potentially level 5 someday, be achieved. With current sensor stacks, a moving car can reliably detect the other cars moving in its vicinity. Furthermore, vehicle-to-vehicle (V2V) solutions are being developed and tested that enable a vehicle to share data about where it is going, its speed and acceleration, and its current location. We may remain far from full autonomy until such solutions are broadly deployed; however, we will see some vehicle-to-everything (V2X) solutions sooner rather than later.

Various developers and departments of transportation around the world are testing short-range V2X communication systems.
“We would need real-time construction zone updates,” Heidinger said. “It would be tough to do lane keeping if a construction site closes or diverts lanes during the course of a day. Or if cameras detect crashes or blocked lanes, this will need to be broadcast immediately and continuously in real time.”

A representative example of a production high-precision positioning system was demonstrated at the recent Consumer Electronics Show 2023 (CES 2023). ZF Friedrichshafen AG (ZF) has developed ProConnect, a dedicated short-range communication (DSRC) solution that enables positioning and communication for use in applications with roadside infrastructure, such as traffic lights. It can be scaled to include other over-the-air alerts, such as first responder vehicle proximity and construction site status. At CES, the GNSS positioning was demonstrated with an autonomous vehicle platform from Hexagon.

“The precise map and the real-time updates from V2V and V2X systems all need precise absolute positions to relate objects to each other,” Heidinger stated. The question then becomes, “How reliable and trustworthy is that solution?”

There are international automotive-grade requirements, such as the ISO 26262 functional safety standard for electrical/electronic systems with its automotive safety integrity levels, for instance ASIL-B(D), as well as the cybersecurity standard ISO/SAE 21434. The latter provides protection against unauthorized external access.

“The level of reliability required is extremely high,” Heidinger said. “After all, these are human lives, in metal boxes hurtling along at highway speeds. There are ASIL standards that call for a probability of 10⁻⁸, or 1 in 100 million, per hour that the system is wrong. These levels of reliability need to apply to electronic components, communications, and the availability of the GNSS positioning solution to really automate any type of vehicle. You’ll encounter similar AV standard references to five-nines, or 99.999%.”
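To put those two figures in perspective, the quick calculation below converts them into more tangible quantities. The 10⁻⁸-per-hour and 99.999% numbers come from the quote above; the fleet size is a hypothetical assumption, chosen only for illustration.

```python
# Sketch: translating the article's reliability figures into intuitive terms.
# The 1e-8/hour failure probability and five-nines availability are quoted
# in the text; the fleet-size figure below is a hypothetical assumption.

HOURS_PER_YEAR = 365.25 * 24  # 8,766 hours

# ASIL-style target: probability of a hazardous failure per operating hour
p_failure_per_hour = 1e-8
mean_hours_between_failures = 1 / p_failure_per_hour  # 100 million hours

# Five-nines availability: fraction of time the positioning solution is usable
availability = 0.99999
downtime_min_per_year = (1 - availability) * HOURS_PER_YEAR * 60

print(f"Mean time between failures: {mean_hours_between_failures:,.0f} h "
      f"(~{mean_hours_between_failures / HOURS_PER_YEAR:,.0f} years)")
print(f"Allowed downtime at 99.999%: {downtime_min_per_year:.2f} min/year")

# Hypothetical fleet: 1 million vehicles, each driving 1 hour per day
fleet_hours_per_year = 1_000_000 * 365
expected_failures = fleet_hours_per_year * p_failure_per_hour
print(f"Expected failures across that fleet: {expected_failures:.2f}/year")
```

Even at 10⁻⁸ per hour, a large enough fleet accumulates so many operating hours that a few failures per year are still expected, which is why these targets are set so aggressively.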

Positioning Services

Heidinger explained that for most aspects of autonomy, GNSS accuracy to within a foot can be “good enough.” However, uncorrected GNSS can never meet even that need, achieving an accuracy of a few meters at best. Then there is the matter of reliability. Augmentations such as real-time kinematic (RTK) positioning and precise point positioning (PPP) apply broadcast “correctors” that can yield centimeter-level positions. RTK is not practical for broad areas or highway and road networks because it requires dense infrastructure and two-way communication with the vehicle, which can introduce security challenges.

Solutions for autonomy are typically PPP-based. Many PPP applications, such as surveying, mapping, maritime and agriculture, use clock, orbit and ionospheric-model data broadcast from geostationary L-band satellites; however, this approach would not meet the reliability requirements for AVs. The Achilles heel of broadcast PPP is that the satellites are limited in number and positioned over the equator, so a vehicle can often lose sight of them. Instead, PPP services, such as those provided by NovAtel and others, are tapped by vehicles via mobile internet connections, meaning cellular networks. While cellular services can often meet reliability goals, there are still vast stretches of highway where availability is sparse.

The other challenge for PPP is the convergence time needed to reach reliable sub-foot precision.

“No one wants to wait five minutes or more for it to converge,” Heidinger said. “By processing data from semi-dense networks of reference receivers, our PPP can converge rapidly enough to be ready to roll as soon as you start driving.”
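The convergence behavior described above is often approximated as an exponential decay of position error toward a floor. The toy model below is not NovAtel's actual algorithm; the initial error, error floor, and time constants are all hypothetical numbers chosen only to illustrate why a faster-converging service matters for driving.

```python
# Toy model (hypothetical parameters, not a real PPP implementation):
# horizontal error decays exponentially from an initial value toward a floor.
import math

def error_m(t_s, initial_m=2.0, floor_m=0.05, tau_s=120.0):
    """Horizontal error (meters) after t_s seconds under the toy model."""
    return floor_m + (initial_m - floor_m) * math.exp(-t_s / tau_s)

def time_to_reach(target_m, initial_m=2.0, floor_m=0.05, tau_s=120.0):
    """Seconds until the toy model's error drops below target_m."""
    return tau_s * math.log((initial_m - floor_m) / (target_m - floor_m))

# Sub-foot (~0.3 m) accuracy with a slow 120 s time constant
# versus a fast 15 s one (both values are illustrative assumptions):
print(f"slow service: {time_to_reach(0.3):.0f} s to sub-foot")   # minutes-scale
print(f"fast service: {time_to_reach(0.3, tau_s=15.0):.0f} s to sub-foot")
```

Under this simple model, shrinking the time constant by a factor of eight shrinks the wait by the same factor, which is the practical difference between a driver waiting at the curb and being "ready to roll" on departure.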

The Back End

A free-for-all of autonomy is not going to happen on highways and roads that are not precisely mapped and kept up to date.
“There are visions of crowdsourcing map updates from the sensors in cars,” Heidinger said.

Crowdsourced data is not systematic enough, though, and could be inconsistent. There are also privacy considerations, and how many vehicle owners would be willing to participate?

There are numerous mapping and imaging “buggies” plying road and highway networks on an ongoing basis; these could provide a base layer. But how precise is it? The specific applications these mapping buggies support may not need high precision, and operators may not be willing to invest in it. The precision of the 3D maps would need to be higher than the target accuracy of the AV systems. The technology exists and is broadly used for various applications in the form of centimeter-precision 3D mobile mapping at highway speeds. Such systems combine lidar scanners, cameras, and positioning solutions that can include GNSS, an IMU, wheel-speed encoders, and SLAM lidar for enhanced position stabilization. An example is the Pegasus TRK from Hexagon | Leica Geosystems.

GNSS is the key component: the provider of precise absolute positioning. When people drive, they are the sensor stack, and they are (mostly) aware of the context of where they are and can see and hear what is going on around them. Before we can hand over the driving duties to machines, and fully accept any autonomous driving technology, it will need to be not merely as smart and aware as humans, but much smarter and more aware. Autonomy sensor stacks can tell a car what it is doing, and what other things are doing in its immediate vicinity, but without a precise map, and without knowing precisely where it is in real time, a car would still be tip-toeing around in a fog of uncertainty.