Google to provide raw GNSS measurements
User location takes center stage in new Android OS
Raw GNSS measurements from Android phones. Yep, they are coming. At Google we have been working with our GNSS partners to give application developers access to raw GNSS measurements from a phone.
This is really exciting, and marks a new era for our GNSS community. At Google I/O in May, we announced that raw GNSS measurements are available to apps in the Android N operating system, which will be released later this year. This means you can get pseudoranges, Dopplers and carrier phase from a phone or tablet.
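To give a flavor of what this looks like to a developer, here is a minimal sketch of subscribing to raw measurements through the new GnssMeasurementsEvent callback in android.location on Android N (the class and tag names here are illustrative, not from any official sample):

    import android.location.GnssMeasurement;
    import android.location.GnssMeasurementsEvent;
    import android.location.LocationManager;
    import android.util.Log;

    public class RawGnssLogger {
        private static final String TAG = "RawGnss";

        private final GnssMeasurementsEvent.Callback callback =
                new GnssMeasurementsEvent.Callback() {
                    @Override
                    public void onGnssMeasurementsReceived(GnssMeasurementsEvent event) {
                        // One event per measurement epoch: the receiver clock plus
                        // one GnssMeasurement per tracked satellite.
                        for (GnssMeasurement m : event.getMeasurements()) {
                            Log.d(TAG, "svid=" + m.getSvid()
                                    + " C/N0=" + m.getCn0DbHz() + " dB-Hz"
                                    + " prr=" + m.getPseudorangeRateMetersPerSecond() + " m/s"
                                    + " adr=" + m.getAccumulatedDeltaRangeMeters() + " m");
                        }
                    }
                };

        // Needs ACCESS_FINE_LOCATION; available from API level 24 (Android N).
        public void start(LocationManager lm) {
            lm.registerGnssMeasurementsCallback(callback);
        }

        public void stop(LocationManager lm) {
            lm.unregisterGnssMeasurementsCallback(callback);
        }
    }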
When can you get it? Well, it will take some time to proliferate throughout the ecosystem. The first phone to provide raw measurements will be the Nexus phone that we will launch later this year; next year, you will see new Android handsets start to support it, as it becomes a mandatory feature in Android.
Tutorial. At the Institute of Navigation’s ION-GNSS+ conference this September, Frank van Diggelen and I will teach a tutorial where you can learn to access and use these raw measurements. This will be a hands-on course where you collect, view and process raw measurements. You will leave the class with the data, Google software tools, and the knowledge of how to use them.
This tutorial is open only to ION-GNSS+ attendees. To register for the conference, visit www.ion.org/gnss/registration.cfm.
Then, to tailor this tutorial to your own needs, visit this online form and let us know what you’d like us to cover in the class.
More from Google I/O
Finally, I’d like to give you some highlights from Google I/O, the annual developer-focused conference held by Google in the San Francisco Bay Area.
During the keynote, Google CEO Sundar Pichai made many references to location, context and places, which was really exciting to see. We are innovating and working on a lot in this space. Even to me, after more than 13 years in the field of location and just under two years at Google, it is amazing to see how location and a user’s context are at the center of our connected world.
At Google, we are exposing as much as we can to the ecosystem so that innovation can thrive around us.
Sundar Pichai’s keynote address shows that a user’s location is at the center of the knowledge graphs that we are building.
The keynote included conversational examples of Google Assistant and how it can be used to get things done in the world. Sundar spoke about how location and context are the key to this future, noting that a user standing next to a famous sculpture can simply ask: “Who designed this?”
All Google I/O talks from the Android Location and Context Team can be found at these YouTube links:
Hello, one suggestion I have for this seminar is to include the Android OS’s “mock location” capability, which enables one to geotag pictures with an external Bluetooth-enabled GPS device and disregard the internal chipset that is tied to the camera by default. The picture then takes on the modified location EXIF data. I have found this very useful for certain applications. I did not have much success with the Apple OS.
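For readers who want to experiment with this, a minimal sketch of Android’s mock-location hooks in LocationManager (the method name pushExternalFix is illustrative; the app must also be designated as the mock location app under Developer options):

    import android.location.Criteria;
    import android.location.Location;
    import android.location.LocationManager;
    import android.os.SystemClock;

    // Push a fix from an external Bluetooth receiver into the GPS provider,
    // replacing fixes from the internal chipset.
    void pushExternalFix(LocationManager lm, double latDeg, double lonDeg) {
        lm.addTestProvider(LocationManager.GPS_PROVIDER,
                /* requiresNetwork */ false, /* requiresSatellite */ true,
                /* requiresCell */ false, /* hasMonetaryCost */ false,
                /* supportsAltitude */ true, /* supportsSpeed */ true,
                /* supportsBearing */ true,
                Criteria.POWER_LOW, Criteria.ACCURACY_FINE);
        lm.setTestProviderEnabled(LocationManager.GPS_PROVIDER, true);

        Location fix = new Location(LocationManager.GPS_PROVIDER);
        fix.setLatitude(latDeg);
        fix.setLongitude(lonDeg);
        fix.setAccuracy(3.0f);  // meters; mock fixes must carry an accuracy
        fix.setTime(System.currentTimeMillis());
        fix.setElapsedRealtimeNanos(SystemClock.elapsedRealtimeNanos());
        lm.setTestProviderLocation(LocationManager.GPS_PROVIDER, fix);
    }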
Google’s announcement is really wonderful news for folks who want to have the most control over how a position/velocity is computed. It also allows sensor fusion at the measurement level. The downside is that the user also has to worry about all the gory details, and there are tons of them.

I will take the liberty of quoting the great Kanwar Chadha, a founder of SiRF, the leader in consumer GPS for a key time in the industry. He mentioned it took almost five years to get the software stable. I would concur, and that was probably with a pretty good team. So raw measurements are not for the faint of heart. I will also quote some folks from the University of Calgary: they had access and had a hard time getting close to the ground-track performance that SiRF achieved with the same measurements. So there is a lot of knowledge and understanding of how the tracking loops work, their statistics, and when their measurements can be trusted. Only the designers can know it, unless there is awesome documentation.

So Google now has to come up with a way of standardizing across chip providers so that they are not overwhelmed with support and complaints. Another aspect to worry about is timing: not only the measurement and navigation data time tags, but being able to get all the data ALL the time. As a current Android developer, I still have issues where the phone maker, not Android, locks me out from time to time. It’s a difficult and frustrating problem.

Another solution might have been to make the Kalman filter tuning open, so that the user can get the benefit of the chip’s knowledge that goes into the KF without all the hassle of going back to the prehistoric world of raw data. In the end, they just want to set the static logic, the time constants, and a few other thresholds. But that is also not easy, as every chip maker has a different algorithm for doing the same thing. It is really hard to design a set of rules and impose it on the chip makers so the app developers have a simple time.

I think in the end there will be companies who spring forward with carrier-phase applications where the integer ambiguity logic will be the main differentiation point. After all, if you can get a low-cost platform with communication, sensors, and raw measurements for everything, you might be able to compete on performance. And that is how SiRF dominated for quite a long time. Good luck to you folks getting dirty with raw measurements, and to Google figuring out how to support them. I hope someone makes money on the adventure.
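Those gory details start early: even forming a plain GPS pseudorange from the Android fields takes some care. A minimal sketch, assuming the Android N API described above, GPS L1 signals only, and ignoring clock discontinuities and week-rollover edge cases:

    import android.location.GnssClock;
    import android.location.GnssMeasurement;

    static final double SPEED_OF_LIGHT_MPS = 299792458.0;
    static final long NANOS_PER_WEEK = 604800L * 1000000000L;

    // GPS only: for the GPS constellation, ReceivedSvTimeNanos is the
    // transmit time of week in nanoseconds.
    static double pseudorangeMeters(GnssClock clock, GnssMeasurement m) {
        // Receiver time in the GPS time scale, nanoseconds since the GPS epoch.
        double tRxNanos = clock.getTimeNanos() + m.getTimeOffsetNanos()
                - (clock.getFullBiasNanos() + clock.getBiasNanos());
        // Fold receiver time into the current week to match the transmit time.
        double tRxTowNanos = tRxNanos % NANOS_PER_WEEK;
        double prSeconds = (tRxTowNanos - m.getReceivedSvTimeNanos()) * 1e-9;
        return prSeconds * SPEED_OF_LIGHT_MPS;
    }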