GNSS clocks prove to be invisible and indispensable
In the early 19th century, as the sun moved across Britain from east to west, people set their clocks to local mean time, so that noon in Greenwich would occur about 16½ minutes before noon in Plymouth. Back then, travel on foot, by horse, or by coach was slow and inconvenient, so having to adjust their pocket watch, for the few who even had one, was the least of travelers’ concerns.
However, with the advent of railway travel, keeping track of time differences became confusing and impractical. In 1845, Henry Booth, a railway businessman involved with the Liverpool and Manchester Railway, petitioned Parliament for a “Uniformity of Time,” arguing that when “the great bell of St. Paul’s strikes ONE, simultaneously, every City clock and Village chime, from John of Groat’s to the Land’s End, strikes ONE, also.”
In addition to rail travel, advances in industrialization and automation also increasingly required time standardization, synchronization, and optimization. With the advent of satellite navigation, the requirement for accurate time reached the order of nanoseconds, because a signal delay of one nanosecond corresponds to roughly one foot of distance on the ground. This is why atomic clocks were one of the enabling technologies for GPS.
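That rule of thumb is just the speed of light at work: multiply a clock error by c and you get the corresponding ranging error. A minimal illustration in Python:

```python
# Rule of thumb: ranging error = speed of light x clock error.
C = 299_792_458.0  # speed of light in m/s

def ranging_error_m(clock_error_s: float) -> float:
    """Distance error that a given clock error produces."""
    return C * clock_error_s

print(ranging_error_m(1e-9))  # ~0.3 m: one nanosecond is roughly one foot
```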
In turn, atomic clocks on GNSS satellites became the most convenient way to calibrate and synchronize local clocks on the ground and to meet the stringent timing requirements of financial institutions, communication and broadcast networks, power utilities, transportation networks, weather radars, and a variety of scientific, commercial, military and consumer systems. Even though computer networks use Precision Time Protocol (PTP) and other synchronization protocols, those protocols all ultimately tie back to GNSS timing receivers for synchronization to a global clock. This has made GNSS timing receivers ubiquitous and indispensable. Yet, the T in PNT (positioning, navigation, and timing) is invisible to most people and often an afterthought even for many of us in the industry.
I discussed some of the challenges of GNSS timing with representatives of five companies:
- Mark Tommey, sales director, Precise Time and Frequency
- Paul Skoog and Eric Colard, senior technical engineers of product marketing, Microchip, frequency and time systems business unit
- Jeff Gao, GM of communications, enterprise and data centers, SiTime
- Farrokh Farrokhi, founder and president, etherWhere
- Beacham Still, director of business development and operations lead, SyncWorks
Full transcripts of the interviews for this article are available online.
Positioning vs. timing
The first step in using GNSS signals for time synchronization is to process them to extract pseudoranges in the same way as for positioning — except that the signal from a single satellite is usually sufficient, because the position of the phase center of the receiver’s antenna is surveyed once, when it is installed.
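With the geometry fixed, the only unknown left in the pseudorange equation is the receiver clock bias. Here is a minimal sketch of that solve, assuming a surveyed antenna position and a pseudorange already corrected for satellite clock and atmospheric errors (the function and variable names are illustrative, not from any particular receiver API):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def receiver_clock_bias_s(sat_pos_m, ant_pos_m, pseudorange_m):
    """Receiver clock bias from a single satellite.

    Solves rho = range + C * bias for bias. Assumes the antenna
    position (ant_pos_m) was surveyed at installation and that
    pseudorange_m is already corrected for the satellite clock
    and atmospheric delays.
    """
    geometric_range_m = math.dist(sat_pos_m, ant_pos_m)  # ECEF coordinates
    return (pseudorange_m - geometric_range_m) / C
```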
However, most timing applications require much more accurate timing than positioning applications. “In GPS, let’s say that position accuracy is one meter with a clear view of the sky,” said Farrokhi. “That translates to a few nanoseconds of error. To achieve that over, say, a 24-hour cycle requires much tighter jitter on the receiver. So, the challenge for a timing application is to do a much better job of removing sources of errors compared to positioning. In the past, a requirement of 20 ns jitter in timing may have been enough for many applications. However, as the communication systems’ bandwidth and throughput increase, the requirement for timing becomes more stringent. We must come up with new algorithms and new architectures to reduce jitter and improve accuracy.”
Another difference is that most timing receivers — such as those in a cellular base station — are stationary and connected to an antenna with a clear view of the sky. “There are methods to extract and remove most uncertainties and inaccuracies,” said Farrokhi.
“Since it’s not moving, many satellites feed into the equations that help you solve the math to get you very accurate timing,” said Skoog.
Finally, most GNSS positioning applications don’t require holdover, while for GNSS timing “holdover is a universal requirement,” said Gao, “ranging from four hours, for an edge data center or a small facility, all the way to 24 hours for a large cluster of servers or, in some extreme cases, even 48 to 72 hours for deployment in or near a hostile environment, where you expect jamming and all those bad things to happen.”
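Holdover budgets like these follow from simple oscillator arithmetic. The sketch below uses a common two-term model (constant frequency offset plus linear aging) with assumed, purely illustrative numbers:

```python
def holdover_error_s(freq_offset, aging_per_day, hours):
    """Worst-case time error accumulated while free-running.

    Two-term model: a constant fractional frequency offset
    integrates linearly in time, while linear aging integrates
    quadratically.
    """
    t = hours * 3600.0
    return freq_offset * t + 0.5 * (aging_per_day / 86400.0) * t ** 2

# Assumed, illustrative numbers: an OCXO starting 1e-10 off in
# frequency and aging 1e-10 per day drifts about 65 microseconds
# over a 72-hour holdover, which is why long windows push
# operators toward atomic references.
print(holdover_error_s(1e-10, 1e-10, 72))
```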
Accuracy requirements
The main critical applications for GNSS timing can be roughly grouped by the accuracy they require — but those requirements are changing. “For example, for cellular systems, up to 30 ns of jitter used to be enough,” said Farrokhi. “As we move to 5G and 6G, this spec becomes tighter and tighter. We now must be below 5 ns for 6G. As we increase the bandwidth and must support higher throughput, we are more sensitive to timing inaccuracies.”
“5G probably has the clearest requirement because you need 130 ns of relative time accuracy from one tower to another, mostly for handoff,” said Gao. “The accuracy requirements increase as you start to provide different services. For example, if different carriers want to aggregate some services, you start moving from 130 ns down to 65 ns, maybe even down to something more accurate.
“Today, what’s driving the growth of our business is all in data centers and artificial intelligence (AI),” said Gao. “That ranges from traditional front-end server infrastructure and back-end AI workloads to edge data centers.” Timing requirements for data centers differ from those for other applications in terms of accuracy, reliability, and distribution to different locations, not all of which can have an antenna on the roof. “It’s a very interesting, multi-dimensional problem.”
The requirements for financial services are defined in the United States by the Securities and Exchange Commission (SEC) and in Europe by the European Securities and Markets Authority (ESMA). To be legal, timing must have an audit trail all the way back to UTC and not diverge from it by more than 100 μs at the transaction level — the servers and the routers, said Gao.
Additionally, in the United States, the Financial Industry Regulatory Authority (FINRA) requires financial institutions to be within 50 ms of National Institute of Standards and Technology (NIST) time. “That’s a hole so big you can drive a bus through it,” said Skoog. “However, if you want to trade on a stock exchange in Europe, you’re down to 100 µs. People typically will get a time server that will get them down to where they’re doing all their time stamping at better than a microsecond, but they put in a rubidium oscillator, so that if GPS goes away they can still finish that trading day and be better than 100 µs to UTC.”
“For the bigger data centers there are no industry-wide standards,” said Gao. “Cloud service providers can each define their own requirements. What they care about is the window of time uncertainty: whether at the server level I have an error of 1 ms or 5 ms. You can go to 1 μs of error or down to 10 ns of error, each of which will enable you to provide a set of services. At 100 μs, for example, 99% of all your services are running. At 5 ms, you may have to start shutting down some services. More accurate time on the server also enables you to minimize the network traffic. So, conceptually, you can look at data center requirements anywhere from 5 ms all the way down to hundreds of nanoseconds, or even more accurate.”
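Gao’s “window of time uncertainty” can be captured with a simple bound: the residual error at the last successful sync plus the worst-case drift accumulated since. A hedged sketch, with numbers that are assumptions rather than any operator’s actual figures:

```python
def uncertainty_window_s(sync_error_s, drift_rate, secs_since_sync):
    """Upper bound on a server clock's error since its last sync.

    The window is the residual error at the last successful sync
    plus the worst-case oscillator drift accumulated since then.
    """
    return sync_error_s + drift_rate * secs_since_sync

# Assumed numbers: synced to within 1 us, oscillator held to 1e-8
# fractional frequency. Ten minutes later the window is ~7 us.
print(uncertainty_window_s(1e-6, 1e-8, 600))
```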
“Many markets have a lot in common, because they have communication networks,” said Colard. “For example, train and subway networks have communication networks very similar to those of telecoms. In the power industry, you have a communication network and a substation network. In the defense sector, you have confidential communication networks that are very similar to those from AT&T or Verizon. So, all these markets have the same requirements and the same features and challenges.”
“Probably the number one reason why people put in a Stratum 1 NTP time server is to make sure that their log file time stamps are accurate,” said Skoog, “because that makes their network management systems more accurate and reliable.” However, accuracy is not the only concern. “The clocks are pretty accurate, but they connect to the network. All the network guys — the people who manage these networks — cannot plug this clock in until the security people give their stamp of approval.”
Clocks and oscillators
At all these accuracy levels, the key mechanism is GNSS timing. “In a typical data center,” said Gao, “you’re going to start with two grandmaster clocks, which are boxes that combine GNSS timing with locally accurate timing. That’s probably going to provide 5 ns to 10 ns of accuracy. More importantly, in addition to that, they have extremely good local oscillators that could be OCXOs (oven-controlled crystal oscillators), even some atomic clocks, that enable them to hold over if they lose GNSS timing for four, five hours, or 10 hours — up to 24 hours or 48 hours for a huge facility with many AI clusters.”
Likewise, many financial services units don’t have GNSS antennas for every server, router and network card. “It just gets tremendously expensive to distribute the signal to each server,” said Gao, “because most of them are housed in huge warehouses that don’t have access to an antenna. They typically have a grandmaster clock.”
“The GPS receiver itself is one product for all the segments that we sell into, but configured depending on how many timing outputs the customer wants and which frequency outputs,” said Tommey. “We also put a holdover oscillator into the unit that — if, for whatever reason, the GPS signal is lost — continues to provide valid time outputs for days, weeks, or even months.”
“The advantage of GNSS is that over a long period of time it is extremely accurate,” said Gao. “The accuracy of an oscillator depends on how much holdover time you require. GNSS has a natural resolution of roughly 20 ns. At 5 ns, you start to rely on your local oscillator to do the next level filtering. For a base station or a core router, you need to get to 5 ns or better. So, you have GNSS native, you have an oscillator to do filtering to get a better accuracy and holdover, then you have network-based timing in a time scale of some sort.”
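The “filtering” Gao describes is what a GNSS-disciplined oscillator does: a control loop with a narrow bandwidth averages the noisy per-second GNSS measurements and gently steers the quieter local oscillator. A toy proportional-integral (PI) update, with loop gains chosen purely for illustration:

```python
def discipline_step(phase_error_s, freq_adj, kp=0.05, ki=0.002):
    """One per-second update of a toy PI disciplining loop.

    phase_error_s: measured offset between the GNSS one-pulse-per-
    second edge and the local clock's top of second (noisy at the
    tens-of-nanoseconds level).
    freq_adj: accumulated fractional-frequency correction.
    The narrow loop bandwidth averages the per-second GNSS noise,
    so the steered oscillator ends up quieter than the raw
    receiver measurements.
    """
    freq_adj += ki * phase_error_s          # integrator tracks the bias
    steer = kp * phase_error_s + freq_adj   # total frequency correction
    return steer, freq_adj
```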
“A data center, core network, or edge network never relies on a single source for timing,” said Gao. “GNSS is always viewed as extremely stable timing that everybody needs when you have access to the receiver and the antenna. Then you rely on the local oscillators and 1588 network timing as complementary technologies to ensure that you will always have timing all the time at a given accuracy.”
Networks
Increasingly, timing is distributed over a network. Some markets are more focused on Network Time Protocol (NTP), which has an accuracy of a few milliseconds, while others, such as telecoms, are more focused on PTP, which follows the IEEE 1588 standard and is traceable all the way to a grandmaster somewhere. If someone just needs NTP, “it’s pretty easy to get 1 µs to 10 µs time accuracy between an NTP server and an NTP client,” said Skoog. “They may not even need 1 µs to 10 µs, but they’re going to take it if they get it, because log file correlation is very useful. Then when you get to PTP, it brings in a lot of hardware time stamping and on-path assistance to get rid of some of that asymmetric delay. Now you’re down to sub-microseconds, and even approaching low nanoseconds. Then, if you must be down to 1 ns or something smaller, you’re into a 1 PPS application.”
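Both NTP and PTP rest on the same two-way time-transfer arithmetic, built from four timestamps per exchange. A minimal version of the standard textbook formula (not any particular implementation):

```python
def offset_and_delay(t1, t2, t3, t4):
    """Two-way time transfer used by both NTP and PTP.

    t1: client sends request    t2: server receives it
    t3: server sends reply      t4: client receives it
    The offset formula assumes the path is symmetric; asymmetric
    delay is exactly the error that PTP's hardware timestamping
    and on-path support are designed to reduce.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    round_trip_delay = (t4 - t1) - (t3 - t2)
    return offset, round_trip_delay
```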
Jamming and spoofing
Any infrastructure that must always be in service requires redundancy and resiliency. “We build rubidiums, cesiums, hydrogen masers and so forth,” said Skoog. “For years, the cesium was the domain of the metrologist. Those days have passed. Sure, metrologists buy them. But you need a plan B for what you’re going to do if GPS goes away, so you can connect pretty much all our products to a cesium clock.”
When it comes to the impact of jamming and spoofing on timing, perspectives vary substantially between companies. “We’ve only ever had one customer who thought they’d been jammed or spoofed,” said Tommey. “We honestly don’t see very much of that at all.” However, according to Still, in the United States, a common problem is the proliferation of personal GPS jammers. “You see this through people with corporate vehicles and a fear of being tracked — similar to the rise of VPNs. Our power distribution systems, our substations, our telco central offices are in the communities they serve.” The problem arises, for example, “at substations located next to truck stops, night clubs, bars, all the different places that folks might not want to have pop up on their corporately tracked vehicles.”
Often, when network operators see anomalies on their GNSS-based timing systems, it is challenging for them to identify and effectively mitigate the source of that interference. “You can naturally go to the site and try to do audits, and there are tools to try to measure and monitor this,” said Still. “What is more common and practical for network operators is designing and deploying their GNSS networks with the expectation that they’re going to encounter some form of interference.”
Current wars have spurred great interest in distribution of timing over optical networks, said Colard. “Close to Russia, China, Israel, any of the conflicts in the world, there have been attacks on these networks every day. Spoofing is the main concern that I’ve seen. Anti-spoofing or anti-jamming are not enough. You need to find alternate time references for when GPS fails for any reason, so it’s an architecture discussion. For example, assisted partial timing support (APTS) has been used for years. It connects to other PTP grandmasters in the network and provides PTP input while GPS is down. Another alternative is to rely less and less on GNSS in general.
“The alternative to using GPS receivers everywhere is to limit them to very specific strategic points and distribute time over optical networks,” said Colard. “There are segments of hundreds of kilometers in many countries without any GPS receivers. There are also enhanced primary reference time clocks (ePRTCs), which are usually connected to GPS and cesium clocks for resiliency. Often, carriers now are not even using GPS there. They’re using metrology labs and the national time coming from NIST or similar national time agencies as the time reference, instead of GPS, to limit the use of GPS as much as possible across the network.”
Multipath
As with the impact of jamming and spoofing, perspectives vary regarding the impact of multipath on timing. “We haven’t seen issues with multipath, except where users don’t do a good job of positioning their antenna or antennas,” said Tommey. Conversely, Gao said that “multipath is extremely relevant to timing. Let’s say, to give an extreme example, that you’re locking onto a single satellite. Depending on whether you have an unimpeded line of sight and no multipath or the signals are bouncing off a building, the difference could be 100 ns to 500 ns.”
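The geometry behind Gao’s numbers is straightforward: every extra meter of reflected path adds a bit over 3 ns of delay. A small sketch with an assumed, illustrative geometry:

```python
C = 299_792_458.0  # speed of light, m/s

def excess_delay_ns(direct_m, reflected_m):
    """Extra delay of a reflected ray versus the direct one."""
    return (reflected_m - direct_m) / C * 1e9

# Assumed geometry: a bounce off a nearby building adds 30 m of
# path length, about 100 ns, consistent with the range Gao cites.
print(excess_delay_ns(20_200_000.0, 20_200_030.0))
```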
“Multipath might be a problem for a GPS timing antenna, which usually sits on the roof,” said Skoog. “If you can keep this signal from reflecting up to the antenna in the first place with an adequate ground plane, that’s probably the single most effective thing you can do. I’ve been in GPS a long time. It used to be a very big deal. I never get asked about it anymore. It’s an old problem that’s sort of been solved.”
Many people who have static antennas do not understand “that their sky view changes over the course of the year, and their visibility throughout the seasons and the winter solstice will be different than in the summer,” said Still.
Transition
The telecom industry is transitioning the way it times and synchronizes its networks from the time-division multiplexing (TDM) methods it has used for decades to IP and packet-based networking. “Particularly in TDM networks, the idea of UTC-traceable time of day was not really a focus until the advent of NTP, but ultimately it was all frequency synchronization,” said Still. “The idea was that if your network was in a frequency lock, and the phase alignment was good, your network would all drift together. So, TDM networks were also inherently synchronous, in a Synchronous Optical Networking (SONET) environment. You can distribute that frequency again throughout your network and pull it down from the overhead. By comparison, packet networks are inherently asynchronous, so it breaks the frequency chains that we’ve long relied on to distribute and synchronize our networks, and ultimately requires a new approach.
“Modern networks and applications are increasingly reliant on precision time from GNSS-derived sources — high speed, low latency, high throughput, all being deployed to meet current and future needs,” said Still. This requires new sources of time, such as UTC-traceable time of day. Global networks and edge applications will all rely on time of day. “Not only are you trying to keep all your own networks synchronized, you must also have a relative accuracy to the rest of the world. So, some significant changes are taking place, particularly for engineers who have spent their whole career on TDM or SONET networks.”
Now, Still said, “we can be more accurate using PTP on the edge than we can be with GPS. On the edge, GPS is now an option. We keep those in place, distributed throughout the network, in case of bi-directional fiber cuts or losing some of the transport that we use to distribute precision timing, but you can be more accurate, more secure and more stable by using PTP than by relying on GPS.”
Conclusions
GNSS timing receivers are central to timing vast swaths of our industrial societies. Yet, as with positioning and navigation, growing concerns about jamming and spoofing are motivating some sectors to reduce their reliance on GNSS for timing and to develop alternative time references, including low-Earth orbit (LEO) satellites and eLoran. Meanwhile, many networks are transitioning to a new way of distributing timing.