[time-nuts] The future of Telecom Frequency Standard surplus

Lincoln lincoln at ampmonkeys.com
Thu Jun 1 14:21:22 EDT 2017


> On May 31, 2017, at 7:07 PM, Bob kb8tq <kb8tq at n1k.org> wrote:
> 
> Many systems are indeed going to much tighter holdover numbers. That is requiring either a much better OCXO or an Rb as a holdover 


So sync limits are going down. 4G-TDD has a node-to-node limit of 3 µs and a node-to-UTC limit of 1.5 µs. 5G is looking at ±500 / 400 ns node to node, but it hasn't really been deployed on a large scale that I know of, and I don't know enough about the workings of 5G to comment further.

The density of traffic (think of how many cell phones are in a mall or at a stadium event) is requiring more nodes, each covering a smaller geographic area.

This is providing economic motivation to split the eNodeB (cell radio) into two units: the analog radio and ADC / DAC go into a head unit mounted on a pole (the remote radio head, RRH), and a baseband unit (BBU) takes the radio I/Q data streams and processes them into network traffic. The BBU can process the data from a number of RRHs. The sync between these two units needs to be very tight; an MTIE of ±100 ns is so far the leading contender for the limit. This is called the C-RAN model.
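For anyone who hasn't run into the metric: MTIE (maximum time interval error) is the largest peak-to-peak excursion of the time-error samples within any observation window of a given length. A minimal sketch of the calculation (the function name and the toy data are mine, not from any standard implementation):

```python
# Minimal MTIE sketch: for each observation-window length, MTIE is the
# largest peak-to-peak excursion of the time-error (TIE) samples seen
# in any window of that length anywhere in the record.

def mtie(tie_samples, window):
    """tie_samples: TIE readings taken at a fixed sample interval.
    window: window length in samples. Returns MTIE in the same units
    as the samples."""
    n = len(tie_samples)
    worst = 0.0
    for start in range(n - window + 1):
        seg = tie_samples[start:start + window]
        worst = max(worst, max(seg) - min(seg))
    return worst

# Toy record: ~1 ns of wander plus a 60 ns phase step at sample 5.
tie = [0, 1, 0, -1, 0, 60, 61, 60, 59, 60]  # ns
print(mtie(tie, 4))  # -> 62 (the window straddling the step)
```

Real test sets use much faster windowed max/min algorithms, but the definition is just this: a single phase step anywhere in the record blows the MTIE budget, which is why the RRH/BBU link requirement is so demanding.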

There are a couple of different factions in the industry championing how to achieve this. Some say PTP with transparent switches. Others say use SyncE to transfer frequency, with PTP on top to transfer time. Still others champion non-Ethernet solutions.

Another thing affecting deployment is the cost of deploying GPS antennas. The average for one carrier in Asia was about 12,000 USD to install an antenna and maintain it for three years. In their network, 9~10% of the nodes serviced by GNSS would have some fault relating directly to the GNSS cable / antenna. They are using PTP / IEEE 1588 as a way to distribute sync instead.

The old way of doing things was to have 3 or 4 good clocks (Cs) in the core of the network, with sync flowing out to the edge. ITU G.8261 has test cases that simulate 10 hops between the master and slave. Yes, G.8261 is really for frequency, but the test cases are also used for phase because they are what we have. Now most operators are interested in putting smaller masters at the edge: rather than serving 1000s of clients, serve fewer than 100, maybe as low as 16. The edge master will have a local GPS reference but will also use PTP / SyncE / BITS as a backup for when GPS fails. If all sources fail, we are seeing holdover requirements in the 4~8 hour range.
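As a back-of-the-envelope check on those holdover numbers (my own arithmetic, not from any spec): a constant fractional frequency offset accumulates phase error as offset × time, so holding the 1.5 µs node-to-UTC budget over 8 hours demands an average offset of roughly 5e-11 for the whole holdover, which is why a bare TCXO can't do it and the OCXO gets pushed so hard:

```python
# Back-of-the-envelope holdover budget: with a constant fractional
# frequency offset y, the accumulated phase error is y * t.
# (Ignores aging and temperature effects, which usually dominate in a
# real OCXO over 8 hours, so this is an optimistic lower bound.)

def max_frequency_offset(phase_budget_s, holdover_s):
    """Largest constant fractional frequency offset that keeps the
    accumulated phase error within phase_budget_s over holdover_s."""
    return phase_budget_s / holdover_s

budget = 1.5e-6       # 1.5 us node-to-UTC limit
holdover = 8 * 3600   # 8 hours in seconds
print(max_frequency_offset(budget, holdover))  # ~5.2e-11
```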

Keep in mind a carrier will have a huge number of clocks. I visited a cell operator in Asia that had over 1e6 clocks in their network. The sizes of these networks are staggering.

So what does this mean? Carriers do not want to deploy Rb and are looking at other technologies to extend the stability of OCXOs and TCXOs. I don't know how far out 5G service is. It's not 100% clear to me how the deployments will actually happen; it depends on whether the base station MFGs get C-RAN up and running. We should have surplus Rb etc. up until 5G is fully deployed. 10~15 years after that it will be OCXOs and other switching equipment, which will probably be up to the community to support.
