[time-nuts] Software Sawtooth correction prerequisites?

Tom Clark, K3IO K3IO at verizon.net
Sat May 12 02:06:25 EDT 2007


Bruce Griffiths wrote:

>
> The Dallas delay lines aren't all that accurate, you need to calibrate
them to achieve 1 ns accuracy (read the specs) and then you have to
> worry about temperature variations.
> To use them you need to decode the sawtooth correction message from
> the GPS timing receiver.
> If you've decoded this message then you have all the information
> needed to make a software correction to the measured phase error.
I need to correct some impressions that seem to have gone astray. To
help me, I refer you to a PowerPoint presentation that I gave to the
technicians and operators at the world's VLBI (Very Long Baseline
Interferometry) sites. The presentation is available at
http://gpstime.com as the 2007 version of "Timing for VLBI".

[Aside -- If you are interested in learning about some of VLBI's
buzz-words, I also gave a tutorial "What's all this VLBI stuff, anyway?"
that was intended as a view of the Physics and Radio Astronomy of making
VLBI measurements. Some people find my de-mystifying of Heisenberg's
Uncertainty Principle interesting -- especially the Schroedinger quotes
at #21. This "plays" best if you view it as a PPT presentation.]

Starting on Slide #20, I describe the reason that the Motorola receivers
have the sawtooth "dither". Basically, the clock edges of the receiver's 1PPS
pulse are locked to a crystal oscillator in the receiver and that
oscillator is on a frequency that is not neatly commensurate with the
"true" second marks. As has been pointed out in these discussions,
Motorola reports an estimate of the error on the NEXT 1PPS tick. Slides
21 and 22 show some of the pathological examples we have seen on typical
receivers. AFAIK, all the bizarre behavior has been traced to firmware
problems.

The reason for making sawtooth corrections (and not simply averaging
multiple samples) can be seen in the "hanging bridges" (22:34 to 22:36
on #22, 01:04:30 to 01:05:30 on #23) when the 1PPS signal went through a
zero-beat. For these 1-2 minute windows, all statistical averaging
breaks down and typical GPSDO's perform badly. However, when the
sawtooth is corrected in software (blue line on #23) the resulting
"paper clock" is well behaved (at ~1.5 nsec RMS level).
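For illustration, the software correction amounts to one subtraction per second. A minimal Python sketch, assuming the receiver's message reports the error (in ns) of the NEXT 1PPS tick and that the sign convention calls for subtracting it (function and variable names here are hypothetical, not from any receiver's manual):

```python
# Minimal sketch of software sawtooth correction (names hypothetical).
# The receiver's message gives the expected error of the NEXT 1PPS tick,
# so each raw phase measurement is paired with the correction message
# received just before that tick.  Whether the value is added or
# subtracted depends on the receiver's sign convention.

def correct_phase(measured_ns, sawtooth_ns):
    """Remove the receiver-reported sawtooth from a raw phase reading.

    measured_ns -- counter/phase reading against the local clock (ns)
    sawtooth_ns -- correction from the previous message, valid for this tick
    """
    return measured_ns - sawtooth_ns

# Raw readings dithering over many ns collapse toward the underlying
# "paper clock" once the reported sawtooth is removed.
raw = [12.0, -8.0, 3.0]
saw = [11.0, -9.5, 2.2]
corrected = [correct_phase(m, s) for m, s in zip(raw, saw)]
```

The whole "hardware vs. software" question is just whether this subtraction happens in a delay-line chip or in the logging computer.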

Slides #24 & #25 describe an annoying problem in VLBI -- we want to be
able to blindly trust ANY 1PPS pulse whenever (rarely) we need to reset
the "working" VLBI clock. Slide #26 is the block diagram of the circuit
that Rick has implemented in his newest clock. Slide #29 shows a (more
noisy than normal) comparison between the hardware and software
correction performance with only 0.3 nsec RMS noise between the two.

Bruce noted a misconception that may have come from our earlier
implementation of the correction algorithm. What we found was that EVERY
sample of the 1 nsec step Dallas/Maxim delay line showed considerably
more scatter.What we found, on closer examination, was that it seems
that the DSI delay line chip defines "one nsec" about 10% differently
than Motorola's "one nsec". After correcting for this "definition"
problem, as you see in #30, the hardware  and software correction are in
agreement with an observed regression coefficient of 0.9962 (on this
sample, which shows correlation coefficient > 0.999) and good tracking
between samples.
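Once that scale difference is measured, folding it into the hardware correction is a single multiply. A hypothetical Python sketch (the 0.90 calibration factor, the 128-count offset, and the function name are illustrative values, not measurements from the talk):

```python
# Hypothetical sketch: mapping a sawtooth correction (ns) to an 8-bit
# DSI delay-line code, allowing for a ~10% difference between the
# chip's "one nsec" step and Motorola's "one nsec".

NOMINAL_STEP_NS = 1.0   # data-sheet step size
CAL_SCALE = 0.90        # illustrative calibration factor (assumed, not measured)

def delay_code(correction_ns, offset_code=128):
    """8-bit delay code centered at offset_code, clipped to the DAC range."""
    code = offset_code + round(correction_ns * CAL_SCALE / NOMINAL_STEP_NS)
    return max(0, min(255, code))
```

The same calibration constant serves for the software correction, which is why the two curves on #30 agree once it is applied.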

Bruce also made some disparaging comments on the stability of the delay
line. I can say that we have not seen any stability problems at all.
This is quite logical when you carefully reverse engineer the DSI chip
based on its data sheets. The delay inside the chip is really an analog
delay. The 8-bit number you send to the chip programs a D/A converter to
produce a (256 step) constant current source. When the input pulse is
applied to the DSI delay line, the constant current charges an on-chip
capacitor. When the resulting ramp matches the level defined by a
comparator, the output is changed. The comparator level and capacitor
value are temperature compensated by a second, fixed rate ramp. This is
pretty much the same scheme that has already been described here.

The place where I suspect that there may be some temperature sensitivity
is in the modular GPS receivers. If you look at my slide #19 from late
2000, the really great "Never Happened" receiver had to be temperature
controlled (to ~ 1ºC), otherwise it showed diurnal room temperature
variations. All these receivers have a bandpass filter ~1.8 MHz wide
somewhere in their IF chain; this filter's bandwidth is matched to
the 1.023 MHz C/A code chip rate that is the root of the timing
performance. Heisenberg would argue that a filter this wide will show a
group delay ~500 nsec and it is often implemented as a SAW (Surface
Acoustic Wave) device at an IF in the 50-200 MHz range. This is a
measurement topic itching for some work! Regarding the SAW filters, on
slide #33 you will see that the 4 M12+ receivers that Rick tested at
USNO fell into two groups with ~4-5 nsec "DC" timing difference between
them. You will also note on #36 that the one sample of the new iLotus
M12M that I've seen has ~30 nsec of bias.
> Why add the cost of a programmable delay line when the additional cost
> of correction is a few lines of code?
> They also don't remove the requirement for subnanosecond phase
> measurement resolution and accuracy.
But the receiver itself has intrinsic noise at the nsec level. You are
better off by averaging sawtooth corrected (either hardware or software)
measurements to achieve sub-nsec precision; IMHO, sub-nsec individual
measurements aren't needed. Surely you don't plan to tweak a GPSDO every
second! A good xtal is much better than ANY GPS rcvr on times of 1-100 sec.
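As a quick numeric illustration of why averaging suffices: 100 one-second samples of ~1 nsec white noise average down to ~0.1 nsec. A throwaway Python check (the noise level, seed, and sample counts are arbitrary):

```python
# Averaging sawtooth-corrected 1-per-second readings beats chasing
# sub-ns single shots: white ~1 ns noise averages down as 1/sqrt(N).
import math
import random

random.seed(1)
SIGMA_NS = 1.0  # assumed per-second receiver noise after sawtooth correction

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Single-shot readings vs. 100-second averages of the same noise.
singles = [random.gauss(0.0, SIGMA_NS) for _ in range(10000)]
means = [sum(random.gauss(0.0, SIGMA_NS) for _ in range(100)) / 100.0
         for _ in range(1000)]
# rms(singles) comes out near 1 ns; rms(means) near 0.1 ns.
```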
> Whilst an analog phase lock loop can have the necessary resolution
> they are somewhat impractical for the relatively long averaging times
> required when optimally disciplining a good OCXO.
>
> The computational load isn't that severe as you only make one phase
> measurement per second.
>
> One of the simplest ways of achieving subnanosecond phase measurement
> resolution is to feed a quadrature phase 10MHz sinewave into a pair of
> simultaneous sampling ADCs (MAXIM have suitable devices; prices seem
> reasonable). The sinewaves are sampled at the leading edge of the GPS
> receiver PPS signal.
> The ADC outputs can then be used to determine where in the cycle the
> PPS edge occurred. This in effect is a subnanosecond resolution phase
> detector with a range of 100nsec. The range can easily be extended by
> using a small CPLD which incorporates a couple of synchronisers (one
> clocked by the positive slope transition of the 10MHz signal and the
> other clocked by the negative slope zero crossing transition of the
> 10MHz signal) The output of both synchronisers samples the value of a
> synchronous counter which is clocked by the positive slope zero
> crossing of the 10MHz sinewave. Software then sorts out which latched
> count is most reliable (the synchroniser whose clock edge is furthest
> from the PPS transition). This sounds complex but it isn't, especially
> if you select the right PIC (or other micro) with built in counters
> (PIC18F4550?) that can be sampled by an external transition (output of
> a synchroniser). The counter need only be an 8-bit counter.
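For what it's worth, the arithmetic behind Bruce's quadrature scheme reduces to an atan2 on the two simultaneous ADC samples. A sketch assuming ideal, full-scale-normalized sine and cosine channels (real ADC codes would need offset and gain correction first):

```python
# Sub-ns phase of a PPS edge within one 10 MHz cycle, from simultaneous
# samples of quadrature 10 MHz sinewaves (sketch; assumes ideal,
# normalized sin/cos channels).
import math

F0 = 10e6                # reference frequency, Hz
PERIOD_NS = 1e9 / F0     # 100 ns unambiguous range

def pps_phase_ns(sin_sample, cos_sample):
    """Position of the PPS edge within the 10 MHz cycle, 0..100 ns."""
    frac = (math.atan2(sin_sample, cos_sample) / (2.0 * math.pi)) % 1.0
    return frac * PERIOD_NS
```

Extending the 100 ns range with the synchronised counter Bruce describes is then just adding the latched coarse count times 100 ns.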
This sure sounds like a more complicated measurement than is necessary
to me. If you have a 10 MHz oscillator, simply feed it into the "D"
input of a latch clocked by the de-sawtoothed GPS 1PPS. The output of
the latch is a 0 or 1 depending on the precise phase of the oscillator.
You want this latched 0/1 measurement to average to ½ over a long term
(seconds). As the statistics deviate from a 50/50 split, you tweak the
oscillator. The ~1 nsec of residual noise from the sawtooth corrected
GPS rcvr acts as a natural dither. No counters, no ramps, no big A/D
converter -- it couldn't be simpler! And if the 10MHz (=> 100 nsec phase
ambiguity) is too fine for your oscillator, then divide it to 5 MHz
(=> 200 nsec) or 1 MHz (=> 1 µsec). This should be good enough to pull in
a xtal that is off by 1 part in 10^6.
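The one-bit scheme is easy to simulate. A Python sketch, with the loop gain, averaging window, and noise level chosen purely for illustration (it models the latched oscillator as a squared-up 10 MHz waveform, and the ~1 ns residual PPS jitter as the dither):

```python
# Toy simulation of the D-latch discipline scheme: average the latched
# 0/1 samples, steer the oscillator phase toward a 50/50 split.
# Gain, window, and noise values are illustrative only.
import random

random.seed(0)
PERIOD_NS = 100.0   # 10 MHz -> 100 ns cycle
NOISE_NS = 1.0      # residual sawtooth-corrected PPS jitter (= dither)
GAIN = 0.01         # ns of phase tweak per unit imbalance per sample

phase_ns = 20.0     # initial offset of the PPS within the 10 MHz cycle
window = []

for second in range(20000):
    # D-latch: reads 1 if the squared-up 10 MHz is high at the
    # (jittered) PPS edge, else 0.
    t = (phase_ns + random.gauss(0.0, NOISE_NS)) % PERIOD_NS
    window.append(1 if t < PERIOD_NS / 2 else 0)
    if len(window) == 100:                     # average over 100 s
        imbalance = sum(window) / 100.0 - 0.5
        phase_ns -= imbalance * GAIN * 100.0   # steer toward 50/50
        window = []

# phase_ns ends up parked near a transition of the 10 MHz waveform,
# i.e. near 0 (mod 100 ns), to within the dither-limited resolution.
```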
>
> Another technique is to start a ramp on the leading edge of the PPS
> signal from the GPS receiver and stop it at the corresponding output
> transition of a synchroniser (clocked at 10MHz) whose output samples
> an (8bit) counter (also clocked at 10MHz - your local OCXO standards
> frequency). The final value of the ramp is sampled by an ADC and
> combined with the sampled count to resolve the 1 count ambiguity at
> the synchroniser output. The ramp is then reset for the next PPS
> pulse. Calibration of the ramp generator is required but calibration
> cycles are easily interleaved between PPS pulses.
> Although it may seem that a fast opamp is required for the ramp
> generator, this isn't so as you can wait for any opamp (and/or ADC
> input) to settle before sampling the ramp output.
> With careful design curvature correction isn't required (don't
> slavishly copy the Linear Technology application note, you can do
> better with less). The ramp generator needs a range of 300 ns or
> greater with a 10 MHz synchroniser clock. A 10-12 bit ADC will provide
> subnanosecond resolution. The ADC need not be fast (10 us per
> conversion is adequate), however a sigma delta ADC is unsuitable.
>
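Combining the coarse count with the calibrated ramp reading is a single subtraction. A hypothetical sketch (function and parameter names are mine; the 10 MHz clock gives 100 ns per count, and the ramp measures the PPS-to-synchroniser interval, so it is subtracted from the coarse time):

```python
# Hypothetical reconstruction of the PPS epoch from Bruce's
# ramp-interpolator scheme: coarse 10 MHz count minus the
# ADC-measured ramp interval.

TICK_NS = 100.0   # 10 MHz local clock -> 100 ns per count

def pps_time_ns(sampled_count, ramp_adc, ramp_ns_per_lsb):
    """sampled_count   -- counter value latched by the synchroniser
    ramp_adc        -- ADC reading of the ramp started at the PPS edge
                       and stopped by the synchroniser output
    ramp_ns_per_lsb -- scale from the interleaved calibration cycles
    """
    return sampled_count * TICK_NS - ramp_adc * ramp_ns_per_lsb
```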
This also strikes me as a more complicated implementation than is
needed. But then, I prefer beer and white zinfandel wine too.

I hope these comments helped a bit -- 73, Tom
