[time-nuts] question Allan deviation measured with Timelab and counters

Magnus Danielson magnus at rubidium.dyndns.org
Wed Jan 14 02:03:34 EST 2015


Bonjour Stéphane,

On 01/14/2015 02:16 AM, Stéphane Rey wrote:
> Hi Magnus,
>
> For some reason I've missed this message and the one from Jim until now! This answers many of the questions I had. In my defense, I've had 3000 messages on the list in the last 3 months!!!
>
> Ah, yes, I'd like to get even better than 1E-12. 1E-14 would be perfect, but my best standards for now are an HP GPSDO and an Efratom FRK Rb, which are both around 1E-12 'only'. I may have to invest in something better if prices are acceptable. I guess I won't be able to measure beyond the standard itself.
>
> The method you describe gives tau=2E-9? This is more or less what I could get with the frequency measurement (even a bit lower). So what is the benefit of the time-interval measurement here over the frequency measurement?

I've been sloppy with the scaling factor. There is a fixed scaling 
factor for the noise that the single-shot resolution produces, and that 
would be a measurement limit which, if everything else is ideal, would 
dominate. This quantization noise has an RMS of sqrt(1/12), or about 
0.289, of one resolution step, so that is the scale factor. It will also 
have a 1/tau slope. So that is how you can expect this noise to behave: 
it will look like white phase noise, but it isn't; it is highly 
systematic noise, and if you play nicely with it, you can measure below 
it. However, doing so is non-trivial.

I have one counter that does just that. The good old HP5328A with the 
Option 040-series of boards introduces noise into the counting 100 MHz 
oscillator, such that averaging gets you down towards 10 ps rather than 
10 ns resolution in TI mode. However, it does not help you get nice 
frequency or stability measures.

I've not taken the time to detail-analyse the ADEV scaling factor 
though. I should do that, but it follows the general formula of 
ADEV(tau) = k*t_res/tau
where t_res is the single-shot resolution and k is a constant.
There is more to this, as counters can exhibit non-linearities of 
several sorts, and the trigger conditions of the input need to be 
optimized, which can be slew-rate limited for many counters and conditions.
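As a rough sanity check, that resolution-limited floor can be computed directly. A minimal sketch, assuming the k = sqrt(1/12) quantization factor mentioned above (the exact constant depends on details not analysed here):

```python
import math

def adev_quantization_floor(t_res, tau, k=math.sqrt(1.0 / 12.0)):
    """Rough quantization-noise ADEV floor: ADEV(tau) ~= k * t_res / tau.

    t_res: single-shot resolution in seconds; tau: averaging time in
    seconds. k = sqrt(1/12) ~= 0.289 is the RMS of uniformly distributed
    quantization noise relative to one resolution step.
    """
    return k * t_res / tau

# A counter with 2 ns single-shot resolution (PM6654C-class):
for tau in (1.0, 10.0, 100.0):
    print(f"tau = {tau:6.1f} s  ->  floor ~ {adev_quantization_floor(2e-9, tau):.2e}")
```

At tau = 1 s this gives a floor of about 5.8E-10, falling with 1/tau, which is the basic trend discussed above.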

So, anyway, there is a bit of hand-waving in there, but I thought it was 
better to get you to "get" the basic trend first, and then we can 
discuss the detailed numbers, as theory is one thing and achieved 
numbers can be quite different.

As for frequency and time-interval measurements, if properly done, they 
can be used interchangeably without much impact. Realize that both 
frequency and time-interval measurements are based on time-interval 
measurements as the core observation inside the counter, so the 
single-shot resolution limit applies to them both. However, subtle 
details lie in how the counter works, and there are ways that the 
frequency precision can be lost. A good counter is the SR620, but the 
way it does the frequency measure, you need to calibrate the internal 
delay to make the measure "on the mark". Using it in time-interval mode, 
you can eliminate that offset, because the start and stop measures of 
your signal under test are done with the same channel, with essentially 
the same delay at both trigger times.

Another subtle detail is what happens when you make frequency 
measurements: you arm your counter, the start channel triggers, you wait 
the time you have programmed as the measurement time before you arm the 
stop channel, and then it triggers. After that you read out your coarse 
counter of cycles, the interpolator states for the start and stop 
channels, and the count of the time-base (which should be known); you 
calculate the frequency and output it, and once you have cleared the 
"bench" from that measure you arm the counter core for the next 
measurement. The time from the stop event to the following start event 
is called the dead-time. This dead-time is a period when the signal is 
not being observed. The actual time between the measures (time between 
the start events) and the length of the measures (time between the start 
and stop events) will not be the same, and this will create a 
measurement bias in the ADEV. If you can establish the length of the 
dead-time you can compensate the measures. Very few people do this these 
days; part of it is ignorance, part of it is "why bother" when you can 
use any of a number of techniques that avoid the dead-time altogether.

Being able to measure frequency does not easily convert into making 
quality ADEV measures.

Also, another danger of using frequency measures is that many modern 
counters use one of several techniques to improve the frequency 
measurement resolution, such as linear regression. This behaves as a 
narrow-band filter, and the ADEV measures for white noise depend on the 
bandwidth of the system. Very few measurements are annotated with their 
bandwidth, so traceable ADEV measurements will not be made there, and 
this pre-filtering bandwidth isn't even mentioned for those systems, 
even though it can be modeled and calculated quite accurately, which 
very few researchers do (yes, I know them). Also, such pre-processing 
typically creates a bandwidth effect that "improves" the reading, but as 
the tau increases the "improvement" wears off; it only shows up as a 
low-tau drop in the ADEV, deviating from the expected measurement limit 
of 1/tau, and as you go to higher taus you end up back on the 1/tau line 
that the raw time-interval measures give you. So why bother?
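A toy illustration of that low-tau effect, with pure white phase noise and made-up parameters (seed, noise level, and block size are all hypothetical): a least-squares slope ("regression") frequency estimate averages over the phase samples and so shows less scatter than a plain end-point estimate over the same interval.

```python
import random
import statistics

random.seed(1)
tau0 = 1.0       # seconds between phase samples (made-up)
m = 20           # phase samples per frequency estimate
sigma_x = 1e-9   # 1 ns RMS white phase noise (made-up)
x = [random.gauss(0.0, sigma_x) for _ in range(20000)]

def two_point_freqs(x, m, tau0):
    # Plain frequency estimate: end-point phase difference over the interval.
    return [(x[i + m] - x[i]) / (m * tau0) for i in range(0, len(x) - m, m)]

def regression_freqs(x, m, tau0):
    # Least-squares slope over each block of m+1 phase samples; the
    # averaging acts as a narrow-band pre-filter on white phase noise.
    t = [k * tau0 for k in range(m + 1)]
    t_mean = sum(t) / len(t)
    denom = sum((tk - t_mean) ** 2 for tk in t)
    out = []
    for i in range(0, len(x) - m, m):
        blk = x[i:i + m + 1]
        b_mean = sum(blk) / len(blk)
        out.append(sum((t[k] - t_mean) * (blk[k] - b_mean)
                       for k in range(m + 1)) / denom)
    return out

print(statistics.stdev(two_point_freqs(x, m, tau0)))   # plain estimate scatter
print(statistics.stdev(regression_freqs(x, m, tau0)))  # noticeably smaller
```

The "improvement" is real for white phase noise at this tau, but it is a filtering effect of the estimator, not a property of the signal, which is exactly why unannotated bandwidth makes such ADEV numbers hard to trace.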

So while I see the frequency measures as "problematic", for some setups 
they may be the only practical option.

The details of how the measurement is actually done will no doubt create 
a whole range of subtle problems down the calculation route.

So, welcome to time-nuts, where the devil is in the details, some of 
which few people on this globe fully grasp, and as you learn more you 
learn quirks about many more things than you thought you needed to know 
and understand. :)

Your PM6654C deviates a little from the above description, as there are 
no interpolators; it gets its 2 ns single-shot resolution from the fact 
that it uses a 500 MHz count clock directly. The ECL chips in there get 
hot, but ah well. The HP5335A counter that it rivaled used a 10 MHz 
clock for the coarse counter and then analog interpolators to get 200 
times better time-resolution. With the PM6680 series and onwards, 
Philips Industrier (also branded Fluke; the name later changed to 
Pendulum after it was sold off from Philips) went the analog 
interpolator route too.

> However, if I hear what you say, the GPSDO provides the 10 MHz standard reference for the counter, the GPSDO PPS goes to channel A, and channel B receives for instance a 10 MHz signal I want to measure.
> So what will be the result of Time A-B then? I do not understand why you put the PPS on channel A instead of something at the same frequency as the DUT. How will the time A-B behave with these two different frequencies... "By letting TimeLab know the frequency, it can adjust for any slipped cycles on the fly." I guess this is what I've not understood.

No, only one channel should receive the signal you measure. You use the 
other channel to "start" the measure. You get a time-interval from each 
measurement, and that is what you feed TimeLab with, and it will track it.

You can use the 10 MHz on the A channel, if you let the PPS arm the 
measurement. This has the benefit that the PPS jitter may be replaced 
by the 10 MHz jitter (which will usually be significantly less). You do, 
however, want the PPS to arm it, to get stable distances between your 
samples. The arming action, regardless of how you do it, is important, 
as it can fool you into missing measurements and make wiggly time-lines. 
However, if you measure the TI of signals at 2 times the arming rate or 
higher, you can hide the dead-time neatly in the arming pattern 
(this is also known as picket fence).

So, yes you can improve things, but I wanted to reduce the complexity of 
the initial setup to a minimum setup, to start there. Once that is 
operating well, we can make the setup a little more complex.

Oh, always verify the trigger noise of the inputs and try to minimize 
it. You want to reduce it so that it doesn't limit you even more than 
the counter resolution would make you expect. For DMTD measures, trigger 
jitter is the reason that you don't put your general counter straight 
off the mixers, but need amplifier stages to optimize the performance.

> Now if I mix down the 10 MHz DUT with a 10.005 MHz reference to increase the resolution, I'll get 5 kHz on channel B and still the PPS on channel A? Again, I do not understand what will happen with these two signals on the time A-B. If I push your method a bit more, I could even get a beat frequency of 1 Hz, and with 10 digits I would have increased my resolution by 10E6. Then I will be limited by the standard's stability, but in principle would it work as well?
> On that document http://www2.nict.go.jp/aeri/sts/2009TrainingProgram/Time%20Keeping/091017_DMTD.pdf it says (page 6) that the accuracy of measurement is improved by a factor v/vb (the DUT and offset LO, 1/2.PI.f). So it sounds to me that there is a compromise between resolution increase and accuracy. If I choose a beat frequency of 1 Hz the accuracy will not be improved but the resolution will be, right?

I used those numbers to show how you would end up in the right 
neighborhood. DMTD-style operation is a great tool for improved 
resolution, but it has its own set of challenges.

Consider the trigger jitter of a comparator. A general counter's trigger 
point is a voltage comparator: the "event" occurs when the voltage 
passes that point, and the time-stamp of that trigger event is taken. A 
general counter input is a wide-band type of input, so it has not been 
noise-optimized. The trigger jitter of such an input can be modelled as 
some internal jitter plus the total input noise divided by the slew-rate 
of the signal (at the trigger voltage). The quick and easy fix of the 
experienced operator is to move the trigger point to the point on the 
signal with the highest slew-rate (and thus the lowest jitter). However, 
what if you already measure at the maximum slew-rate? Well, you *might* 
reduce the noise somewhat.

Now, coming out of a mixer you have two sines (huge simplification, but 
let's just assume it to get the basic problem), one being the sum of the 
input frequencies and one being the difference. We filter away the sum 
frequency, as we want to measure the difference (beat) frequency signal. 
The peak slew-rate of this signal will be

S = 2 * pi * f * A

where f is the frequency of the sine and A is the peak amplitude of the 
sine. You get this from the model V(t) = A * sin(2*pi*f*t), 
differentiating it to get the slope, V'(t) = 2*pi*f*A * cos(2*pi*f*t), 
and realizing that the peak is found at V'(0), as cos(0) = 1.

The lower the frequency, the lower the slew-rate, and the trigger jitter formula is:

t_n = e_n / S

where e_n is the total noise (V RMS), S is the slew-rate (V/s) and t_n 
is the RMS time noise. Combining them gets you

t_n = e_n / (2*pi*f*A)

You naturally want to keep the amplitude out of the mixer at a maximum.

Anyway, now we clearly see how the gain from the low beat frequency 
turns out to be our enemy in the trigger jitter of the resulting beat-note.
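Plugging hypothetical numbers into t_n = e_n / (2*pi*f*A) makes the penalty concrete (the 100 uV noise and 1 V amplitude below are made-up illustration values):

```python
import math

def trigger_jitter(e_n, f, A):
    """RMS trigger jitter at a sine zero crossing: t_n = e_n / (2*pi*f*A).

    e_n: total input noise (V RMS); f: signal frequency (Hz);
    A: peak amplitude (V). 2*pi*f*A is the peak slew-rate (V/s).
    """
    return e_n / (2.0 * math.pi * f * A)

# Hypothetical 100 uV RMS total noise, 1 V peak amplitude:
for f in (10e6, 5e3, 1.0):   # direct 10 MHz vs 5 kHz beat vs 1 Hz beat
    print(f"f = {f:>10.0f} Hz  ->  t_n ~ {trigger_jitter(100e-6, f, 1.0):.2e} s")
```

With these assumed numbers, going from a direct 10 MHz signal to a 1 Hz beat note inflates the trigger jitter by the same factor the beat note gains you in resolution, which is why the amplifier chain matters so much.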

What you can do is make sure that the amplifier you apply does not have 
a bandwidth higher than needed to support the slew-rate you have, thus 
reducing the e_n part of the formula. State-of-the-art DMTD works by 
providing a chain of low-noise amplifiers (to add as little additional 
noise as possible in each stage), each with just enough bandwidth to 
support that stage's output slew-rate, and then considering the 
beat-frequency, as the amplifiers' noise will now be a combination of 
white noise and flicker (1/f) noise. In order to achieve the gain of the 
DMTD "trick" there is a whole range of issues to attend to, which is why 
this is not widely available off the shelf in generic counters.

Another part of the DMTD trick is that you do this on two channels, 
which to some degree cancels the noise of the offset oscillator; the "to 
some degree" aspect naturally means that the remaining leakage can 
become a measurement limit too.

> What is the transfer clock you're talking about? And by the way, should the offset LO be as stable as the standard reference, meaning greater than the DUT?

The offset oscillator acts as a "transfer clock" in the DMTD setup, as 
its noise mostly cancels when doing DMTD.

The reason it does not fully cancel is that the two channels of the DMTD 
setup will not go through zero at the same time, so the noise integrates 
over different time-periods and thus does not fully cancel between the 
measurements; it would if the noise fully correlated between the channels.

If it cancelled fully, the following simple model of phases would hold:

P_AT = P_A - P_T
P_BT = P_B - P_T
P_AB = P_AT - P_BT = (P_A - P_T) - (P_B - P_T) = P_A - P_T - P_B + P_T = 
P_A - P_B

where P_A and P_B are the input phases, P_T is the transfer oscillator's 
phase, and P_AT and P_BT are the phases of the mixer difference signals 
(assuming no other effects). Doing the time-difference (TD) part of the 
DMTD, we produce the P_AB difference, which, as you see, should be the 
A - B phase difference. The gain factor of the beat note does not show 
in these equations, because it shows up as time when you measure these 
phases at some frequency. The gain being

G_A = F_A / F_AT = F_A / (F_A - F_T)
G_B = F_B / F_BT = F_B / (F_B - F_T)

We assume that F_A and F_B are so close that the two gains are nearly 
the same. They will actually create slightly different beat frequencies, 
so you will span the full range of relative phases. The beat-frequency 
range needs to be large enough to handle the frequency difference, again 
making it harder to use in a general setup, but it works well enough for 
the dedicated time-nuts.
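Using the example numbers from the quoted question (a 10 MHz DUT mixed against a 10.005 MHz offset oscillator), the gain formula above gives:

```python
def dmtd_gain(f_in, f_transfer):
    """Beat-note resolution gain: G = f_in / (f_in - f_transfer)."""
    return f_in / (f_in - f_transfer)

# 10 MHz DUT vs 10.005 MHz transfer oscillator -> 5 kHz beat, gain 2000:
print(abs(dmtd_gain(10e6, 10.005e6)))   # -> 2000.0
```

Every time-interval reading on the 5 kHz beat note thus corresponds to a 2000 times smaller interval on the 10 MHz carrier, which is the resolution "trick", while the trigger-jitter penalty above works against it.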

A more modern way to do things, similar to DMTD but with a different 
approach, was introduced by Sam Stein. It uses the fine low-noise A/D 
converters available today, with their nice sampling capability, with 
FPGAs on the back side to digitally decimate the signal down. This can 
achieve very much lower noise while being a much more generic technique. 
This is what the TimePod is. Correctly used, it can do more nice tricks, 
such as cross-correlation, which allows you to measure below the noise 
level of your reference oscillators. I regularly use two 8600 BVAs as 
references for my TimePod, which is a pretty decent setup.

> Well, it's far too late here to let my brain work anymore. I will perform further experiments tomorrow at the office.

Speaking of which, I should get up and to the office. The joy of a 
morning post. :)

Cheers,
Magnus


More information about the time-nuts mailing list