[time-nuts] Zero dead time and average frequency estimation

Magnus Danielson magnus at rubidium.dyndns.org
Mon Feb 1 09:42:17 UTC 2010


Tom Van Baak wrote:
> Magnus,
> Correct, all the terms cancel between the end points. Note
> that this is exactly equivalent to the way a traditional gated
> frequency counter works -- you open the gate, wait some
> sample period (maybe 1, 10, or 100 seconds) and then
> close the gate. In this scenario it's clear that all the phase
> information during the interval is ignored; the only points
> that matter are the start and the stop.

There is a technical merit to taking samples in between even if they 
cancel: you avoid counter overflow. But you can do better, much better.

> Modern high-resolution frequency counters don't do this;
> and instead they use a form of "continuous counting" and
> take a massive number of short phase samples and create
> a more precise average frequency out of that.

Yes, and the main point of starting this little thread is to make 
people aware that how you process your data does make a difference. In 
fact, it can make a huge difference. The two-point frequency 
calculation uses only two sample points to estimate the frequency, and 
thus relies on the systematic value and noise of the last sample to 
cancel those of the first sample. For 1/f power-law noises this effect 
can even grow with time (for 1/f^3 noise) since it is non-convergent, 
so by looking at it briefly you don't realize how noisy the number is.
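As a rough illustration (simulated data, all values assumed), the two-point estimate can be seen to depend only on the endpoints:

```python
import random

# Illustrative sketch with assumed values: simulate N phase samples x(i)
# carrying a constant fractional frequency offset plus white phase noise,
# then form the classic two-point frequency estimate.
random.seed(1)
tau0 = 1.0        # sampling interval in seconds (assumed)
N = 1000
y_true = 1e-9     # fractional frequency offset (assumed)
x = [y_true * i * tau0 + random.gauss(0.0, 1e-10) for i in range(N)]

# Only x(0) and x(N-1) enter the result; every phase sample in between
# is ignored, so its noise cannot be averaged down.
y_two_point = (x[-1] - x[0]) / ((N - 1) * tau0)
print(y_two_point)
```

The noise of the two endpoint samples propagates straight into the estimate.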

> There are some excellent papers on the subject; start with
> the one by Rubiola:
> < 
> http://www.femto-st.fr/~rubiola/pdf-articles/journal/2005rsi-hi-res-freq-counters.pdf 
>  >
> There are additional papers (perhaps Bruce can locate them).

In particular, there is one paper that corrects some mistakes in 
Rubiola's; Australian, if I remember correctly.

> I wonder if fully overlapped frequency calculations would be
> one solution to your query; similar to the advantage that the
> overlapping ADEV sometimes has over back-to-back ADEV.

A very simple extension leads to that. Consider the frequency estimate of

         x(i+m) - x(i)
y(i,m) = -------------
             m*tau0
And averaging over those:

        1  N-m-1
y(m) = ---  SUM  y(i,m)
       N-m  i=0

The maximum m is of course N/2, which uses all 2*m = N samples, m in 
each half. The N/2 overlapping frequency estimates then average out to 
a simple form: take the mean of the first half of the phase samples and 
the mean of the second half, subtract the first from the second, and 
divide by m*tau0.
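A quick numerical check of that equivalence (names and toy data are illustrative):

```python
# Illustrative sketch: with m = N/2, the mean of the N-m overlapping
# frequency estimates y(i,m) equals the difference of the two half-means
# of the phase data divided by m*tau0.
tau0 = 1.0                                                   # assumed interval
x = [0.1 * i + 0.01 * ((i * 7919) % 13) for i in range(8)]   # toy phase data
N = len(x)
m = N // 2

# mean of the overlapping estimates y(i,m) = (x(i+m) - x(i)) / (m*tau0)
y_overlap = sum((x[i + m] - x[i]) / (m * tau0) for i in range(N - m)) / (N - m)

# equivalent form: (mean of second half - mean of first half) / (m*tau0)
y_halves = (sum(x[m:]) / m - sum(x[:m]) / m) / (m * tau0)

print(y_overlap, y_halves)  # the two forms agree
```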

> Related to that, I recently looked into the side-effects of using
> running averages on phase or frequency data, specifically
> what it does to a frequency stability plot (ADEV). See:
> http://www.leapsecond.com/pages/adev-avg/
> Not surprising, you get artificially low ADEV numbers when
> you average in this way; the reason is that running averages,
> by design, tend to smooth out (low pass filter) the raw data.

Indeed. It creates a bias in the measurement that needs to be removed 
for the result to be valid.
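A small simulation sketch of that smoothing effect (simulated white-FM data, a simplified non-overlapping ADEV at tau0; all names and values are assumptions):

```python
import random

# Illustrative sketch: running-average smoothing of frequency data biases
# the Allan deviation low, because it low-pass filters the raw data.
random.seed(2)
y = [random.gauss(0.0, 1.0) for _ in range(10000)]  # white FM, arbitrary units

def adev_tau0(y):
    # simplified non-overlapping Allan deviation at tau = tau0
    d = [(y[k + 1] - y[k]) ** 2 for k in range(len(y) - 1)]
    return (sum(d) / (2 * len(d))) ** 0.5

def running_avg(y, n):
    # n-point running average (the smoothing discussed above)
    return [sum(y[i:i + n]) / n for i in range(len(y) - n + 1)]

raw = adev_tau0(y)
smoothed = adev_tau0(running_avg(y, 5))
print(raw, smoothed)  # the smoothed ADEV comes out markedly lower
```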

> One thing you can play with is computing average frequency
> using the technique that MDEV uses.

Which is inspired by an article by Snyder that details how to perform 
one such overlapping estimation in the counter core for better 
convergence on noisy data. MDEV is a different basic measurement from 
ADEV, so one needs to be careful not to mix results freely.
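For reference, a minimal sketch of the standard MDEV estimator; the phase averaging before the second difference is what distinguishes it from ADEV (function and variable names are assumptions):

```python
# Illustrative sketch of the modified Allan deviation at tau = n*tau0,
# following the standard textbook definition; x is a list of phase samples.
def mdev(x, n, tau0):
    N = len(x)
    tau = n * tau0
    terms = []
    for j in range(N - 3 * n + 1):
        # average n adjacent second differences of the phase data
        s = sum(x[i + 2 * n] - 2 * x[i + n] + x[i] for i in range(j, j + n))
        terms.append(s * s)
    return (sum(terms) / (2 * tau ** 2 * n ** 2 * len(terms))) ** 0.5

# a pure frequency offset (linear phase ramp) gives zero MDEV
x = [0.5 * i for i in range(40)]
print(mdev(x, 3, 1.0))  # -> 0.0
```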

