[time-nuts] ADEV drop

WarrenS warrensjmail-one at yahoo.com
Sun Aug 12 03:02:30 UTC 2012


The basic problem is that one cannot simultaneously satisfy Allan's requirement
that each frequency estimate be the integral of the instantaneous frequency
over the tau0 interval and the Nyquist-Shannon sampling theorem requirement
when only one raw phase sample is taken per displayed ADEV tau0.
Under that constraint the two requirements are mutually exclusive.
This is especially true when that sample comes from a
DTMD zero-crossing detector.

The way I get ADEV tau answers that do not droop at all near tau0,
and that are independent of the displayed tau0, the oversample rate
and the NEQ.BW filter (if BW > 2/tau0), is by oversampling the raw data
and then reducing it in an appropriate way before saving it as tau0 data.
This works without having to throw away the low-tau answers or save
data files with more than one sample per tau0.
With high-speed oversampling it is also very simple to avoid
any aliasing problems.
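
To make the picking-vs-averaging point concrete, here is a rough numerical
sketch (my own toy model, not the actual TPLL code): white PM phase noise with
a fixed PSD is generated at two different oversample rates, then reduced to
tau0 = 1 s either by keeping one raw sample per tau0 or by block-averaging each
tau0 bin, and the overlapping ADEV is computed from both. The PSD level, the
sample rates and the plain block average are assumptions for illustration only.

import numpy as np

def overlapping_adev(x, tau0, m):
    # Overlapping Allan deviation from phase samples x (seconds),
    # spaced tau0 apart, at averaging factor m (tau = m*tau0).
    x = np.asarray(x, dtype=float)
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]      # second phase differences
    return np.sqrt(np.mean(d ** 2) / (2.0 * (m * tau0) ** 2))

def simulate(fs, tau0=1.0, n_tau0=4000, Sx=1e-24, seed=1):
    # White PM with a flat phase PSD Sx (s^2/Hz) out to fs/2, sampled at fs.
    rng = np.random.default_rng(seed)
    r = int(round(fs * tau0))                       # raw samples per tau0 bin
    sigma = np.sqrt(Sx * fs / 2.0)                  # RMS phase for that noise bandwidth
    x_raw = sigma * rng.standard_normal(n_tau0 * r)
    x_pick = x_raw[::r]                             # one raw sample per tau0: aliases
    x_avg = x_raw.reshape(n_tau0, r).mean(axis=1)   # reduce by averaging each bin first
    return x_pick, x_avg

for fs in (100.0, 1000.0):                          # two different oversample rates, Hz
    x_pick, x_avg = simulate(fs)
    print("fs = %6.0f Hz  ADEV(1 s)  picked: %.2e  averaged: %.2e"
          % (fs, overlapping_adev(x_pick, 1.0, 1), overlapping_adev(x_avg, 1.0, 1)))

The picked column changes with the oversample rate because all of the wide-band
noise folds into the one sample per tau0, while the averaged column stays put;
that is the aliasing problem, and the reason for reducing the oversampled data
in an appropriate way before saving it as tau0 data.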

Using an external DC-coupled sound card oversampling at 48 kHz for any tau0,
both the TPLL2.0 and the XOR-LPD give non-drooping tau0 answers that are not
a function of the oversample rate or the tau0 reduction rate.
That is, you get the same ADEV tau = 1 s answer whether the tau0 rate is
1 kHz, 1 Hz or anywhere in between.

ws

************************

Fellow time-nuts,

Since David insisted that I get and read the ITU Handbook "Selection
and Use of Precise Frequency and Time Systems" (1997), and in particular
Chapter 3, I took the time to get it and start reading it. There I
found clause 3.3.2.4.4, Truncation effects, which addresses this issue
and also lines up with my own writing on Allan deviation and the
measurement bandwidth limit (I will have to update that one).

The key point is that the main lobe of the kernel function (the
sin(pi*tau*f)^4/(pi*tau*f)^2 shape in the frequency-domain form of the
Allan variance) is affected by the system bandwidth, so the values will
not match the brick-wall analysis of the traditional system. The result
is that the ADEV measure comes out lower than it should be. This
situation was analysed by Bernier in 1987 as part of analysing the
modified Allan deviation, which has a "software bandwidth filter" in
the form of the n*tau_0 averaging filter.

So the first few low-n values are actually expected to give systematically
low values, which is why the ITU-T puts minimum requirements on the ratio
between tau_0 and the lowest tau, to ensure that repeatability is achieved.

This is also the same effect that Sam Stein mentioned in his
presentation during this year's NIST seminars. Sam also went on to
discuss the effect of aliasing, which brings even more false
values into that region.

Conclusion: just don't look all that hard at the lowest tau values, as
they can be systematically off. Make sure that you have a tau_0 well
below the taus you are interested in, so that your values are
reasonably valid.

Cheers,
Magnus



