[time-nuts] theoretical Allan Variance question
magnus at rubidium.dyndns.org
Sun Oct 30 18:43:52 EDT 2016
On 10/30/2016 01:38 AM, Stewart Cobb wrote:
> What's the expected value of ADEV at tau = 1 s for time-interval
> measurements quantized at 1 ns?
> This question can probably be answered from pure theory (by someone more
> mathematical than me), but it arises from a very practical situation. I
> have several HP5334B counters comparing PPS pulses from various devices.
> The HP5334B readout is quantized at 1 ns, and the spec sheet (IIRC) also
> gives the instrument accuracy as 1 ns.
Experience shows that you have a measurement limit typically around the
resolution divided by tau... typically, unless of course you have a
noisier signal.
Proving this limit turns out to be much harder than the first attempt of
using its -1 slope as an indication of the white phase modulation noise
which we assume to be there.
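As a quick sketch (my own illustration, not from any paper): if you model
the 1 ns quantization error as idealized white, uniform phase noise, a
simulation reproduces both the resolution-scale starting point and the -1
slope. The overlapping ADEV estimator and all parameters here are my own
choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def oadev(x, m, tau0=1.0):
    # Overlapping Allan deviation from phase samples x at averaging factor m.
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d * d) / (2.0 * (m * tau0) ** 2))

q = 1e-9                                  # 1 ns counter resolution
# Idealized model: quantization error as white, uniform phase noise.
x = rng.uniform(-q / 2, q / 2, 200_000)   # one phase reading per second
for m in (1, 10, 100):
    print(f"tau = {m:>3d} s  ADEV = {oadev(x, m):.2e}")
# For white PM, theory gives sigma_y(tau) = sqrt(3) * sigma_x / tau with
# sigma_x = q / sqrt(12), i.e. about q / (2 * tau): the -1 slope above.
```

Under this idealized model the tau = 1 s value comes out near 0.5 ns, which
is one of the "not surprising" candidates below; real counters deviate from
it for the reasons discussed further down.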
> The devices under test are relatively stable. Their PPS pulses are all
> within a few microseconds of each other but uncorrelated. They are stable
> enough that the dominant error source on the ADEV plot out to several
> hundred seconds is the 1 ns quantization of the counter. The plots all
> start near 1 ns and follow a -1 slope down to the point where the
> individual device characteristics start to dominate the counter
> quantization error.
Which matches the typical empirical behavior.
> One might expect that the actual ADEV value in this situation would be
> exactly 1 ns at tau = 1 second. Values of 0.5 ns or sqrt(2)/2 ns might not
> be surprising. My actual measured value is about 0.65 ns, which does not
> seem to have an obvious explanation. This brings to mind various questions:
> What is the theoretical ADEV value of a perfect time-interval measurement
> quantized at 1 ns? What's the effect of an imperfect measurement
> (instrument errors)? Can one use this technique in reverse to sort
> instruments by their error contributions, or to tune up an instrument
It turns out to be a bit complicated, as you probably gather.
The exact value depends on several aspects.
Let's start off by assuming that you have a long run, in which case the
confidence interval of the ADEV for tau = 1 s has become tight, that is,
the estimate does not spread much around the infinitely long average.
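To sketch how that tightening works (my own illustration, again assuming an
idealized white, uniform quantization error), compare the spread of
ADEV(1 s) estimates from short and long records:

```python
import numpy as np

rng = np.random.default_rng(3)

def oadev1(x):
    # Overlapping Allan deviation at tau = 1 s from 1 s phase samples.
    d = x[2:] - 2 * x[1:-1] + x[:-2]
    return np.sqrt(np.mean(d * d) / 2.0)

q = 1e-9       # 1 ns quantization, modeled as white uniform phase error
results = {}
for n in (100, 10_000):
    est = [oadev1(rng.uniform(-q / 2, q / 2, n)) for _ in range(500)]
    results[n] = (np.mean(est), np.std(est))
    print(f"N = {n:>6d} s record: mean ADEV(1 s) = {results[n][0]:.3e}, "
          f"spread (std) = {results[n][1]:.1e}")
```

The mean lands in the same place either way; only the scatter of the
individual estimates shrinks with record length, which is what "tight
confidence interval" means here.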
The resolution of a counter and its behavior for different phase
relationships may not be perfect, so that can create some noise, but
let's assume there is no such effect.
The interpolator may also have biases and variations due to interaction
with another signal; let's rule that out too.
What remains is the interpolator steps and noise. The average value and
the variance of the readout depend on the quantization step size and the
noise... in an interesting fashion. Rather than giving the constant value
you assume, they interact so that your expectation value for ADEV varies
with several parameters. I've so far not seen a paper that nails this
properly. I have ideas for a paper, but nothing ready yet.
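A toy simulation shows the interaction (my own sketch, with made-up offsets
and noise levels): quantize a constant phase offset plus Gaussian noise and
see where the ADEV floor lands.

```python
import numpy as np

rng = np.random.default_rng(7)

def oadev1(x):
    # Overlapping Allan deviation at tau = 1 s from 1 s phase samples.
    d = x[2:] - 2 * x[1:-1] + x[:-2]
    return np.sqrt(np.mean(d * d) / 2.0)

q = 1e-9          # 1 ns quantization step
n = 200_000
# (offset from a quantization boundary, RMS of added Gaussian noise)
cases = [(0.0, 0.05e-9), (0.5e-9, 0.05e-9), (0.0, 0.3e-9), (0.0, 2.0e-9)]
floors = []
for offset, sigma in cases:
    x = np.round((offset + rng.normal(0.0, sigma, n)) / q) * q
    floors.append(oadev1(x))
    print(f"offset = {offset * 1e9:4.1f} ns, noise = {sigma * 1e9:4.2f} ns rms"
          f" -> ADEV(1 s) = {floors[-1]:.2e}")
# Mid-cell with little noise the readout freezes (ADEV collapses); on a
# boundary it toggles between two codes (ADEV well above q/2); heavy noise
# dithers the quantizer but then dominates the floor itself.
```

So the same 1 ns step can produce anything from essentially zero up to
almost 0.9 ns at tau = 1 s depending on where the phase sits and how much
noise dithers the quantizer, which is why a single measured value like
0.65 ns has no simple closed-form explanation.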
So, for the moment, the rule of thumb is roughly what you can expect the
counter to do. Shifting this line downward is what motivates you to buy
better counters or to do tricks like DMTD.