[time-nuts] Modified Allan Deviation and counter averaging

Bob Camp kb8tq at n1k.org
Sat Aug 1 07:53:50 EDT 2015


If, on the same graph, you plotted the “low pass filter” response of your sample / average
process, it would show how much / how little impact there likely is. It’s no different from
a standard circuit analysis: the old “poles at 10X frequency don’t count” rule. No measurement
we ever make is 100% perfect, so a small impact does not immediately rule out an approach.
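To put a number on that circuit analogy, here is a minimal sketch (my assumption of a simple single-pole lowpass, not anything specified above) showing why a pole at 10X the frequency of interest “doesn’t count”: the attenuation there is only about half a percent.

```python
import math

def single_pole_magnitude(f, f_pole):
    """Magnitude response |H(f)| of a single-pole lowpass filter."""
    return 1.0 / math.sqrt(1.0 + (f / f_pole) ** 2)

# Signal at 1/10 of the pole frequency: barely touched.
mag = single_pole_magnitude(1.0, 10.0)
print(f"|H| = {mag:.4f}  ({20 * math.log10(mag):.2f} dB)")
```

That works out to |H| ≈ 0.995, about -0.04 dB, which is the sense in which the pole “doesn’t count.”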

Your measurement gets better by some number related to the number of samples. It might be
the square root of N, or it could be something else. If it’s sqrt(N), a 100-sample burst is getting you an
order of magnitude better number when you sample. You could go another 10X with 10K samples.
A very real question comes up about “better” in this case. It probably does not improve accuracy,
resolution, repeatability, and noise floor all to the same degree. At some point it improves some
of those while making your MADEV measurement less accurate.
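To make the sqrt(N) case concrete, here is a minimal sketch (plain Python, hypothetical unit-variance white noise; real counter noise need not be white, which is exactly the “it could be something else” caveat) showing a 100-sample burst tightening the spread of the averaged reading by roughly 10X:

```python
import math
import random
import statistics

random.seed(42)
sigma = 1.0     # single-shot noise, arbitrary units (assumed white / Gaussian)
N = 100         # samples per burst
trials = 10000  # number of simulated bursts

# Average each burst, then look at the spread of the burst averages.
burst_means = [
    statistics.fmean(random.gauss(0.0, sigma) for _ in range(N))
    for _ in range(trials)
]
improvement = sigma / statistics.stdev(burst_means)
print(f"improvement with N={N}: {improvement:.2f}x (sqrt(N) = {math.sqrt(N):.0f}x)")
```

If the noise is not white (flicker, drift), the improvement flattens out well short of sqrt(N), which is one reason “better” is a slippery word here.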


Because we strive for perfection in our measurements, *anything* that impacts their accuracy is suspect. 
A very closely related (and classic) example is lowpass filtering in front of an ADEV measurement.
People have questioned doing this back at least into the early 1970s. There may have been earlier questions;
if so I was not there to hear them. It took about 20 years to come up with a “blessed” filtering approach 
for ADEV. It still is suspect to some because it (obviously) changes the ADEV plot you get at the shortest tau. 
That kind of decades long debate makes getting a conclusive answer to a question like this unlikely. 


The approach you are using is still a discrete time sampling approach. As such it does not directly violate
the data requirements for ADEV or MADEV. As long as the sample burst is much shorter than the Tau you
are after, this will be true. If the samples cover < 1% of the Tau, it is very hard to demonstrate a noise
spectrum that this process messes up. Put in the context of the circuit pole, you now are at 100X the design
frequency. At that point it’s *way* less of a filter than the sort of vaguely documented ADEV pre-filtering
that was going on for years and years ….. (names withheld to protect the guilty …)
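The “< 1% of Tau” criterion is easy to check explicitly. A minimal sketch, with hypothetical numbers (a 100-sample burst at 10 µs per sample against a 1 s tau; your counter’s actual burst timing is what matters):

```python
def burst_fraction(n_samples, sample_period, tau):
    """Fraction of the measurement tau covered by the sample burst."""
    return (n_samples * sample_period) / tau

# Hypothetical: 100 samples, 10 us apart, targeting tau = 1 s.
frac = burst_fraction(100, 10e-6, 1.0)
print(f"burst covers {frac:.2%} of tau")

# Well inside the "covers < 1% of tau" guideline.
assert frac < 0.01
```

At 0.1% of tau the burst is the equivalent of sitting at 1000X past the “design frequency” — even further into don’t-care territory than the 100X case above.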

Is this in a back door way saying that these numbers probably are (at best) 1% of reading sorts of data?
Yes indeed, that’s an implicit part of my argument. If you have devices that repeat to three digits on multiple
runs, this may not be the approach you would want to use. In 40 years of doing untold thousands of these
measurements, I have yet to see devices (as opposed to instrument / measurement floors) that repeat to
under 1% of reading.

> On Jul 31, 2015, at 5:04 PM, Poul-Henning Kamp <phk at phk.freebsd.dk> wrote:
> --------
>> If you look at the attached plot there are four datasets.
> And of course...
> Here it is:
> -- 
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk at FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe    
> Never attribute to malice what can adequately be explained by incompetence.
> <allan.png>_______________________________________________
> time-nuts mailing list -- time-nuts at febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.
