[time-nuts] Re: frequency stability question

Magnus Danielson magnus at rubidium.dyndns.org
Tue Aug 16 05:03:30 UTC 2011


On 15/08/11 17:07, Ulrich Bangert wrote:
> Jim,
>
> pardon to correct you but
>
>> It's if you measured the frequency (instantaneously) at one second
>> intervals, and calculated the standard deviation, that would
>> be the ADEV for tau=1 second.
>
> is simply wrong in at least two respects.
>
> First: The measurements need to be the AVERAGE frequency measured over a time interval of one second, which is most easily done by setting the counter's gate time to one second.
>
> Note that the misunderstanding about the difference between an instantaneous measurement of frequency (which is possible too, but not with counter based methods) and an averaging measurement of frequency over a given time interval was one of the sources of the upsetting discussion between Bruce and WarrenS in this group about the tight PLL method. For that reason I think we need to be very careful not to use terms like "instantaneous" in the wrong sense.
>
> Second: It may be that you wanted to explain things as simply as possible, but I hope it is clear to you that Allan's fame rests on the fact that he found out that the standard deviation is exactly the WRONG tool to use in this case, and that he had to formulate a new kind of deviation that today is named after him.

Allan provided both an analysis method that handled the non-white noise 
and the analysis needed to prove that the M-sample variance blows up.
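
To make that concrete, here is a small numerical sketch (my own rough
illustration, not anyone's reference code, using simulated random-walk FM
as the simplest divergent case; flicker FM behaves similarly but diverges
much more slowly). Each y[i] plays the role of an averaged fractional
frequency over one second; the classical standard deviation keeps growing
with the number of samples, while the 2-sample (Allan) deviation computed
from the very same data does not:

import numpy as np

rng = np.random.default_rng(1)

def allan_dev(y):
    # Non-overlapping 2-sample (Allan) deviation at the basic tau,
    # computed from adjacent averaged fractional-frequency readings y[i].
    d = np.diff(y)
    return np.sqrt(0.5 * np.mean(d * d))

# Random-walk FM: each 1 s frequency average takes a random step.
y = np.cumsum(rng.standard_normal(100_000))

for n in (100, 1_000, 10_000, 100_000):
    print(f"N = {n:6d}  std dev = {np.std(y[:n], ddof=1):8.1f}"
          f"  ADEV = {allan_dev(y[:n]):5.2f}")

# The standard deviation grows roughly as sqrt(N); the Allan deviation
# stays near sqrt(0.5) no matter how many samples are used.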

Actually, they had been beating around the same bush for a couple of 
years, and several researchers were considering the same concept. 
However, through the theoretical analysis of bias functions, the 
equivalence between variances taken over different numbers of samples 
was established, and it proved easy to compare a 10-sample variance with 
a 7-sample variance by converting through the 2-sample variance. Thus it 
became easier to measure directly in terms of the 2-sample variance, now 
that both the 7-sample and 10-sample variances were just a bias function 
away from it. The bias function was suddenly known, and for some noise 
types the bias kept growing as you went to 1000 samples, 10000 samples 
and so on towards infinity... so the 2-sample variance was indeed the 
more useful one. That is how it came to be called the Allan variance by 
fellow researchers.
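
As a sketch of what such a conversion looks like in the no-dead-time case
(r = T/tau = 1), here is the B1 bias function as I recall it from the NBS
bias-function tables; please check it against the published tables before
relying on it:

import math

def b1(N, mu):
    # B1(N, r=1, mu): ratio of the N-sample variance to the 2-sample
    # (Allan) variance with no dead time. mu is the exponent in
    # sigma_y^2(tau) ~ tau^mu: -2 white/flicker PM, -1 white FM,
    # 0 flicker FM, +1 random walk FM.
    # NOTE: written down from memory, verify before trusting.
    if mu == 0:  # flicker FM, the limit of the general expression
        return N * math.log(N) / (2.0 * (N - 1) * math.log(2.0))
    return N * (1.0 - N ** mu) / (2.0 * (N - 1) * (1.0 - 2.0 ** mu))

print(b1(10, -1))                           # white FM: exactly 1, no bias
print(b1(7, 0), b1(10, 0), b1(10_000, 0))   # flicker FM: bias grows with N

# Converting a measured 10-sample variance to the equivalent 2-sample
# (Allan) variance then just divides out the bias, e.g. for flicker FM:
var_10sample = 1.234e-22                    # hypothetical measured value
var_2sample = var_10sample / b1(10, 0)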

He also looked at the bias function for measurements with dead-time, 
another obstacle of its day. That gave another bias function which 
allows for comparable measures. By including that he forged together 
one ring to control them all... it was indeed a breakthrough in 
comparable measures and in understanding how they interacted.

The statistical bias functions allow conversion between different sample 
counts and different taus by providing multiplying factors to convert 
from one measure to the other. Bias functions can also be applied to the 
measurement period T versus the measurement time tau, where the 
difference T - tau is the dead-time. However, using bias corrections 
often requires that the dominant noise type (at that tau) is known, so 
it needs to be identified. While there are algorithms to assist with 
that, using the autocorrelation properties (a crude sketch follows 
below), it takes a bit of processing. It has become easy to avoid 
dead-time, and we have all settled on the 2-sample variance, so it is 
mostly the tau bias that we care about today. In this sense bias 
functions have become less important, but they aid in understanding the 
behaviours, so they are still useful to learn about.
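
For the noise identification step, the basic ingredient is the lag-1
autocorrelation of the averaged frequency data. The following is only my
own crude discriminator to show the idea, not the full published lag-1
identification algorithm (which adds differencing and a mapping to the
power-law exponent), and the thresholds are merely indicative:

import numpy as np

def lag1_autocorr(y):
    # Lag-1 autocorrelation of averaged fractional-frequency data y[i].
    z = y - np.mean(y)
    return float(np.sum(z[:-1] * z[1:]) / np.sum(z * z))

def rough_noise_guess(y):
    # Rough guess: white PM seen through frequency averages gives r1 near
    # -0.5, white FM gives r1 near 0, and the more divergent FM noises
    # (flicker FM, random walk FM) give increasingly positive r1.
    r1 = lag1_autocorr(y)
    if r1 < -0.25:
        return r1, "white (or flicker) PM"
    if r1 < 0.25:
        return r1, "white FM"
    return r1, "flicker FM / random walk FM"

# Example: white FM should land near r1 = 0.
rng = np.random.default_rng(7)
print(rough_noise_guess(rng.standard_normal(10_000)))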

Let's just say that spending time reading up on this topic has been 
quite rewarding: for understanding the topic itself, the efforts that 
went into it, and just how far we have come.

Cheers,
Magnus


