[time-nuts] Allan variance by sine-wave fitting
magnus at rubidium.dyndns.org
Mon Nov 27 11:02:22 EST 2017
On 11/27/2017 04:05 PM, Bob kb8tq wrote:
>> On Nov 27, 2017, at 12:33 AM, Ralph Devoe <rgdevoe at gmail.com> wrote:
>> Here's a short reply to the comments of Bob, Attila, Magnus, and others.
>> Thanks for reading the paper carefully. I appreciate it. Some of the
>> comments are quite interesting, others seem off the mark. Let's start with
>> an interesting one:
>> The issue I intended to raise, but which I'm not sure I stated clearly
>> enough, is a conjecture: Is least-square fitting as efficient as any of the
>> other direct-digital or SDR techniques? Is the resolution of any
>> direct-digital system limited by (a) the effective number of bits of the
>> ADC and (b) the number of samples averaged? Thanks to Attila for reminding
>> me of the Sherman and Joerdens paper, which I had not read carefully
>> before. In their appendix Eq. A6 they derive a result which may or may not
>> be related to Eq. 6 in my paper. If the conjecture is true then the SDR
>> technique must be viewed as one of several equivalent algorithms for
>> estimating phase. Note that the time deviation for a single ADC channel in
>> the Sherman and Joerdens paper in Fig. 3c is about the same as my value.
>> This suggests that the conjecture is true.
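As a concrete illustration of the estimator under discussion (a sketch, not Devoe's actual code; the tone frequency, sample rate, and record length below are arbitrary choices), least-squares fitting of a sine of known frequency reduces, over an integer number of cycles, to the familiar I/Q correlation used in SDR phase estimation:

```python
import math

def fit_phase(y, f, fs):
    """Least-squares phase of a sine of known frequency f sampled at fs.
    Over an integer number of cycles the sin/cos basis is orthogonal, so
    the normal equations reduce to two correlations (the SDR I/Q product)."""
    w = 2.0 * math.pi * f / fs
    i = sum(v * math.sin(w * k) for k, v in enumerate(y))
    q = sum(v * math.cos(w * k) for k, v in enumerate(y))
    return math.atan2(q, i)

# 100 cycles of a 1 kHz tone at 100 kS/s with a known 0.3 rad phase
fs, f, phase = 100e3, 1e3, 0.3
y = [math.sin(2.0 * math.pi * f * k / fs + phase) for k in range(10000)]
est = fit_phase(y, f, fs)
```

In this noise-free case the fit recovers the input phase to numerical precision, which is exactly the sense in which the least-squares fit and the direct I/Q (SDR) approach are the same algorithm.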
>> Other criticisms seem off the mark:
>> Several people raised the question of the filter factor of the least-square
>> fit. First, if there is a filtering bias due to the fit, it would be the
>> same for signal and reference channels and should cancel.
> Errr … no.
> There are earlier posts about this on the list. The *objective* of ADEV is to capture
> noise. Any filtering process rejects noise. That is true in DMTD and all the other approaches.
> Presentations made in papers since the 1970’s demonstrate that it very much does
> not cancel out or drop out. It impacts the number you get for ADEV. You have thrown away
> part of what you set out to measure.
It's obvious already in David Allan's 1966 paper.
It's been verified and "re-discovered" a number of times.
You should re-read what I wrote, as it gives you the basic hints you
should be listening to.
> Yes, ADEV is a bit fussy in this regard. Many of the other “DEV” measurements are also
> fussy. This is at the heart of why many counters (when they estimate frequency) can not
> be used directly for ADEV. Any technique that is proposed for ADEV needs to be analyzed.
For me it's not fuzzy; or rather, what I know about these measures and their coloring is one thing, and what I consider fuzzy is the material I haven't published articles on yet.
> The point here is not that filtering makes the measurement invalid. The point is that the
> filter’s impact needs to be evaluated and stated. That is the key part of the proposed technique
> that is missing at this point.
The traditional analysis is that the bandwidth derives from the Nyquist frequency of the sampling, as David put it in his own words when I discussed it with him last year: "We had to, since that was the counters we had".
Staffan Johansson of Philips/Fluke/Pendulum wrote a paper on using linear regression, which is just another name for least-squares fitting, for frequency estimation, and on its use in ADEV measurements.
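The linear-regression frequency estimate itself is only a few lines (a generic least-squares slope sketch, not the implementation from Johansson's paper; the sample values are illustrative):

```python
def lsq_freq(x, tau0):
    """Least-squares (linear-regression) slope of time-error samples
    x[k] taken at interval tau0. The slope of the straight-line fit
    x[k] ~ a + y0*(k*tau0) is the fractional frequency offset y0."""
    n = len(x)
    t = [k * tau0 for k in range(n)]
    tbar = sum(t) / n
    xbar = sum(x) / n
    num = sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, x))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den

# a pure 1e-9 frequency offset sampled at tau0 = 0.1 s is recovered exactly
y0 = lsq_freq([1e-9 * k * 0.1 for k in range(100)], 0.1)
```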
Now, Prof. Enrico Rubiola realized that something was fishy, and it indeed is: the fixed-tau pre-filtering that linear regression / least squares achieves colors the low-tau measurements, but not the high-tau measurements. This is because the frequency sensitivity of high-tau ADEV falls so completely within the passband of the pre-filter that the filter does not matter there, but for low tau the pre-filtering dominates and produces lower values than it should, a biasing effect.
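A toy numeric experiment (my own construction, not Rubiola's analysis; a moving average stands in for the fit's fixed pre-filter, and the window length, noise level, and taus are arbitrary) shows exactly this: the pre-filter drags the tau0 ADEV well below the unfiltered value, while the large-tau values are essentially untouched:

```python
import math
import random

def adev(x, tau0, m):
    """Overlapping Allan deviation at tau = m*tau0 from phase data x."""
    d = [x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(len(x) - 2 * m)]
    return math.sqrt(sum(v * v for v in d) / (2.0 * (m * tau0) ** 2 * len(d)))

random.seed(1)
# white FM noise: the phase x is a random walk
x = [0.0]
for _ in range(20000):
    x.append(x[-1] + random.gauss(0.0, 1e-9))

# fixed-length moving average standing in for the fixed-tau pre-filter
w = 8
xf = [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

ratio_lo = adev(xf, 1.0, 1) / adev(x, 1.0, 1)      # low tau: biased well below 1
ratio_hi = adev(xf, 1.0, 256) / adev(x, 1.0, 256)  # high tau: close to 1
```

For this white-FM case the low-tau ratio lands near 1/w while the high-tau ratio stays near unity, which is the biasing effect being described.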
He also realized that the dynamic filter of MDEV, where the filter changes with tau, would be interesting, and that is how he came to propose the parabolic deviation, PDEV.
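For reference, here is a minimal sketch of that tau-scaled filtering in the textbook modified Allan deviation (MDEV itself, not Rubiola's PDEV; the drift example is illustrative):

```python
import math

def mdev(x, tau0, n):
    """Modified Allan deviation at tau = n*tau0 from phase data x.
    Unlike ADEV, the phase is first averaged over n samples, so the
    effective filter bandwidth scales with tau."""
    N = len(x)
    terms = []
    for j in range(N - 3 * n + 1):
        # inner sum: n-point average of second differences at lag n
        s = sum(x[i + 2 * n] - 2 * x[i + n] + x[i] for i in range(j, j + n))
        terms.append(s * s)
    # Mod sigma_y^2(tau) = sum / (2 * n^4 * tau0^2 * count), tau = n*tau0
    return math.sqrt(sum(terms) / (2.0 * (n * n * tau0) ** 2 * len(terms)))

# a pure frequency offset (linear phase ramp) has zero MDEV
drift = [1e-9 * k for k in range(200)]
md = mdev(drift, 1.0, 4)
```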
Now, the old wisdom is that you need to publish the bandwidth of the pre-filtering of the channel, or else the noise estimation will not be comparable. Look at the Allan deviation Wikipedia article for a first discussion of bias functions; they are all aspects of biasing of various forms of measurement.
The lesson to be learned here is that there are a number of different ways you can bias your measurements such that your ADEV values are no longer "valid" compared to correctly performed ADEV, and thus the ability to compare them to judge noise levels and goodness values is lost.
I know it is a bit much to take in at first, but trust me that this is important stuff. So be careful about wielding "off the mark"; this is the stuff you need to be careful about, which we kindly try to advise you on, and you should take the lesson when it's free.
>> Second, even if
>> there is a bias, it would have to fluctuate from second to second to cause
>> a frequency error. Third, the Monte Carlo results show no bias. The output
>> of the Monte Carlo system is the difference between the fit result and the
>> known MC input. Any fitting bias would show up in the difference, but there
>> is none.
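The Monte Carlo check described above can be sketched generically (a simplified stand-in, not the paper's actual pipeline; the noise level, record length, and trial count are arbitrary): fit noisy sines of known phase and inspect the mean of fit-minus-truth, which should be consistent with zero if the fit is unbiased:

```python
import math
import random

def fit_phase(y, f, fs):
    # least-squares phase via I/Q correlation (orthogonal basis,
    # integer number of cycles in the record)
    w = 2.0 * math.pi * f / fs
    i = sum(v * math.sin(w * k) for k, v in enumerate(y))
    q = sum(v * math.cos(w * k) for k, v in enumerate(y))
    return math.atan2(q, i)

random.seed(2)
fs, f, n = 100e3, 1e3, 1000            # 10 cycles per record
errors = []
for _ in range(100):
    truth = random.uniform(-1.0, 1.0)  # known MC input phase
    y = [math.sin(2.0 * math.pi * f * k / fs + truth) + random.gauss(0.0, 0.01)
         for k in range(n)]
    errors.append(fit_phase(y, f, fs) - truth)   # fit result minus MC input
mean_err = sum(errors) / len(errors)
```

Note that a near-zero mean only rules out a *static* fit bias; it says nothing about the spectral (filtering) effect discussed elsewhere in the thread.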
>> Attila says that I exaggerate the difficulty of programming an FPGA. Not
>> so. At work we give experts 1-6 months for a new FPGA design. We recently
>> ported some code from a Spartan 3 to a Spartan 6. Months of debugging
>> followed. FPGAs will always be faster and more computationally efficient
>> than Python, but Python is fast enough. The motivation for this experiment
>> was to use a high-level language (Python) and preexisting firmware and
>> software (Digilent) so that the device could be set up and reconfigured
>> easily, leaving more time to think about the important issues.
>> Attila has about a dozen criticisms of the theory section, mostly that it
>> is not rigorous enough and there are many assumptions. But it is not
>> intended to be rigorous. This is primarily an experimental paper and the
>> purpose of the theory is to give a simple physical picture of the
>> surprisingly good results. It does that, and the experimental results
>> support the conjecture above.
>> The limitations of the theory are discussed in detail on p. 6 where it is
>> called "...a convenient approximation...". Despite this, the theory agrees
>> with the Monte Carlo over most of parameter space, and where it does not is
>> discussed in the text.
>> About units: I'm a physicist and normally use c.g.s units for
>> electromagnetic calculations. The paper was submitted to Rev. Sci. Instr.
>> which is an APS journal. The APS has no restrictions on units at all.
>> Obviously for clarity I should put them in SI units when possible.
>> time-nuts mailing list -- time-nuts at febo.com
>> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
>> and follow the instructions there.