[time-nuts] Notes on tight-PLL performance versus TSC 5120A
warrensjmail-one at yahoo.com
Fri Jun 4 00:06:39 UTC 2010
His message helped to re-focus the discussion and provided a good summary of a major basic issue.
He did leave out a small but subtle point that I have added back in, because it is really the heart of my whole argument: the benefits of "oversampling".
>1. How close can an LPF implementation come to integration in ADEV calculations, USING OVERSAMPLING?
>2. How close to true ADEV is "good enough"?
How close is close enough will depend on the situation, and may range anywhere from 1% to 2:1.
If one uses 3%, that is about the smallest difference one would notice between two plots.
Now, maybe just stating the example will shorten this up for some; others will want (and deserve) to see a more formal mathematical solution, which I must leave for others to do.
The tester can use any reasonable ratio of oversample rate to tau0, so for simple round numbers let's use a 1 kHz oversample rate and a tau0 of 1 sec.
We also need to define the LPF bandwidth, which for a tau0 of 1 sec should be >= 1 Hz.
To be more complete, assume a PLL BW of 1 kHz.
We need to set a sample length, say >= 100 sec for a 1 sec tau0
(that is, 100k frequency samples that will be averaged down to 100 data points before the ADEV calculation, using standard simple averaging: each point is the sum of 1000 samples divided by 1000).
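The averaging step described above can be sketched numerically. The sketch below is illustrative only (the noise source, its level, and the use of NumPy are my assumptions, not part of the original example): it averages 100k simulated 1 kHz frequency readings down to 100 one-second data points, then computes a non-overlapping ADEV at tau0 = 1 s.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000          # oversample rate, Hz (per the example)
tau0 = 1.0         # base tau, seconds
run = 100          # run length, seconds
n = int(fs * run)  # 100 000 raw frequency samples

# Hypothetical raw fractional-frequency readings: white FM noise,
# level chosen arbitrarily for illustration.
y_raw = rng.normal(0.0, 1e-11, n)

# Average each block of 1000 samples down to one tau0 = 1 s point:
# "sum of 1000 samples divided by 1000".
block = int(fs * tau0)
y = y_raw.reshape(-1, block).mean(axis=1)   # 100 data points

# Non-overlapping ADEV at tau0 from the averaged frequency data.
adev = np.sqrt(0.5 * np.mean(np.diff(y) ** 2))
print(len(y), adev)
```

For white noise, each averaged point's scatter shrinks by sqrt(1000) relative to the raw samples, which is the averaging effect the example relies on.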
Now, remembering that we are working with noise after all, which tends to average out:
what would the error have to be at every one of the 100k samples to cause a 3% error in the tau0 ADEV data over a 100 second run (or a run of any other length)?
Even if we assume, by some magic, that every single one of those 100k samples was not noise but was at the worst possible value, there is not enough dynamic range in an analog system to cause a 3% error in the end result.
To think that some "magic" noise source is going to affect the outcome of that example is silly, but forgivable if it is being pushed by some expert.
For an expert to push it, though, is totally absurd to me.
Now it is time for someone to do the math, so that I can stop the hand waving, finger pointing, and name calling.
Hint: the answer is going to be close enough for everyone (<< 0.01%), using ANY type of noise source, or non-noise source, anyone can come up with.
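As a small step toward "doing the math," here is a toy numerical comparison (my own construction, not from the thread): for an assumed white-FM noise source it computes ADEV at tau0 = 1 s two ways, once from true 1 s block averages (integration) and once from a single-pole 1 Hz low-pass filter sampled once per second, and prints the ratio. The noise type, filter model, and levels are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, tau0, run = 1000, 1.0, 100
n = int(fs * run)
y_raw = rng.normal(0.0, 1e-11, n)   # assumed white FM noise

def adev(y):
    """Non-overlapping ADEV from frequency data at one tau."""
    return np.sqrt(0.5 * np.mean(np.diff(y) ** 2))

# Method 1: true boxcar averaging (integration over each tau0).
block = int(fs * tau0)
y_avg = y_raw.reshape(-1, block).mean(axis=1)

# Method 2: single-pole IIR low-pass at fc = 1 Hz, sampled once per tau0.
fc = 1.0
alpha = 1.0 - np.exp(-2.0 * np.pi * fc / fs)
y_lpf = np.empty(n)
acc = 0.0
for k, x in enumerate(y_raw):
    acc += alpha * (x - acc)   # one-pole recursive low-pass
    y_lpf[k] = acc
y_dec = y_lpf[block - 1 :: block]   # one filtered sample per second

ratio = adev(y_dec) / adev(y_avg)
print(adev(y_avg), adev(y_dec), ratio)
```

For this particular (assumed) noise model the single-pole filter does not reproduce the boxcar result exactly; quantifying that gap, per noise type and filter bandwidth, is precisely the calculation being asked for.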
If I may be allowed to summarize, it appears that Warren and Bruce
agree that integration is necessary to produce true ADEV
results. Warren asserts that the low-pass filtering his method uses
is "close enough" to integration to provide a useful approximation to
ADEV, while Bruce disagrees. So, the remaining points of contention
seem to be:
1. How close can an LPF implementation come to integration in ADEV calculations?
2. How close to true ADEV is "good enough"?
I humbly submit that trading insults has become too dreary for words,
and that neither Warren nor Bruce will ever convince the other.
I thus humbly suggest (nay, plead) that the discussion be re-focused
on the two points above in a "just the facts, ma'am" manner. One can
certainly characterize mathematically the differences between
integration and LP filtering, and predict the differential effect of
various LPF implementations given various statistical noise
distributions. If one is willing to agree that certain models of
noise distributions characterize reasonably accurately the
performance of the oscillators that interest us, one can calculate
the expected magnitudes of the departures from true ADEV exhibited by
the LPF method. Each person can then conclude for him- or herself
whether this is "good enough" for his or her purposes. Indeed,
careful analysis of this sort should assist in minimizing the
departures by suggesting optimal LPF implementations.
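One standard way to carry out that characterization (textbook material, not something established in this thread) is in the frequency domain: ADEV can be written as an integral of the frequency-noise power spectral density against a transfer function, so the two methods differ only in the |H(f)|^2 that gets inserted.

% ADEV from the one-sided frequency-noise PSD S_y(f):
\sigma_y^2(\tau) = 2 \int_0^\infty S_y(f)\,
    \frac{\sin^4(\pi f \tau)}{(\pi f \tau)^2}\, df

% Magnitude-squared response of a true \tau-long average (integration):
|H_{\mathrm{avg}}(f)|^2 = \left( \frac{\sin(\pi f \tau)}{\pi f \tau} \right)^2

% Magnitude-squared response of a single-pole low-pass with cutoff f_c:
|H_{\mathrm{LPF}}(f)|^2 = \frac{1}{1 + (f/f_c)^2}

Substituting the low-pass response for the averaging response inside the ADEV integral, for each standard noise type (white PM, flicker PM, white FM, and so on), yields the expected departure from true ADEV as a function of the product f_c * tau.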