[time-nuts] Notes on tight-PLL performance versus TSC 5120A

Steve Rooke sar10538 at gmail.com
Fri Jun 4 13:44:01 UTC 2010

On 4 June 2010 08:13, Bruce Griffiths <bruce.griffiths at xtra.co.nz> wrote:

>> So you say but it is producing a result that seems to be VERY
>> interesting. To adopt an attitude where everything has to be done a
>> specific way totally misses out on innovation.
> You can't arbitrarily change the equivalent "filter" and expect to get the
> same results.

Unfortunately the people who have tried to use this method before,
despite all the letters after their names, had a great deal of
trouble with the LPF design, because it faced two directly opposed
constraints. They needed a filter that settled accurately within the
tau0 time to reflect the fundamental frequency while, at the same
time, having enough bandwidth for the reference oscillator to track
the unknown oscillator's noise. That's about as easy, and as much fun,
as trying to kick a dead whale up a beach. A way around this is to
give the PLL loop a wide bandwidth so that it accurately tracks the
two oscillators together, and then to oversample and average the
results per tau0 to get accurate data.
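The oversample-and-average step described above amounts to block
averaging. A minimal sketch (my own illustration, not Warren's actual
code; the function name and parameters are hypothetical) of turning a
stream of raw frequency readings into one mean value per tau0:

```python
# Sketch: reduce oversampled frequency readings to one mean per tau0.
# Assumes readings arrive at a fixed rate `fs` much higher than 1/tau0.

def average_per_tau0(freq_samples, fs, tau0):
    """Group raw frequency samples into tau0-long blocks and
    return the mean frequency of each complete block."""
    n = int(round(fs * tau0))          # samples per tau0 interval
    blocks = len(freq_samples) // n    # discard any partial tail
    return [sum(freq_samples[i * n:(i + 1) * n]) / n
            for i in range(blocks)]

# Example: 10 readings/s, tau0 = 1 s -> one averaged value per second
readings = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.0, 1.1, 0.9, 1.0]
print(average_per_tau0(readings, fs=10, tau0=1.0))
```

Each returned value is then one frequency measure at tau0 spacing,
which is exactly what the downstream statistics need.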

>> In the Renault factory
>> when they made the first 2CV motor car they were having trouble
>> assembling it as they could not get some of the body parts to fit
>> together. One of the mechanics gave the item a good kick with his boot
>> and the item popped into place. This method was then adopted for the
>> assembly of that vehicle as it worked, it was not high-tech but it
>> worked. You could say that this was not the optimum method to build a
>> car and send it all back to the drawing board to sort it out at a vast
>> cost in re-engineering or you can stick with what works.
> Irrelevant analogy as the adopted method doesn't conflict with the
> requirements of any accepted physical theory.

Wow, Bruce! Where do you come up with these pompous statements? If you
failed to understand the point I was trying to make here, then perhaps
I'm wasting my time trying to explain all this to you.

>> That paper is irrelevant for the method that Warren has chosen.
> If you had read and understood the paper you wouldn't believe that.

You don't need to design an "optimal" filter because this method does
not hinge on that as a critical component unlike the method in the
paper you cited. Now perhaps you should read what Warren and I have
written so that you might just understand why this is so.

>> Really! This is not the case, as the loop bandwidth is significantly
>> wider than the oversampling rate and this rate is significantly faster
>> than the minimum Tau sampling rate. Why is there any need to use any
>> "appropriate signal processing algorithms" with such an elegantly
>> simple improvement on the NIST design. I find it hard to understand
>> what you don't understand here.
> You need to read up on the definition of AVAR, ADEV etc and understand that
> it requires average frequency measures or phase differences.
> Integration (in hardware or software) is required to do this.
> Warren's hardware merely implements the low pass filter required to limit the
> contribution of white phase noise etc to ADEV.

And that is where you have totally lost the plot! Warren's filter does
not limit the effects of noise on ADEV, whereas the "optimal" filter
design you have spoken of and cited in the paper does. Warren's design
produces accurate frequency measurement because the bandwidth of the
PLL loop is very much wider than 1/tau0, so the loop can track the
noise and keep the two oscillators in sync. Previous attempts at this
have tried to make the loop filter time constant fit the sampling
frequency, and it is those attempts that relied on the filter to limit
the contribution of noise to the ADEV measurement.
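The general point here can be illustrated numerically (this is my own
toy demonstration, not a model of either instrument): block averaging
is itself a low-pass operation, so averaging N oversampled readings
per tau0 suppresses white noise by roughly sqrt(N) even when the PLL
loop bandwidth is left wide.

```python
# Toy demo: averaging N white-noise readings per block reduces the
# spread of the per-block means by ~sqrt(N) relative to the raw data.
import random

random.seed(42)
N = 100                       # oversampling factor per tau0
trials = 2000
raw = [random.gauss(0.0, 1.0) for _ in range(trials * N)]
means = [sum(raw[i * N:(i + 1) * N]) / N for i in range(trials)]

def std(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

ratio = std(raw) / std(means)
print(ratio)                  # expect a value near sqrt(N) = 10
```

So the averaging stage supplies the white-noise suppression that the
narrow loop filter provided in the earlier implementations.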

>> The filter does not come into it as Warren has not designed the filter
>> to have a Tc equivalent to the Tau sampling rate. Now the penny has
>> dropped and I can see where you are going wrong here. The filtering
>> here is done simply by oversampling and averaging the results of the
>> measurements over the minimum Tau period.  Excellent paper but far too
>> full of theoretical math for my liking these days.
> If you don't follow the paper, how is it that you feel that you know
> sufficient to make a useful contribution to the discussion?
> The paper explicitly covers the tight PLL technique where a sequence of
> frequency samples are taken and shows how to produce valid ADEV measures
> from these samples.
> It also provides a formula that can be used to predict the consequences of
> inappropriate signal processing.

OK, so I don't have a string of letters after my name, but it doesn't
take half a brain to understand this, Bruce. Now I use that term
almost literally, as a couple of members on this list know what I'm
getting at, at least. Anyway, I digress, and shall use whatever brain
I have left to try and get this point over. You seem to hold up this
cited paper like one of the Dead Sea Scrolls, as if it were the last
word on the subject. What makes you think that this paper is all that
there is to the tight-PLL method? How can you make a contribution to
this discussion when all you can do is say it's all been done before
and there is nothing more to say on the subject, so shut up? Hundreds
of years ago you would have been one of the ones who could never be
convinced that the Earth was not flat. Things change, we learn more
and we move on. Your little bag of academic paper links may be
impressive, but they are not the end of research in those fields.

>> Poetry is always relevant :) By WVAR do you mean Warren Variance, as
>> I'm not sure where you've pulled this from otherwise.
> Yes to distinguish it from ADEV, which without the integration it is not.

Well, the wide-band simple R/C PLL filter is an integrator, and the
averaging of the oversampled readings per tau0 is a second stage of
integration, so it must be ADEV.
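For reference, once you have a sequence of per-tau0 mean fractional
frequency values y[k], the standard non-overlapping Allan deviation
estimate at tau0 is sqrt(sum((y[k+1] - y[k])**2) / (2*(M-1))) for M
values. A minimal sketch of that step (my illustration, not either
party's code):

```python
# Non-overlapping Allan deviation from M consecutive per-tau0
# mean fractional-frequency values:
#   AVAR(tau0) = (1 / (2*(M-1))) * sum_k (y[k+1] - y[k])**2
import math

def adev(y):
    """Allan deviation at tau0 from mean-frequency values y[0..M-1]."""
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# Example: alternating +1/-1 ppb frequency values
print(adev([1e-9, -1e-9, 1e-9, -1e-9]))
```

Whether a given front end feeds this formula valid mean-frequency
values is, of course, exactly what is being argued about here.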

>> And I suppose that the TSC has been tested with an infinite number of
>> sources with an infinite variety of phase noise spectra to verify that
>> it works perfectly. Can you get Agilent to sign a document that says
>> the TSC will always produce the correct answer with every type of
>> input source? No, because it comes from Agilent (or Symmetricom,
>> whatever) and costs $$$$, and someone has written a theoretical paper
>> about it, you assume that it is The Standard in test equipment. Well,
>> it probably is THIS YEAR but times will change and who knows when the
>> B version comes out.
> At least the design is consistent with the accepted requirements for
> measuring ADEV whereas Warren's design (without the integration) isn't.

Well, all we can say is that it may be designed to the accepted
requirements for measuring ADEV, but those are only current at this
juncture in time, and it has not been tested against all source forms,
or against any future form of tester which may give more accurate
results as new methods are developed. Let's face it, with the
increasing precision of oscillators predicted for the future, we will
need better instruments to characterise them.

Oh, and Warren's design does integrate BTW.

>> We don't know that it doesn't yet!
> Without integration the equivalent filter has the wrong shape, thus it doesn't
> measure ADEV, period.

a) it integrates, and b) the PLL loop filter does not constrain the
design as it did in previous attempts to implement this method.

>> No, what's perplexing is that you wish to change things before they
>> are even proved to be inaccurate that bothers me.
> If one ignores the accepted theory then one can easily waste a lot of time
> producing measurement systems of limited utility.
> One should examine the consequences of omitting signal processing steps as
> this can save a lot of time by allowing the conditions under which it may
> produce acceptable results to be clearly defined.

No one is ignoring any accepted theory here; you are just trying to
side-step the issue. The method is the well-accepted tight PLL but,
unlike previous attempts at implementation, this one is not
constrained in performance by the PLL loop filter. We don't need to do
any fancy signal processing here; there is nothing to do, and no fancy
under-the-table stuff going on or needed for that matter. Just a
simple H/W wide-band filter on the actual PLL loop to keep it in sync,
and stable, and then averaging of the oversampled tau0 data. If you
process the data in some way you are distorting the result, and this
may or may not remove the effects of noise from the result. Why do you
feel there is a need for signal processing anyway?

Come back tomorrow for the next thrilling instalment of how you
teach an old dog new tricks :)

Best regards,
Steve Rooke - ZL3TUV & G8KVD
The only reason for time is so that everything doesn't happen at once.
- Einstein
