[time-nuts] Notes on tight-PLL performance versus TSC 5120A

Bruce Griffiths bruce.griffiths at xtra.co.nz
Thu Jun 3 20:13:20 UTC 2010


Steve Rooke wrote:
> Bruce,
>
> On 3 June 2010 19:27, Bruce Griffiths<bruce.griffiths at xtra.co.nz>  wrote:
>    
>>> Bruce, I don't understand you, when presented with visual evidence
>>> that this method works you still deny it.
>>>        
>> What visual evidence??
>> There is no proof that the technique works well in every case.
>> Only that, for the range of Tau tested and for the particular source pair
>> used, it appears to.
>>      
> I have already commented on this in another thread but to reiterate.
> The test that John performed showed that, for the range of Tau that
> could be calculated for the given measurement period, both methods
> produced the same results for each and every value of Tau, not just for
> a single value of Tau.
>
>    

>    
>>> Well, what are the "woefully inadequate" conclusions then? Please
>>> give us your full reasoning.
>>>        
>> A simple example is that for a small number of samples a stability metric
>> like the ordinary (unfiltered) phase variance standard deviation may appear
>> to be stable, whereas with a sufficiently large number of samples the
>> instability of the metric itself becomes evident whenever divergent noise
>> processes such as flicker phase noise or random walk frequency noise are present.
>>      
> OK, we need to run the test for a longer period but, as John has
> indicated, he is not able to devote any further time to this. That
> fact does not mean that this method has no value; it is just like
> every other area of theoretical physics: we can never prove it true, so
> we try to look at ways of proving it false. The important words there
> are "proving it false".
>
>    
>> Lots of stuff deleted.
>>      
> Let's explore frequency measurement in a way that we all can
> understand. No oscillator can be measured in isolation; it has to be
> measured against another standard oscillator. Conventional frequency
> measurement is performed by counting the number of cycles of the
> unknown oscillator over a known period or gate time. This averages the
> unknown frequency over the gate time, so an instrument built using
> this method can only resolve down to a single cycle out of the number
> of cycles counted during the gate time. What this means is that the
> "conventional" method of frequency measurement averages the measured
> value over the gate time and is subject to a quantisation error of one
> cycle (one period of the measured signal).
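As a rough illustration of the quantisation being described (a minimal sketch, not any particular counter; every name and value below is made up):

    # Sketch: conventional gated frequency counting.  The counter only sees
    # whole cycles of the unknown signal during the gate, so the result is
    # the frequency averaged over the gate, quantised to one count.
    import math

    def gated_count(f_unknown_hz, gate_s, phase0_cycles=0.0):
        """Whole cycles of the unknown signal observed during one gate."""
        return (math.floor(phase0_cycles + f_unknown_hz * gate_s)
                - math.floor(phase0_cycles))

    gate = 1.0                       # 1 s gate time
    f_true = 10_000_000.123          # "unknown" oscillator, Hz
    f_measured = gated_count(f_true, gate) / gate
    print(f_measured, "Hz; resolution limited to", 1.0 / gate, "Hz (one count)")

With a 1 s gate the reading comes out as 10 000 000 Hz; the 0.123 Hz offset is lost in the one-count quantisation.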
>
> So how does Warren's system measure the frequency? Using the tight-PLL
> method, the feedback voltage controlling the reference oscillator is
> constantly tracking any difference between the two oscillators. If
> measurements of the PLL feedback were only taken at the required Tau
> interval, the result would only show the instantaneous value, which
> would be equivalent to basically measuring the period of the last (or
> last few, depending upon the Tc of the PLL feedback filter) cycles. By
> oversampling, i.e. taking many measurements during the desired period
> of minimum Tau, these measurements can be averaged to produce the
> average frequency over that Tau period. This means that the PLL
> loop can be made much faster to keep the unknown and reference
> oscillators tightly tracking. Without some damping in the PLL loop,
> the circuit would become unstable and quite unusable. All that is
> required in this filter is a very simple integrator, as its Tc is
> faster than the oversampling rate, which is in turn faster than the Tau
> period.
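A minimal sketch of the averaging step just described, assuming the feedback voltage has already been digitised; the function name, block structure and the EFC-to-frequency scale factor are placeholders, not Warren's actual firmware:

    # Sketch: decimate oversampled tight-PLL feedback (EFC) readings into
    # one average frequency-offset value per minimum-Tau interval.
    def efc_to_freq_offsets(efc_samples, samples_per_tau0, hz_per_volt=1.0):
        """Average each block of oversampled EFC readings into one value."""
        n_blocks = len(efc_samples) // samples_per_tau0
        offsets = []
        for k in range(n_blocks):
            block = efc_samples[k * samples_per_tau0:(k + 1) * samples_per_tau0]
            offsets.append(hz_per_volt * sum(block) / samples_per_tau0)
        return offsets               # one averaged frequency offset per Tau0

Averaging each block approximates the integral of the frequency difference over that Tau0 interval, which is the integration step the rest of this exchange argues about.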
>
>    
>>>> Only in your imagination.
>>>>          
>>> One would assume that this method only works when Warren does it as
>>> his "imagination" is required for it to work, but wait, John Miles has
>>> managed to get point for point identical data against a TSC, how can
>>> that be Bruce? Please give answers, not insults.
>>>        
>> Read the following paper:
>>
>> http://hal.archives-ouvertes.fr/docs/00/37/63/05/PDF/alaa_p1_v4a.pdf
>>      
> This is a fine paper but is only relevant to the method they are using
> for AVAR determination. As I have explained above, oversampling
> enables the implementation of a very simple filter in the PLL loop.
>
>    
>> which shows the relationship between AVAR etc, filters and the ordinary
>> phase variance.
>> The paper also outlines the techniques that should be used with the sampled
>> frequency difference data from a tight PLL.
>>      
> Agreed, but they had not thought about oversampling.
>
>    
>>> Again, if it does not work, how come the evidence shows that it does,
>>> how do you explain that Bruce?
>>>
>>>        
>> The evidence doesn't show this at all.
>> It merely indicates that, for the devices tested, the phase noise
>> spectral components in the region where the filter responses of ADEV and
>> WDEV differ (it's not ADEV so it shouldn't be labelled as such) don't appear
>> to be significant for the 2 sources compared and the tau range over which
>> the testing was done.
>>      
> OK, for all intents and purposes ADEV measurements are only carried
> out on a limited variety of oscillators, as it would be pointless to,
> say, perform such a measurement on an LC disciplined oscillator. So we
> are looking at xtal, Rb, Cs and Hm at the moment. For practical
> purposes the xtal has good performance at short Tau but loses out at
> long Tau due to drift. This is where the others score, because they
> offer better performance at longer and longer Tau but they suffer at
> short Tau. For our purposes we would be most interested in a xtal that
> is disciplined by one of the others or GPS. In addition to this, the
> mass of time-nuts have xtals, fewer have Rb, few Cs and very very few
> have Hm. Even when we look at a GPSDO, the output stage is a xtal, so
> it is that which we are measuring for short Tau and the GPS for long
> Tau. So what am I trying to get at? Well, for all practical purposes,
> most members of this list will be testing a xtal. Now we need to look
> at what practical deviations from the ideal are found in real-world
> xtals, not some theoretical test. So let's find some bad xtals and
> measure them to see the results.
>
>    
>> Extrapolation of such results to predict that the technique will produce
>> such agreement with other devices with differing phase noise characteristics
>> is unreliable.
>>      
> Warren has already indicated that any measurement with his tester is
> dependent on the reference oscillator, which would be exactly the same
> case for a TSC. The reference oscillator would need to be a good low
> noise device or any measurement will be limited by this factor anyway.
> The measurement is dependent upon the measured oscillator having
> higher noise or you will just be measuring the noise floor using any
> method.
>
>    
>> You are confusing producing the same numbers in specific cases with the
>> ability to do so in general.
>> There is no guarantee that such agreement will occur with a given pair of
>> sources.
>>      
> It worked well enough over the whole range of Tau that each method
> measured. You seem to think that it produced a single answer which, I
> agree, could be suspect but congruence over such a range of Tau...
>
>    
>> Such agreement in general isn't possible as the equivalent phase noise
>> filters have different frequency responses.
>>      
> I don't understand what you're getting at here, Bruce.
>
>    
>> Stability measures like AVAR can be shown to be equal to the ordinary
>> variance of the phase difference at the output of a very specific phase
>> noise filter.
>>      
> Phase noise filter? This sounds to me like some form of S/W processing,
> and I think that Warren has specifically kept this whole design simple,
> with no need for complex S/W transformations; there is no magic here.
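For readers following along, the "phase noise filter" Bruce is referring to is the standard transfer-function form of the Allan variance (the kind of relation discussed in the paper linked above), where S_y(f) is the one-sided spectral density of the fractional frequency fluctuations and f_h the measurement bandwidth:

    \sigma_y^2(\tau) = 2 \int_0^{f_h} S_y(f)\,\frac{\sin^4(\pi f \tau)}{(\pi f \tau)^2}\,df

Any estimator whose equivalent filter differs from this sin^4 kernel is, strictly speaking, measuring a different quantity; that is the ADEV-versus-WDEV distinction drawn below.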
>
>    
>> WDEV has a phase noise filter with a different frequency response so that it
>> doesn't actually measure ADEV.
>>      
> Where did WDEV jump into the mix here...
>
>
>    
>>> You sound like a school teacher marking a pupil's work. Perhaps not
>>> everyone is as eloquent as Shakespeare with their English, there is no
>>> need to resort to this form of denigration. I find your explanations
>>> on things very cryptic and hard to follow, but I don't resort to this
>>> sort of abuse.
>>>        
>> It would be much easier if Warren limited his commentary to the actual
>> results and omitted the wild speculation (and the metaphysics).
>>      
> Warren is doing his best to try and explain what he is doing with this
> but you are just adopting your holier-than-thou pedantic attitude
> that comes into play when you disagree with anyone. OK, maybe he is
> not a time god like you who can spout expertly worded textbook
> explanations of everything under the sun but he is onto something here
> and I'm sorry if that does not fit with you but these are the bounds
> you have to work in.
>
>    
>>> Has been explained by John who wrote the method and is available for
>>> you to review.
>>>        
>> I've seen it and it's somewhat shy of the optimum signal processing
>> technique.
>>      
> So you say but it is producing a result that seems to be VERY
> interesting. To adopt an attitude where everything has to be done a
> specific way totally misses out on innovation.
You can't arbitrarily change the equivalent "filter" and expect to get 
the same results.
> In the Renault factory
> when they made the first 2CV motor car they were having trouble
> assembling it as they could not get some of the body parts to fit
> together. One of the mechanics gave the item a good kick with his boot
> and the item popped into place. This method was then adopted for the
> assembly of that vehicle as it worked, it was not high-tech but it
> worked. You could say that this was not the optimum method to build a
> car and send it all back to the drawing board to sort it out at a vast
> cost in re-engineering, or you could stick with what works.
>    
Irrelevant analogy, as the adopted method doesn't conflict with the 
requirements of any accepted physical theory.

>    
>>> So the visual evidence before your very eyes which clearly shows that
>>> this works is not sufficient for you to understand that this
>>> measurement works.
>>>
>>>        
>> Read the theory outlined in the paper and maybe you'll begin to understand
>> my objections to statements that the technique measures ADEV.
>>      
> That paper is irrelevant for the method that Warren has chosen.
>
>    
If you had read and understood the paper you wouldn't believe that.
>    
>>> Bruce, you do know that this is the NIST tight-loop PLL method which
>>> produces frequency measurements and not the loose-loop PLL method
>>> which produces phase difference data I hope.
>>>        
>> Of course I am aware of that.
>>      
> I wasn't sure you were on the same page with this.
>
>    
>> Phase is the integral of frequency so phase differences sampled at intervals
>> of say T are equivalent to frequencies averaged over time T and sampled at
>> the end of the sample interval. Thus sampling the time average frequency
>> every T seconds is equivalent to sampling the phase difference every T
>> seconds.
>>      
> Cool.
>
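A small sketch of the equivalence Bruce describes, with illustrative names only: cumulatively summing (integrating) average fractional-frequency samples taken every T seconds reconstructs the phase (time) difference sequence at the same instants.

    # Sketch: the average fractional frequency over each interval T equals
    # the phase/time difference across that interval divided by T, so a
    # cumulative sum of the y samples rebuilds x(kT).
    def freq_to_phase(y_samples, T, x0=0.0):
        """Average fractional-frequency samples -> time differences x(kT), in s."""
        x = [x0]
        for y in y_samples:
            x.append(x[-1] + y * T)   # x(kT) = x((k-1)T) + y_k * T
        return x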
>    
>>> Warren's implementation improves on the original NIST implementation by
>>> oversampling.
>>>
>>>        
>> Actually it degrades the simplicity and accuracy of the NIST implementation
>> by replacing the integration inherent when using the counter and VFC with an
>> approximation to the required frequency integral. Fortunately the accuracy
>> can largely be recovered by using the appropriate signal processing
>> algorithms.
>>      
> Really! This is not the case, as the loop bandwidth is significantly
> wider than the oversampling rate and this rate is significantly faster
> than the minimum Tau sampling rate. Why is there any need to use any
> "appropriate signal processing algorithms" with such an elegantly
> simple improvement on the NIST design? I find it hard to understand
> what you don't understand here.
>
>    
You need to read up on the definitions of AVAR, ADEV etc. and understand 
that they require average frequency measures or phase differences.
Integration (in hardware or software) is required to do this.
Warren's hardware merely implements the low-pass filter required to limit 
the contribution of white phase noise etc. to ADEV.
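For reference, a minimal sketch of the estimator that definition calls for at the base interval tau0, taking as input fractional-frequency values that have each been properly averaged (integrated) over tau0; the names are illustrative:

    # Sketch: Allan deviation at tau0 from fractional-frequency values that
    # are each already AVERAGED over tau0:
    #   sigma_y^2(tau0) = 0.5 * mean((y[k+1] - y[k])**2)
    import math

    def adev_from_avg_freq(y_samples):
        """Allan deviation from consecutive averaged-frequency samples."""
        diffs = [(y_samples[k + 1] - y_samples[k]) ** 2
                 for k in range(len(y_samples) - 1)]
        return math.sqrt(0.5 * sum(diffs) / len(diffs))

Feeding this estimator spot (instantaneous) frequency readings instead of properly averaged ones changes the equivalent filter, which is the distinction being drawn here between ADEV and what Bruce calls WDEV.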
>>>>> Why Warren omits this crucial step when all it requires is a little
>>>>> digital signal processing as all the required information is available
>>>>> from the sampled EFC voltage remains a mystery.
>>>>>            
>>> I'm intrigued Bruce, please explain to us in detail what you are
>>> actually getting at here?
>>>        
>> Read the paper on stability variances and filters.
>>      
> The filter does not come into it as Warren has not designed the filter
> to have a Tc equivalent to the Tau sampling rate. Now the penny has
> dropped and I can see where you are going wrong here. The filtering
> here is done simply by oversampling and averaging the results of the
> measurements over the minimum Tau period.  Excellent paper but far too
> full of theoretical math for my liking these days.
>
>    
If you don't follow the paper, how is it that you feel you know 
enough to make a useful contribution to the discussion?
The paper explicitly covers the tight PLL technique, where a sequence of 
frequency samples is taken, and shows how to produce valid ADEV measures 
from these samples.
It also provides a formula that can be used to predict the consequences 
of inappropriate signal processing.


>>> The needle is stuck again, Bruce, look at the results; a rose by any
>>> other name would smell as sweet.
>>>        
>> Poetry is irrelevant; the fact that the equivalent filters for AVAR and WVAR
>> differ should be of concern.
>>      
> Poetry is always relevant :) By WVAR do you mean Warren Variance, as
> I'm not sure where you've pulled this from otherwise.
>
>    
Yes, to distinguish it from ADEV, which (without the integration) it is not.
>>> You keep coming up with imaginary ways that you think this method
>>> would fail to produce the right answer but you've not produced a
>>> source that can be tested in the REAL world. You talked about Warren's
>>> imagination earlier; well, I'm calling this on you now. Let's have a
>>> concrete example that shows this method is not usable, or shut up.
>>> Warren has put his money where his mouth is, now it's your turn.
>>>        
>> It's obvious if you understand the theory.
>> Otherwise an infinite set of tests with an infinite number of sources with
>> an infinite variety of phase noise spectra are required to show the
>> technique works in all cases.
>>      
> And I suppose that the TSC has been tested with an infinite number of
> sources with an infinite variety of phase noise spectra to verify that
> it works perfectly. Can you get Agilent to sign a document that says
> the TSC will always produce the correct answer with every type of
> input source? No; because it comes from Agilent (or Symmetricom,
> whatever) and costs $$$$, and someone has written a theoretical paper
> about it, you assume that it is The Standard in test equipment. Well,
> it probably is THIS YEAR, but times will change and who knows when the
> B version comes out.
>
>    
At least the design is consistent with the accepted requirements for 
measuring ADEV, whereas Warren's design (without the integration) isn't.
>>> Bruce, I really do admire your knowledge and intelligence generally
>>> but sometimes you really need to take a step back and smell the
>>> coffee.
>>>        
>> It's not that the method can't be easily fixed so that it produces accurate
>> ADEV measures for an extremely wide range of sources with divergent phase
>>      
> We don't know that it doesn't yet!
>
>    
Without integration the equivalent filter has the wrong shape, thus it 
doesn't measure ADEV, period.
>> noise spectra, it's the extreme reluctance to do the signal processing
>> correctly (it's not that this even incurs extra hardware costs) that is
>> perplexing.
>>      
> No, what's perplexing is that you wish to change things before they
> are even proved to be inaccurate; that is what bothers me.
>
>    
If one ignores the accepted theory then one can easily waste a lot of 
time producing measurement systems of limited utility.
One should examine the consequences of omitting signal processing steps 
as this can save a lot of time by allowing the conditions under which it 
may produce acceptable results to be clearly defined.
>>      
>>> My apologies to the list for openly expressing my feelings on this.
>>>
>>>        
> Well, people of this list, if you've read this far it's time for a
> beer and I'd buy you all one for putting up with this saga but I
> probably could not afford that anyway, like I cannot afford a TSC.
> Again my apologies to the list, this time for the long post. If you're
> reading this on your cellphone, I'm impressed :)
>
> Steve
>
>    
>> Bruce
>>
>>
>
>
>    




