[time-nuts] Characterising frequency standards

Steve Rooke sar10538 at gmail.com
Mon Apr 13 00:34:36 UTC 2009


Bruce,

2009/4/12 Bruce Griffiths <bruce.griffiths at xtra.co.nz>:
> Steve
>
> Steve Rooke wrote:
>> Suppose I take two sequential phase readings from an input source and
>> place them into one data set, and another two readings from the same
>> source, but spaced by one cycle, into a second data set. From the
>> first data set I can calculate ADEV for tau = 1 sec, and from the
>> second data set I can calculate ADEV for tau = 2 sec. If I now
>> pre-process the data in the second set to remove all the effects of
>> drift (given that I have already determined this), I have two 1 sec
>> samples which show a statistical difference and can be fed to ADEV
>> with a tau0 = 1 sec, producing a result for tau = 1 sec. The results
>> from this second calculation should show the same accuracy as those
>> from the first data set (given the limited size of the data set).
>>
>>
> You need to give far more detail as it's unclear exactly what you are
> doing with what samples.
> Label all the phase samples and then show which samples belong to which
> data set.
> You also need to show clearly what you mean by skipping a cycle.

Say I have a 1 Hz input source and my counter measures the period of
the first cycle and assigns this to A1. At the end of the first cycle
the counter can be reset and re-triggered to capture the second
cycle and assign this to A2. So far 2 sec have passed and I have two
readings in data set A.

I now repeat the experiment and assign the measurement of the first
period to B1. The counter I am using this time is unable to stop at
the end of the first measurement and re-trigger immediately, so I'm
unable to measure the second cycle; the counter is instead left in the
armed position. When the third cycle starts, the counter triggers and
completes the measurement of the third cycle, which is now assigned to B2.

For the purposes of my original text, the first data set refers to A1
& A2, and the second data set refers to B1 & B2. The pre-processing of
the second data set means mathematically removing the effects of drift
from B1 & B2 to produce a third data set, which is then used as the
input for an ADEV calculation with tau0 = 1 sec and an output at tau =
1 sec.
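
To make that concrete, here is a rough sketch in Python/numpy of the
arithmetic I have in mind. The period readings and the drift figure
below are invented purely for illustration, and adev_adjacent is just
my own throwaway helper, not anything from a standard tool:

import numpy as np

def adev_adjacent(periods, t_nom=1.0):
    # Allan deviation at tau equal to the sample spacing, computed from
    # back-to-back period readings of a nominally t_nom-second signal.
    y = t_nom / np.asarray(periods) - 1.0   # fractional frequency of each cycle
    dy = np.diff(y)                         # first differences y[i+1] - y[i]
    return np.sqrt(np.mean(dy**2) / 2.0)

# Set A: periods of cycles 1 and 2 (adjacent, 1 sec spacing) -- made-up numbers
A = [1.0000000012, 0.9999999987]
print("ADEV(tau = 1 sec) from A:", adev_adjacent(A))

# Set B: periods of cycles 1 and 3 (one cycle skipped, 2 sec spacing)
B = [1.0000000015, 0.9999999990]
drift_over_2s = 0.0                     # the separately-determined drift (placeholder)
B_corr = [B[0], B[1] - drift_over_2s]   # "pre-processed" set with drift removed
print("ADEV estimate treated as tau = 1 sec, from B:", adev_adjacent(B_corr))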

>
>> I now collect a large data set but with a single cycle skipped between
>> each sample. I feed this into ADEV using tau0 = 2 sec to produce tau
>> results >= 2 sec. I then pre-process the data to remove any drift and
>> feed this to ADEV with a tau0 = 1 sec to produce just the tau = 1 sec
>> result. I now have a complete set of results for tau >= 1 sec. Agreed,
>> there is the issue of modulation at 1/2 input f but ignoring this for
>> the moment, this should give a valid result.
>>
>>
> Again you need to give more detail.

In this case the data set is constructed from measurements of the
cycle periods of a 1 Hz input source where the even cycles are
skipped, so each data point is a measurement of the period of an odd
(1, 3, 5, 7...) cycle of the incoming waveform. The time between
measurements is therefore 2 sec, so ADEV is calculated with tau0 = 2
sec for tau >= 2 sec. This data set is then mathematically processed
to remove the effects of drift, bearing in mind the 2 sec spacing of
the data points, and ADEV is then calculated with tau0 = 1 sec for tau
= 1 sec.
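
Spelled out as a rough numpy sketch: the data below is simulated, the
drift removal is just a least-squares straight-line fit (which may or
may not be the right model), and the adev() helper is my own simple
overlapping-ADEV implementation rather than any particular tool:

import numpy as np

def adev(y, m):
    # Overlapping Allan deviation at tau = m * tau0, from
    # fractional-frequency samples y spaced tau0 apart.
    y = np.asarray(y)
    ybar = np.array([y[i:i + m].mean() for i in range(len(y) - m + 1)])
    d = ybar[m:] - ybar[:-m]
    return np.sqrt(np.mean(d**2) / 2.0)

t_nom, tau0, n = 1.0, 2.0, 1000             # 1 sec cycles, every other one measured
periods = t_nom + 1e-9 * np.random.randn(n) + 1e-11 * np.arange(n)  # fake drifting data
y = t_nom / periods - 1.0                   # fractional frequency of each measured cycle

# Remove linear frequency drift, remembering the 2 sec spacing of the points.
t = tau0 * np.arange(n)
slope, offset = np.polyfit(t, y, 1)
y_detrended = y - (slope * t + offset)

print("estimated drift:", slope, "per second")
print("ADEV(tau = 2 sec):", adev(y, 1))
print("ADEV(tau = 4 sec):", adev(y, 2))
print("ADEV of the drift-removed data, treated as if 1 sec apart:", adev(y_detrended, 1))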


>> Now indulge me while I have a flight of fantasy.
>>
>> As the effects of jitter and phase noise will produce a statistical
>> distribution of measurements, any results from these ADEV calculations
>> will be limited on accuracy by the size of the data set. Only if we
>> sample for a very long time will we see the very limits of the effects
>> of noise.

> What noise from what source?

Phase noise in its various forms: white phase modulation (WPM),
flicker phase modulation (FPM), white frequency modulation (WFM),
flicker frequency modulation (FFM) and random walk frequency
modulation (RWFM).
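
As I understand it (and someone will correct me if I have this wrong),
each of these shows up as a characteristic slope on an ADEV plot,
which is part of why being able to reach short tau matters:

  WPM  : sigma_y(tau) falls roughly as 1/tau
  FPM  : also roughly 1/tau (MDEV is needed to separate it from WPM)
  WFM  : falls as 1/sqrt(tau)
  FFM  : flat, independent of tau
  RWFM : rises as sqrt(tau)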

> Noise in such measurements can originate in the measuring instrument or
> the source.

Indeed, and this is an important aspect to consider as we have been
discussing the effects of induced jitter/PN on a frequency standard
when it is buffered and divided down. Ideally, measurements of ADEV
would be made on the raw frequency standard source (e.g. 10 MHz)
rather than, say, a divided 1 Hz signal.

> For short measurement times quantisation noise and instrumental noise
> may mask the noise from the source but they are still present.

Well, these form the noise floor of our measurement system.

>
>
>> The samples which deviate the most from the median will
>> occur very infrequently, and it is statistically likely that they will
>> not occur adjacent to another highly deviated sample. We could
>> pre-process the data to remove all drift and then sort it into an
>> array in increasing order. This would give the greatest deviations at
>> each end of the array. For 1 sec stability the deviation would be the
>> greater difference from the median of the first and last samples in
>> the array. For 2 sec stability, the same calculation could be made by
>> taking the first two and last two readings in the array and
>> calculating their difference from 2 x the median. This calculation
>> could be continued until all the data is used for the final
>> calculation. In fact, the whole sorted data set could be fed to ADEV to
>> produce a result that would show a better worst-case measurement of the
>> input source, one which still has some statistical probability. In theory,
>> if we took an infinite number of samples, there would be a whole
>> string of absolute maximum deviation measurements in a row, which
>> would show the absolute worst case.
>>
>> Whether any of this is valid or just bad physics, I don't know, but
>> I'm sure it will elicit interesting comment.
>>
>>
> No, not poor physics but poor statistics.

Well, poor statistics possibly, but that branch of mathematics is not
only about interpreting data; it is also about predicting events. What
I proposed is predicting events that would otherwise occur very
infrequently, and hence be difficult to collect, but would still have
a bearing on the measurement of the total stability of an oscillator.
I'm just thinking out loud.
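
Purely to make the hand-waving concrete, something like the following
(Python/numpy, invented data, and I accept Bruce's point that this is
not orthodox statistics) is the sort of thing I was imagining:

import numpy as np

def worst_case_spread(y_detrended, n):
    # Sort the drift-removed frequency samples, then take the n most
    # negative and n most positive and report their spread about the
    # median -- the imagined "worst case" if such samples ever
    # happened to occur back to back.
    s = np.sort(np.asarray(y_detrended))
    med = np.median(s)
    return np.mean(s[-n:]) - med, med - np.mean(s[:n])

y = 1e-11 * np.random.randn(10000)    # pretend drift-removed 1 sec samples
for n in (1, 2, 5):
    hi, lo = worst_case_spread(y, n)
    print(n, "samples each end:", "+%.3g / -%.3g about the median" % (hi, lo))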

73,
Steve

>
>> 73,
>> Steve
>>
>> 2009/4/10 Tom Van Baak <tvb at leapsecond.com>:
>>
>>>> I think the penny has dropped now, thanks. It's interesting that the
>>>> ADEV calculation still works even without continuous data as all the
>>>> reading I have done has led me to believe this was sacrosanct.
>>>>
>>> We need to be careful about what you mean by "continuous".
>>> Let me probe a bit further to make sure you or others understand.
>>>
>>> The data that you first mentioned, some GPS and OCXO data at:
>>>    http://www.leapsecond.com/pages/gpsdo-sim
>>> was recorded once per second, for 400,000 samples without any
>>> interruption; that's over 4 days of continuous data.
>>>
>>> As you see it is very possible to extract every other, or every 10th,
>>> every 60th, or every Nth point from this large data set to create a
>>> smaller data set.
>>>
>>> It is as if you had several counters all connected to the same DUT.
>>> Perhaps one makes a new phase measurement each second,
>>> another makes a measurement every 10 seconds; maybe a third
>>> counter just measures once a minute.
>>>
>>> The key here is not how often they make measurements, but that
>>> they all keep running at their particular rate.
>>>
>>> The data sets you get from these counters all represent 4 days
>>> of measurement; what changes is the measurement interval, the
>>> tau0, or whatever your ADEV tool calls it.
>>>
>>> Now the ADEV plots you get from these counters will all match
>>> perfectly, with the only exception that the every-60-second
>>> counter cannot give you any ADEV points for tau less than 60 seconds;
>>> the every-10-second counter cannot give you points for tau less
>>> than 10 seconds; and, for that matter, the every-1-second counter
>>> cannot give you points for tau less than 1 second.
>>>
>>> So what makes all these "continuous" is that the runs were not
>>> interrupted and that the data points were taken at regular intervals.
>>>
>>> The x-axis of an ADEV plot spans a logarithmic range of tau. The
>>> farthest point on the *right* is limited by how long your run was. If
>>> you collect data for 4 or 5 days you can compute and plot points
>>> out to around 1 day or 10^5 seconds.
>>>
>>> On the other hand, the farthest point on the *left* is limited by how
>>> fast you collect data. If you collect one point every 10 seconds,
>>> then tau=10 is your left-most point. Yes, it's common to collect data
>>> every second; in this case you can plot down to tau=1s. Some of
>>> my instruments can collect phase data at 1000 points per second
>>> (huge files!) and this means my leftmost ADEV point is 1 millisecond.
>>>
>>> Here's an example of collecting data at 10 Hz:
>>> http://www.leapsecond.com/pages/gpsdo/
>>> You can see this allows me to plot from ADEV tau = 0.1 s.
>>>
>>> Does all this make sense now?
>>>
>>>
>>>> What I now believe is that it's possible to measure oscillator
>>>> performance with less than optimal test gear. This will enable me to
>>>> see the effects of any experiments I make in the future. If you can't
>>>> measure it, how can you know whether what you're doing is good or bad?
>>>>
>>> Very true. So what one or several performance measurements
>>> are you after?
>>>
>>> /tvb
>>>
>>>
>>
>>
>>
>>
>
> Bruce
>
>



-- 
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet


