[time-nuts] Characterising frequency standards
sar10538 at gmail.com
Tue Apr 7 12:09:38 UTC 2009
A while back, when we were discussing the performance of the Shortt
free pendulum clock, a reference was made to tvb's paper on Allan
deviation, http://www.leapsecond.com/hsn2006/ch2.pdf, which I found to
be an excellent primer on the subject. It was interesting to see that,
with only a subset of the data, the Allan deviations at averaging
times up to about the total data collection period could be calculated
with reasonable accuracy. This got me thinking: if just a proportion
of the data, covering up to a specific averaging time, gives good
results, would disconnected data amounting to the same period give the
same results? It seems to me that the accuracy of the results does not
depend on capturing every event consecutively; rather, it is a matter
of collecting a data set of the same size, even if the samples are not
consecutive. My reasoning is that any set of data for a DUT should
give the same results even if the data sets are not related time-wise.
Granted, there are effects caused by differing environmental
conditions and drift, but these can be calculated out. The only thing
that would shoot a big hole in this is a repeatable difference between
alternate cycles.
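To make the "subset gives similar results" observation concrete, here is a small sketch (my own illustration, not from tvb's paper): it simulates a white-FM source, computes the overlapping Allan deviation from the full phase record and from a quarter of it, and the two estimates agree to within statistical scatter. The `adev` helper and the noise model are assumptions for the sake of the demo.

```python
import math
import random

def adev(phase, tau0, m):
    """Overlapping Allan deviation at averaging time m*tau0,
    from a list of phase samples (seconds) spaced tau0 apart."""
    d = [phase[i + 2*m] - 2*phase[i + m] + phase[i]
         for i in range(len(phase) - 2*m)]
    avar = sum(v*v for v in d) / (2 * (m * tau0)**2 * len(d))
    return math.sqrt(avar)

random.seed(1)
tau0 = 1.0
sigma = 1e-11  # per-sample fractional frequency jitter (white FM)

# Phase is the running sum of the frequency offsets times tau0.
freq = [random.gauss(0, sigma) for _ in range(20000)]
phase = [0.0]
for f in freq:
    phase.append(phase[-1] + f * tau0)

full = adev(phase, tau0, 10)          # whole record
subset = adev(phase[:5000], tau0, 10) # only a quarter of the record
print(full, subset)
```

For white FM the theoretical value at tau = 10 s is sigma/sqrt(10), and both estimates land close to it; the shorter record just has more scatter because it averages fewer differences.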
So why am I saying this? From what I have read on this group and on
the web, I have been left with the impression that it is vital to
capture every event over a sampling period to ensure an accurate
measurement. This requires equipment capable of time-stamping each
event, or techniques such as the picket fence, because most counters
cannot reset in time to measure the next period of the input. At this
stage I cannot see why it is not possible to just measure one cycle,
let the counter/timer reset, and then measure the next full cycle that
follows. Agreed, this means alternate cycles are lost (assuming the
counter/timer can reset within the space of one cycle), but the
measurement could still collect the same number of data points; it
would just take twice as long. In fact, it might be possible to make
the counter/timer measure alternate cycles on the opposite
transitions, reducing the total measurement time to just one and a
half times the 'normal' time. To check for any problem related to
alternate cycles, the measurement system could collect two data sets
with a single cycle skipped between each set.
The difference is that the data set would consist of measurements of
individual, non-sequential cycles rather than a history of the start
times of each cycle.
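The skip-one-cycle scheme can be checked numerically, at least for the simplest case. The sketch below (my own, hypothetical setup) simulates period measurements with purely white jitter and compares the full sequence against the decimated every-other-cycle sequence a slow counter would deliver: for uncorrelated noise the two sets have the same statistics. The caveat is that this holds for white noise only; measurements with dead time are known to bias the Allan deviation for other noise types, so the comparison below should not be read as a general proof.

```python
import random
import statistics

random.seed(2)
nominal = 1.0   # nominal period of the DUT, seconds (assumed)
jitter = 1e-9   # white period-to-period jitter, seconds (assumed)

# Every cycle, as an ideal zero-dead-time counter would record it.
periods = [nominal + random.gauss(0, jitter) for _ in range(10000)]

# What a counter that skips one cycle to re-arm would record instead.
every_other = periods[::2]

# For uncorrelated jitter, decimation leaves the sample statistics unchanged.
print(statistics.stdev(periods), statistics.stdev(every_other))
```

The decimated run takes twice as long to gather the same number of points, which matches the argument above; the open question is how the missing cycles affect correlated (flicker, random-walk) noise, where dead-time corrections come into play.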
So the short version is: does the data stream really have to consist
of sequential samples, or is this just a statistical matter, so that
for the same size of data set the results should be similar?
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet