[time-nuts] GPSDO recommendation

David I. Emery die at dieconsulting.com
Thu Oct 20 08:05:40 UTC 2011


On Wed, Oct 12, 2011 at 07:33:29AM -0400, Peter Gottlieb wrote:
> I would like to get better than 100 uSec so I can get a couple of degrees 
> resolution on a synchrophasor project.

	Once one gets into that region with OSes, one gets into a kind
of relativity...
	
	It all depends on what you intend to do with the time of day... 
and where.

	Do you intend to schedule events with some required absolute
time accuracy as to when they occur?  Or time-stamp an event with the
time it occurred?  Or read something that purports to be the current
absolute time of day in the midst of some code path?  Or do one of
several other time-related things...

	All involve fairly deep questions about the assumptions you
make about software, processor, and system hardware behavior.

	Modern processors with multiple pipelines, multiple cores, and
(especially) multi-level-cache memory systems have layers and layers of
not-all-that-deterministic behavior which introduce microsecond-level
jitter between the time anything happens and the time the system
reacts to it...

	And that is usually dwarfed by the various thread and process
scheduling and interrupt processing latencies in almost any OS - some
versions, properly configured, being much more hard real time than
others.

	Most OSes can most readily schedule something on a particular
tick of a multi-kHz real-time clock interrupt stream - one usually
derived (with normal hardware) from a not very accurate or terribly
stable crystal.  Whether that something actually happens close to the
time of that clock tick depends very much on thread priorities (usually
set by the user as well as by the kernel), CPU load, interrupt
activity, and how well tuned that particular kernel is with respect to
minimizing long lockouts due to critical regions in kernel code or
contention for locks and resources.  And on top of this there may be
contention and lockouts in the actual hardware...
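	A minimal sketch of the sort of measurement involved (assuming
a Linux-style system; the 1 ms period and the loop count here are my
own illustrative choices, not anything specific to the problem above)
that repeatedly asks clock_nanosleep() to wake at an absolute tick and
records how late each wakeup actually is:

    /* tick_jitter.c - measure wakeup lateness relative to requested ticks.
       Build with: cc -O2 tick_jitter.c -o tick_jitter */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec next, now;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < 1000; i++) {
            /* request a wakeup exactly 1 ms after the previous target */
            next.tv_nsec += 1000000;
            if (next.tv_nsec >= 1000000000) {
                next.tv_nsec -= 1000000000;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);
            /* lateness = actual wake time minus requested wake time, in ns */
            long late = (now.tv_sec - next.tv_sec) * 1000000000L
                      + (now.tv_nsec - next.tv_nsec);
            printf("%ld\n", late);
        }
        return 0;
    }

	Run that at default priority on a loaded box and then again
under SCHED_FIFO (e.g. via chrt) and the difference in the tail of the
lateness distribution makes the scheduling-latency point vividly.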

	And almost all CPUs have high-rate time-of-day counters that
can be read by the OS - also typically based on a relatively poor
frequency reference, almost always the same one used for the RTC
interrupts.  Such a counter can be used to establish with high
precision the counter time at which it was read, but the relationship
of this to an external event depends on how the kernel detects the
event and with what priority and latency.  And of course the counter's
time of day is always somewhat off due to the drift of the frequency
reference behind it.
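	On Linux the distinction shows up directly in the clock IDs (a
sketch; CLOCK_MONOTONIC_RAW is Linux-specific):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec raw, real;
        /* CLOCK_MONOTONIC_RAW counts at the rate of the raw local
           oscillator, untouched by NTP corrections; CLOCK_REALTIME is
           the kernel's corrected estimate of real-world time of day. */
        clock_gettime(CLOCK_MONOTONIC_RAW, &raw);
        clock_gettime(CLOCK_REALTIME, &real);
        printf("raw counter:   %lld.%09ld\n", (long long)raw.tv_sec, raw.tv_nsec);
        printf("corrected TOD: %lld.%09ld\n", (long long)real.tv_sec, real.tv_nsec);
        return 0;
    }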

	NTP and various related kernel PPS code attempt to use an
accurate stream of 1 PPS (or whatever) interrupts from a highly stable
and accurate external ticker to measure and predict the behavior of the
drifting, unstable CPU clock, so that it is possible to compute a
running estimate of the offset between the time-of-day counter on the
CPU (and the real-time clock interrupts) and some external idea (from
the 1 PPS) of the real-world time of day.  This allows conversion of a
reading from the time-of-day counter to some notion of real-world
time... and in most OSes the kernel does this for you and returns a
time of day rather than a raw counter reading (or more properly, does
so if asked).
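	On Linux the state of that kernel discipline can be inspected
with ntp_adjtime() - a read-only sketch; per the struct timex
documentation the offset is in microseconds unless STA_NANO is set:

    #include <stdio.h>
    #include <sys/timex.h>

    int main(void)
    {
        struct timex tx = { 0 };        /* modes == 0: just read state */
        int state = ntp_adjtime(&tx);
        printf("clock state : %d (TIME_OK = %d)\n", state, TIME_OK);
        printf("offset      : %ld\n", tx.offset);  /* us, or ns if STA_NANO */
        printf("freq corr   : %.3f ppm\n", tx.freq / 65536.0);
        printf("est. error  : %ld us\n", tx.esterror);
        printf("PPS signal  : %s\n",
               (tx.status & STA_PPSSIGNAL) ? "present" : "absent");
        return 0;
    }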

	And depending on the OS there may also be an ability to attempt
to determine the real-world times at which the regular RTC interrupts
happen, so an event can be scheduled as close to some absolute time of
day as possible.
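	A sketch of that style of absolute scheduling, assuming a POSIX
clock_nanosleep() with TIMER_ABSTIME; how close the wakeup actually
lands to the target is exactly the jitter question:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec target, now;
        clock_gettime(CLOCK_REALTIME, &target);
        target.tv_sec += 1;             /* aim at the next whole second */
        target.tv_nsec = 0;
        /* TIMER_ABSTIME sleeps until an absolute time on the (NTP/PPS
           disciplined) realtime clock; retry if a signal interrupts. */
        while (clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &target, NULL))
            ;
        clock_gettime(CLOCK_REALTIME, &now);
        printf("woke %ld ns after the second boundary\n", now.tv_nsec);
        return 0;
    }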

	However the magnitude of that jitter can be significant, and
its statistics are not always easy to predict...

	And most importantly, if there are multiple sequences of events
occurring, it may not be possible to predict or ensure that events with
a particular time ordering in the outside world appear to have the same
ordering in time to the software... or even the same sequence to
different threads...

	All of which means that using such tools to control or measure
60 Hz (I assume) phase within a degree or so depends very much on
system and software choices - certainly readily possible if done right,
but on a very slow, heavily loaded, or poorly configured platform it is
also quite possible to occasionally see significant transient errors.
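	For scale: one cycle of 60 Hz is 1/60 s, about 16.67 ms, so one
electrical degree corresponds to 16.67 ms / 360, roughly 46 us - inside
the 100 us target above, but not by a huge margin over OS-level jitter.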

	Most folks who have played with measuring kernel time base
performance on modern *nix kernels with PPS sync find that
low-microsecond timing jitter is pretty much the limit... though much
depends on hardware and software implementation details.

	One suspects that if one needs anything down in that
low-microsecond area or below, a much more deterministic FPGA-based
approach might make sense... with software and the OS only configuring,
supervising, and monitoring.


-- 
  Dave Emery N1PRE/AE, die at dieconsulting.com  DIE Consulting, Weston, Mass 02493
"An empty zombie mind with a forlorn barely readable weatherbeaten
'For Rent' sign still vainly flapping outside on the weed encrusted pole - in 
celebration of what could have been, but wasn't and is not to be now either."



