[time-nuts] seeking a time/clock software architecture

Jim Lux jimlux at earthlink.net
Fri Sep 23 22:13:04 UTC 2011

On 9/23/11 2:00 PM, Chris Howard wrote:
> Seems like a lot of unknowns. You would have to
> have sensors monitoring the sensors.

I think the "clock model" (insofar as variations in the oscillator go) 
is outside the scope, as long as the effect of that variation can be 
represented cleanly.

For example, with a simple 2 term linear model t = clock/rate + offset, 
you can describe the *effect* of a rate, and if the rate changes, the 
model changes.  As long as you keep track of the rates and offsets 
you've used in the past, you can reconstruct what "clock" was for any t 
or vice versa.
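A minimal sketch of that bookkeeping, assuming a simple segment history 
(the class and field names here are illustrative, not part of any real 
API):

```python
# Two-term model t = clock/rate + offset, keeping a history of
# (start_clock, rate, offset) segments so that past counter readings
# can still be converted after the model parameters change.

class ClockModel:
    def __init__(self):
        # each segment: (start_clock, rate, offset); the segment in
        # effect is the latest one whose start_clock <= clock
        self.segments = [(0.0, 1.0, 0.0)]

    def update(self, start_clock, rate, offset):
        """Record a new rate/offset taking effect at start_clock."""
        self.segments.append((start_clock, rate, offset))

    def clock_to_time(self, clock):
        """Apply t = clock/rate + offset using the segment in effect."""
        for start, rate, offset in reversed(self.segments):
            if clock >= start:
                return clock / rate + offset
        raise ValueError("clock predates model history")

    def time_to_clock(self, t):
        """Invert the model: clock = (t - offset) * rate."""
        for start, rate, offset in reversed(self.segments):
            clock = (t - offset) * rate
            if clock >= start:
                return clock
        raise ValueError("time predates model history")
```

With the history kept, a reading taken under an old rate still converts 
through the segment that was in effect when it was taken.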

A clock model predictor might use all those factors to better estimate 
the rate.  Having a high order polynomial model might let you avoid 
updating the model parameters as often.  That's a tradeoff the user 
could make: do I use a 2 or 3 term clock-to-time transformation and 
update it once a minute, or do I use a 20 term transformation and 
update it once a month?
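The n-term transformation is just a polynomial in the raw counter 
value, so the tradeoff can be sketched like this (the coefficients 
below are illustrative, not measured from any real oscillator):

```python
# Evaluate t = c0 + c1*clock + c2*clock**2 + ... via Horner's method.
# More terms can track drift and aging for longer between updates.

def clock_to_time(clock, coeffs):
    """coeffs[i] is the coefficient of clock**i."""
    t = 0.0
    for c in reversed(coeffs):
        t = t * clock + c
    return t

# two-term (offset + rate) vs. three-term (adds a drift term)
two_term = [0.5, 1.000001]
three_term = [0.5, 1.000001, 1e-15]
```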

> Do you lose too much by just maintaining a lifetime worst-case number, or
> maybe some kind of probability function?

Certainly one cannot do just a worst-case number.  Consider that you 
have two endpoints that need to be synchronized within 1 millisecond.  
This requires that the clocks at each end have known rate/offset to an 
accuracy of around 1 ppm over a 1000 second time span.  Assuming that 
you have some magic means to measure this, you'd like to have a 
standard way to describe the rate and offset (so that you don't have as 
many formats as you do endpoints).
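The 1 ppm figure is just the divergence budget divided by the span:

```python
# To stay within 1 ms of a peer over 1000 s with no intermediate
# corrections, the rate error must satisfy rate_error * span <= budget.
budget = 1e-3      # seconds of allowed divergence
span = 1000.0      # seconds between synchronizations
rate_error = budget / span   # 1e-6, i.e. 1 ppm
```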

OK, so if you wanted an output from your Time API that gave you an 
"estimated uncertainty of time" (think of the accuracy estimates from 
GPS receivers), what would that look like?

Do you give a 1 sigma number?  What about bounding values?  (e.g. the 
API returns "the time is 12:21:03.6315, standard deviation of 1.5 
milliseconds, but guaranteed in the range 12:21:03 to 12:21:04")
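One hypothetical shape for such a return value, carrying both the 
statistical and the guaranteed figures (all field names are made up for 
illustration):

```python
# A time estimate with a 1-sigma uncertainty plus hard bounds.
from dataclasses import dataclass

@dataclass
class TimeEstimate:
    time: float    # point estimate, seconds since some agreed epoch
    sigma: float   # 1-sigma uncertainty, seconds
    lower: float   # guaranteed lower bound, seconds
    upper: float   # guaranteed upper bound, seconds

    def contains(self, t):
        """True if t lies within the guaranteed bounds."""
        return self.lower <= t <= self.upper
```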

I would expect that a fancy implementation might return different 
uncertainties for different times in the future (e.g. I might say that 
I can schedule something with an accuracy of 1 millisecond in the next 
10 minutes, but only within 30 milliseconds when it's 24 hours away).
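One way that could surface in the API is to take the prediction horizon 
as an argument; the linear growth model below is purely illustrative 
(real growth depends on the oscillator's noise processes, which are out 
of scope here):

```python
# Hypothetical: 1-sigma uncertainty as a function of how far ahead
# you are asking about, growing from the current figure.

def predicted_sigma(horizon_s, sigma_now=1e-3, growth_per_s=3e-7):
    """Return the 1-sigma uncertainty (s) at horizon_s seconds ahead."""
    return sigma_now + growth_per_s * horizon_s
```

With these illustrative numbers you get roughly 1 ms at a 10 minute 
horizon and tens of milliseconds at 24 hours, matching the example 
above.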

The mechanics of how one might come up with this uncertainty estimate 
are out of scope, but the semantics and format of how one reports it are 
in scope for the architecture.
