[time-nuts] Re: UTC - A Cautionary Tale

M. Warner Losh imp at bsdimp.com
Mon Jul 18 16:36:34 EDT 2005


In message: <913A392A-7DF4-47C9-8B1E-AA338482407D at noao.edu>
            Rob Seaman <seaman at noao.edu> writes:
: Your program could have been layered on TAI.

Layering the program on TAI is likely a non-starter.  Since the
cellular networks use UTC, he'd still need to know about leap seconds.
There's no way around that requirement.
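A rough Python sketch of why: even a TAI-based program needs a leap
second table the moment it touches a UTC timestamp from the network.
The table below is only a two-entry excerpt and the function name is
just for illustration:

    # Even with internal timekeeping in TAI, converting a UTC timestamp from
    # the cellular network requires the cumulative leap second count.
    LEAP_TABLE = [            # (first UTC date affected, TAI-UTC in seconds)
        ("1999-01-01", 32),   # excerpt only; a real table needs every entry
        ("2006-01-01", 33),
    ]

    def tai_minus_utc(utc_date):
        """Return TAI-UTC in seconds for an ISO date string, per the excerpt above."""
        offset = None
        for start, delta in LEAP_TABLE:
            if utc_date >= start:   # ISO date strings compare correctly as text
                offset = delta
        if offset is None:
            raise ValueError("date precedes the table")
        return offset

    print(tai_minus_utc("2005-07-18"))   # -> 32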

: Your program could have referenced one of the online sources of leap  
: second tables (ftp://maia.usno.navy.mil/ser7/tai-utc.dat).

This only works if there's an internet connection.  Often there is,
but not always.
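When there is a connection, a locally cached copy of that file is easy
enough to use.  A hedged Python sketch (the field handling is assumed
from the file's historical fixed layout; check it against a current
copy before relying on it):

    def parse_tai_utc(path="tai-utc.dat"):
        """Return a list of (Julian Date of step, TAI-UTC in seconds) entries."""
        entries = []
        with open(path) as f:
            for line in f:
                if "TAI-UTC=" not in line:
                    continue
                jd = float(line.split("=JD")[1].split()[0])            # step date, JD
                offset = float(line.split("TAI-UTC=")[1].split()[0])   # seconds
                entries.append((jd, offset))
        return entries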

: There is interval time and there is time-of-day.  There are a number  
: of other timescales as well.  What is difficult to forecast is the  
: relationship between TAI and UT1.  The IERS and such folks do a very  
: good job of this actually, but over the long term it is simply not  
: deterministic.

Over the short term we do very well.  We know to within a few tens of
milliseconds what the difference will be by the end of the year.
Predicting where we'll be in 10 years is harder, but we can likely do
it to within a second.  The further out one goes, the harder it gets.
But long term we know the divergence grows at least quadratically, so
no matter what length of second had been picked in the 1960s, we'd
have this problem.
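A back-of-the-envelope sketch of that quadratic, in Python, assuming
only the long-term tidal term of very roughly 1.7 ms of day length
gained per century (real Earth rotation is far noisier than this):

    RATE = 1.7e-3            # assumed: seconds of excess day length per century
    DAYS_PER_CENTURY = 36525

    def accumulated_offset(centuries):
        """Seconds of accumulated divergence, tidal term only."""
        # excess day length grows linearly, so its sum over days is quadratic
        return 0.5 * RATE * DAYS_PER_CENTURY * centuries ** 2

    for c in (1, 2, 5):
        print(c, "centuries ->", round(accumulated_offset(c)), "seconds")
    # roughly 31, 124, 776 seconds: quadratic no matter where the SI
    # second was pinned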

: What is missing here is a  
: coherent leap second scheduling algorithm.  The IERS chooses each  
: leap second manually by some committee vote - they should adopt and  
: use a specific strategy instead.  Minimizing |UTC-UT1| is a good  
: start for developing such a strategy.

If our models get good enough, one would hope that leap second
scheduling could be done further in advance than 6 months.  One
approach is to say 'looking 10 years out, our best guess is that
there will be 14 leap seconds, so we'll have them here, here, here,
etc'.  Such an approach would require the DUT1 tolerance to grow, but
it wouldn't be unbounded.  I don't think the state of the art is at
that point yet, however.

Another strategy would be to accept a slightly larger error, but
schedule things out a few years based on the best models.  It all
depends on what you want to optimize for: predictability of leap
seconds, or minimizing |UTC-UT1|.  Both have their supporters.
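A toy Python sketch of the 'schedule years ahead' idea, assuming a
purely linear drift prediction.  Both numbers in the example call are
made up for illustration, and the predicted |UTC-UT1| can overshoot
today's 0.9 s limit before each step, which is exactly the
'tolerance would have to grow' point above:

    def schedule_leaps(drift_per_year, years=10, start_year=2005, tolerance=0.9):
        """Place leap seconds at June/December slots under a linear drift model."""
        dut1 = 0.0
        slots = []
        for half in range(1, years * 2 + 1):
            dut1 += drift_per_year / 2.0            # predicted drift this half-year
            if abs(dut1) > tolerance:               # about to exceed the bound...
                year = start_year + (half - 1) // 2
                month = 6 if half % 2 == 1 else 12
                slots.append((year, month))
                dut1 -= 1.0                         # ...so step UTC by one second
        return slots

    # e.g. a steady 0.6 s/year of predicted drift gives six slots in a decade:
    print(schedule_leaps(0.6))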

The problem with more frequent steering is that the warning we get
may be too short for the leap second information to propagate to
everything that needs it.  A number of ways exist to solve this
problem, but any proposal that increases the number of opportunities
for a leap second, and might reduce the warning we get for one, needs
to address this issue.  Also, although leap seconds can happen at the
end of any month, there's a lot of hardware/software that 'knows' a
leap can only happen in June or December.  That hardware would need
to be replaced or fixed, so a long lead time would be necessary
before implementing such a change.

: By attempting to ignore an intrinsic reality, we are making such  
: issues more likely, not less.  How about an extension to ISO 8601  
: that would permit distinguishing timescales, something like:
: 
:      2005-07-18T12:34:56Z (UTC)
:      2005-07-18T12:35:28A (TAI - same instant)
: 
: Multiple timescales will always exist.  We should acknowledge that  
: fact and move on.

The reason that 'Z' is used for UTC is that A through Y (skipping J)
are used for all the other time zones on the planet (well, all the
ones that fall on hour boundaries), so 'Y' (UTC-12) isn't actually
free for TAI either.
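For what it's worth, here is the quoted proposal written out as a tiny
Python snippet.  The 32 s offset is the TAI-UTC value in effect during
July 2005, and the trailing 'A' is purely the hypothetical suffix from
the quote:

    from datetime import datetime, timedelta

    TAI_MINUS_UTC = 32    # seconds; value in effect 1999-01-01 through 2005-12-31

    utc = datetime(2005, 7, 18, 12, 34, 56)
    tai = utc + timedelta(seconds=TAI_MINUS_UTC)

    print(utc.isoformat() + "Z")    # 2005-07-18T12:34:56Z  (UTC)
    print(tai.isoformat() + "A")    # 2005-07-18T12:35:28A  (TAI, hypothetical suffix)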

: > leap hours are 3,600 times more absurd!
: 
: Common ground has been reached.

Indeed.

Warner
