[time-nuts] Low-long-term-drift clock for board level integration?

Hal Murray hmurray at megapathdsl.net
Wed Feb 22 00:15:36 UTC 2012

albertson.chris at gmail.com said:
> Even with a very minimal local Ethernet where the path is the same for every
> packet you still have variable timing.   There are queues and buffers.  Also
> it is not so easy to measure the time a network packet arrives at a
> computer.

> The serial port is the best hardware for timing.  The DCD pin is directly
> tied to a hardware interrupt that has as low a latency as you will find on
> the PC.  There is nothing like this hardware interrupt in the Ethernet
> controller.

Don't forget cache misses, memory latency, ARP, routing...

I don't know of any first-order differences in the interrupt hardware between 
ethernet and serial.

Many ethernet controllers have an option to batch interrupts (interrupt 
coalescing).  The idea is to reduce CPU overhead.  That will screw things up, 
but it can (usually?) be turned off.

Many OSes have kernel code to capture a time stamp on an interrupt from DCD.  
There is no reason that similar time-stamps couldn't be added to ethernet 
packets.  I'm out of touch with that area.  Somebody has probably done it 
already.  (Seems like a good thesis topic so it's probably been done many 
times if it isn't already included in the mainline code for one of the major 
open source OSes.)

If nothing else, it would help tcpdump/wireshark.  From:
struct sk_buff {
	struct timeval		stamp;
Here we record the timestamp for the packet, either when it arrived or when 
it was sent. Calculating this is somewhat expensive, so this value is only 
recorded if necessary. When something happens that requires that we start 
recording timestamps, net_enable_timestamp() is called. If that need goes 
away, net_disable_timestamp() is called.

Timestamps are mostly used by packet sniffers.  But they are also used to 
implement certain socket options, and some netfilter modules make use of 
this value as well.

gpsd uses a Linux-specific ioctl (TIOCMIWAIT) to get a wakeup on a modem 
signal change.  It should be possible to write a simple program to collect a 
bunch of PPS data and compare the user mode wakeup path with the kernel time 
stamps.

> So you add the uncertain timing because of queues to the inability to
> accurately determine when the data packet arrives and you are stuck at the
> millisecond level

I'm not sure what you mean by "millisecond level".

Play with ping a bit.  I get things like this between systems on a local lan.
  25 packets transmitted, 25 received, 0% packet loss, time 23998ms
  rtt min/avg/max/mdev = 0.136/0.202/0.273/0.044 ms

That's ms level if my only choices are ms vs us, but it's enough better than 
ms to be interesting for this discussion where one of the choices was 0.1 ms.

These are my opinions, not necessarily my employer's.  I hate spam.
