NTP Serial Port Jitter
NTP reference clocks that output their time tick as part of an ASCII data stream suffer from the effects of jitter in the operating system's serial port driver. This page shows the effect of serial port tuning. I talk about Linux first, and then provide some FreeBSD information further down the page.
Linux
The short answer is that under Linux, using the setserial command to set the "low_latency" flag is the right thing to do: setserial /dev/ttyS0 low_latency. I used an HP Z3801A GPS-disciplined oscillator (a telecom packaged unit very similar to the HP 58503A "SmartClock") hooked to a PC running Linux kernel 2.4.21 with the "PPSkit" patches applied, and NTP version 4.2.0. The PC is an Athlon 2200+ and the serial ports are on the motherboard.
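For the curious, here is a rough Python sketch of what that setserial command does under the hood: it sets the ASYNC_LOW_LATENCY bit in the kernel's serial_struct via the TIOCGSERIAL/TIOCSSERIAL ioctls. The ioctl numbers and struct layout below are the common Linux values and should be treated as assumptions; setserial itself is the normal way to do this.

```python
# Sketch: set the low_latency flag on a serial port the way
# "setserial /dev/ttyS0 low_latency" does, via raw ioctls.
# Assumes the usual Linux ioctl numbers and serial_struct layout.
import array
import fcntl
import os

TIOCGSERIAL = 0x541E        # read the kernel's serial_struct
TIOCSSERIAL = 0x541F        # write it back
ASYNC_LOW_LATENCY = 0x2000  # the flag bit setserial's "low_latency" sets

def set_low_latency(fd):
    """Turn on ASYNC_LOW_LATENCY for an already-open serial port fd."""
    # serial_struct begins: type, line, port, irq, flags, ...
    # Over-allocate the buffer so the ioctl has room for the full struct.
    buf = array.array('i', [0] * 64)
    fcntl.ioctl(fd, TIOCGSERIAL, buf)
    buf[4] |= ASYNC_LOW_LATENCY  # flags is the fifth int in the struct
    fcntl.ioctl(fd, TIOCSSERIAL, buf)

if __name__ == "__main__":
    try:
        fd = os.open("/dev/ttyS0", os.O_RDWR | os.O_NOCTTY)
        try:
            set_low_latency(fd)
            print("low_latency set on /dev/ttyS0")
        finally:
            os.close(fd)
    except OSError as e:
        # No serial hardware (or no permission) on this machine.
        print("skipping:", e)
```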
I tested four configurations for about one day at a time, using the offset and jitter data from NTP's "peerstats" log file as the data. The table and graphs below show the performance for each configuration.
Note: These results measure jitter in the serial data stream only. If you are using a "PPS" interface that utilizes a pulse on the DCD or other control signal of a serial port, that doesn't seem to be impacted by any of these settings. Here are some statistics for the PPS offset and stability showing their immunity to the configuration tests done here.
I collected data with the "low_latency" flag both set and unset ("^low_latency").
Because the FIFO buffer in modern UART chips can contribute to jitter, I attempted to disable the FIFO by forcing the kernel to think that the chip was an old 8250 rather than the more modern 16550A. As shown in the results, this didn't make any difference. It's possible that my UART-forcing trick didn't really disable the FIFO, but I have been unable to find any information on how to do so in any other way.
Here are some basic statistics from the ~24 hour run in each configuration. All values are in milliseconds, and the "Range" value is the absolute difference between the minimum and maximum values recorded.
| Config | Mean Offset | Offset Range | Offset StdDev | Mean Jitter | Jitter Range | Jitter StdDev |
| ^low+16550a | -4.7116 | 16.538 | 2.9735 | 4.3352 | 12.924 | 1.1380 |
| ^low+8250 | -4.7559 | 16.133 | 3.0265 | 4.3897 | 14.399 | 1.2271 |
| low+8250 | 0.24129 | 7.5990 | 0.70391 | 0.54408 | 7.5862 | 0.82720 |
| low+16550a | 0.23298 | 7.9120 | 0.71086 | 0.57178 | 7.7006 | 0.81877 |
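If you want to reproduce these statistics from your own peerstats files, a minimal sketch follows. The exact column layout of peerstats can vary by ntpd version, so the field indices here are assumptions, and the sample lines are invented purely to show the format; offsets and jitter are logged in seconds, while the table above is in milliseconds.

```python
# Sketch: compute mean, range, and standard deviation of offset and
# jitter (in ms) from NTP peerstats lines. Column positions are an
# assumption; check your ntpd's statistics documentation.
import statistics

def peerstats_summary(lines, offset_col=4, jitter_col=7):
    """Return ((mean, range, stdev), (mean, range, stdev)) in ms
    for offset and jitter respectively."""
    offsets, jitters = [], []
    for line in lines:
        fields = line.split()
        offsets.append(float(fields[offset_col]) * 1000.0)  # s -> ms
        jitters.append(float(fields[jitter_col]) * 1000.0)

    def stats(vals):
        # "Range" here is max minus min, as in the table above.
        return (statistics.mean(vals),
                max(vals) - min(vals),
                statistics.stdev(vals))

    return stats(offsets), stats(jitters)

# Invented peerstats-style lines; only the offset/jitter columns matter.
sample = [
    "52832 11485.0 127.127.29.0 9714 0.000241 0.0 0.001 0.000544",
    "52832 11549.0 127.127.29.0 9714 0.000233 0.0 0.001 0.000572",
    "52832 11613.0 127.127.29.0 9714 0.000252 0.0 0.001 0.000561",
]
(off_mean, off_range, off_sd), (jit_mean, jit_range, jit_sd) = \
    peerstats_summary(sample)
print(f"offset: mean={off_mean:.3f} ms, range={off_range:.3f} ms")
```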
Beyond the obvious result that the low_latency flag makes a big difference while the UART setting doesn't, you can see two things in this data.
First, the average offset is much larger in magnitude -- about -4.7ms compared with +0.24ms -- when the low_latency flag is not set. That makes sense, considering that the "setserial" documentation indicates that the serial subsystem normally has a 5-10ms delay.
Second, and this is shown much more clearly by the charts below, when the low latency flag is set, the readings cluster very closely to an offset "ceiling" that is never exceeded; the noise is all in individual readings that have significantly increased delay. When the flag is not set, the mean is much closer to the median value and the data looks much more like a triangle waveform.
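That difference in distribution shape shows up in simple statistics, too: one-sided outliers pull the mean away from the median, while a symmetric triangle-like spread keeps them together. The following sketch uses invented numbers, loosely modeled on the figures above, purely to illustrate the point.

```python
# Illustration with invented data: with low_latency, readings hug a
# best-case value and the noise is one-sided delayed outliers, so the
# mean is pulled away from the median. Without it, a roughly symmetric
# triangular spread keeps mean and median close together.
import statistics

# low_latency: most readings near +0.2 ms, a few delayed outliers
low_latency = [0.2] * 95 + [2.0, 3.0, 4.0, 5.0, 6.0]

# ^low_latency: symmetric spread centered around -4.7 ms
no_low_latency = [-4.7 + 0.1 * (i - 50) for i in range(101)]

for name, data in [("low_latency", low_latency),
                   ("^low_latency", no_low_latency)]:
    print(f"{name}: mean={statistics.mean(data):.3f} "
          f"median={statistics.median(data):.3f}")
```

With the one-sided data, the mean (0.39 here) sits well above the median (0.2); with the symmetric data, the two essentially coincide.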
Speaking of graphs, here they are: