[time-nuts] WWV receivers?

jimlux jimlux at earthlink.net
Sat Oct 29 12:32:01 EDT 2016


On 10/28/16 11:01 PM, Hal Murray wrote:
>
> nsayer at kfu.com said:
>> That single-chip version is going to have a *LOT* less (and less variable)
>> latency than an SDR.
>
> Latency isn't an issue as long as it is known so that you can correct for it.
>
> Has anybody measured the jitter through SDR and/or tried to reduce it?  I'd
> expect that even if you counted cycles and such there would still be jitter
> from not being able to reproduce cache misses and interrupts.
>

Since I make my living doing SDRs with very stringent output vs input 
timing requirements --

This is a little bit like discussing "what is hard real-time software".

First, distinguish between an SDR where the signal processing chain is
implemented in a reprogrammable FPGA (hence, "soft") and one where the
signal processing chain is implemented in a general-purpose processor
(e.g. gnuradio).

Getting low jitter in an FPGA implementation isn't hard, down to the
basic timing resolution of the FPGA: the clock rate, etc.  That raises
the question of jitter relative to what: if I feed in a highly accurate
1 pps and use that as the sampling clock, then the jitter relative to
the 1 pps will be quite small (if not zero).  If instead I register that
1 pps with a 10 MHz clock that is asynchronous to the 1 pps, then I've
bumped the jitter up to 100 ns.  There has been plenty of discussion on
this list about hardware-level jitter, which is what FPGA
implementations really are.
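
To put rough numbers on the asynchronous case: the 1 pps edge can land
anywhere within one period of the 10 MHz clock, so the sampling error
is (roughly) uniform over that period:

	peak-to-peak jitter = 1 / 10 MHz        = 100 ns
	RMS jitter          = 100 ns / sqrt(12) ~  29 ns

(that's just the quantization term; real parts add their own aperture
and routing jitter on top).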


In the more traditional software case (signal processing running on a
CPU) it depends a lot on your design architecture: if your hardware
time-stamps the samples, and the "output" side of the system has
hardware that can act on those time stamps, then you can have a system
with large latency but small jitter.  You put some FIFOs in the system
to absorb the variation in processing time, and the latency is set by
hardware tick in to hardware tick out.
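
A minimal sketch of that timestamp-plus-FIFO idea, in C.  The names,
FIFO depth, and fixed pipeline delay are made up for illustration; a
real system would latch the timestamps in capture hardware and release
samples through a hardware-timed output (DMA, output compare, etc.):

	#include <stdint.h>

	#define FIFO_DEPTH     1024     /* power of two, sized for worst-case delay */
	#define PIPELINE_TICKS 100000   /* fixed in-to-out latency, in hardware ticks */

	struct stamped_sample {
	    uint64_t capture_tick;      /* hardware timestamp latched at the ADC */
	    float    value;             /* processed sample */
	};

	static struct stamped_sample fifo[FIFO_DEPTH];
	static unsigned head, tail;

	/* Producer: the software processing chain pushes a processed sample,
	   carrying along the hardware capture timestamp. */
	void push_sample(uint64_t capture_tick, float value)
	{
	    fifo[head & (FIFO_DEPTH - 1)] =
	        (struct stamped_sample){ capture_tick, value };
	    head++;
	}

	/* Consumer: called by the output hardware (or its driver) with the
	   current hardware tick.  A sample is emitted only when its release
	   time (capture + fixed pipeline delay) has arrived, so the in-to-out
	   latency is constant even though software processing time varies. */
	int pop_if_due(uint64_t now_tick, float *out)
	{
	    if (head == tail)
	        return 0;                           /* FIFO empty */
	    struct stamped_sample *s = &fifo[tail & (FIFO_DEPTH - 1)];
	    if (now_tick < s->capture_tick + PIPELINE_TICKS)
	        return 0;                           /* not due yet */
	    *out = s->value;
	    tail++;
	    return 1;
	}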

If you're talking about something like gnuradio, it has almost no
provision for latency control: it's a non-deterministic processing
chain where data is passed from block to block like a signal flow
diagram.  What control there is is limited by the host OS being able to
"keep up".  Implementing closed-loop systems in gnuradio is very tricky
as a result; likewise, implementing something like a coherent
transponder (where the output signal has to have a precise and
consistent phase relationship with the input signal) with gnuradio
would be tough.

Somewhere in between are various "real time" implementation approaches:
if your processor has a periodic interrupt capability, and the CPU
clock is driven from the same clock as everything else (e.g. it's a soft
core in an FPGA), then you're in the "hardware controlled" area.  If I
get an interrupt every millisecond, counted down from my 100 MHz master
clock, then I can make a pretty low-jitter system at a 1 kHz sample
rate - most processors have fairly deterministic interrupt handling
time if the interrupt is high enough priority.  And you keep your
interrupt code very simple and short:

my_isr:
	load register from ValueFromMemory   ; value precomputed by the main loop
	output register to I/O address       ; single store to the output port
	return from interrupt

You might also need to do things like hold the value in a register all
the time (so you don't get a variable time in the fetch).
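
As a rough sketch of the same thing in C, for a hypothetical
memory-mapped output port (the register name, address, and ISR hookup
are made up; a real part would use the vendor's headers and interrupt
attributes):

	#include <stdint.h>

	/* Hypothetical memory-mapped output register (placeholder address). */
	#define DAC_OUT (*(volatile uint32_t *)0x40001000u)

	/* Value precomputed by the background processing loop; the ISR only
	   copies it out, so the interrupt-to-output time stays nearly
	   constant. */
	static volatile uint32_t next_output;

	/* Periodic timer ISR, fired every 1 ms counted down from the master
	   clock. */
	void timer_isr(void)
	{
	    DAC_OUT = next_output;  /* one load, one store: short and deterministic */
	    /* clearing the timer interrupt flag is part-specific and omitted */
	}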

Would this work on a modern x86 with pipelines, speculative execution,
multithreaded CPUs, etc.?  I don't know enough about how the interrupt
logic works - is code executing in an ISR running in a mode where the
CPU has all the fancy bells and whistles turned off?





