[time-nuts] Digital Mixing with a BeagleBone Black and D Flip Flop

Simon Marsh subscriptions at burble.com
Wed Oct 15 11:33:21 EDT 2014


I'm merely implementing a poor man's copy of the ideas in the White 
Rabbit project, so thank you for taking the time to post.

On 15/10/2014 14:27, Javier Serrano wrote:


> Do you have a precise idea of what the offset in frequency is between 
> your DUT(s) and the slightly-offset oscillator? If that offset is too 
> big compared with the jitter of your clock signals and your 
> flip-flops, that would explain why you see no glitches. 

At the moment I'm very simply using an 'ebay standard' micro crystal 
OCXO as the offset oscillator, which can tune to about +/-66 Hz of 10 MHz. 
My DUT is then a 10 MHz TCXO. These are not time-nut standard by any 
means, so I do expect to get glitches, and lots of them. The lack of 
glitches (even down to a 5 Hz beat note) indicates a problem, and it's a 
very reasonable assumption that this is down to my setup. The next steps 
will be to clean up my hardware and see how it goes.
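(For anyone following along, the beat note and the time magnification fall 
straight out of the two frequencies. A minimal Python sketch; the 66 Hz 
offset is just the example figure from my setup above:)

```python
# DDMTD arithmetic: sampling the DUT clock with a flip-flop clocked
# from a helper oscillator offset slightly in frequency produces a
# low-frequency beat note, magnifying phase differences in time.

def beat_note(f_dut, f_helper):
    """Beat frequency seen at the flip-flop output (Hz)."""
    return abs(f_dut - f_helper)

def magnification(f_dut, f_helper):
    """Time magnification factor: phase shifts at the DUT appear
    stretched by this factor at the beat-note output."""
    return f_dut / abs(f_dut - f_helper)

f_dut = 10e6             # 10 MHz TCXO under test
f_helper = 10e6 - 66.0   # helper OCXO tuned 66 Hz low (example value)

print(beat_note(f_dut, f_helper))      # beat note in Hz
print(magnification(f_dut, f_helper))  # time magnification factor
```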

As an aside, whilst I clean up the hardware, Mr Postman should have time 
to deliver something a bit more time-nuttery to play with :)

> You should indeed use a synchronizer made of a chain of FFs of length 
> at least two. You should see the typical glitch pattern after the 
> first FF and also after the second, i.e. what you should see in an 
> oscilloscope should pretty much look the same for both FF outputs. 
> Only in the very infrequent cases where you hit the metastability 
> window of the first FF there should be a difference between what you 
> see after the first FF and after the second one (except of course for 
> the fixed one cycle latency).

I'm thinking that with a discrete 74AC74 part rather than an FPGA, it's 
going to be much more frequent (perhaps even a certainty?) that I'll hit 
the metastability window for an extended period. Ultimately I suspect 
this might be where the limit is for my approach, and where Bob D will 
get more accuracy with the FPGA approach.
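(To get a rough feel for why the second synchronizer stage matters, the 
usual metastability MTBF formula can be plugged with numbers. The tau and 
T0 values below are illustrative guesses only, not measured 74AC74 
figures:)

```python
import math

def metastability_mtbf(t_resolve, tau, t0, f_clk, f_data):
    """Mean time between metastable failures for one flip-flop:
    MTBF = exp(t_resolve / tau) / (t0 * f_clk * f_data)."""
    return math.exp(t_resolve / tau) / (t0 * f_clk * f_data)

# Illustrative parameters only -- NOT real 74AC74 datasheet values.
tau = 0.5e-9      # resolution time constant (s), assumed
t0 = 1e-9         # metastability window parameter (s), assumed
f_clk = 10e6      # helper clock (Hz)
f_data = 10e6     # DUT clock (Hz)

# One flip-flop gets a full 100 ns clock period to resolve; a second
# synchronizer stage effectively grants another full period.
one_stage = metastability_mtbf(100e-9, tau, t0, f_clk, f_data)
two_stage = metastability_mtbf(200e-9, tau, t0, f_clk, f_data)
print(one_stage, two_stage)  # MTBF grows exponentially with resolve time
```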

> As you can see in Tomasz's dissertation [1], there was not a lot of 
> investigation on optimal strategies for DDMTD noise. The precision at 
> the time was deemed more than adequate. It is very timely that you 
> bring up this subject now, because I hope to start looking at ways to 
> optimize phase noise in WR in the coming months, and noise coming from 
> the DDMTD phase detector is definitely something I want to look at. I 
> will be very interested in your ideas and findings regarding optimal 
> strategies for the de-glitcher.

I'm quite unlikely to come up with anything that an undergrad and a few 
hours couldn't think up. However, thank you for linking to the paper; 
for some reason I'd missed it in my Googling.

On the minus side, the paper confirmed that I had completely 
misunderstood the 'zero count algorithm' as it had been described in the 
short summary I'd seen. On the plus side, I'd already been playing 
around with something similar to the 'bit value median' that is really 
being used, so it's good to know I was actually on the right track. (For 
completeness, what I was actually doing was adding zeros and subtracting 
ones, then taking the edge at the point of maximum value).
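(In code, the scheme I was playing with looks something like the sketch 
below. This is only my own rough interpretation in Python, not the actual 
White Rabbit de-glitcher:)

```python
def deglitch_edge(samples):
    """Locate a rising edge in a glitchy 0->1 bitstream by scoring:
    add 1 for each 0, subtract 1 for each 1, and take the position
    of the maximum running score as the edge."""
    best_score = score = 0
    edge = 0
    for i, bit in enumerate(samples):
        score += 1 if bit == 0 else -1
        if score > best_score:
            best_score = score
            edge = i + 1   # edge sits just after the last scored sample
    return edge

# Glitchy transition region: mostly 0s, a few stray 1s, then mostly 1s.
samples = [0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1]
print(deglitch_edge(samples))  # -> 6
```

The running score peaks where the zeros-dominated region ends, so isolated 
glitches on either side of the true edge only dent the score rather than 
moving the maximum.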
