[time-nuts] Question about frequency counter testing

Magnus Danielson magnus at rubidium.dyndns.org
Sun May 27 12:58:56 EDT 2018

Hi Oleg,

On 05/27/2018 05:52 PM, Oleg Skydan wrote:
> Hi!
>>> It looks like the proposed method of decimation can be
>>> efficiently realized on the current HW.
> I had some free time yesterday and today, so I decided to test the new
> algorithms on the real hardware (the HW is still an old "ugly
> construction" one, but I hope I will have some time to make normal HW -
> I have already got almost all components I need).
> I had to modify the original decimation scheme you propose in the paper,
> so it better fits my HW, also the calculation precision and speed should
> be higher now.

The point of the decimation scheme I presented was to provide a toolbox,
and as long as you respect the rules within that toolbox you can adapt
it just as you like. As long as the sums C and D come out correct, your
path to them can be whatever you choose.

> The nice side effect - I do not need to care about phase
> unwrapping anymore.

You should always care about how that works out, and if you play your
cards right, it works out very smoothly.

> I can prepare a short description of the
> modifications and post it here, if it is interesting.

Yes please do, then I can double check it.

> It works like a charm!

Good. :)

> The new algorithm (based on C and D sums calculation and decimation) uses
> much less memory (less than 256KB for any gating time/sampling speed;
> the old one (direct LR calculation) was very memory hungry - it used
> 4xSampling_Rate bytes/s - 20MB per second of the gate time for 5MSPS).

This is one of the benefits of the approach. Assuming the same tau0, it
is all contained in the C, D and N triplet, and the memory need of these
values can be trivially analyzed; it is very small, so it's a really
effective decimation technique while maintaining the least-squares
properties.

> Now I can fit all data into the internal memory and have a single chip
> digital part of the frequency counter, well, almost single chip ;) The
> timestamping speed has increased and is limited now by the bus/bus
> matrix switch/DMA unit at a bit more than 24MSPS with continuous real
> time data processing. It looks like it is the limit for the used chip (I
> expected a bit higher numbers).

Yeah, now you can move your hardware focus to interpolation techniques
beyond the processing power of the least-squares estimation, which
integrates noise way down.

> The calculation speed is also much higher now (approx 23ns per one
> timestamp, so up to 43MSPS can be processed in realtime).

Just to indicate that my claim for "High speed" is not completely wrong.

For each time-stamp, the pseudo-code becomes:

C = C + x_0
D = D + n*x_0
n = n + 1

Whenever n reaches N, C and D are output, and the values C, D and n are
set to 0.

However, this may be varied in several fun ways, which is left as an
exercise for the implementer. Much of the other complexity is gone, so
this is the fun problem.

> I plan to stay at 20MSPS rate or 10MSPS with the
> double time resolution (1.25ns). It will leave plenty of CPU time for
> the UI/communication/GPS/statistics stuff.

Sounds like a good plan.

> I will probably throw out the power hungry and expensive SDRAM chip or
> use much smaller one :).

Yeah, you would only need much memory if you build multi-tau PDEV plots;
other than that it is just buffer memory to hold data before it goes to
off-board processing, at which point you would need to convey the C, D,
N and tau0 values.

> I have some plans to experiment with doubling the one-shot resolution
> down to 1.25ns. I don't see much benefit from it, but it can be done with
> just a piece of coax and a couple of resistors, so it is interesting to
> try :).

Please report on that progress! Sounds fun!
