[time-nuts] Question about frequency counter testing

Magnus Danielson magnus at rubidium.dyndns.org
Sun May 13 09:36:42 EDT 2018


Hi Oleg,

On 05/13/2018 09:31 AM, Oleg Skydan wrote:
> Hi Magnus!
> 
> From: "Magnus Danielson" <magnus at rubidium.dyndns.org>
>>> The leftmost tau values are skipped and they "stay" inside the counter.
>>> If I set up the counter to generate, let's say, 1 s stamps (ADEV starts
>>> at 1 s), it will internally generate 1/8 s averaged measurements but
>>> export combined data at 1 s stamps. The result will be, strictly
>>> speaking, different, but the difference should be insignificant.
>>
>> What is your motivation for doing this?
> 
> My counter can operate in the usual Pi mode - I get 2.5 ns resolution. I
> am primarily interested in high-frequency signals (not one-shot events),
> and the HW is able to collect and process some millions of timestamps
> continuously. So in Delta or Omega mode I can in theory improve the
> resolution down to several ps (for a 1 s measurement interval). In
> reality the limit will be somewhat higher.

Fair enough.

> So I can compute the classical ADEV (using Pi mode) with a lot of
> counter noise at low tau (it will probably not be very useful due to the
> dominance of counter noise in the leftmost part of the ADEV plot), or
> MDEV (using Delta mode) with much lower counter noise.

Yes, it helps you to suppress noise. As you extend the measurements, you
need to do it properly to maintain the MDEV property.

> I would like to try to use the excess data I have to increase the
> counter resolution in such a manner that ADEV calculation with this
> preprocessing is still possible with acceptable accuracy. After Bob's
> explanations and some additional reading I was almost sure it is not
> possible (and it is so in the general case), but then I saw the
> presentation http://www.rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf
> (E. Rubiola, High resolution time and frequency counters, updated
> version) and saw the inferences on p. 54. They look reasonable, and it
> is just what I wanted to do.

OK, when you do this you really want to filter out the first, lower taus;
but as you get out of the filtered part, or rather, when the dominant
part of the ADEV processing is within the filter bandwidth, the biasing
becomes smaller.

I would be inclined to just continue with the MDEV-compliant processing
instead. If you want the matching ADEV, rescale it using the bias
function, which can be derived from p. 51 of that presentation. You just
need to figure out the dominant noise type in each range of tau,
something which is much simpler with MDEV, since white PM and flicker PM
separate more clearly there than in the weak separation of ADEV.
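
As a rough sketch of that noise-identification step, here is how one
could pick the dominant type from the local log-log slope of MDEV (taus
and mdevs stand for whatever arrays your post-processing produces; the
slope table is the standard set of MDEV asymptotes):

    import numpy as np

    # Asymptotic log-log MDEV slopes for the common power-law noises.
    MDEV_SLOPES = {
        -1.5: "white PM",
        -1.0: "flicker PM",
        -0.5: "white FM",
         0.0: "flicker FM",
        +0.5: "random-walk FM",
    }

    def dominant_noise(taus, mdevs):
        """Fit one log-log slope over a tau range, return nearest noise type."""
        slope = np.polyfit(np.log10(taus), np.log10(mdevs), 1)[0]
        nearest = min(MDEV_SLOPES, key=lambda s: abs(s - slope))
        return MDEV_SLOPES[nearest], slope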

Also, on that page you can see how the system bandwidth f_H affects white
PM and flicker PM for the Allan deviation, but not for the modified Allan
deviation.

>> The mistake is easy to make. Back in the day, it was a given that you
>> should always state the system bandwidth alongside an ADEV plot, a
>> practice that was later lost. Many people do not know what bandwidth
>> they have, or the effect it has on the plot. I've even heard
>> distinguished and knowledgeable people in the field admit to doing it
>> incorrectly.
> 
> That makes sense.
> 
> We can view the problem in the frequency domain. We have DUT, reference
> and instrument (counter) noise. In most cases we are interested in
> suppressing the instrument and reference noise and leaving the DUT
> noise. The reference and the DUT have more or less the same nature of
> noise, so it should not be possible to filter out the reference noise
> without affecting the DUT noise (with simple HW). The counter noise (in
> my case) will look like white noise (at least the noise associated with
> the absence of a HW interpolator). When we process timestamps with
> Omega or Delta data processing we apply a filter, so the correctness of
> the resulting data will depend on the DUT noise characteristics and the
> filter shape. The ADEV calculation at tau > tau0 will also apply some
> sort of filter during decimation, which should also be accounted for
> (because we actually decimate the high-rate timestamp stream to make
> the data points for the following post-processing). Am I right?

As you measure a DUT, the noise of the DUT, the noise of the counter and
the systematics of the counter add up, and we cannot distinguish them in
that measurement. There are measurement setups, such as
cross-correlation, which make multiple measurements in parallel and can
start to combat the noise-separation issue.

For short taus, the systematic noise of quantization will create a 1/tau
limit in ADEV. Reality is more complex than this simple model, but let's
just assume it for the moment; it is sufficient for the time being and
is what most people assume anyway.
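
As a back-of-the-envelope sketch of that limit, assuming independent
timestamp errors, uniform over one quantization step (a simplification,
as said):

    import math

    # 1/tau quantization floor for a simple Pi counter, white-PM model:
    # sigma_x = q / sqrt(12) for uniform quantization of step q, and
    # sigma_y(tau) ~ sqrt(3) * sigma_x / tau.
    q = 2.5e-9                      # single-shot resolution, seconds
    sigma_x = q / math.sqrt(12.0)   # RMS timestamp error, ~0.72 ns
    for tau in (0.1, 1.0, 10.0):
        sigma_y = math.sqrt(3.0) * sigma_x / tau
        print(f"tau = {tau:5.1f} s  ->  ADEV floor ~ {sigma_y:.1e}")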

ADEV, however, does not really do decimation. It combines measurements
to form longer observation times for the frequency estimates, and
subtracts these before squaring, to form the two-sample deviation, which
we call Allan's deviation or the Allan deviation.

ADEV is designed to match how a simple counter's deviation would behave.
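
For reference, the overlapping form of it in Python, computed from phase
(time-error) samples (x and tau0 here are whatever your counter logs;
this is just the textbook second-difference estimator, not your
firmware's code):

    import numpy as np

    def adev(x, tau0, m):
        """Overlapping Allan deviation from phase samples x taken every
        tau0 seconds, at averaging factor m (tau = m * tau0):
        sigma_y^2(tau) = <(x[i+2m] - 2*x[i+m] + x[i])^2> / (2*tau^2)."""
        tau = m * tau0
        x = np.asarray(x, dtype=float)
        d2 = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]   # second differences
        return np.sqrt(np.mean(d2**2) / (2.0 * tau**2))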

> Here is a good illustration of how averaging affects ADEV:
> http://www.leapsecond.com/pages/adev-avg/ . If we drop the leftmost
> part of the ADEV affected by averaging, the resulting averaging effects
> on the ADEV are minimized. They can also be minimized by an optimal
> averaging strategy. The question is what the optimal averaging strategy
> is, and what the well-defined restrictions are on when such
> preprocessing can be applied.

Ehm, no. The optimal averaging strategy for ADEV is to do no averaging.
This is the hard lesson to learn. You can't really cheat if you aim to
get the proper ADEV.

You can use averaging, and it will produce biased values, so you might
use the part with less bias, but there are safer ways of doing that, by
going full MDEV or PDEV instead.
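
For comparison with the ADEV snippet above, the MDEV estimator only adds
an average over m adjacent second differences before squaring; that
extra average is the filtering action (again just the textbook form,
under the same assumptions about x and tau0):

    import numpy as np

    def mdev(x, tau0, m):
        """Modified Allan deviation from phase samples x taken every
        tau0 seconds, at averaging factor m (tau = m * tau0)."""
        tau = m * tau0
        x = np.asarray(x, dtype=float)
        d2 = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]   # second differences
        # average m adjacent second differences, then square and average
        avg = np.convolve(d2, np.ones(m) / m, mode="valid")
        return np.sqrt(np.mean(avg**2) / (2.0 * tau**2))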

With biases, you have something similar to, but not actually, _the_ ADEV.

ITU-T has standardized TDEV measurements where they put requirements on
these things, and there a similar strategy was used: they put
requirements on the number of samples per second and the bandwidth, such
that when tau becomes high enough the bias is tolerably low for the
range of taus being measured.

> If it works I would like to add such a mode for compatibility with the
> widely used post-processing SW (TimeLab is a good example). Of course I
> can do the calculations inside the counter without such limitations,
> but that will be another data-processing option (which might not always
> be suitable).
> 
>> I'm not saying you are necessarily incorrect, but it would be
>> interesting to hear your motivation.
> 
> The end goal is to have a counter mode in which the counter produces
> data suitable for post-processing into ADEV and other similar
> statistics with better resolution (or lower counter noise) than the
> single-shot one (Pi counter). I understand that, if it is possible at
> all, the counter resolution will be degraded compared to the usual
> Omega or Delta mode, and there will also be some limitations on the DUT
> noise for such processing to be applicable.

Like in the ITU-T case, sure, you can use filtering, but one needs to
drop the lower-tau part to approximate the ADEV.

>> Cross-talk exists for sure, but there is also a similar effect which
>> is not due to cross-talk but to how the counter is able to interpolate
>> certain frequencies.
> 
> I have no HW interpolator. A similar problem in the firmware was
> discussed earlier and is now fixed.

From the 2.5 ns single-shot resolution, I deduce a 400 MHz count clock.

>>>> In fact, you can do an Omega-style counter that you can use for
>>>> PDEV, you just need to use the right approach to be able to
>>>> decimate the data. Oh, there's a draft paper on that:
>>>>
>>>> https://arxiv.org/abs/1604.01004
>>>
>>> Thanks for the document. It needs some time to study, and maybe I
>>> will add features to the counter to calculate a correct PDEV.
>>
>> It suggests a very practical method for FPGA-based counters, so that
>> you can make use of the high rate of samples that you have and reduce
>> it in HW before handing off to SW. As you decimate the data, you do
>> not want to lose the least-squares property, and this is a practical
>> method of achieving that.
> 
> I have no FPGA either :) All processing is in the FW; I will see how
> it fits the HW architecture used.
> 
> Doing it all in an FPGA has many benefits, but the HW would be more
> complicated and pricier, with minimal benefit for my main goals.

I don't quite get what you mean by FW here; for me, that is FPGA code.

>> You do not want to mix pre-filtering and ADEV that way. We can do things
>> better.
> 
> Are you talking about PDEV?

MDEV and PDEV are better approaches. They continue the filtering action,
and allow for decimation of data with known filtering properties.
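
The core of the Omega/PDEV processing is just a least-squares line fit
per block; a minimal sketch in Python (t and x being the timestamps and
phase of one block; the decimation trick that preserves the
least-squares property across blocks is what the draft paper above is
about):

    import numpy as np

    def omega_freq_estimate(t, x):
        """Least-squares (Omega-style) frequency estimate for one block:
        fit a line to phase x (s) versus time t (s); the slope is the
        mean fractional frequency offset over the block."""
        slope, _intercept = np.polyfit(t, x, 1)
        return slope

PDEV is then formed from the differences of successive such slopes.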

>>> Here is another question - how do I correctly calculate the
>>> averaging length in a Delta counter? I have 5e6 timestamps in one
>>> second, so the Pi and Omega counters process 5e6 samples in total
>>> and one measurement also has 5e6 samples, but the Delta one
>>> processes 10e6 in total, with each of the averaged measurements
>>> having 5e6 samples. The Delta counter actually uses two times more
>>> data. What should be equal when comparing different counter types -
>>> the number of samples in one measurement (gate time) or the total
>>> number of samples processed?
>>
>> How do you get such different event rates?
>>
>> If you have 5 MHz, the rising edges give you 5e6 events, and which
>> type of processing you do, Pi (none), Delta or Omega, is just a
>> different type of post-processing on the raw phase data set.
> 
> So, if I want to compare "apples to apples" (comparing Delta and
> Omega/Pi processing), a single measurement of the Delta counter should
> use half of the events (2.5e6) and the same number (2.5e6) of
> measurements should be averaged - is that right? The counter in Delta
> mode currently calculates results with 50% overlap, which gives 10e6
> stamps for the 1 s output data rate (the counter averages 2 seconds of
> data).

Do not report overlapping like that. Report it separately from the event
rate you have. So, OK, if you have a basic rate of 2.5e6 events per
second and overlapping processing for the Delta, the overlap doubles the
Delta processing report rate.
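
To illustrate the bookkeeping, a small sketch in Python (block length
and numbers are arbitrary stand-ins): with blocks of n samples and a hop
of n/2, you get roughly twice as many reports as with non-overlapping
blocks, from the same event rate.

    import numpy as np

    def overlapped_means(y, n):
        """Mean of each length-n block of samples y, hopping n//2."""
        hop = n // 2
        return [np.mean(y[i:i + n]) for i in range(0, len(y) - n + 1, hop)]

    y = np.random.randn(20)              # stand-in for raw estimates
    print(len(overlapped_means(y, 10)))  # 3 overlapped reports vs 2 without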

Cheers,
Magnus
