[time-nuts] Z3805A cooling requirements?

Magnus Danielson magnus at rubidium.dyndns.org
Thu Dec 27 03:09:54 UTC 2012


Mark,

On 12/26/2012 06:24 PM, Mark Spencer wrote:
> Tom, Magnus, Ulrich:
>
> Thanks for the comments and suggestions.  They are appreciated and I
> now have an even better understanding of why ADEV measurements are
> not a good tool for characterizing the performance of oscillators that
> are subject to transient events or glitches.

Good. You do get a gold star for your ADEV-over-time analysis, also
known as dynamic ADEV. It helps to see where in time a certain ADEV
wrinkle occurred, so the time-plot makes sense. Trouble is, you
already have to have a clue to get to that point.
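
A minimal sketch of that idea in Python/numpy, assuming 1 s phase
samples and a single tau of interest; the helper names and the
window/step choices are just illustrative, not how timelab does it
internally:

import numpy as np

def adev_at_m(phase, m, tau0=1.0):
    # Overlapping ADEV at one averaging factor m (tau = m*tau0),
    # computed from phase data in seconds.
    d = phase[2*m:] - 2.0*phase[m:-m] + phase[:-2*m]
    return np.sqrt(np.mean(d**2) / (2.0 * (m*tau0)**2))

def adev_over_time(phase, m, window, step, tau0=1.0):
    # ADEV at a single tau, computed in sliding windows, so an ADEV
    # "wrinkle" can be located in time and matched against the
    # phase/frequency plot.
    return [(start*tau0, adev_at_m(phase[start:start+window], m, tau0))
            for start in range(0, len(phase) - window + 1, step)]

For example, adev_over_time(x, m=10, window=3600, step=600) gives the
10 s ADEV in hour-long chunks stepped every ten minutes.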

> Just to clarify a few points and ask a few questions:
>
> My concern about not putting much emphasis on ADEV data for taus of
> less than 80 seconds in the plots I’ve provided is driven by a
> belief that at shorter taus these ADEV plots are largely showing the
> noise of the counter (an HP5370B) vs the noise of the device being
> measured. Perhaps the 80 second cut-off point is overly conservative,
> but at some point I believe the counter noise will swamp the noise
> from the devices being measured.

I agree with you, but rather than not showing it, show it and point
out that this is counter noise. Then the little slope that remains has
a natural explanation, and you also get a good line to follow down to
see where the DUT noise takes over.

It would be cool if we could artificially "remove" that limitation and 
see the added noise only.

AVAR_lim(tau) = (ADEV_lim(tau0) * tau0 / tau)^2
ADEV_corr(tau) = sqrt(AVAR(tau) - AVAR_lim(tau))

It should be fairly simple to fit ADEV_lim to the curve, and it will
represent the white phase-noise limit. Seeing that number should also
help in judging where trigger noise etc. could be improved.
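
A rough Python/numpy sketch of that fit-and-subtract step; the array
names and the choice of which taus to treat as counter-dominated (for
example the 80 s cut-off mentioned above) are illustrative assumptions:

import numpy as np

def remove_white_pm_floor(taus, adev, fit_mask):
    # Fit a 1/tau white-PM floor over the taus selected by fit_mask
    # (e.g. taus < 80 s where the counter dominates), then subtract
    # it in variance to estimate the DUT-only ADEV.
    k = np.mean(adev[fit_mask] * taus[fit_mask])  # ADEV_lim(tau) = k/tau
    avar_lim = (k / taus)**2
    avar_corr = adev**2 - avar_lim
    # Close to the floor the difference can go negative; flag those
    # points as unusable rather than pretending they mean something.
    return np.sqrt(np.where(avar_corr > 0.0, avar_corr, np.nan)), k

Called as remove_white_pm_floor(taus, adev, taus < 80.0), the returned
curve can be plotted next to the measured one to show where the DUT
noise takes over.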

In a similar sense, other noise forms could be removed, so that you
would have a residual ADEV plot. This is, after all, what ADEV was
developed for: to establish the levels of the various noise types and
have them in separated form.

> My goal was not to try and use ADEV measurements to characterize
> the performance of the GPSDO in question while it was subject to
> fluctuations in air flow (or subject to other transient events..)
> I did include a frequency plot in my post that provides some
> insight as to what happened when air flow was added.
>
> The goal was to see if operating the GPSDO in question with air
> flow changed the ADEV readings vs operating the GPSDO without air
> flow. I agree ADEV may not be the best tool for this but it is easy
> to collect and I have prior data to compare the results to. ADEV
> also seems to be a commonly used figure of merit for characterizing
> devices such as GPSDOs. (I realize there are also other commonly
> used figures of merit.)

It all comes down to how "quiet" your ambient air is to your
GPSDO/OCXO. Forced airflow improves the thermal coupling between the
ambient air and the GPSDO/OCXO. For many professional buildings and
computer halls, the AC/heating system is not as quiet as you would
like. I've killed several good measurement runs of free-running
oscillators just by walking up to the lab bench, bringing with me a
wall of colder air that sweeps over the oscillator...

That's why I try to measure things in a cardboard box, just to keep
some of the airflow off the oscillators, and it works very well.

Forced air as such poses some issues, but ambient air is in my 
experience the real killer.

> The lowest ADEV reading I have ever observed for the GPSDO in
> question came from analyzing a data set collected 45 thru 65 minutes
> after air flow was applied to that GPSDO in that particular
> circumstance. I found that result surprising, although I agree the
> absolute difference in the ADEV figures is very small.

Which I could very well believe.

> It's my understanding (based largely on comments I've read on this
> list over the years) that if you have roughly n x 10 data points you
> can begin to draw inferences from ADEV plots for taus < n. Is this
> a reasonable practice, and/or are there caveats one needs to be
> aware of?

Having spent many hours watching the data coming into timelab, seeing
the high end flap like a whip until it settles down, I'd say that x10
is still very unstable, but by all means look at it. The reason you
want to see real confidence intervals on your measurement is to know
roughly where the true value could lie compared to the value you
currently see. How tight you want your confidence interval to be
depends on what kind of conclusion you want to draw. I'd say that even
more conservative values, like 100 samples per tau, could be viewed as
insufficient for some applications. This is where you need to decide
what you need. Sit down and watch the curve vary at a given tau until
it settles; that way you learn where your confidence in the values
lies.
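
One way to encode that rule of thumb is to only compute out to the
taus that still have some minimum number of averaging intervals in the
record. A rough numpy sketch, with min_intervals=10 matching the x10
rule (an illustration, not what timelab does internally):

import numpy as np

def oadev_with_cutoff(phase, tau0=1.0, min_intervals=10):
    # Overlapping ADEV from phase data (seconds), reported only for
    # taus that still have at least min_intervals averaging intervals
    # in the record. Raise min_intervals for tighter confidence.
    n = len(phase)
    out = []
    m = 1
    while (n - 1) // m >= min_intervals:
        d = phase[2*m:] - 2.0*phase[m:-m] + phase[:-2*m]
        out.append((m*tau0,
                    np.sqrt(np.mean(d**2) / (2.0 * (m*tau0)**2))))
        m *= 2
    return out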

>
> I agree that one test of this nature is insufficient to draw any
> firm conclusions from and much more data is needed.

It's more about building experience of what matters.

Temperature changes, rather than temperature as such, affect you, as
long as the oven is operating in its linear state.

For one oven I once saw an interesting case: the oven took a "nap" to
cool down and then started heating up again. In effect, during the
nap, the crystal was cooling down in an unregulated environment, and
then it was heated back up by a jolt of energy.

Another oven had a self-oscillation in the oven controller, which was
visible from power-on. It also made my current-meter digits flop
around, and the current measurement is what finally gave the
controller away. That design was built on a ceramic brick rather than
an FR4 board, so it lacked the thermal mass to remain stable. When the
vendor understood the issue, they kept that design running, arguing
that the other customers didn't complain. Ah well.

Cheers,
Magnus


