Steve Rooke sar10538 at gmail.com
Sun Jun 6 08:59:07 UTC 2010

On 6 June 2010 10:21, Bruce Griffiths <bruce.griffiths at xtra.co.nz> wrote:
> Steve Rooke wrote:
>> On 5 June 2010 19:07, Bruce Griffiths<bruce.griffiths at xtra.co.nz>  wrote:
>>> Wrong again.
>> No, I'm not wrong Bruce.
> Your "contribution" is largely irrelevant to the original discussion.
> The effect of the PLL itself is (or should be) well understood.

Ah, the pathetic attempt to discredit opposition through insults and
dismissal. This is the desperate attempt of a man grasping at straws,
trying to avoid going under. First they ignore you, then they mock
you, then they fight you, then you win...

Indeed, it does seem that the effects of the PLL are well understood
by some, but perhaps others have yet to learn. We seem to have got
over the integration issue by remembering our pre-calculus-101
technique of dividing the area under a graph into strips (i.e.
oversampling) to perform the integration of a polynomial, or of any
curve shape for that matter.

> However various assertions about the minimum usable value of Tau take no
> account of the low pass filtering built into the 10811 EFC circuit.
> The 100k series resistors plus the capacitance of the EFC varicap
> (50-100pF??) will limit the minimum usable value of Tau.

Wrong again!

And now a new red herring rears its ugly head. So what have we here
then, "low pass filtering built in"? Well, this forms the biasing
circuit of the varicap diode. The varicap itself forms part of the
tuned circuit, with the crystal acting as an inductor in this Colpitts
oscillator. That being the case, the hot end of the varicap, which is
connected to the EFC control via a resistor, is in fact oscillating at
10 MHz (a period of 10^-7 s) directly against that EFC feed. Now,
considering that Warren's DAQ can only achieve a rate of about 400
sps (a sample period of 2.5x10^-3 s), it is extremely unlikely that
the "low pass filtering built in" will have any bearing on this
matter.
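A rough back-of-the-envelope check (Python, with the capacitance taken as the 100 pF upper end of the guessed 50-100 pF range) of how the EFC input RC settling compares with the DAQ sample period:

```python
# Assumed component values from the discussion above:
# 100 kohm series resistor into a ~100 pF varicap capacitance.
R = 100e3        # ohms, EFC series resistor
C = 100e-12      # farads, assumed varicap capacitance (upper guess)
tau_rc = R * C   # RC time constant, seconds

sample_rate = 400.0              # samples/s, Warren's DAQ rate
sample_period = 1.0 / sample_rate  # seconds between DAQ samples

ratio = sample_period / tau_rc

print(f"RC time constant : {tau_rc * 1e6:.1f} us")
print(f"DAQ sample period: {sample_period * 1e3:.2f} ms")
print(f"ratio            : {ratio:.0f}x")
```

With these assumed values the RC corner settles some 250 times faster than the 2.5 ms sample spacing, which is the point being made: the built-in low-pass is far too fast to matter at a 400 sps readout rate.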

>> This process is exactly replicated by oversampling the EFC and
>> determining the average for a fixed time period.
> A various times Warren has both claimed to do this and at others appears to
> deny it.

Maybe Warren is not the person who is confused here.

> A clear description of the details of the actual signal processing used is
> sadly lacking.

What need for "signal processing" is there? Is this some way that you
feel there is a need to "massage" the results of actual measured data?
I think there was a very loud discussion about "massaging" and
"processing" data in a very large issue that came up a while ago.

> If and only if the average is calculated sufficiently accurately.

So, say, 10 samples of the EFC voltage are taken over time T; the
average of the samples is then the sum of the samples divided by the
number of samples. This is the principle of oversampling and I cannot
see why there is any continued discussion on this point.
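For what it's worth, the whole of the "processing" in question is this (Python, with made-up EFC readings standing in for the DAQ samples):

```python
# Made-up EFC voltage samples taken over one averaging interval T.
samples = [4.98, 5.01, 5.03, 4.99, 5.02, 5.00, 4.97, 5.04, 5.01, 4.95]

# The oversampling average: sum of the samples over the number taken.
N = len(samples)
average_efc = sum(samples) / N

print(f"average EFC over T: {average_efc:.3f} V")
```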

> Using a rectangular approximation with sampled data may not be as accurate
> as one may expect.

Well, if we had an infinite number of samples over time T then we
would have an absolutely accurate answer. Is that your point, I
wonder, that it has to be infinitely accurate? Never mind the other
sources of error in the system which will obviously swamp this out,
like errors in the reference oscillator, which are impossible to
resolve because no one has yet come up with an oscillator that is
accurate to 1 / 10^(infinity). So let's get real, shall we: if we take
ten samples of a waveform over a period and calculate the integral
using the rectangular method, the results will be very close to the
Riemann integral. Don't take my word for it, try it for yourself.
Perhaps you believe that the method adopted by others, integrating the
measurement over the whole time period with a filter that has a wider
BW than the fundamental (because it has to let noise through), would
give a more accurate answer, even though its settling time is not
optimal for the measurement time.
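Taking up the "try it for yourself" invitation, here is a quick sketch (Python, using f(t) = t^2 over [0, 1] as an arbitrary example curve) of the ten-strip rectangular method against the exact Riemann integral:

```python
# Integrate f(t) = t^2 over [0, 1] with only ten rectangular strips.
def f(t):
    return t * t

N = 10
width = 1.0 / N

# Midpoint-rectangle rule: sample each strip at its centre.
approx = sum(f((i + 0.5) * width) for i in range(N)) * width

exact = 1.0 / 3.0  # the exact Riemann integral of t^2 on [0, 1]
rel_error = abs(approx - exact) / exact

print(f"approx = {approx:.6f}, exact = {exact:.6f}, error = {rel_error:.2%}")
```

Even with just ten strips the relative error is about 0.25%, well below the other error sources being discussed.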

> It never ceases to amaze me why the well established and more accurate
> methods known aren't used (details are all given in the paper I cited), all
> it requires is a suitable program running on a PC.  The correct processing
> should have no effect on the hardware cost.

And it never ceases to amaze me how some stick-in-the-muds think that
what was done 50 years ago is the be-all and end-all of research in
any field. I guess if we had sent some ships out to see if the Earth
was flat and they did not come back, we should believe in our
assumptions and think that they must have fallen over the edge of the
Earth. I guess it's a good job that some intrepid researchers
discovered that there was no edge to the Earth and found out that it
was round. Mind you, if they had stopped there they would not have
understood that they were wrong, as some later researchers found it
was an oblate spheroid.

And we are back to "correct processing" again; for some reason the
measured data seems to need some form of manipulation. Well, you are
correct to a certain extent: the oversamples need to be added together
and divided by the number of oversamples. That is all the processing
needed, just some simple arithmetic on the data as part of the ADEV
calculation.
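To make that concrete, here is a hedged sketch (Python, made-up numbers) of how each averaged reading could then feed a textbook non-overlapping Allan deviation; the fractional-frequency values below are invented for illustration, not measured data:

```python
def adev(y):
    """Non-overlapping Allan deviation from fractional-frequency data:
    half the mean squared difference of successive values, square-rooted."""
    diffs = [(y[i + 1] - y[i]) ** 2 for i in range(len(y) - 1)]
    return (sum(diffs) / (2 * len(diffs))) ** 0.5

# Made-up fractional-frequency values, one per averaging interval tau
# (each would come from one block average of the oversampled EFC,
# scaled by the oscillator's EFC tuning sensitivity).
y = [1.2e-11, 1.5e-11, 1.1e-11, 1.4e-11, 1.3e-11]

print(f"ADEV at this tau: {adev(y):.2e}")
```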

> The $10 cost is also misleading as the mixers aren't free nor is the 10811
> or its equivalent.

But it's closer to $10 than a TSC or a dual mixer setup.

> The assertion that this technique is new seems to be somewhat dubious as it
> appears to have been known for several decades.

So who has made this assertion? All along it has been understood that
this was an improved way of implementing the tight-PLL method. Did you
not get that?

Now I'm finding this petty attack on someone else's research, made
without fully understanding it, quite tiresome. It's seriously cutting
into my quality porn time, but I won't lie down and play dead.

Steve Rooke - ZL3TUV & G8KVD
The only reason for time is so that everything doesn't happen at once.
- Einstein
