[time-nuts] 1 PPS 50-ohm driver

Florian Teply usenet at teply.info
Mon Apr 18 14:24:59 EDT 2016


Am Sun, 17 Apr 2016 23:03:11 +0200
schrieb Gerhard Hoffmann <dk4xp at arcor.de>:

> Am 17.04.2016 um 16:59 schrieb Wojciech Owczarek:
> > A slightly naive question(s) perhaps, so do excuse me, but I reckon
> > this is a good opportunity to ask since I am approaching the same
> > design questions (this is a 1PPS in + 1PPS out driver for the
> > Beaglebone Black, to/from its PTP clock). This involves 5v / 3.3v
> > conversion but that's another topic.
> >
> > IC spec sheets are one thing, but since the Time Nuts have seen and
> > done it all... Why an inverting buffer? Is there an advantage in
> > using inverted logic for 1PPS? I have come across other timing kit
> > that internally uses falling edge, which is eventually inverted
> > when interfacing with the outside world. Is this common, and why?
> > If my output is rising edge right from the PWM pin I'm using to
> > generate my 1PPS (again, separate topic), do I gain anything by
> > inverting it and using an inverting buffer? Is this a matter of
> > different rise/fall propagation delays over the various ICs?
> >
> 
> In CMOS logic, an inverter is the smallest and fastest gate, just 2
> transistors. A minimum buffer then would be 2 inverters in series,
> somewhat slower and 4 transistors. If you need an inverter or buffer
> that drives a heavy load, you may need more than just 1 minimum
> transistor pair in parallel. That presents more load to the source, so
> one may have to amplify the source signal in several stages. As a rule
> of thumb, quadrupling the number of transistors per stage gives the
> best compromise between delay from heavy loading and delay from many
> stages (on-chip). So for any given source/load combination the optimum
> may be either an inverting or a non-inverting buffer.
> 
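Gerhard's quadrupling rule of thumb can be sketched numerically with a toy logical-effort model (my own illustration, not from the post; the parasitic delay p = 1 is an assumed round number). With a per-stage fanout f driving a total load H times the input capacitance, about log_f(H) stages are needed, and total delay is roughly N * (f + p) in units of a basic inverter delay:

```python
import math

def total_delay(H, f, p=1.0):
    """Toy staged-buffer delay: N stages of fanout f driving total effort H."""
    n = max(1.0, math.log(H) / math.log(f))  # number of stages needed
    return n * (f + p)                       # delay in inverter units

H = 256  # drive a load 256x the input capacitance
for f in (2, 3, 4, 6, 8):
    print(f"fanout {f}: delay ~ {total_delay(H, f):.1f} inverter units")
```

Running this shows the delay bottoms out near a fanout of about 4: fewer, bigger stages cost heavy-load delay, while many small stages cost stage delay.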
Most likely this goes without saying, but since we're addressing a
question the poster himself marked as somewhat naive, I'd still like to
point out a few things that are not necessarily clear to anyone
uninitiated in IC design, especially CMOS digital core logic. And, to
be honest, this background is often lost even on seasoned digital IC
designers, because as soon as someone has implemented a digital
library, the rest is done at a higher level of abstraction using VHDL
and the like. If you all know this by heart already, just ignore it...

As Gerhard already pointed out, the simple inverter is usually the
smallest and fastest gate. This is essentially due to two reasons:
a) It has only two transistors. MOS transistors present a capacitive
load to the gate driving them, so the more transistors need to be
driven, the higher the capacitive load. Since drive current is limited,
higher capacitance leads to longer rise times and consequently to
longer gate delay.
b) More complex gates require series connections of transistors. To a
first-order approximation, two transistors in series have twice the
on-resistance of a single transistor and can therefore source only
half the current.
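Both effects can be captured in a tiny RC delay model (my own illustration, with normalized units assumed): delay scales roughly with drive resistance times load capacitance, a 2-input NAND stacks two NMOS transistors in series (doubling pull-down resistance), and each gate driven adds one gate capacitance of load.

```python
R_UNIT = 1.0   # normalized on-resistance of one minimum transistor
C_GATE = 1.0   # normalized gate capacitance of one minimum transistor

def delay(n_series, fanout):
    """First-order RC gate delay: series stack raises R, fanout raises C."""
    return (n_series * R_UNIT) * (fanout * C_GATE)

print("inverter, fanout 4:", delay(1, 4))   # 4.0
print("2-in NAND, fanout 4:", delay(2, 4))  # 8.0 -- twice as slow
```

The stacked gate comes out twice as slow at the same fanout, matching point b) above.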

> In CMOS, the falling edge is usually slightly faster than the rising.
> 
Just for the sake of completeness: there is no natural law that
actually calls for this. It just happens that in silicon, the commonly
used transistor material, hole mobility is lower than electron
mobility, so p-type MOS devices have approximately half the saturation
current of n-type devices IF geometry and dimensions are identical.
Usually this is partly offset by sizing the transistors differently.
Often it is not taken so far that the current drive is actually equal,
because, as said above, that would impose a higher capacitive load,
which in turn would slow things down. Additionally, higher capacitance
increases dynamic power consumption in operation, since more charge is
stored on the gate and must be moved for the gate to switch. The
result of this optimization process is usually that the current drive
capability of the PMOS path is lower than that of the NMOS path, which
leads to the slightly faster falling edges mentioned above.
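The sizing tradeoff can be sketched in a few lines (numbers are assumed for illustration, not from the post): taking the electron/hole mobility ratio as roughly 2, a PMOS needs about twice the NMOS width for equal drive, but every bit of extra width adds input capacitance that the previous stage must drive.

```python
MU_RATIO = 2.0  # assumed electron/hole mobility ratio for silicon

def inverter(w_p, w_n=1.0):
    """Relative rise drive, fall drive, and input capacitance of an inverter."""
    drive_up   = w_p / MU_RATIO   # PMOS drive, penalized by hole mobility
    drive_down = w_n              # NMOS drive
    c_in       = w_p + w_n        # capacitive load seen by the driving gate
    return drive_up, drive_down, c_in

for w_p in (1.0, 1.5, 2.0):
    up, down, c = inverter(w_p)
    print(f"Wp={w_p}: rise drive {up:.2f}, fall drive {down:.2f}, Cin {c:.1f}")
```

Only Wp = 2 equalizes rise and fall drive, at the cost of the highest input capacitance; typical libraries stop somewhere short of that, leaving the falling edge slightly faster.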

In principle, it is perfectly possible to build CMOS core logic where
the falling edge has exactly the same transition time as the rising
edge. But it would need more chip area than the usual approach, and it
would have higher dynamic power dissipation. There are a few
applications where the benefit outweighs the drawbacks, but 99% of
users are fine with the standard logic libraries offered and/or
supported by the foundries.

Hope this clears up a bit of the background of why things usually are
done the way they are.

Best regards,
Florian
