[time-nuts] Allan variance by sine-wave fitting
mattia.rizzi at gmail.com
Mon Nov 27 17:04:56 EST 2017
>To make the point a bit more clear. The above means that for noise with
>a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
>and flicker frequency noise), the noise (aka random variable) is:
>1) Not independently distributed
>2) Not stationary
>3) Not ergodic
I think you are getting too deep into theory. If you follow the statistics
theory strictly, you get nowhere.
You can't even talk about a 1/f PSD, because the Fourier transform doesn't
converge for infinite-power signals.
In fact, strictly speaking you are not allowed to take one realization,
compute several FFTs, and claim that their average is the PSD of the
process. But that's exactly what a spectrum analyzer does, because it's not
a multiverse instrument.
Every experimentalist assumes ergodicity for this kind of noise, otherwise
you get nowhere.
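To make the spectrum-analyzer point concrete, here is a minimal numpy sketch (segment length and noise-shaping method are my own illustrative choices, not anything from the thread): it synthesizes approximate 1/f noise, then estimates its PSD the way an instrument does, by chopping ONE realization into segments and averaging the per-segment periodograms. That averaging step is exactly where ergodicity is silently assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize approximate 1/f (flicker) noise by shaping white noise in the
# frequency domain: amplitude ~ 1/sqrt(f) gives a PSD ~ 1/f.
n = 1 << 16
spec = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, d=1.0)
shape = np.ones_like(f)
shape[1:] = 1.0 / np.sqrt(f[1:])        # leave the DC bin untouched
pink = np.fft.irfft(spec * shape, n)

# What a spectrum analyzer effectively does: chop ONE realization into
# segments, FFT each, and average the periodograms. This is only a valid
# estimate of the process PSD if ergodicity is assumed.
seg = 1024
nseg = n // seg
psd = np.zeros(seg // 2 + 1)
for k in range(nseg):
    x = pink[k * seg:(k + 1) * seg]
    psd += np.abs(np.fft.rfft(x)) ** 2
psd /= nseg

# Fit the log-log slope over mid frequencies; it should come out near -1
# for 1/f noise.
fs = np.fft.rfftfreq(seg, d=1.0)
sel = (fs > 0.01) & (fs < 0.3)
slope = np.polyfit(np.log(fs[sel]), np.log(psd[sel]), 1)[0]
print(f"fitted PSD slope: {slope:.2f}")
```

The single-realization average only converges to the ensemble PSD because we take the ergodicity assumption for granted, which is exactly Mattia's point.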
2017-11-27 22:50 GMT+01:00 Attila Kinali <attila at kinali.ch>:
> On Mon, 27 Nov 2017 19:37:11 +0100
> Attila Kinali <attila at kinali.ch> wrote:
> > X(t): Random variable, Gauss distributed, zero mean, i.i.d (ie PSD = const)
> > Y(t): Random variable, Gauss distributed, zero mean, PSD ~ 1/f
> > Two time points: t_0 and t, where t > t_0
> > Then:
> > E[X(t) | X(t_0)] = 0
> > E[Y(t) | Y(t_0)] = Y(t_0)
> > Ie. the expectation of X will be zero, no matter whether you know any
> > previous values of the random variable. But for Y, the expectation is
> > biased toward the last sample you have seen, ie it is NOT zero for any
> > t > t_0.
> > A consequence of this is that, if you take a number of samples, the
> > average will not approach zero in the limit of the number of samples
> > going to infinity.
> > (For details see the theory of fractional Brownian motion, especially
> > the papers by Mandelbrot and his colleagues)
> To make the point a bit more clear. The above means that noise with
> a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
> and flicker frequency noise), the noise (aka random variable) is:
> 1) Not independently distributed
> 2) Not stationary
> 3) Not ergodic
> Where 1) means there is a correlation between samples, ie if you know a
> sample, you can predict what the next one will be. 2) means that the
> properties of the random variable change over time. Note this is a
> stronger form of non-stationarity than the cyclostationarity that people
> in signal theory and communication systems often assume when they allow
> for non-stationary system characteristics. And 3) means that
> if you take lots of samples from one random process, you will get a
> different distribution than when you take lots of random processes
> and take one sample each. Ergodicity is often implicitly assumed
> in a lot of analysis, without people being aware of it. It is one
> of the things that a lot of random processes in nature adhere to
> and thus is ingrained in our understanding of the world. But noise
> process in electronics, atomic clocks, fluid dynamics etc are not
> ergodic in general.
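The ergodicity failure in 3) can be demonstrated directly (again using a random walk, PSD ~ 1/f^2, as the simulable stand-in; realization counts and lengths are arbitrary): compare the time average of single realizations against the ensemble average. For white noise each realization's time average lands on the ensemble mean; for the walk it does not.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 10_000   # samples per realization
m = 500      # number of realizations

white = rng.standard_normal((m, n))
walk = np.cumsum(rng.standard_normal((m, n)), axis=1)

# One time average per realization. The ensemble mean is 0 for BOTH
# processes, but only the white-noise time averages cluster around it.
white_time_avgs = white.mean(axis=1)
walk_time_avgs = walk.mean(axis=1)

print("white: spread of time averages =", white_time_avgs.std())  # tiny
print("walk:  spread of time averages =", walk_time_avgs.std())   # huge
```

For the walk, the spread of the time averages actually grows with the record length (it scales like sqrt(n/3)), so taking more samples from one realization never recovers the ensemble statistics. That is non-ergodicity in one picture.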
> As sidenote:
> 1) holds true for a > 0 (ie anything but white noise).
> I am not yet sure when stationarity or ergodicity break, but my guess
> would be that both break at a=1 (ie flicker noise). But that's only an
> educated guess I have come to. I cannot prove or disprove this.
> For 1 <= a < 3 (between flicker phase and flicker frequency, including
> flicker phase, not including flicker frequency), the increments (ie the
> difference between X(t) and X(t+1)) are stationary.
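The stationary-increments claim is also easy to see numerically for the simulable a=2 member of that range (ensemble size and time points below are my own illustrative choices): the process variance grows linearly with t, while the variance of the increments X(t+1) - X(t) is the same at every t.

```python
import numpy as np

rng = np.random.default_rng(3)

m, n = 5_000, 1_000
# Random walk (a = 2): non-stationary process, stationary increments.
walk = np.cumsum(rng.standard_normal((m, n)), axis=1)
inc = np.diff(walk, axis=1)   # X(t+1) - X(t)

# The process itself is non-stationary: Var[X(t)] = t grows without bound.
print("var of process at t=100:", walk[:, 99].var())    # ~ 100
print("var of process at t=900:", walk[:, 899].var())   # ~ 900

# But the increments have the same distribution at every t.
print("var of increment at t=100:", inc[:, 99].var())   # ~ 1
print("var of increment at t=900:", inc[:, 899].var())  # ~ 1
```

This is also why tools like the Allan variance work on differences of the data rather than on the raw phase: differencing turns a non-stationary power-law process with stationary increments into something with well-defined, time-independent statistics.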
> Attila Kinali
> May the bluebird of happiness twiddle your bits.
> time-nuts mailing list -- time-nuts at febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/
> and follow the instructions there.