Variance of a bounded random variable


22

Suppose a random variable has a lower and an upper bound, [0,1]. How do you compute the variance of such a variable?


8
The same way as for an unbounded variable - by setting the limits of integration or summation appropriately.
Scortchi
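For instance, one might compute the mean and variance by numerical integration over the support (a minimal sketch in Python, assuming an illustrative Beta(2, 5) density on [0, 1]; any density on the interval would do):

    # Variance of a bounded variable via integration over [0, 1].
    # The Beta(2, 5) density is only an illustrative choice.
    from scipy.integrate import quad
    from scipy.stats import beta

    pdf = beta(2, 5).pdf
    ex, _  = quad(lambda x: x * pdf(x), 0.0, 1.0)     # E[X]
    ex2, _ = quad(lambda x: x**2 * pdf(x), 0.0, 1.0)  # E[X^2]
    print(ex2 - ex**2)       # variance, ~0.0255
    print(beta(2, 5).var())  # closed form, for comparison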

2
As @Scortchi said. But I am curious why you thought it might be otherwise?
Peter Flom - Reinstate Monica

3
If you know nothing about the variable beyond its bounds (in which case an upper bound on the variance can be computed from those bounds), why would the fact that it is bounded enter into the calculation at all?
Glen_b -Reinstate Monica

6
A useful upper bound on the variance of a random variable that takes values in $[a,b]$ with probability $1$ is $(b-a)^2/4$, and it is attained by the discrete random variable that takes the values $a$ and $b$ with equal probability $\frac12$. Another point to keep in mind is that the variance is guaranteed to exist, whereas an unbounded random variable might not have a variance (some, e.g. Cauchy random variables, do not even have a mean).
Dilip Sarwate

7
There is a discrete random variable whose variance equals $(b-a)^2/4$ exactly: the random variable that takes the values $a$ and $b$ with equal probability $\frac12$. So at least we know that a universal upper bound on the variance cannot be smaller than $(b-a)^2/4$.
Dilip Sarwate

Answers:


46

You can prove Popoviciu's inequality as follows. Use the notation $m=\inf X$ and $M=\sup X$. Define the function $g$ by
$$g(t)=\mathrm{E}[(X-t)^2].$$
Computing the derivative $g'$ and solving
$$g'(t)=-2\,\mathrm{E}[X]+2t=0,$$
we find that $g$ achieves its minimum at $t=\mathrm{E}[X]$ (note that $g''>0$).

Now, consider the value of $g$ at the special point $t=\frac{M+m}{2}$. It must be the case that
$$\mathrm{Var}[X]=g(\mathrm{E}[X])\leq g\left(\frac{M+m}{2}\right).$$
But
$$g\left(\frac{M+m}{2}\right)=\mathrm{E}\left[\left(X-\frac{M+m}{2}\right)^2\right]=\frac14\,\mathrm{E}\left[\left((X-m)+(X-M)\right)^2\right].$$
Since $X-m\geq 0$ and $X-M\leq 0$, we have
$$\left((X-m)+(X-M)\right)^2\leq\left((X-m)-(X-M)\right)^2=(M-m)^2,$$
implying that
$$\frac14\,\mathrm{E}\left[\left((X-m)+(X-M)\right)^2\right]\leq\frac14\,\mathrm{E}\left[\left((X-m)-(X-M)\right)^2\right]=\frac{(M-m)^2}{4}.$$
Therefore, we have proved Popoviciu's inequality
$$\mathrm{Var}[X]\leq\frac{(M-m)^2}{4}.$$
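A numerical sanity check of the chain $\mathrm{Var}[X]=g(\mathrm{E}[X])\leq g\left(\frac{M+m}{2}\right)\leq\frac{(M-m)^2}{4}$ (a sketch; the Beta(2, 5) sample is an arbitrary choice of bounded variable):

    # Verify: g(t) = E[(X - t)^2] is minimized at t = E[X], and
    # Var[X] = g(E[X]) <= g((M + m)/2) <= (M - m)^2 / 4.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.beta(2, 5, size=100_000)   # values in [m, M] = [0, 1]
    m, M = 0.0, 1.0

    def g(t):
        return np.mean((x - t) ** 2)

    ts = np.linspace(m, M, 1001)
    print(ts[np.argmin([g(t) for t in ts])], x.mean())    # minimizer ~ mean
    print(g(x.mean()), g((M + m) / 2), (M - m) ** 2 / 4)  # increasing chain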


3
Nice approach: it's good to see rigorous demonstrations of these kinds of things.
whuber

22
+1 Nice! I learned statistics long before computers were in vogue, and one idea that was drilled into us was that
$$\mathrm{E}[(X-t)^2]=\mathrm{E}[((X-\mu)-(t-\mu))^2]=\mathrm{E}[(X-\mu)^2]+(t-\mu)^2,$$
which allowed for the computation of variance by finding the sum of the squares of the deviations from any convenient point $t$ and then adjusting for the bias. Here of course, this identity gives a simple proof of the result that $g(t)$ has minimum value at $t=\mu$ without the necessity of derivatives etc.
Dilip Sarwate
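That identity is easy to check empirically (a sketch with arbitrary data and an arbitrary point $t$):

    # E[(X - t)^2] = E[(X - mu)^2] + (t - mu)^2 for any fixed t.
    import numpy as np

    x = np.random.default_rng(1).normal(size=10_000)
    t, mu = 0.7, x.mean()
    print(np.mean((x - t) ** 2))
    print(np.mean((x - mu) ** 2) + (t - mu) ** 2)  # same value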

18

Let $F$ be a distribution on $[0,1]$. We will show that if the variance of $F$ is maximal, then $F$ can have no support in the interior, from which it follows that $F$ is Bernoulli and the rest is trivial.

As a matter of notation, let $\mu_k=\int_0^1 x^k\,dF(x)$ be the $k$th raw moment of $F$ (and, as usual, we write $\mu=\mu_1$ and $\sigma^2=\mu_2-\mu^2$ for the variance).

We know $F$ does not have all its support at one point (the variance is minimal in that case). Among other things, this implies $\mu$ lies strictly between $0$ and $1$. In order to argue by contradiction, suppose there is some measurable subset $I$ in the interior $(0,1)$ for which $F(I)>0$. Without any loss of generality we may assume (by changing $X$ to $1-X$ if need be) that $F(J)>0$, where $J=I\cap(0,\mu]$: in other words, $J$ is obtained by cutting off any part of $I$ above the mean, and $J$ has positive probability.

Let us alter $F$ to $F'$ by taking all the probability out of $J$ and placing it at $0$. In so doing, $\mu_k$ changes to
$$\mu_k'=\mu_k-\int_J x^k\,dF(x).$$

As a matter of notation, let us write $[g(x)]=\int_J g(x)\,dF(x)$ for such integrals, whence
$$\mu_2'=\mu_2-[x^2],\qquad \mu'=\mu-[x].$$

Calculate
$$\sigma'^2=\mu_2'-\mu'^2=\mu_2-[x^2]-\left(\mu-[x]\right)^2=\sigma^2+\left(\left(\mu[x]-[x^2]\right)+\left(\mu[x]-[x]^2\right)\right).$$

The second term on the right, $\mu[x]-[x]^2$, is strictly positive: $[x]>0$ because $J\subset(0,\mu]$ carries positive probability and $x>0$ on $J$, while $[x]\leq\mu[1]=\mu F(J)<\mu$ (were $F(J)=1$, all the mass would lie in $(0,\mu]$ with mean $\mu$, forcing $F$ to concentrate at the single point $\mu$, which we have excluded), whence $[x]^2<\mu[x]$. The first term on the right, $\mu[x]-[x^2]$, is non-negative because it can be rewritten as $[(\mu-x)(x)]$, and this integrand is non-negative from the assumptions $\mu\geq x$ on $J$ and $0\leq x\leq 1$. It follows that $\sigma'^2-\sigma^2>0$.

We have just shown that under our assumptions, changing $F$ to $F'$ strictly increases its variance. The only way this cannot happen, then, is when all the probability of $F$ is concentrated at the endpoints $0$ and $1$, with (say) values $1-p$ and $p$, respectively. Its variance is easily calculated to equal $p(1-p)$, which is maximal when $p=1/2$ and equals $1/4$ there.

Now when $F$ is a distribution on $[a,b]$, we recenter and rescale it to a distribution on $[0,1]$. The recentering does not change the variance, whereas the rescaling divides it by $(b-a)^2$. Thus an $F$ with maximal variance on $[a,b]$ corresponds to the distribution with maximal variance on $[0,1]$: it is therefore a Bernoulli$(1/2)$ distribution rescaled and translated to $[a,b]$, having variance $(b-a)^2/4$, QED.
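The mass-shifting step is easy to see numerically. A minimal sketch (the Beta(2, 2) starting distribution and the choice $J=(0,\mu]$ are arbitrary illustrations):

    # Emptying J = (0, mu] of its probability and placing it at 0
    # strictly increases the variance, as argued above.
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.beta(2, 2, size=200_000)   # draws from F on [0, 1]
    mu = x.mean()
    y = np.where(x <= mu, 0.0, x)      # F': interior mass below mu sent to 0
    print(x.var(), y.var())            # ~0.05 -> ~0.126, strictly larger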


Interesting, whuber. I didn't know this proof.
Zen

6
@Zen It's by no means as elegant as yours. I offered it because I have found myself over the years thinking in this way when confronted with much more complicated distributional inequalities: I ask how the probability can be shifted around in order to make the inequality more extreme. As an intuitive heuristic it's useful. By using approaches like the one laid out here, I suspect a general theory for proving a large class of such inequalities could be derived, with a kind of hybrid flavor of the Calculus of Variations and (finite dimensional) Lagrange multiplier techniques.
whuber

Perfect: your answer is important because it describes a more general technique that can be used to handle many other cases.
Zen

@whuber said - "I ask how the probability can be shifted around in order to make the inequality more extreme." -- this seems to be the natural way to think about such problems.
Glen_b -Reinstate Monica

There appear to be a few mistakes in the derivation. It should be
$$\mu[x]-[x^2]=\mu(1-[1])[x]+\left([\mu][x]-[x^2]\right).$$
Also, $[(\mu-x)(x)]$ does not equal $[\mu][x]-[x^2]$, since $[\mu][x]$ is not the same as $\mu[x]$.
Leo

13

If the random variable is restricted to $[a,b]$ and we know the mean $\mu=\mathrm{E}[X]$, the variance is bounded by $(b-\mu)(\mu-a)$.

Let us first consider the case $a=0,\,b=1$. Note that for all $x\in[0,1]$ we have $x^2\leq x$, wherefore also $\mathrm{E}[X^2]\leq\mathrm{E}[X]$. Using this result,
$$\sigma^2=\mathrm{E}[X^2]-(\mathrm{E}[X])^2=\mathrm{E}[X^2]-\mu^2\leq\mu-\mu^2=\mu(1-\mu).$$

To generalize to intervals $[a,b]$ with $b>a$, consider $Y$ restricted to $[a,b]$. Define $X=\frac{Y-a}{b-a}$, which is restricted to $[0,1]$. Equivalently, $Y=(b-a)X+a$, and thus
$$\mathrm{Var}[Y]=(b-a)^2\,\mathrm{Var}[X]\leq(b-a)^2\,\mu_X(1-\mu_X),$$
where the inequality is based on the first result. Now, by substituting $\mu_X=\frac{\mu_Y-a}{b-a}$, the bound equals
$$(b-a)^2\,\frac{\mu_Y-a}{b-a}\left(1-\frac{\mu_Y-a}{b-a}\right)=(b-a)^2\,\frac{\mu_Y-a}{b-a}\cdot\frac{b-\mu_Y}{b-a}=(\mu_Y-a)(b-\mu_Y),$$
which is the desired result.
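A sketch checking this bound for a few arbitrary Beta distributions (here $a=0$, $b=1$, so the bound reads $\mu(1-\mu)$):

    # Var[X] <= (b - mu)(mu - a); on [0, 1] this is mu * (1 - mu).
    from scipy.stats import beta

    for p, q in [(2, 5), (0.5, 0.5), (1, 1), (10, 2)]:
        d = beta(p, q)
        mu = d.mean()
        print((p, q), d.var(), "<=", mu * (1 - mu))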

8

At @user603's request....

A useful upper bound on the variance $\sigma^2$ of a random variable that takes on values in $[a,b]$ with probability $1$ is $\sigma^2\leq\frac{(b-a)^2}{4}$. A proof for the special case $a=0,\,b=1$ (which is what the OP asked about) can be found here on math.SE, and it is easily adapted to the more general case. As noted in my comment above and also in the answer referenced there, a discrete random variable that takes on values $a$ and $b$ with equal probability $\frac12$ has variance $\frac{(b-a)^2}{4}$, and thus no tighter general bound can be found.

Another point to keep in mind is that a bounded random variable has finite variance, whereas for an unbounded random variable, the variance might not be finite, and in some cases might not even be definable. For example, the mean cannot be defined for Cauchy random variables, and so one cannot define the variance (as the expectation of the squared deviation from the mean).
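The contrast with the Cauchy case shows up immediately in simulation (a sketch; the sample sizes are arbitrary):

    # The sample variance of a bounded variable settles down as n grows;
    # for a Cauchy variable it does not (no finite variance exists).
    import numpy as np

    rng = np.random.default_rng(3)
    for n in (10**3, 10**4, 10**5, 10**6):
        u = rng.uniform(0, 1, n)    # bounded: Var = 1/12 ~ 0.0833
        c = rng.standard_cauchy(n)  # heavy-tailed: variance undefined
        print(n, u.var(), c.var())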


this is a special case of @Juho's answer
Aksakal

It was just a comment, but I could also add that this answer does not answer the question asked.
Aksakal

@Aksakal So??? Juho was answering a slightly different and much more recently asked question. This new question has been merged with the one you see above, which I answered ten months ago.
Dilip Sarwate

0

Are you sure that this is true in general - for continuous as well as discrete distributions? Can you provide a link to the other pages? For a general distribution on $[a,b]$ it is trivial to show that
$$\mathrm{Var}(X)=\mathrm{E}[(X-\mathrm{E}[X])^2]\leq\mathrm{E}[(b-a)^2]=(b-a)^2.$$

I can imagine that sharper inequalities exist ... Do you need the factor $1/4$ for your result?

On the other hand, one can find it with the factor $1/4$ under the name Popoviciu's inequality on Wikipedia.

This article looks better than the Wikipedia article ...

For a uniform distribution it holds that
$$\mathrm{Var}(X)=\frac{(b-a)^2}{12}.$$
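For example, a quick check of the uniform case against the $1/4$ bound (a sketch with arbitrary endpoints $a=2$, $b=5$):

    # Uniform on [a, b]: Var = (b - a)^2 / 12, well below (b - a)^2 / 4.
    import numpy as np

    a, b = 2.0, 5.0
    x = np.random.default_rng(4).uniform(a, b, 100_000)
    print(x.var(), (b - a) ** 2 / 12, (b - a) ** 2 / 4)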


This page states the result with the start of a proof that gets a bit too involved for me as it seems to require an understanding of the "Fundamental Theorem of Linear Programming". sci.tech-archive.net/Archive/sci.math/2008-06/msg01239.html
Adam Russell

Thank you for putting a name to this! "Popoviciu's Inequality" is just what I needed.
Adam Russell

2
This answer makes some incorrect suggestions: 1/4 is indeed right. The reference to Popoviciu's inequality will work, but strictly speaking it applies only to distributions with finite support (in particular, that includes no continuous distributions). A limiting argument would do the trick, but something extra is needed here.
whuber

2
A continuous distribution can approach a discrete one (in cdf terms) arbitrarily closely (e.g. construct a continuous density from a given discrete one by placing a little Beta(4,4)-shaped kernel centered at each mass point - of the appropriate area - and let the standard deviation of each such kernel shrink toward zero while keeping its area constant). Such discrete bounds as discussed here will thereby also act as bounds on continuous distributions. I expect you're thinking about continuous unimodal distributions... which indeed have different upper bounds.
Glen_b -Reinstate Monica

2
Well ... my answer was the least helpful, but I will leave it here due to the nice comments. Cheers, R
Ric
Licensed under cc by-sa 3.0 with attribution required.