Suppose we sum a stream of random variables, $X_i \overset{iid}{\sim} U(0,1)$; let $Y$ be the least number of terms needed for the running total to exceed 1, that is, the smallest $Y$ such that

$$X_1 + X_2 + \cdots + X_Y > 1.$$

Why does the mean of $Y$ equal Euler's number,

$$E(Y) = e = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots\;?$$
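As a quick sanity check (not part of the original question), the setup can be simulated directly; a minimal sketch in Python, assuming nothing beyond i.i.d. $U(0,1)$ draws:

```python
import random

def draws_to_exceed_one(rng):
    """Count U(0,1) draws until the running sum exceeds 1."""
    total, count = 0.0, 0
    while total <= 1.0:
        total += rng.random()
        count += 1
    return count

rng = random.Random(0)
trials = 200_000
estimate = sum(draws_to_exceed_one(rng) for _ in range(trials)) / trials
print(estimate)  # hovers near e ≈ 2.71828
```

With a couple hundred thousand trials the estimate should sit within a few thousandths of $e$.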
Answers:
First observation: $Y$ takes positive integer values, so its distribution is described by the probability mass function $p_Y(n)$ or, equivalently, by the cumulative distribution $F_Y(n) = \Pr(Y \le n)$.
Second observation: clearly $Y \ge 1$ (in fact $Y \ge 2$ almost surely, since each $X_i < 1$). For any nonnegative integer-valued variable the expectation is the sum of the survival function:

$$E(Y) = \sum_{n=0}^{\infty} \bar F_Y(n) = \sum_{n=0}^{\infty} \bigl(1 - F_Y(n)\bigr).$$

In fact $\Pr(Y = 0) = 0$, so the first term equals 1. As for the later terms, once $\bar F_Y(n) = \Pr(Y > n)$ is found in closed form, the sum is determined.
Third observation: the (hyper)volume of an $n$-simplex $\{x : x_i \ge 0,\ x_1 + \cdots + x_n \le 1\}$ is $1/n!$. The $n$ variables $(X_1, \ldots, X_n)$ form a uniformly distributed point in the unit cube, so $\Pr(X_1 + \cdots + X_n \le 1)$ equals the volume of this simplex. For example, the 2-simplex with $x_1 + x_2 \le 1$ is a right triangle of area $1/2 = 1/2!$.

One proof evaluates an integral directly for the probability of the event described by $\bar F_Y(n)$; the argument below is combinatorial instead.
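The $1/n!$ volume claim is easy to spot-check by Monte Carlo (my sketch; the function name is mine):

```python
import math
import random

def prob_sum_at_most_one(n, trials=200_000, seed=1):
    """Monte Carlo estimate of Pr(X_1 + ... + X_n <= 1) for iid U(0,1)."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.random() for _ in range(n)) <= 1.0 for _ in range(trials)
    )
    return hits / trials

for n in (1, 2, 3, 4):
    # Empirical probability next to the simplex volume 1/n!
    print(n, prob_sum_at_most_one(n), 1 / math.factorial(n))
```

The empirical frequencies should track $1, 1/2, 1/6, 1/24, \ldots$ closely.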
Fix $n \ge 1$ and let $U_i$ be the fractional part of $X_1 + X_2 + \cdots + X_i$; the $U_i$ are again i.i.d. $U(0,1)$. Given the sequence $U_1, U_2, \ldots, U_n$, the original $X_i$ can be recovered:

$U_1 = X_1$, because both are between 0 and 1.

If $U_{i+1} \ge U_i$, then $X_{i+1} = U_{i+1} - U_i$.

Otherwise, $U_i + X_{i+1} > 1$, whence $X_{i+1} = U_{i+1} - U_i + 1$.

There is exactly one sequence in which the $U_i$ are already in increasing order, in which case $1 > U_n = X_1 + X_2 + \cdots + X_n$. Being one of $n!$ equally likely orderings, this has a chance $1/n!$ of occurring. In all the other sequences at least one step from $U_i$ to $U_{i+1}$ is out of order, which implies the sum of the $X_i$ had to equal or exceed 1. Thus we see that
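The correspondence between the $X_i$ and the fractional parts $U_i$ can be verified numerically (a sketch; the variable names are mine):

```python
import math
import random

# From uniforms U_i, recover X_1 = U_1 and X_{i+1} = U_{i+1} - U_i,
# adding 1 whenever the difference is negative. Each U_i should then
# equal the fractional part of X_1 + ... + X_i.
rng = random.Random(2)
u = [rng.random() for _ in range(5)]
x = [u[0]] + [
    u[i] - u[i - 1] + (0.0 if u[i] >= u[i - 1] else 1.0) for i in range(1, 5)
]

partial = 0.0
for ui, xi in zip(u, x):
    partial += xi
    # fractional part of the running sum matches U_i
    assert abs((partial - math.floor(partial)) - ui) < 1e-9
print("fractional-part correspondence verified")
```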
$$\Pr(Y > n) = \Pr(X_1 + X_2 + \cdots + X_n \le 1) = \Pr(X_1 + X_2 + \cdots + X_n < 1) = \frac{1}{n!}.$$
This yields the probabilities for the entire distribution of $Y$, since for integral $n \ge 1$
$$\Pr(Y = n) = \Pr(Y > n-1) - \Pr(Y > n) = \frac{1}{(n-1)!} - \frac{1}{n!} = \frac{n-1}{n!}.$$
Moreover,
$$E(Y) = \sum_{n=0}^{\infty} \Pr(Y > n) = \sum_{n=0}^{\infty} \frac{1}{n!} = e,$$
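These closed forms can be checked numerically (a sketch; the series is truncated at 40 terms, whose tail is far below double precision):

```python
import math

# Pr(Y = n) = (n - 1)/n! for n >= 2; Pr(Y > n) = 1/n!.
pmf = {n: (n - 1) / math.factorial(n) for n in range(2, 40)}
total_mass = sum(pmf.values())              # should be 1
mean = sum(n * p for n, p in pmf.items())   # should be e
tail_sum = sum(1 / math.factorial(n) for n in range(40))  # should be e
print(total_mass, mean, tail_sum)
```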
QED.
In Sheldon Ross's A First Course in Probability there is an easy-to-follow proof:
Modifying the notation in the OP a bit, $U_i \overset{iid}{\sim} U(0,1)$ and $Y$ is the minimum number of terms for $U_1 + U_2 + \cdots + U_Y > 1$, or expressed differently:

$$Y = \min\Bigl\{n : \sum_{i=1}^{n} U_i > 1\Bigr\}$$
If instead we looked for the number of terms needed to surpass an arbitrary threshold $u \in [0, 1]$,

$$Y(u) = \min\Bigl\{n : \sum_{i=1}^{n} U_i > u\Bigr\},$$

and write $f(u) = E[Y(u)]$ for its mean.
We can apply the following general property of conditional expectation for continuous variables:

$$E[X] = E\bigl[E[X \mid Y]\bigr] = \int_{-\infty}^{\infty} E[X \mid Y = y]\, f_Y(y)\, dy$$

to express $f(u)$ by conditioning on the outcome of the first uniform; the equation becomes manageable because the pdf of $U_1 \sim U(0,1)$ is identically 1 on $[0, 1]$. This would be it:
$$f(u) = \int_0^1 E[Y(u) \mid U_1 = x]\, dx \tag{1}$$
If the $U_1 = x$ we are conditioning on is greater than $u$, i.e. $x > u$, then $E[Y(u) \mid U_1 = x] = 1$. If, on the other hand, $x < u$, then $E[Y(u) \mid U_1 = x] = 1 + f(u - x)$, because we have already drawn one uniform, and we still have the difference between $x$ and $u$ to cover. Going back to equation (1):
$$f(u) = 1 + \int_0^u f(u - x)\, dx = 1 + \int_0^u f(t)\, dt$$
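As a numerical cross-check (my addition, not part of Ross's argument), the integral equation, after the substitution $t = u - x$, can be iterated on a grid with left-endpoint Riemann sums:

```python
import math

# Solve f(u) = 1 + ∫_0^u f(t) dt numerically on [0, 1].
h, n_steps = 1e-4, 10_000   # grid spacing and number of steps
f = [1.0]                   # f(0) = 1
integral = 0.0
for _ in range(n_steps):
    integral += f[-1] * h   # accumulate the running ∫_0^u f(t) dt
    f.append(1.0 + integral)
print(f[-1], math.e)        # f(1) approaches e as h -> 0
```

The discretization error shrinks with $h$; at $h = 10^{-4}$ the value at $u = 1$ agrees with $e$ to about four decimal places.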
If we differentiate both sides of this equation, we can see that:
$$f'(u) = f(u) \implies \frac{f'(u)}{f(u)} = 1$$
with one last integration we get:
$$\log f(u) = u + c \implies f(u) = k e^u$$
We know that a single draw from the uniform distribution is positive almost surely, so exactly one term is needed to surpass $u = 0$; that is, $f(0) = 1$. Hence $k = 1$ and $f(u) = e^u$. Therefore $E(Y) = f(1) = e$.
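A short simulation consistent with the closed form $f(u) = e^u$ (my sketch; the function name is mine):

```python
import math
import random

def mean_draws_to_exceed(u, trials=100_000, seed=3):
    """Monte Carlo estimate of f(u) = E[Y(u)] for U(0,1) summands."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s, n = 0.0, 0
        while s <= u:
            s += rng.random()
            n += 1
        total += n
    return total / trials

for u in (0.25, 0.5, 1.0):
    # empirical mean next to the closed form e^u
    print(u, mean_draws_to_exceed(u), math.exp(u))
```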