In probability theory, the expected value of a random variable, intuitively, is the long-run average value of repetitions of the same experiment it represents. For example, the expected value in rolling a six-sided die is 3.5, because the average of all the numbers that come up is 3.5 as the number of rolls approaches infinity. In other words, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, mean, or first moment.

More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values. In other words, each possible value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed to produce the expected value. The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure.[1][2]

The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution.[3] For random variables such as these, the long tails of the distribution prevent the sum or integral from converging.

The expected value is a key aspect of how one characterizes a probability distribution; it is one type of location parameter. By contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value.

The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a "good" estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data, so the estimate it gives is itself a random variable. A formula is typically considered good in this context if it is an unbiased estimator, that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.

In decision theory, and in particular in choice under uncertainty, an agent is described as making an optimal choice in the context of incomplete information. For risk-neutral agents, the choice involves using the expected values of uncertain quantities, while for risk-averse agents it involves maximizing the expected value of some objective function such as a von Neumann–Morgenstern utility function. One example of using expected value in reaching optimal decisions is the Gordon–Loeb model of information security investment. According to the model, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber or information security breach).[4]

Definition

Finite case

Let $X$ be a random variable with a finite number of finite outcomes $x_1, x_2, \ldots, x_k$ occurring with probabilities $p_1, p_2, \ldots, p_k$, respectively. The expectation of $X$ is defined as

$$\operatorname{E}[X] = \sum_{i=1}^{k} x_i\,p_i = x_1 p_1 + x_2 p_2 + \cdots + x_k p_k.$$

Since all probabilities $p_i$ add up to 1 ($p_1 + p_2 + \cdots + p_k = 1$), the expected value is the weighted average, with the $p_i$'s being the weights.

If all outcomes $x_i$ are equiprobable (that is, $p_1 = p_2 = \cdots = p_k$), then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes $x_i$ are not equiprobable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than others. The intuition, however, remains the same: the expected value of $X$ is what one expects to happen on average.
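
The probability-weighted average in this definition can be computed directly; the short Python sketch below (an illustration, not part of the original article) implements the finite case.

    # Expected value of a finite discrete random variable:
    # the probability-weighted average of its outcomes.
    def expectation(outcomes, probabilities):
        assert abs(sum(probabilities) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(x * p for x, p in zip(outcomes, probabilities))

    # Equiprobable outcomes reduce to the simple average:
    print(expectation([1, 2, 3, 4, 5, 6], [1/6] * 6))   # 3.5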

An illustration of the convergence of sequence averages of rolls of a die to the expected value of 3.5 as the number of rolls (trials) grows.

Examples

Let $X$ represent the outcome of a roll of a fair six-sided die. More specifically, $X$ will be the number of pips showing on the top face of the die after the toss. The possible values for $X$ are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of $X$ is

$$\operatorname{E}[X] = 1\cdot\frac{1}{6} + 2\cdot\frac{1}{6} + 3\cdot\frac{1}{6} + 4\cdot\frac{1}{6} + 5\cdot\frac{1}{6} + 6\cdot\frac{1}{6} = 3.5.$$

If one rolls the die $n$ times and computes the average (arithmetic mean) of the results, then as $n$ grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers. One example sequence of ten rolls of the die is 2, 3, 1, 2, 5, 6, 2, 2, 2, 6, which has the average of 3.1, a distance of 0.4 from the expected value of 3.5. The convergence is relatively slow: the probability that the average falls within the range 3.5 ± 0.1 is 21.6% for ten rolls, 46.1% for a hundred rolls and 93.7% for a thousand rolls. See the figure for an illustration of the averages of longer sequences of rolls of the die and how they converge to the expected value of 3.5. More generally, the rate of convergence can be roughly quantified by e.g. Chebyshev's inequality and the Berry–Esseen theorem.
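
This convergence is easy to observe numerically; the sketch below (assuming NumPy is available) tracks the running average of simulated rolls of a fair die, mirroring the figure.

    import numpy as np

    rng = np.random.default_rng(0)
    rolls = rng.integers(1, 7, size=100_000)              # fair six-sided die
    running_mean = np.cumsum(rolls) / np.arange(1, rolls.size + 1)
    for n in (10, 100, 1_000, 100_000):
        print(n, running_mean[n - 1])                     # drifts toward 3.5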

The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable $X$ represents the (monetary) outcome of a \$1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is \$35; otherwise the player loses the bet. The expected profit from such a bet will be

$$\operatorname{E}[\,\text{gain from }\$1\text{ bet}\,] = -\$1\cdot\frac{37}{38} + \$35\cdot\frac{1}{38} = -\$0.0526.$$

That is, the bet of $1 stands to lose $0.0526, so its expected value is -$0.0526.
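
The same weighted-average computation reproduces this figure (a minimal sketch):

    # $1 straight-up bet in American roulette:
    # lose $1 with probability 37/38, win $35 with probability 1/38.
    ev = (-1) * 37/38 + 35 * 1/38
    print(round(ev, 4))   # -0.0526, i.e. about a 5.3% house edge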

Countably infinite case

Let $X$ be a random variable with a countable set of finite outcomes $x_1, x_2, \ldots,$ occurring with probabilities $p_1, p_2, \ldots,$ respectively, such that the infinite sum $\sum_{i=1}^{\infty} |x_i|\,p_i$ converges. The expected value of $X$ is defined as the series

$$\operatorname{E}[X] = \sum_{i=1}^{\infty} x_i\,p_i.$$

Remark 1. Observe that $\bigl|\operatorname{E}[X]\bigr| \leq \sum_{i=1}^{\infty} |x_i|\,p_i < \infty.$

Remark 2. Due to absolute convergence, the expected value does not depend on the order in which the outcomes are presented. By contrast, a conditionally convergent series can be made to converge or diverge arbitrarily, via the Riemann rearrangement theorem.

Example

Suppose $x_i = i$ and $p_i = \frac{k}{i 2^i}$ for $i = 1, 2, 3, \ldots,$ where $k = \frac{1}{\ln 2}$ (with $\ln$ the natural logarithm) is the scale factor such that the probabilities sum to 1. Then

$$\operatorname{E}[X] = 1\left(\frac{k}{2}\right) + 2\left(\frac{k}{8}\right) + 3\left(\frac{k}{24}\right) + \dots = \frac{k}{2} + \frac{k}{4} + \frac{k}{8} + \dots = k.$$ Since this series converges absolutely, the expected value of $X$ is $k$.
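
A quick numerical check, truncating the series at a finite number of terms (an illustrative sketch):

    import math

    k = 1 / math.log(2)                        # scale factor 1/ln 2
    probs = [k / (i * 2**i) for i in range(1, 60)]
    print(sum(probs))                                          # ~1.0
    print(sum(i * p for i, p in enumerate(probs, start=1)))    # ~k = 1.4427...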

For an example that is not absolutely convergent, suppose random variable $X$ takes values $1, -2, 3, -4, \ldots$ with respective probabilities $\frac{c}{1^2}, \frac{c}{2^2}, \frac{c}{3^2}, \frac{c}{4^2}, \ldots,$ where $c = \frac{6}{\pi^2}$ is a normalizing constant that ensures the probabilities sum to one. Then the infinite sum

$$\sum_{i=1}^{\infty} x_i\,p_i = c\,\biggl(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dotsb\biggr)$$ converges and its sum is equal to $\frac{6\ln 2}{\pi^2} \approx 0.421383$. However, it would be incorrect to claim that the expected value of $X$ is equal to this number; in fact, $\operatorname{E}[X]$ does not exist, as this series does not converge absolutely (see Alternating harmonic series).

An example that diverges arises in the context of the St. Petersburg paradox. Let $x_i = 2^i$ and $p_i = \frac{1}{2^i}$ for $i = 1, 2, 3, \ldots$ Then

$$\sum_{i=1}^{\infty} x_i\,p_i = 2\cdot\frac{1}{2} + 4\cdot\frac{1}{4} + 8\cdot\frac{1}{8} + 16\cdot\frac{1}{16} + \cdots = 1 + 1 + 1 + 1 + \cdots.$$ Since this does not converge but instead keeps growing, the expected value is infinite.
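
Truncating the series makes the divergence visible: each term contributes exactly 1, so the partial sums grow without bound (sketch):

    # Partial sums of the St. Petersburg series: n terms sum to n.
    for n in (10, 20, 30):
        print(n, sum(2**i * (1 / 2**i) for i in range(1, n + 1)))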

Absolutely continuous case

If $X$ is a random variable whose cumulative distribution function admits a density $f(x)$, then the expected value is defined as the following Lebesgue integral:

$$\operatorname{E}[X] = \int_{\mathbb{R}} x f(x)\,dx.$$

Remark. From a computational perspective, the integral in the definition of $\operatorname{E}[X]$ may often be treated as an improper Riemann integral $\int_{-\infty}^{+\infty} x f(x)\,dx$. Specifically, if the function $x f(x)$ is Riemann-integrable on every finite interval $[a, b]$, and

$$\min\left((-1)\cdot\text{(R)}\int_{-\infty}^{0} x f(x)\,dx,\ \text{(R)}\int_{0}^{+\infty} x f(x)\,dx\right) < \infty,$$

then the values (whether finite or infinite) of both integrals agree.
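
As a concrete illustration, this improper Riemann integral can be evaluated numerically; the sketch below (assuming SciPy is available) recovers the mean $1/\lambda$ of an exponential density.

    import numpy as np
    from scipy.integrate import quad

    lam = 2.0
    density = lambda x: lam * np.exp(-lam * x)      # exponential density on [0, inf)
    ev, abserr = quad(lambda x: x * density(x), 0, np.inf)
    print(ev)                                        # ~0.5 = 1/lam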

General case

In general, if $X$ is a random variable defined on a probability space $(\Omega, \Sigma, \operatorname{P})$, then the expected value of $X$, denoted by $\operatorname{E}[X]$, $\langle X \rangle$, or $\bar{X}$, is defined as the Lebesgue integral

$$\operatorname{E}[X] = \int_{\Omega} X(\omega)\,d\operatorname{P}(\omega).$$

Remark 1. If $X_+(\omega) = \max(X(\omega), 0)$ and $X_-(\omega) = -\min(X(\omega), 0)$, then $X = X_+ - X_-$. The functions $X_+$ and $X_-$ can be shown to be measurable (hence, random variables), and, by definition of the Lebesgue integral,

$$\begin{aligned}\operatorname{E}[X] &= \int_{\Omega} X(\omega)\,d\operatorname{P}(\omega)\\ &= \int_{\Omega} X_+(\omega)\,d\operatorname{P}(\omega) - \int_{\Omega} X_-(\omega)\,d\operatorname{P}(\omega)\\ &= \operatorname{E}[X_+] - \operatorname{E}[X_-],\end{aligned}$$

where $\operatorname{E}[X_+]$ and $\operatorname{E}[X_-]$ are non-negative and possibly infinite.

The following scenarios are possible:

$\operatorname{E}[X]$ exists and is finite if and only if $\max(\operatorname{E}[X_+], \operatorname{E}[X_-]) < \infty$;

$\operatorname{E}[X]$ exists and is infinite if $\max(\operatorname{E}[X_+], \operatorname{E}[X_-]) = \infty$ and $\min(\operatorname{E}[X_+], \operatorname{E}[X_-]) < \infty$;

$\operatorname{E}[X]$ does not exist if $\operatorname{E}[X_+] = \operatorname{E}[X_-] = \infty$.

Remark 2. If $F_X(x) = \operatorname{P}(X \leq x)$ is the cumulative distribution function of $X$, then

$$\operatorname{E}[X] = \int_{-\infty}^{+\infty} x\,dF_X(x),$$

where the integral is interpreted in the sense of Lebesgue–Stieltjes.

Remark 3. An example of a distribution for which there is no expected value is the Cauchy distribution.

Remark 4. For multidimensional random variables, their expected value is defined per component, i.e.

$$\operatorname{E}[(X_1, \ldots, X_n)] = (\operatorname{E}[X_1], \ldots, \operatorname{E}[X_n])$$

and, for a random matrix $X$ with elements $X_{ij}$,

$$(\operatorname{E}[X])_{ij} = \operatorname{E}[X_{ij}].$$

Basic properties

The properties below replicate or follow immediately from those of the Lebesgue integral.

$\operatorname{E}[\mathbf{1}_A] = \operatorname{P}(A)$

If $A$ is an event, then $\operatorname{E}[\mathbf{1}_A] = \operatorname{P}(A)$, where $\mathbf{1}_A$ is the indicator function of the set $A$.

Proof. By definition of the Lebesgue integral of the simple function $\mathbf{1}_A = \mathbf{1}_A(\omega)$,

$$\operatorname{E}[\mathbf{1}_A] = 1 \cdot \operatorname{P}(A) + 0 \cdot \operatorname{P}(\Omega \setminus A) = \operatorname{P}(A).$$

If $X = Y$ (a.s.) then $\operatorname{E}[X] = \operatorname{E}[Y]$

The statement follows from the definition of the Lebesgue integral if we notice that $X_+ = Y_+$ (a.s.), $X_- = Y_-$ (a.s.), and that changing a simple random variable on a set of probability zero does not alter the expected value.

Expected value of a constant

If $X$ is a random variable, and $X = c$ (a.s.), where $c \in [-\infty, +\infty]$, then $\operatorname{E}[X] = c$. In particular, for an arbitrary random variable $X$, $\operatorname{E}[\operatorname{E}[X]] = \operatorname{E}[X]$.

Proof. Let $C$ be a constant random variable, i.e. $C \equiv c$. It follows from the definition of the Lebesgue integral that $\operatorname{E}[C] = c$. It also follows that $X = C$ (a.s.). By the previous property, $\operatorname{E}[X] = \operatorname{E}[C] = c.$

Linearity

The expected value operator (or expectation operator) $\operatorname{E}[\cdot]$ is linear in the sense that

$$\operatorname{E}[X + Y] = \operatorname{E}[X] + \operatorname{E}[Y], \qquad \operatorname{E}[aX] = a\operatorname{E}[X],$$

where $X$ and $Y$ are (arbitrary) random variables, and $a$ is a scalar.

More rigorously, let $X$ and $Y$ be random variables whose expected values are defined (different from $\infty - \infty$).

If the sum $\operatorname{E}[X] + \operatorname{E}[Y]$ is also not of the form $\infty - \infty$, then

$$\operatorname{E}[X + Y] = \operatorname{E}[X] + \operatorname{E}[Y].$$

Let $\operatorname{E}[X]$ be finite, and let $a \in \mathbb{R}$ be a scalar. Then $\operatorname{E}[aX] = a\operatorname{E}[X].$

Proof. 1. We prove additivity in several steps.

1a. If $X$ and $Y$ are simple and non-negative then, taking intersections where necessary, one can re-write $X$ and $Y$ in the form $X = \sum_{i=1}^{n} x_i \cdot \mathbf{1}_{A_i}$ and $Y = \sum_{i=1}^{n} y_i \cdot \mathbf{1}_{A_i}$, for some measurable pairwise-disjoint sets $\{A_i\}_{i=1}^{n}$ partitioning $\Omega$, with $\mathbf{1}_{A_i} = \mathbf{1}_{A_i}(\omega)$ being the indicator function of the set $A_i$. By a straightforward check, the additivity follows.

1b. Assuming that $X$ and $Y$ are arbitrary and non-negative, recall that every non-negative measurable function is a pointwise limit of a pointwise non-decreasing sequence of simple non-negative ones. Let $\{X_n\}$ and $\{Y_n\}$ be such sequences converging to $X$ and $Y$, respectively. We see that $\{X_n + Y_n\}$ pointwise non-decreases, and $X_n + Y_n \to X + Y$ pointwise. By the monotone convergence theorem and case 1a,

$$\begin{aligned}\operatorname{E}[X + Y] &= \operatorname{E}[\lim_n (X_n + Y_n)] = \lim_n \operatorname{E}[X_n + Y_n]\\ &= \lim_n (\operatorname{E}[X_n] + \operatorname{E}[Y_n]) = \lim_n \operatorname{E}[X_n] + \lim_n \operatorname{E}[Y_n]\\ &= \operatorname{E}[\lim_n X_n] + \operatorname{E}[\lim_n Y_n] = \operatorname{E}[X] + \operatorname{E}[Y].\end{aligned}$$

(The reader can verify that using the monotone convergence theorem this way does not lead to circular logic.)

1c. In the general case, if $Z = X + Y$, then $Z_+ + X_- + Y_- = Z_- + X_+ + Y_+$, and $\operatorname{E}[Z_+ + X_- + Y_-] = \operatorname{E}[Z_- + X_+ + Y_+]$. Splitting up, $$\operatorname{E}[Z_+] + \operatorname{E}[X_-] + \operatorname{E}[Y_-] = \operatorname{E}[Z_-] + \operatorname{E}[X_+] + \operatorname{E}[Y_+],$$ which is equivalent to $$\operatorname{E}[Z_+] - \operatorname{E}[Z_-] = \operatorname{E}[X_+] + \operatorname{E}[Y_+] - \operatorname{E}[X_-] - \operatorname{E}[Y_-],$$ and finally $\operatorname{E}[Z] = \operatorname{E}[X] + \operatorname{E}[Y]$.

2. To prove homogeneity, we first assume that the scalar $a$ above is non-negative. The finiteness of $\operatorname{E}[X]$ implies that $X$ is finite (a.s.). Therefore, $a \cdot X$ is also finite (a.s.), which guarantees that $\operatorname{E}[aX]$ is finite. The equality, then, is a straightforward check based on the definition of the Lebesgue integral. If $a < 0$, then we first prove that $\operatorname{E}[-X] = -\operatorname{E}[X]$ by observing that $(-X)_+ = X_-$ and vice versa.
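
Both identities are easy to confirm by simulation; in the Monte Carlo sketch below, sample means stand in for the expectations.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.exponential(2.0, size=1_000_000)
    Y = rng.normal(3.0, 1.0, size=1_000_000)
    a = -4.0
    print((X + Y).mean(), X.mean() + Y.mean())   # additivity: nearly equal
    print((a * X).mean(), a * X.mean())          # homogeneity: nearly equal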

$\operatorname{E}[X]$ exists and is finite if and only if $\operatorname{E}[|X|]$ is finite

The following statements regarding a random variable $X$ are equivalent:

$\operatorname{E}[X]$ exists and is finite.

Both $\operatorname{E}[X_+]$ and $\operatorname{E}[X_-]$ are finite.

$\operatorname{E}[|X|]$ is finite.

Sketch of proof. Indeed, $|X| = X_+ + X_-$. By linearity, $\operatorname{E}[|X|] = \operatorname{E}[X_+] + \operatorname{E}[X_-]$. The above equivalence relies on the definition of the Lebesgue integral and measurability of $X$.

Remark. For the reasons above, the expressions "$X$ is integrable" and "the expected value of $X$ is finite" are used interchangeably when speaking of a random variable throughout this article.

If $X \geq 0$ (a.s.) then $\operatorname{E}[X] \geq 0$

Proof. Denote $\operatorname{SF} = \{s : \Omega \to \mathbb{R} \mid s \text{ is a simple random variable, and } 0 \leq s \leq X_+\}$. If $s \in \operatorname{SF}$, then $\operatorname{E}[s] \in [0, +\infty)$, and hence, by definition of the Lebesgue integral, $$\operatorname{E}[X_+] = \sup_{s \in \operatorname{SF}} \operatorname{E}[s] \geq 0.$$ On the other hand, $X_- = 0$ (a.s.), so, through a similar argument, $\operatorname{E}[X_-] = 0$, and therefore $\operatorname{E}[X] = \operatorname{E}[X_+] - \operatorname{E}[X_-] = \operatorname{E}[X_+] \geq 0$.

Monotonicity

If $X \leq Y$ (a.s.), and both $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ exist, then $\operatorname{E}[X] \leq \operatorname{E}[Y]$.

Remark. $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ exist in the sense that $\min(\operatorname{E}[X_+], \operatorname{E}[X_-]) < \infty$ and $\min(\operatorname{E}[Y_+], \operatorname{E}[Y_-]) < \infty.$

The proof follows from the linearity and the previous property if we set $Z = Y - X$ and notice that $Z \geq 0$ (a.s.).

If $|X| \leq Y$ (a.s.) and $\operatorname{E}[Y]$ is finite then so is $\operatorname{E}[X]$

Let $X$ and $Y$ be random variables such that $|X| \leq Y$ (a.s.) and $\operatorname{E}[Y] < \infty$. Then $\operatorname{E}[X] \neq \pm\infty$.

Proof. Due to non-negativity of $|X|$, $\operatorname{E}|X|$ exists, finite or infinite. By monotonicity, $\operatorname{E}|X| \leq \operatorname{E}[Y] < \infty$, so $\operatorname{E}|X|$ is finite which, as we saw earlier, is equivalent to $\operatorname{E}[X]$ being finite.

If $\operatorname{E}|X^\beta| < \infty$ and $0 < \alpha < \beta$ then $\operatorname{E}|X^\alpha| < \infty$

The proposition below will be used to prove the extremal property of $\operatorname{E}[X]$ later on.

Proposition. If $X$ is a random variable, then so is $X^\alpha$, for every $\alpha > 0$. If, in addition, $\operatorname{E}|X^\beta| < \infty$ and $0 < \alpha < \beta$, then $\operatorname{E}|X^\alpha| < \infty$.

Proof. To see why the first statement holds, observe that $X^\alpha$ is a composition of the random variable $X$ with the Borel function $x \mapsto x^\alpha$, and a Borel function of a random variable is again a random variable; hence $X^\alpha$ is a random variable. To prove the second statement, define $Y(\omega) = \max(|X(\omega)|^\beta, 1)$. The reader can verify that $Y$ is a random variable and $|X|^\alpha \leq Y$. By non-negativity,

$$\begin{aligned}\operatorname{E}[Y] &= \int\limits_{\{\omega \,\mid\, |X(\omega)|^\beta \leq 1\}} Y\,dP + \int\limits_{\{\omega \,\mid\, |X(\omega)|^\beta > 1\}} Y\,dP\\[4pt] &= \operatorname{P}\bigl(|X(\omega)|^\beta \leq 1\bigr) + \int\limits_{\{\omega \,\mid\, |X(\omega)|^\beta > 1\}} |X|^\beta\,dP\\[4pt] &\leq 1 + \operatorname{E}|X^\beta| < \infty.\end{aligned}$$

By monotonicity, $$\operatorname{E}|X^\alpha| \leq \operatorname{E}[Y] \leq 1 + \operatorname{E}|X^\beta| < \infty.$$

Counterexample for infinite measure

The requirement that $\operatorname{P}(\Omega) < \infty$ is essential. By way of counterexample, consider the measurable space

$$([1, +\infty), \mathcal{B}_{[1,+\infty)}, \lambda),$$

where $\mathcal{B}_{[1,+\infty)}$ is the Borel $\sigma$-algebra on the interval $[1, +\infty)$, and $\lambda$ is the linear Lebesgue measure. The reader can prove that $\int_{[1,+\infty)} \frac{1}{x}\,dx = \infty$, even though $\int_{[1,+\infty)} \frac{1}{x^2}\,dx = 1$. (Sketch of proof: $\int_S \frac{1}{x}\,dx$ and $\int_S \frac{1}{x^2}\,dx$ define a measure $\mu$ on $[1, +\infty) = \cup_{n=1}^{\infty} [1, n]$. Use "continuity from below" with respect to $\mu$ and reduce to the Riemann integral on each finite subinterval $[1, n]$.)

Extremal property

Recall, as we proved early on, that if $X$ is a random variable, then so is $X^2$.

Proposition (extremal property of $\operatorname{E}[X]$). Let $X$ be a random variable, and $\operatorname{E}[X^2] < \infty$. Then $\operatorname{E}[X]$ and $\operatorname{Var}[X]$ are finite, and $\operatorname{E}[X]$ is the best least squares approximation for $X$ among constants. Specifically,

for every $c \in \mathbb{R}$, $\operatorname{E}[X - c]^2 \geq \operatorname{Var}[X]$;

equality holds if and only if $c = \operatorname{E}[X]$.

($\operatorname{Var}[X]$ denotes the variance of $X$.)

Remark (intuitive interpretation of extremal property). In intuitive terms, the extremal property says that if one is asked to predict the outcome of a trial of a random variable $X$, then $\operatorname{E}[X]$, in some practically useful sense, is one's best bet if no advance information about the outcome is available. If, on the other hand, one does have some advance knowledge $\mathcal{F}$ regarding the outcome, then, again in some practically useful sense, one's bet may be improved upon by using conditional expectations $\operatorname{E}[X \mid \mathcal{F}]$ (of which $\operatorname{E}[X]$ is a special case) rather than $\operatorname{E}[X]$.

Proof of proposition. By the above properties, both $\operatorname{E}[X]$ and $\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}^2[X]$ are finite, and

$$\begin{aligned}\operatorname{E}[X - c]^2 &= \operatorname{E}[X^2 - 2cX + c^2]\\ &= \operatorname{E}[X^2] - 2c\operatorname{E}[X] + c^2\\ &= (c - \operatorname{E}[X])^2 + \operatorname{E}[X^2] - \operatorname{E}^2[X]\\ &= (c - \operatorname{E}[X])^2 + \operatorname{Var}[X],\end{aligned}$$

whence the extremal property follows.
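
Numerically, scanning the mean squared error $\operatorname{E}[X - c]^2$ over candidate constants $c$ locates the minimum at the sample mean, with minimal value close to the variance (sketch):

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.gamma(shape=3.0, scale=1.5, size=200_000)
    cs = np.linspace(X.mean() - 2, X.mean() + 2, 401)
    mse = np.array([((X - c) ** 2).mean() for c in cs])
    print(cs[mse.argmin()], X.mean())   # minimizer ~ E[X]
    print(mse.min(), X.var())           # minimal MSE ~ Var[X]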

If $\operatorname{E}|X| = 0$, then $X = 0$ (a.s.)

Proof. For every positive constant $r \in \mathbb{R}_{>0}$, $\operatorname{P}(|X| \geq r) = 0$. Indeed, $$r \cdot \mathbf{1}_{|X| \geq r} \leq |X| \cdot \mathbf{1}_{|X| \geq r} \leq |X|,$$ where $\mathbf{1}_{|X| \geq r} = \mathbf{1}_{|X| \geq r}(\omega)$ is the indicator function of the set $\{\omega \in \Omega \mid |X(\omega)| \geq r\}$. By a property above, the finiteness of $\operatorname{E}|X|$ guarantees that the expected values $\operatorname{E}[r \cdot \mathbf{1}_{|X| \geq r}]$ and $\operatorname{E}[|X| \cdot \mathbf{1}_{|X| \geq r}]$ are also finite. By monotonicity, $$r \cdot \operatorname{P}(|X| \geq r) = \operatorname{E}[r \cdot \mathbf{1}_{|X| \geq r}] \leq \operatorname{E}[|X| \cdot \mathbf{1}_{|X| \geq r}] \leq \operatorname{E}|X| = 0.$$ For some integer $n > 0$, set $r = \frac{1}{n}$. Define $S_n = \{\omega \in \Omega \mid |X(\omega)| \geq \frac{1}{n}\}$ and $S = \{\omega \in \Omega \mid |X(\omega)| > 0\}$. The chain of sets $S_1 \subseteq \cdots \subseteq S_n \subseteq S_{n+1} \subseteq \cdots \subseteq S$ monotonically non-decreases, and $S = \cup_{n=1}^{\infty} S_n$. By "continuity from below", $\operatorname{P}(S) = \lim_n \operatorname{P}(S_n)$. Applying this formula, we obtain $$\operatorname{P}(X \neq 0) = \operatorname{P}(|X| > 0) = \lim_n \operatorname{P}\left(|X| \geq \frac{1}{n}\right) = \lim_n 0 = 0,$$ as required.

If $\operatorname{E}[X] < +\infty$ then $X < +\infty$ (a.s.)

Proof. Since $\operatorname{E}[X]$ is defined (i.e. $\min(\operatorname{E}[X_+], \operatorname{E}[X_-]) < \infty$), and $\operatorname{E}[X] = \operatorname{E}[X_+] - \operatorname{E}[X_-]$, we know that $\operatorname{E}[X_+]$ is finite, and we want to show that $X_+ < +\infty$ (a.s.). We will show that $\operatorname{P}(\Omega_\infty) = 0$, where $\Omega_\infty = \{\omega \in \Omega \mid X_+(\omega) = +\infty\}$. If $\Omega_\infty = \emptyset$, then $\operatorname{P}(\Omega_\infty) = 0$, and the proof is complete. Assuming that $\Omega_\infty \neq \emptyset$, define $\operatorname{SF} = \{s \mid s \text{ is a simple random variable s.t. } 0 \leq s \leq X_+\}$. Given that $\operatorname{SF} \neq \emptyset$, pick $f \in \operatorname{SF}$. For every $n > \sup_\Omega f$, define

$$f_n(\omega) = \begin{cases} n & \text{if } \omega \in \Omega_\infty \\[3pt] f(\omega) & \text{if } \omega \notin \Omega_\infty. \end{cases}$$

Clearly, $f_n \in \operatorname{SF}$, and $\operatorname{E}[f_n] = n \cdot \operatorname{P}(\Omega_\infty) + h$, for some constant $h \geq 0$ independent of $n$. (One can easily see that, in fact, $h = \operatorname{E}[f \cdot \mathbf{1}_{\Omega \setminus \Omega_\infty}]$, but this is of no interest to us here.) Suppose that $\operatorname{P}(\Omega_\infty) > 0$. The sequence $\{\operatorname{E}[f_n]\}$ strictly increases, so, by definition of the Lebesgue integral, $$\operatorname{E}[X_+] = \sup_{s \in \operatorname{SF}} \operatorname{E}[s] \geq \sup_{n > \sup_\Omega f} \operatorname{E}[f_n] = +\infty \cdot \operatorname{P}(\Omega_\infty) + h = +\infty,$$ in contradiction with the earlier conclusion that $\operatorname{E}[X_+]$ is finite.

Corollary: if $\operatorname{E}[X] > -\infty$ then $X > -\infty$ (a.s.)

Corollary: if $\operatorname{E}|X| < \infty$ then $X \neq \pm\infty$ (a.s.)

$|\operatorname{E}[X]| \leq \operatorname{E}|X|$

For an arbitrary random variable $X$, $|\operatorname{E}[X]| \leq \operatorname{E}|X|$.

Proof. By definition of the Lebesgue integral,

$$\begin{aligned}|\operatorname{E}[X]| &= \bigl|\operatorname{E}[X_+] - \operatorname{E}[X_-]\bigr| \leq \bigl|\operatorname{E}[X_+]\bigr| + \bigl|\operatorname{E}[X_-]\bigr|\\[4pt] &= \operatorname{E}[X_+] + \operatorname{E}[X_-] = \operatorname{E}[X_+ + X_-]\\[4pt] &= \operatorname{E}|X|.\end{aligned}$$

Note that this result can also be proved based on Jensen's inequality.

Non-multiplicativity

In general, the expected value operator is not multiplicative, i.e. $\operatorname{E}[XY]$ is not necessarily equal to $\operatorname{E}[X] \cdot \operatorname{E}[Y]$. Indeed, let $X$ assume the values of 1 and $-1$ with probability 0.5 each. Then

$$\operatorname{E}^2[X] = \left(\frac{1}{2}\cdot(-1) + \frac{1}{2}\cdot 1\right)^2 = 0,$$

and

$$\operatorname{E}[X^2] = \frac{1}{2}\cdot(-1)^2 + \frac{1}{2}\cdot 1^2 = 1, \text{ so } \operatorname{E}[X^2] \neq \operatorname{E}^2[X].$$

The amount by which the multiplicativity fails is called the covariance:

$$\operatorname{Cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y].$$

If, however, the random variables $X \in (\Omega_1, \mathcal{F}_1, \operatorname{P}_1)$ and $Y \in (\Omega_2, \mathcal{F}_2, \operatorname{P}_2)$ are independent, then $\operatorname{E}[XY] = \operatorname{E}[X]\operatorname{E}[Y]$, and $\operatorname{Cov}(X, Y) = 0$.
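
Both the failure of multiplicativity and its recovery under independence can be checked by simulation (sketch):

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.choice([-1.0, 1.0], size=500_000)    # the +-1 example above
    print((X * X).mean(), X.mean() ** 2)         # E[X^2] ~ 1 vs E^2[X] ~ 0

    Y = rng.normal(size=500_000)                 # independent of X
    print((X * Y).mean() - X.mean() * Y.mean())  # Cov(X, Y) ~ 0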

Counterexample: $\operatorname{E}[X_i] \not\to \operatorname{E}[X]$ despite $X_i \to X$ pointwise

Let $([0,1], \mathcal{B}_{[0,1]}, \operatorname{P})$ be the probability space, where $\mathcal{B}_{[0,1]}$ is the Borel $\sigma$-algebra on $[0,1]$ and $\operatorname{P}$ the linear Lebesgue measure. For $i \geq 1$, define a sequence of random variables

$$X_i = i \cdot \mathbf{1}_{\left[0, \frac{1}{i}\right]}$$

and a random variable

$$X = \begin{cases} +\infty & \text{if } x = 0 \\ 0 & \text{otherwise} \end{cases}$$

on $[0,1]$, with $\mathbf{1}_S$ being the indicator function of the set $S \subseteq [0,1]$.

For every $x \in [0,1]$, as $i \to +\infty$, $X_i(x) \to X(x)$, and

$$\operatorname{E}[X_i] = i \cdot \operatorname{P}\left(\left[0, \frac{1}{i}\right]\right) = i \cdot \frac{1}{i} = 1,$$

so $\lim_{i\to\infty} \operatorname{E}[X_i] = 1$. On the other hand, $\operatorname{P}(\{0\}) = 0$, and hence $\operatorname{E}[X] = 0$.
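
The counterexample can be mimicked by drawing $\omega$ uniformly from $[0,1]$ (sketch): each $X_i$ has expectation 1, yet for any fixed $\omega > 0$ the values $X_i(\omega)$ are eventually 0.

    import numpy as np

    rng = np.random.default_rng(4)
    omega = rng.uniform(size=1_000_000)
    for i in (10, 1_000, 100_000):
        X_i = np.where(omega <= 1 / i, float(i), 0.0)   # X_i = i on [0, 1/i]
        print(i, X_i.mean())                            # stays near 1
    # For any fixed omega > 0, X_i(omega) = 0 once i > 1/omega.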

Countable non-additivity

In general, the expected value operator is not $\sigma$-additive, i.e.

$$\operatorname{E}\left[\sum_{i=0}^{\infty} X_i\right] \neq \sum_{i=0}^{\infty} \operatorname{E}[X_i].$$

By way of counterexample, let $([0,1], \mathcal{B}_{[0,1]}, \operatorname{P})$ be the probability space, where $\mathcal{B}_{[0,1]}$ is the Borel $\sigma$-algebra on $[0,1]$ and $\operatorname{P}$ the linear Lebesgue measure. Define a sequence of random variables $X_i = (i+1) \cdot \mathbf{1}_{\left[0, \frac{1}{i+1}\right]} - i \cdot \mathbf{1}_{\left[0, \frac{1}{i}\right]}$ on $[0,1]$, with $\mathbf{1}_S$ being the indicator function of the set $S \subseteq [0,1]$. For the pointwise sums, we have

$$\sum_{i=0}^{n} X_i = (n+1) \cdot \mathbf{1}_{\left[0, \frac{1}{n+1}\right]}, \qquad \sum_{i=0}^{\infty} X_i(x) = \begin{cases} +\infty & \text{if } x = 0 \\ 0 & \text{otherwise.} \end{cases}$$

By finite additivity,

$$\sum_{i=0}^{\infty} \operatorname{E}[X_i] = \lim_{n\to\infty} \sum_{i=0}^{n} \operatorname{E}[X_i] = \lim_{n\to\infty} \operatorname{E}\left[\sum_{i=0}^{n} X_i\right] = 1.$$

On the other hand, $\operatorname{P}(\{0\}) = 0$, and hence

$$\operatorname{E}\left[\sum_{i=0}^{\infty} X_i\right] = 0 \neq 1 = \sum_{i=0}^{\infty} \operatorname{E}[X_i].$$

Countable additivity for non-negative random variables

Let $\{X_i\}_{i=0}^{\infty}$ be non-negative random variables. It follows from the monotone convergence theorem that

$$\operatorname{E}\left[\sum_{i=0}^{\infty} X_i\right] = \sum_{i=0}^{\infty} \operatorname{E}[X_i].$$

$\operatorname{E}[XY] = \operatorname{E}[X]\operatorname{E}[Y]$ for independent $X$ and $Y$

Let $X$ and $Y$ be independent random variables with finite expectations $\operatorname{E}[X]$ and $\operatorname{E}[Y]$. Then $\operatorname{E}[XY] = \operatorname{E}[X]\operatorname{E}[Y]$.

Proof. 1. The case of non-negative $\mathbb{Q}$-valued random variables. Given a positive integer $n$, let the random variables $X : \Omega_1 \to \mathbb{R}$ and $Y : \Omega_2 \to \mathbb{R}$ assume their values in the set $\left\{\frac{m}{n} \,\middle|\, m = 0, 1, 2, 3, \ldots\right\} \subset \mathbb{Q}_{\geq 0}$. Then $X = \sum_{m \geq 0} \frac{m}{n} \cdot \mathbf{1}_{X_{mn}}$, $Y = \sum_{m \geq 0} \frac{m}{n} \cdot \mathbf{1}_{Y_{mn}}$, and

$$XY = \sum_{m_1 \geq 0} \sum_{m_2 \geq 0} \frac{m_1}{n} \mathbf{1}_{X_{m_1 n}} \cdot \frac{m_2}{n} \mathbf{1}_{Y_{m_2 n}} = \frac{1}{n^2} \sum_{i \geq 0} i \cdot \sum_{m_1 \cdot m_2 = i} \mathbf{1}_{X_{m_1 n} \times Y_{m_2 n}},$$

or equivalently,

$$XY(\omega) = \frac{i}{n^2} \Longleftrightarrow \omega \in \bigsqcup_{m_1 m_2 = i} \left(X_{m_1 n} \times Y_{m_2 n}\right),$$

where $\mathbf{1}_S$ is the indicator function of the set $S$,

$$X_{mn} = \left\{\omega \in \Omega_1 \,\middle|\, X(\omega) = \frac{m}{n}\right\}, \qquad Y_{mn} = \left\{\omega \in \Omega_2 \,\middle|\, Y(\omega) = \frac{m}{n}\right\},$$

and $\bigsqcup$ denotes disjoint union. By definition of expected value,

$$\operatorname{E}[XY] = \int_{\Omega_1 \times \Omega_2} XY\,d\operatorname{P} = \frac{1}{n^2} \sum_{i \geq 0} i \cdot \sum_{m_1 \cdot m_2 = i} \operatorname{P}(X \in X_{m_1 n},\, Y \in Y_{m_2 n}).$$

Due to independence, $\operatorname{P}(X \in X_{m_1 n},\, Y \in Y_{m_2 n}) = \operatorname{P}(X \in X_{m_1 n}) \cdot \operatorname{P}(Y \in Y_{m_2 n})$, whence

$$\begin{aligned}\operatorname{E}[XY] &= \frac{1}{n^2} \sum_{i \geq 0} i \cdot \sum_{m_1 \cdot m_2 = i} \operatorname{P}(X \in X_{m_1 n}) \operatorname{P}(Y \in Y_{m_2 n})\\[4pt] &= \sum_{m_1 \geq 0} \sum_{m_2 \geq 0} \frac{m_1}{n} \operatorname{P}(X \in X_{m_1 n}) \cdot \frac{m_2}{n} \operatorname{P}(Y \in Y_{m_2 n})\\[4pt] &= \left(\sum_{m_1 \geq 0} \frac{m_1}{n} \operatorname{P}(X \in X_{m_1 n})\right) \cdot \left(\sum_{m_2 \geq 0} \frac{m_2}{n} \operatorname{P}(Y \in Y_{m_2 n})\right)\\[4pt] &= \operatorname{E}[X]\operatorname{E}[Y].\end{aligned}$$

2. The case of non-negative random variables. Let $X$ and $Y$ be (arbitrary) non-negative random variables. Define

$$X_n(\omega) = \begin{cases} \frac{m}{n} & \text{if } \frac{m}{n} \leq X(\omega) < \frac{m+1}{n}, \\[4pt] 0 & \text{if } X(\omega) = +\infty, \end{cases}$$

for an arbitrary $\omega \in \Omega_1$. Note that $X_n : \Omega_1 \to \mathbb{R}$ is a random variable and $\text{Range}(X_n) \subseteq \left\{\frac{m}{n} \,\middle|\, m = 0, 1, 2, 3, \ldots\right\} \subset \mathbb{Q}_{\geq 0}$. As we saw previously, the finiteness of $\operatorname{E}[X]$ implies that $X$ is finite almost surely, and consequently $|X_n - X| \leq \frac{1}{n}$ (a.s.) on $\Omega_1$. This, in turn, implies that $\operatorname{E}|X_n - X| \leq \frac{1}{n}$. Let the random variable $Y_n$ be defined the same way but with respect to $Y$. We have

$$\begin{aligned}\bigl|\operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y]\bigr| &= \bigl|\operatorname{E}[XY] - \operatorname{E}[X_n Y] + \operatorname{E}[X_n Y] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &= \bigl|\operatorname{E}[(X - X_n)Y] + \operatorname{E}[X_n Y] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &\leq \frac{1}{n}\operatorname{E}|Y| + \bigl|\operatorname{E}[X_n Y] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &= \frac{1}{n}\operatorname{E}|Y| + \bigl|\operatorname{E}[X_n Y] - \operatorname{E}[X_n Y_n] + \operatorname{E}[X_n Y_n] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &\leq \frac{1}{n}\operatorname{E}|Y| + \frac{1}{n}\operatorname{E}|X_n| + \bigl|\operatorname{E}[X_n Y_n] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &= \frac{1}{n}\operatorname{E}|Y| + \frac{1}{n}\operatorname{E}|X_n - X + X| + \bigl|\operatorname{E}[X_n Y_n] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &\leq \frac{1}{n}\operatorname{E}|Y| + \frac{\operatorname{E}|X_n - X| + \operatorname{E}|X|}{n} + \bigl|\operatorname{E}[X_n Y_n] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &\leq \frac{1}{n}\operatorname{E}|Y| + \frac{1}{n^2} + \frac{\operatorname{E}|X|}{n} + \bigl|\operatorname{E}[X_n Y_n] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|.\end{aligned}$$

The random variables $X_n$ and $Y_n$ were shown to satisfy $\operatorname{E}[X_n Y_n] = \operatorname{E}[X_n]\operatorname{E}[Y_n]$. Therefore,

$$\begin{aligned}\bigl|\operatorname{E}[X_n Y_n] - \operatorname{E}[X]\operatorname{E}[Y]\bigr| &= \bigl|\operatorname{E}[X_n]\operatorname{E}[Y_n] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &= \bigl|\operatorname{E}[X_n]\operatorname{E}[Y_n] - \operatorname{E}[X]\operatorname{E}[Y_n] + \operatorname{E}[X]\operatorname{E}[Y_n] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &\leq \bigl|\operatorname{E}[X_n]\operatorname{E}[Y_n] - \operatorname{E}[X]\operatorname{E}[Y_n]\bigr| + \bigl|\operatorname{E}[X]\operatorname{E}[Y_n] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|\\ &\leq \operatorname{E}|X_n - X| \cdot \operatorname{E}|Y_n| + \operatorname{E}|X| \cdot \operatorname{E}|Y_n - Y|\\ &\leq \frac{\operatorname{E}|Y_n| + \operatorname{E}|X|}{n} = \frac{\operatorname{E}|Y_n - Y + Y| + \operatorname{E}|X|}{n}\\ &\leq \frac{\operatorname{E}|Y_n - Y| + \operatorname{E}|Y| + \operatorname{E}|X|}{n}\\ &\leq \frac{1}{n^2} + \frac{\operatorname{E}|Y| + \operatorname{E}|X|}{n}.\end{aligned}$$

It follows that the constant value $\bigl|\operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y]\bigr|$, being independent of $n$, can only be equal to 0.

3. The general case. Let $X$ and $Y$ be arbitrary random variables. We have

$$\begin{aligned}\operatorname{E}[XY] &= \operatorname{E}[(X_+ - X_-)(Y_+ - Y_-)]\\ &= \operatorname{E}[X_+ Y_+] - \operatorname{E}[X_+ Y_-] - \operatorname{E}[X_- Y_+] + \operatorname{E}[X_- Y_-]\\ &= \operatorname{E}[X_+]\operatorname{E}[Y_+] - \operatorname{E}[X_+]\operatorname{E}[Y_-] - \operatorname{E}[X_-]\operatorname{E}[Y_+] + \operatorname{E}[X_-]\operatorname{E}[Y_-]\\ &= (\operatorname{E}[X_+] - \operatorname{E}[X_-])(\operatorname{E}[Y_+] - \operatorname{E}[Y_-])\\ &= \operatorname{E}[X_+ - X_-]\operatorname{E}[Y_+ - Y_-]\\ &= \operatorname{E}[X]\operatorname{E}[Y].\end{aligned}$$

Inequalities

Cauchy–Bunyakovsky–Schwarz inequality

The Cauchy–Bunyakovsky–Schwarz inequality states that

$$(\operatorname{E}[XY])^2 \leq \operatorname{E}[X^2] \cdot \operatorname{E}[Y^2].$$

Markov's inequality

For a nonnegative random variable $X$ and $a > 0$, Markov's inequality states that

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}[X]}{a}.$$
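
An empirical check of the bound (sketch):

    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.exponential(1.0, size=1_000_000)       # nonnegative, E[X] = 1
    for a in (1.0, 2.0, 5.0):
        print(a, (X >= a).mean(), X.mean() / a)    # frequency <= Markov bound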

Bienaymé–Chebyshev inequality

Let $X$ be an arbitrary random variable with finite expected value $\operatorname{E}[X]$ and finite variance $\operatorname{Var}[X] \neq 0$. The Bienaymé–Chebyshev inequality states that, for any real number $k > 0$,

$$\operatorname{P}\Bigl(\bigl|X - \operatorname{E}[X]\bigr| \geq k\sqrt{\operatorname{Var}[X]}\Bigr) \leq \frac{1}{k^2}.$$
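
And likewise for the Bienaymé–Chebyshev bound (sketch):

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.standard_t(df=5, size=1_000_000)       # finite mean and variance
    mu, sigma = X.mean(), X.std()
    for k in (1.5, 2.0, 3.0):
        print(k, (np.abs(X - mu) >= k * sigma).mean(), 1 / k**2)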

Jensen's inequality

Let $f : \mathbb{R} \to \mathbb{R}$ be a Borel convex function and $X$ a random variable such that $\operatorname{E}|X| < \infty$. Jensen's inequality states that

$$f(\operatorname{E}(X)) \leq \operatorname{E}(f(X)).$$

Remark 1. The expected value $\operatorname{E}(f(X))$ is well-defined even if $X$ is allowed to assume infinite values. Indeed, $\operatorname{E}|X| < \infty$ implies that $X \neq \pm\infty$ (a.s.), so the random variable $f(X(\omega))$ is defined almost surely, and therefore there is enough information to compute $\operatorname{E}(f(X))$.

Remark 2. Jensen's inequality implies that $|\operatorname{E}[X]| \leq \operatorname{E}|X|$, since the absolute value function is convex.
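
For a convex $f$ such as $f(x) = x^2$ or $f(x) = e^x$, the inequality is easy to confirm numerically (sketch):

    import numpy as np

    rng = np.random.default_rng(7)
    X = rng.normal(0.0, 1.0, size=1_000_000)
    print(X.mean() ** 2, (X ** 2).mean())        # f(E[X]) <= E[f(X)] for f = x^2
    print(np.exp(X.mean()), np.exp(X).mean())    # ~1 <= ~exp(1/2) for f = e^x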

Lyapunov's inequality

Let $0 < s < t$. Lyapunov's inequality states that

$$\bigl(\operatorname{E}|X|^s\bigr)^{1/s} \leq \bigl(\operatorname{E}|X|^t\bigr)^{1/t}.$$

Proof. Applying Jensen's inequality to $|X|^s$ and $g(x) = |x|^{t/s}$, we obtain $\bigl|\operatorname{E}|X|^s\bigr|^{t/s} \leq \operatorname{E}\bigl[(|X|^s)^{t/s}\bigr] = \operatorname{E}|X|^t$. Taking the $t$-th root of each side completes the proof.

Corollary.

$$\operatorname{E}|X| \leq \bigl(\operatorname{E}|X|^2\bigr)^{1/2} \leq \cdots \leq \bigl(\operatorname{E}|X|^n\bigr)^{1/n} \leq \cdots$$

Hölder's inequality

Let $p$ and $q$ satisfy $1 \leq p \leq \infty$, $1 \leq q \leq \infty$, and $1/p + 1/q = 1$. Hölder's inequality states that

$$\operatorname{E}|XY| \leq (\operatorname{E}|X|^p)^{1/p} (\operatorname{E}|Y|^q)^{1/q}.$$

Minkowski inequality

Let $p$ be an integer satisfying $1 \leq p \leq \infty$. Let, in addition, $\operatorname{E}|X|^p < \infty$ and $\operatorname{E}|Y|^p < \infty$. Then, according to the Minkowski inequality, $\operatorname{E}|X+Y|^p < \infty$ and

( E ⁡ | X + Y | p ) 1 / p ≤ ( E ⁡ | X | p ) 1 / p + ( E ⁡ | Y | p ) 1 / p . {\displaystyle {\Bigl (}\operatorname {E} |X+Y|^{p}{\Bigr )}^{1/p}\leq {\Bigl (}\operatorname {E} |X|^{p}{\Bigr )}^{1/p}+{\Bigl (}\operatorname {E} |Y|^{p}{\Bigr )}^{1/p}.}

Taking limits under the E {\displaystyle \operatorname {E} } sign

Monotone convergence theorem

Let the sequence of random variables { X n } {\displaystyle \{X_{n}\}} and the random variables X {\displaystyle X} and Y {\displaystyle Y} be defined on the same probability space ( Ω , Σ , P ) . {\displaystyle (\Omega ,\Sigma ,\operatorname {P} ).} Suppose that

all the expected values E ⁡ [ X n ] , {\displaystyle \operatorname {E} [X_{n}],} E ⁡ [ X ] , {\displaystyle \operatorname {E} [X],} and E ⁡ [ Y ] {\displaystyle \operatorname {E} [Y]} are defined (do not have the form ∞ − ∞ {\displaystyle \infty -\infty } );

E ⁡ [ Y ] > − ∞ ; {\displaystyle \operatorname {E} [Y]>-\infty ;}

for every n , {\displaystyle n,}

− ∞ ≤ Y ≤ X n ≤ X n + 1 ≤ + ∞ (a.s.) ; {\displaystyle -\infty \leq Y\leq X_{n}\leq X_{n+1}\leq +\infty \quad {\hbox{(a.s.)}};}

X {\displaystyle X} is the pointwise limit of { X n } {\displaystyle \{X_{n}\}} (a.s.), i.e. X ( ω ) = lim n X n ( ω ) {\displaystyle X(\omega )=\lim \nolimits _{n}X_{n}(\omega )} (a.s.).

The monotone convergence theorem states that

lim n E ⁡ [ X n ] = E ⁡ [ X ] . {\displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [X].}

Proof. Observe that, by monotonicity, the sequence { E ⁡ [ X n ] } {\displaystyle \{\operatorname {E} [X_{n}]\}} monotonically non-decreases, and E ⁡ [ Y ] ≤ E ⁡ [ X n ] ≤ E ⁡ [ X ] . {\displaystyle \operatorname {E} [Y]\leq \operatorname {E} [X_{n}]\leq \operatorname {E} [X].} If E ⁡ [ Y ] = + ∞ , {\displaystyle \operatorname {E} [Y]=+\infty ,} then E ⁡ [ Y ] = E ⁡ [ X n ] = E ⁡ [ X ] , {\displaystyle \operatorname {E} [Y]=\operatorname {E} [X_{n}]=\operatorname {E} [X],} and we are done. If E ⁡ [ Y ] < + ∞ , {\displaystyle \operatorname {E} [Y]<+\infty ,} then, following the assumption that E ⁡ [ Y ] > − ∞ , {\displaystyle \operatorname {E} [Y]>-\infty ,} we conclude that E ⁡ [ Y ] {\displaystyle \operatorname {E} [Y]} is finite which, in turn, implies, as we saw previously, that Y {\displaystyle Y} is finite (a.s.). Denote Z n = X n − Y {\displaystyle Z_{n}=X_{n}-Y} and Z = X − Y {\displaystyle Z=X-Y} . The finiteness of Y {\displaystyle Y} (a.s.) implies that the differences Z n = X n − Y {\displaystyle Z_{n}=X_{n}-Y} and Z = X − Y {\displaystyle Z=X-Y} are defined (do not have the form ∞ − ∞ {\displaystyle \infty -\infty } ) everywhere outside of a null set. On that null set, Z n {\displaystyle Z_{n}} and Z {\displaystyle Z} may be defined arbitrarily (e.g. as zero or in any other way, as long as measurability is preserved) without affecting this proof. As a difference of two random variables, Z n {\displaystyle Z_{n}} and Z {\displaystyle Z} are also random variables. It follows from the definition that Z n ≥ 0 {\displaystyle Z_{n}\geq 0} (a.s.), Z ≥ 0 {\displaystyle Z\geq 0} (a.s.), the sequence { Z n } {\displaystyle \{Z_{n}\}} pointwise non-decreases (a.s.), and Z n → Z {\displaystyle Z_{n}\to Z} pointwise (a.s.). By (the general version of) monotone convergence theorem, ( lim n E ⁡ [ X n ] ) − E ⁡ [ Y ] = lim n ( E ⁡ [ X n ] − E ⁡ [ Y ] ) = lim n E ⁡ [ X n − Y ] = lim n E ⁡ [ Z n ] = E ⁡ [ Z ] = E ⁡ [ X − Y ] = E ⁡ [ X ] − E ⁡ [ Y ] , {\displaystyle {\begin{aligned}(\lim _{n}\operatorname {E} [X_{n}])-\operatorname {E} [Y]&=\lim _{n}(\operatorname {E} [X_{n}]-\operatorname {E} [Y])\\&=\lim _{n}\operatorname {E} [X_{n}-Y]\\&=\lim _{n}\operatorname {E} [Z_{n}]\\&=\operatorname {E} [Z]\\&=\operatorname {E} [X-Y]\\&=\operatorname {E} [X]-\operatorname {E} [Y],\end{aligned}}} whence the assertion follows.
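The theorem is easy to watch in action with truncations. A minimal sketch (an illustration, not from the article; the exponential distribution, truncation levels, and seed are arbitrary assumptions) uses X_n = min(X, n), which increases pointwise to X:

```python
# Monotone convergence: E[min(X, n)] increases toward E[X] = 3.
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=3.0, size=100_000)
for n in [1, 2, 5, 10, 50]:
    print(n, np.minimum(x, n).mean())  # climbs toward x.mean() ~ 3
```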

Fatou's lemma

Let the sequence of random variables { X n } {\displaystyle \{X_{n}\}} and the random variable Y {\displaystyle Y} be defined on the same probability space ( Ω , Σ , P ) . {\displaystyle (\Omega ,\Sigma ,\operatorname {P} ).} Suppose that

all the expected values E ⁡ [ X n ] , {\displaystyle \operatorname {E} [X_{n}],} E ⁡ [ lim inf n X n ] , {\displaystyle \textstyle \operatorname {E} [\liminf _{n}X_{n}],} and E ⁡ [ Y ] {\displaystyle \operatorname {E} [Y]} are defined (do not have the form ∞ − ∞ {\displaystyle \infty -\infty } );

E ⁡ [ Y ] > − ∞ ; {\displaystyle \operatorname {E} [Y]>-\infty ;}

− ∞ ≤ Y ≤ X n ≤ + ∞ {\displaystyle -\infty \leq Y\leq X_{n}\leq +\infty } (a.s.), for every n . {\displaystyle n.}

Fatou's lemma states that

E ⁡ [ lim inf n X n ] ≤ lim inf n E ⁡ [ X n ] . {\displaystyle \operatorname {E} [\liminf _{n}X_{n}]\leq \liminf _{n}\operatorname {E} [X_{n}].}

(Note that lim inf n X n {\displaystyle \textstyle \liminf _{n}X_{n}} is a random variable, by the properties of limit inferior.)

Proof. If E ⁡ [ Y ] = + ∞ , {\displaystyle \operatorname {E} [Y]=+\infty ,} then, by monotonicity, E ⁡ [ Y ] = E ⁡ [ X n ] = + ∞ , {\displaystyle \operatorname {E} [Y]=\operatorname {E} [X_{n}]=+\infty ,} so lim inf n E ⁡ [ X n ] = + ∞ , {\displaystyle \textstyle \liminf _{n}\operatorname {E} [X_{n}]=+\infty ,} and the assertion follows. If E ⁡ [ Y ] < + ∞ {\displaystyle \operatorname {E} [Y]<+\infty } , then, following the assumption that E ⁡ [ Y ] > − ∞ , {\displaystyle \operatorname {E} [Y]>-\infty ,} we conclude that E ⁡ [ Y ] {\displaystyle \operatorname {E} [Y]} is finite which, in turn, implies, as we saw previously, that Y {\displaystyle Y} is finite (a.s.). Denote Z n = X n − Y {\displaystyle Z_{n}=X_{n}-Y} . Then Z n ≥ 0 {\displaystyle Z_{n}\geq 0} (a.s.). The finiteness of Y {\displaystyle Y} (a.s.) implies that Z n {\displaystyle Z_{n}} is defined (does not have the form ∞ − ∞ {\displaystyle \infty -\infty } ) everywhere outside of a null set. On that null set Z n {\displaystyle Z_{n}} may be defined arbitrarily (e.g. as zero or in any other way, as long as measurability is preserved) without affecting this proof. As a difference of two random variables, Z n {\displaystyle Z_{n}} is a random variable. By (the general version of) Fatou's lemma, E ⁡ [ lim inf n X n ] − E ⁡ [ Y ] = E ⁡ [ lim inf n ( X n − Y ) ] = E ⁡ [ lim inf n Z n ] ≤ lim inf n E ⁡ [ Z n ] = lim inf n E ⁡ [ X n − Y ] = lim inf n ( E ⁡ [ X n ] − E ⁡ [ Y ] ) = ( lim inf n E ⁡ [ X n ] ) − E ⁡ [ Y ] , {\displaystyle {\begin{aligned}\operatorname {E} [\liminf _{n}X_{n}]-\operatorname {E} [Y]&=\operatorname {E} [\liminf _{n}(X_{n}-Y)]\\&=\operatorname {E} [\liminf _{n}Z_{n}]\\&\leq \liminf _{n}\operatorname {E} [Z_{n}]\\&=\liminf _{n}\operatorname {E} [X_{n}-Y]\\&=\liminf _{n}(\operatorname {E} [X_{n}]-\operatorname {E} [Y])\\&=(\liminf _{n}\operatorname {E} [X_{n}])-\operatorname {E} [Y],\end{aligned}}} whence the assertion follows.

Corollary. Let

X n → X {\displaystyle X_{n}\to X} pointwise (a.s.);

E ⁡ [ X n ] ≤ C , {\displaystyle \operatorname {E} [X_{n}]\leq C,} for some constant C {\displaystyle C} independent of n {\displaystyle n} ;

E ⁡ [ Y ] > − ∞ ; {\displaystyle \operatorname {E} [Y]>-\infty ;}

− ∞ ≤ Y ≤ X n ≤ + ∞ {\displaystyle -\infty \leq Y\leq X_{n}\leq +\infty } (a.s.), for every n . {\displaystyle n.}

Then E ⁡ [ X ] ≤ C . {\displaystyle \operatorname {E} [X]\leq C.}

The proof proceeds by observing that X = lim inf n X n {\displaystyle \textstyle X=\liminf _{n}X_{n}} (a.s.) and applying Fatou's lemma.

Dominated convergence theorem

Let { X n } n {\displaystyle \{X_{n}\}_{n}} be a sequence of random variables. If X n → X {\displaystyle X_{n}\to X} pointwise (a.s.), | X n | ≤ Y ≤ + ∞ {\displaystyle |X_{n}|\leq Y\leq +\infty } (a.s.), and E ⁡ [ Y ] < ∞ {\displaystyle \operatorname {E} [Y]<\infty } , then, according to the dominated convergence theorem,

the function X {\displaystyle X} is measurable (and hence a random variable);

E ⁡ | X | < ∞ {\displaystyle \operatorname {E} |X|<\infty } ;

all the expected values E ⁡ [ X n ] {\displaystyle \operatorname {E} [X_{n}]} and E ⁡ [ X ] {\displaystyle \operatorname {E} [X]} are defined (do not have the form ∞ − ∞ {\displaystyle \infty -\infty } );

lim n E ⁡ [ X n ] = E ⁡ [ X ] {\displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [X]}

lim n E ⁡ | X n − X | = 0. {\displaystyle \lim _{n}\operatorname {E} |X_{n}-X|=0.}
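Both conclusions can be watched numerically. A minimal sketch (an illustration only; the sequence X_n = X·cos(X/n), the exponential distribution, and the seed are arbitrary choices satisfying the hypotheses, since X_n → X pointwise and |X_n| ≤ X = Y with E[Y] finite):

```python
# Dominated convergence: E[X_n] -> E[X] = 1 and E|X_n - X| -> 0.
import numpy as np

rng = np.random.default_rng(6)
x = rng.exponential(scale=1.0, size=1_000_000)  # E[X] = 1
for n in [1, 2, 10, 100]:
    xn = x * np.cos(x / n)   # dominated: |X_n| <= X = Y, E[Y] < inf
    print(n, xn.mean(), np.mean(np.abs(xn - x)))
# the means approach 1 and the mean absolute differences approach 0
```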

Relationship with characteristic function

The probability density function f X {\displaystyle f_{X}} of a scalar random variable X {\displaystyle X} is related to its characteristic function φ X {\displaystyle \varphi _{X}} by the inversion formula:

f X ( x ) = 1 2 π ∫ R e − i t x φ X ( t ) d t . {\displaystyle f_{X}(x)={\frac {1}{2\pi }}\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt.}

For the expected value of g ( X ) {\displaystyle g(X)} (where g : R → R {\displaystyle g:{\mathbb {R} }\to {\mathbb {R} }} is a Borel function), we can use this inversion formula to obtain

E ⁡ [ g ( X ) ] = 1 2 π ∫ R g ( x ) [ ∫ R e − i t x φ X ( t ) d t ] d x . {\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }g(x)\left[\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt\right]\,dx.}

If E ⁡ [ g ( X ) ] {\displaystyle \operatorname {E} [g(X)]} is finite, then, changing the order of integration in accordance with the Fubini–Tonelli theorem, we get

E ⁡ [ g ( X ) ] = 1 2 π ∫ R G ( t ) φ X ( t ) d t , {\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }G(t)\varphi _{X}(t)\,dt,}

where

G ( t ) = ∫ R g ( x ) e − i t x d x {\displaystyle G(t)=\int _{\mathbb {R} }g(x)e^{-itx}\,dx}

is the Fourier transform of g ( x ) . {\displaystyle g(x).} The expression for E ⁡ [ g ( X ) ] {\displaystyle \operatorname {E} [g(X)]} also follows directly from the Plancherel theorem.
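The inversion route can be checked numerically for a standard normal, whose characteristic function is φ(t) = exp(−t²/2). The sketch below is a rough illustration, not part of the article; the integration grids, cutoffs, and the choice g(x) = x² (exact answer 1) are arbitrary assumptions, and the imaginary (sine) part of the inversion integral is dropped because it cancels by symmetry.

```python
# Recover the density of N(0,1) from phi(t) = exp(-t^2/2) by numerical
# inversion, then compute E[g(X)] for g(x) = x^2; exact value is 1.
import numpy as np

t = np.linspace(-30.0, 30.0, 20_001)
x = np.linspace(-8.0, 8.0, 1_601)
dt, dx = t[1] - t[0], x[1] - x[0]
phi = np.exp(-t**2 / 2)
# f(x) = (1/2pi) * int e^{-itx} phi(t) dt, real part only
f = np.array([np.sum(np.cos(xi * t) * phi) for xi in x]) * dt / (2 * np.pi)
print(np.sum(x**2 * f) * dx)  # ~1.0
```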

Uses and applications

It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.

The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.

To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.

This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P ⁡ ( X ∈ A ) = E ⁡ [ 1 A ] {\displaystyle \operatorname {P} ({X\in {\mathcal {A}}})=\operatorname {E} [{\mathbf {1} }_{\mathcal {A}}]} , where 1 A {\displaystyle {\mathbf {1} }_{\mathcal {A}}} is the indicator function of the set A {\displaystyle {\mathcal {A}}} .
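The identity P(X ∈ A) = E[1_A] is exactly what a Monte Carlo probability estimate computes. A minimal sketch (illustrative only; the normal distribution, the set A = (1, 2), the seed, and the sample size are arbitrary choices):

```python
# Estimate P(1 < X < 2) for X ~ N(0,1) as the mean of an indicator.
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(1_000_000)
print(np.mean((x > 1.0) & (x < 2.0)))  # ~0.1359 = Phi(2) - Phi(1)
```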

The mass of a probability distribution is balanced at the expected value; here, a Beta(α,β) distribution with expected value α/(α+β).

In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values x i and corresponding probabilities p i . Now consider a weightless rod on which are placed weights, at locations x i along the rod and having masses p i (whose sum is one). The point at which the rod balances is E[X].
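The balance point is just the probability-weighted average of the positions, as a tiny sketch makes concrete (illustrative only; the fair-die weights are an arbitrary example):

```python
# Balance point of masses p_i at positions x_i: sum(x_i * p_i) = E[X].
import numpy as np

x = np.arange(1, 7)              # positions 1..6, as on a die
p = np.full(6, 1 / 6)            # masses summing to one
print(np.average(x, weights=p))  # 3.5, the balance point
```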

Expected values can also be used to compute the variance, by means of the computational formula for the variance

Var ⁡ ( X ) = E ⁡ [ X 2 ] − ( E ⁡ [ X ] ) 2 . {\displaystyle \operatorname {Var} (X)=\operatorname {E} [X^{2}]-(\operatorname {E} [X])^{2}.}
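The computational formula is easy to verify on a sample (an illustration; the uniform distribution, whose exact variance is 1/12, and the seed are arbitrary choices):

```python
# Var(X) = E[X^2] - (E[X])^2, checked against numpy's variance.
import numpy as np

rng = np.random.default_rng(8)
x = rng.uniform(0.0, 1.0, size=100_000)
print(np.mean(x**2) - np.mean(x)**2)  # ~1/12 ~ 0.0833
print(np.var(x))                      # same value up to rounding
```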

A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator A ^ {\displaystyle {\hat {A}}} operating on a quantum state vector | ψ ⟩ {\displaystyle |\psi \rangle } is written as ⟨ A ^ ⟩ = ⟨ ψ | A ^ | ψ ⟩ {\displaystyle \langle {\hat {A}}\rangle =\langle \psi |{\hat {A}}|\psi \rangle } . The uncertainty in A ^ {\displaystyle {\hat {A}}} can be calculated using the formula ( Δ A ) 2 = ⟨ A ^ 2 ⟩ − ⟨ A ^ ⟩ 2 {\displaystyle (\Delta A)^{2}=\langle {\hat {A}}^{2}\rangle -\langle {\hat {A}}\rangle ^{2}} .
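A toy numerical sketch of these formulas (not from the article; the 2×2 Hermitian operator, here the Pauli Z matrix, and the equal-superposition state are arbitrary illustrative choices):

```python
# <A> = <psi|A|psi> and (Delta A)^2 = <A^2> - <A>^2 for a 2x2 example.
import numpy as np

A = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z, Hermitian
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # normalized state vector
exp_A = psi.conj() @ A @ psi             # <A> = 0
exp_A2 = psi.conj() @ A @ A @ psi        # <A^2> = 1
print(exp_A, exp_A2 - exp_A**2)          # 0.0 and uncertainty 1.0
```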

The law of the unconscious statistician

The expected value of a measurable function of X {\displaystyle X} , g ( X ) {\displaystyle g(X)} , given that X {\displaystyle X} has a probability density function f ( x ) {\displaystyle f(x)} , is given by the inner product of f {\displaystyle f} and g {\displaystyle g} :

E ⁡ [ g ( X ) ] = ∫ R g ( x ) f ( x ) d x . {\displaystyle \operatorname {E} [g(X)]=\int _{\mathbb {R} }g(x)f(x)\,dx.}

This formula also holds in the multidimensional case, when g {\displaystyle g} is a function of several random variables, and f {\displaystyle f} is their joint density.[5][6]
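The formula can be evaluated numerically. A minimal sketch (illustrative only; X ~ Exp(1), g(x) = x² with exact answer 2, and the integration grid and cutoff are arbitrary assumptions):

```python
# LOTUS: E[g(X)] = int g(x) f(x) dx with f(x) = exp(-x) on [0, inf).
import numpy as np

x = np.linspace(0.0, 50.0, 500_001)
dx = x[1] - x[0]
print(np.sum(x**2 * np.exp(-x)) * dx)  # ~2.0, the exact E[X^2] for Exp(1)
```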

Alternative formula for expected value

Formula for non-negative random variables

Finite and countably infinite case

For a non-negative integer-valued random variable X : Ω → { 0 , 1 , 2 , 3 , … } ∪ { + ∞ } , {\displaystyle X:\Omega \to \{0,1,2,3,\ldots \}\cup \{+\infty \},}

E ⁡ [ X ] = ∑ i = 1 ∞ P ⁡ ( X ≥ i ) . {\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }\operatorname {P} (X\geq i).}

Proof. If P ⁡ ( X = + ∞ ) > 0 , {\displaystyle \operatorname {P} (X=+\infty )>0,} then E ⁡ [ X ] = + ∞ . {\displaystyle \operatorname {E} [X]=+\infty .} On the other hand, P ⁡ ( X ≥ i ) ≥ P ⁡ ( X = + ∞ ) > 0 , {\displaystyle \operatorname {P} (X\geq i)\geq \operatorname {P} (X=+\infty )>0,} so the series on the right diverges to + ∞ , {\displaystyle +\infty ,} and the equality holds. If P ⁡ ( X = + ∞ ) = 0 , {\displaystyle \operatorname {P} (X=+\infty )=0,} then ∑ i = 1 ∞ P ⁡ ( X ≥ i ) = ∑ i = 1 ∞ ∑ j = i ∞ P ⁡ ( X = j ) . {\displaystyle \sum _{i=1}^{\infty }\operatorname {P} (X\geq i)=\sum _{i=1}^{\infty }\sum _{j=i}^{\infty }\operatorname {P} (X=j).} Let M = [ P ⁡ ( X = 1 ) P ⁡ ( X = 2 ) P ⁡ ( X = 3 ) ⋯ P ⁡ ( X = n ) ⋯ P ⁡ ( X = 2 ) P ⁡ ( X = 3 ) ⋯ P ⁡ ( X = n ) ⋯ P ⁡ ( X = 3 ) ⋯ P ⁡ ( X = n ) ⋯ ⋱ ⋮ P ⁡ ( X = n ) ⋯ ⋱ ] {\displaystyle M={\begin{bmatrix}\operatorname {P} (X=1)&\operatorname {P} (X=2)&\operatorname {P} (X=3)&\cdots &\operatorname {P} (X=n)&\cdots \\&\operatorname {P} (X=2)&\operatorname {P} (X=3)&\cdots &\operatorname {P} (X=n)&\cdots \\&&\operatorname {P} (X=3)&\cdots &\operatorname {P} (X=n)&\cdots \\&&&\ddots &\vdots &\\&&&&\operatorname {P} (X=n)&\cdots \\&&&&&\ddots \end{bmatrix}}} be an infinite upper triangular matrix. The double series ∑ i = 1 ∞ ∑ j = i ∞ P ⁡ ( X = j ) {\displaystyle \textstyle \sum _{i=1}^{\infty }\sum _{j=i}^{\infty }\operatorname {P} (X=j)} is the sum of M {\displaystyle M} 's elements if summation is done row by row. Since every summand is non-negative, the series either converges absolutely or diverges to + ∞ . {\displaystyle +\infty .} In both cases, changing the summation order does not affect the sum. Changing the summation order, from row-by-row to column-by-column, gives us

∑ i = 1 ∞ ∑ j = i ∞ P ⁡ ( X = j ) = ∑ j = 1 ∞ ∑ i = 1 j P ⁡ ( X = j ) = ∑ j = 1 ∞ j P ⁡ ( X = j ) = E ⁡ [ X ] . {\displaystyle {\begin{aligned}\sum _{i=1}^{\infty }\sum _{j=i}^{\infty }\operatorname {P} (X=j)&=\sum _{j=1}^{\infty }\sum _{i=1}^{j}\operatorname {P} (X=j)\\&=\sum _{j=1}^{\infty }j\operatorname {P} (X=j)\\&=\operatorname {E} [X].\end{aligned}}}

Example

In a coin tossing experiment, let the probability of heads be p {\displaystyle p} . Including the final attempt, how many tosses can we expect until the first head?

Solution. If N {\displaystyle N} is the random variable indicating the number of coin tosses before and including the first head, then, for i ≥ 1 {\displaystyle i\geq 1} ,

P ⁡ ( N ≥ i ) = 1 − P ⁡ ( N ≤ i − 1 ) = 1 − ∑ j = 0 i − 1 P ⁡ ( N = j ) = 1 − ∑ j = 1 i − 1 ( 1 − p ) j − 1 p = 1 − 1 − ( 1 − p ) i − 1 p ⋅ p = ( 1 − p ) i − 1 , {\displaystyle {\begin{aligned}\operatorname {P} (N\geq i)&=1-\operatorname {P} (N\leq i-1)\\[1pt]&=1-\sum \limits _{j=0}^{i-1}\operatorname {P} (N=j)\\[1pt]&=1-\sum \limits _{j=1}^{i-1}(1-p)^{j-1}p\\[1pt]&=1-{\frac {1-(1-p)^{i-1}}{p}}\cdot p\\[1pt]&=(1-p)^{i-1},\end{aligned}}}

where we took into account the geometric series summation formula. We now compute

E ⁡ [ N ] = ∑ i = 1 ∞ P ⁡ ( N ≥ i ) = ∑ i = 1 ∞ ( 1 − p ) i − 1 = 1 p . {\displaystyle {\begin{aligned}\operatorname {E} [N]&=\sum \limits _{i=1}^{\infty }\operatorname {P} (N\geq i)\\&=\sum \limits _{i=1}^{\infty }(1-p)^{i-1}\\&={\frac {1}{p}}.\end{aligned}}}
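The example can be cross-checked numerically by summing the tail probabilities directly (an illustration only; the value of p and the truncation point of the infinite sum are arbitrary choices):

```python
# Tail-sum formula E[N] = sum_{i>=1} P(N >= i) with P(N >= i) = (1-p)^(i-1).
p = 0.25
print(sum((1 - p) ** (i - 1) for i in range(1, 2_000)))  # ~4.0
print(1 / p)                                             # exact value, 4.0
```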

General case

If X : Ω → [ 0 , + ∞ ] {\displaystyle X:\Omega \to [0,+\infty ]} is a non-negative random variable, then

E ⁡ [ X ] = ∫ [ 0 , + ∞ ) P ⁡ ( X ≥ x ) d x = ∫ [ 0 , + ∞ ) P ⁡ ( X > x ) d x , {\displaystyle \operatorname {E} [X]=\int \limits _{[0,+\infty )}\operatorname {P} (X\geq x)\,dx=\int \limits _{[0,+\infty )}\operatorname {P} (X>x)\,dx,}

and

E ⁡ [ X ] = (R) ∫ 0 ∞ P ⁡ ( X ≥ x ) d x = (R) ∫ 0 ∞ P ⁡ ( X > x ) d x , {\displaystyle \operatorname {E} [X]={\hbox{(R)}}\int \limits _{0}^{\infty }\operatorname {P} (X\geq x)\,dx={\hbox{(R)}}\int \limits _{0}^{\infty }\operatorname {P} (X>x)\,dx,}

where (R) ∫ 0 ∞ {\displaystyle {\hbox{(R)}}\textstyle \int _{0}^{\infty }} denotes the improper Riemann integral.

Proof. 1. For every ω ∈ Ω {\displaystyle \omega \in \Omega } , X ( ω ) = ∫ ( 0 , X ( ω ) ) d x = ∫ [ 0 , + ∞ ) 1 ( 0 , X ( ω ) ) ( x ) d x = ∫ [ 0 , + ∞ ) 1 ( 0 , X ( ω ) ] ( x ) d x , {\displaystyle X(\omega )=\int \limits _{(0,X(\omega ))}dx=\int \limits _{[0,+\infty )}{\mathbf {1} }_{(0,X(\omega ))}(x)\,dx=\int \limits _{[0,+\infty )}{\mathbf {1} }_{(0,X(\omega )]}(x)\,dx,} where 1 ( 0 , X ( ω ) ) {\displaystyle {\mathbf {1} }_{(0,X(\omega ))}} and 1 ( 0 , X ( ω ) ] {\displaystyle {\mathbf {1} }_{(0,X(\omega )]}} are the indicator functions of ( 0 , X ( ω ) ) {\displaystyle (0,X(\omega ))} and ( 0 , X ( ω ) ] {\displaystyle (0,X(\omega )]} , respectively. Substituting this into the definition of E ⁡ [ X ] {\displaystyle \operatorname {E} [X]} , obtain E ⁡ [ X ] = ∫ Ω X d P = ∫ Ω ∫ [ 0 , + ∞ ) 1 ( 0 , X ( ω ) ] ( x ) d x d P ⁡ ( ω ) . {\displaystyle {\begin{aligned}\operatorname {E} [X]&=\int \limits _{\Omega }Xd\operatorname {P} \\&=\int \limits _{\Omega }\int \limits _{[0,+\infty )}{\mathbf {1} }_{(0,X(\omega )]}(x)\,dx\,d\operatorname {P} (\omega ).\end{aligned}}} Since X ( ω ) ≥ 0 {\displaystyle X(\omega )\geq 0} and 1 ( 0 , X ( ω ) ] ( x ) ≥ 0 , {\displaystyle {\mathbf {1} }_{(0,X(\omega )]}(x)\geq 0,} this integral (finite or infinite) meets the requirements of Tonelli's theorem. Changing the order of integration gives us ∫ [ 0 , + ∞ ) ∫ Ω 1 ( 0 , X ( ω ) ] ( x ) d P ⁡ ( ω ) d x = ∫ [ 0 , + ∞ ) P ⁡ ( X ≥ x ) d x . {\displaystyle {\begin{aligned}&\int \limits _{[0,+\infty )}\int \limits _{\Omega }{\mathbf {1} }_{(0,X(\omega )]}(x)\,d\operatorname {P} (\omega )\,dx\\&=\int \limits _{[0,+\infty )}\operatorname {P} (X\geq x)\,dx.\end{aligned}}} 2a. The function y ( x ) = P ⁡ ( X ≥ x ) {\displaystyle y(x)=\operatorname {P} (X\geq x)} is Riemann-integrable on each finite interval [ a , b ] . {\displaystyle [a,b].} Indeed, since y ( x ) {\displaystyle y(x)} is non-increasing, the set D {\displaystyle D} of its discontinuities is countable. Due to countable additivity, D {\displaystyle D} is a null set with respect to the linear Lebesgue measure. Furthermore, 0 ≤ y ( x ) ≤ 1 , {\displaystyle 0\leq y(x)\leq 1,} for all x ∈ [ − ∞ , + ∞ ] . {\displaystyle x\in [-\infty ,+\infty ].} Using the Lebesgue criterion, Riemann integrability of y ( x ) {\displaystyle y(x)} follows. We also conclude that ∫ [ a , b ] P ⁡ ( X ≥ x ) d x = (R) ∫ a b P ⁡ ( X ≥ x ) d x . {\displaystyle \int \limits _{[a,b]}\operatorname {P} (X\geq x)\,dx={\hbox{(R)}}\int _{a}^{b}\operatorname {P} (X\geq x)\,dx.} 2b. By "continuity from below", ∫ [ 0 , + ∞ ) P ⁡ ( X ≥ x ) d x = lim t → + ∞ ∫ [ 0 , t ] P ⁡ ( X ≥ x ) d x = lim t → + ∞ (R) ∫ 0 t P ⁡ ( X ≥ x ) d x = (R) ∫ 0 ∞ P ⁡ ( X ≥ x ) d x . {\displaystyle {\begin{aligned}\int \limits _{[0,+\infty )}\operatorname {P} (X\geq x)\,dx&=\lim _{t\to +\infty }\int \limits _{[0,t]}\operatorname {P} (X\geq x)\,dx\\&=\lim _{t\to +\infty }{\hbox{(R)}}\int \limits _{0}^{t}\operatorname {P} (X\geq x)\,dx\\&={\hbox{(R)}}\int \limits _{0}^{\infty }\operatorname {P} (X\geq x)\,dx.\end{aligned}}} The case of P ⁡ ( X > x ) {\displaystyle \operatorname {P} (X>x)} is similar.
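The tail-integral formula is easy to verify numerically for a distribution whose tail is known in closed form. A minimal sketch (illustrative only; the exponential distribution with mean 2, where P(X > x) = exp(−x/2), and the grid and cutoff are arbitrary assumptions):

```python
# E[X] = int_0^inf P(X > x) dx, checked for an exponential with E[X] = 2.
import numpy as np

x = np.linspace(0.0, 100.0, 1_000_001)
dx = x[1] - x[0]
print(np.sum(np.exp(-x / 2)) * dx)  # ~2.0
```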

Formula for non-positive random variables

If X : Ω → [ − ∞ , 0 ] {\displaystyle X:\Omega \to [-\infty ,0]} is a non-positive random variable, then

E ⁡ [ X ] = − ∫ ( − ∞ , 0 ] P ⁡ ( X ≤ x ) d x = − ∫ ( − ∞ , 0 ] P ⁡ ( X < x ) d x , {\displaystyle \operatorname {E} [X]=-\int \limits _{(-\infty ,0]}\operatorname {P} (X\leq x)\,dx=-\int \limits _{(-\infty ,0]}\operatorname {P} (X<x)\,dx,}

and

E ⁡ [ X ] = − (R) ∫ − ∞ 0 P ⁡ ( X ≤ x ) d x = − (R) ∫ − ∞ 0 P ⁡ ( X < x ) d x , {\displaystyle \operatorname {E} [X]=-{\hbox{(R)}}\int \limits _{-\infty }^{0}\operatorname {P} (X\leq x)\,dx=-{\hbox{(R)}}\int \limits _{-\infty }^{0}\operatorname {P} (X<x)\,dx,}

where (R) ∫ − ∞ 0 {\displaystyle {\hbox{(R)}}\textstyle \int _{-\infty }^{0}} denotes the improper Riemann integral.

This formula follows from the formula for the non-negative case, applied to − X . {\displaystyle -X.}

If, in addition, X {\displaystyle X} is integer-valued, i.e. X : Ω → { … , − 3 , − 2 , − 1 , 0 } ∪ { − ∞ } {\displaystyle X:\Omega \to \{\ldots ,-3,-2,-1,0\}\cup \{-\infty \}} , then

E ⁡ [ X ] = − ∑ i = − 1 − ∞ P ⁡ ( X ≤ i ) . {\displaystyle \operatorname {E} [X]=-\sum _{i=-1}^{-\infty }\operatorname {P} (X\leq i).}

General case

If X {\displaystyle X} can be both positive and negative, then E ⁡ [ X ] = E ⁡ [ X + ] − E ⁡ [ X − ] {\displaystyle \operatorname {E} [X]=\operatorname {E} [X_{+}]-\operatorname {E} [X_{-}]} , and the above results may be applied to X + {\displaystyle X_{+}} and X − {\displaystyle X_{-}} separately.
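A small numerical sketch of the decomposition (illustrative only; the normal distribution with mean 0.5 and the seed are arbitrary choices):

```python
# E[X] = E[X+] - E[X-] via the positive and negative parts of a sample.
import numpy as np

rng = np.random.default_rng(9)
x = rng.normal(loc=0.5, scale=1.0, size=100_000)
print(np.maximum(x, 0).mean() - np.maximum(-x, 0).mean())  # E[X+] - E[X-]
print(x.mean())                                            # agrees, ~0.5
```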

History

The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it is properly finished. This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed in 1654 to Blaise Pascal by the French writer and amateur mathematician Chevalier de Méré. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in a now famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle: the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased to have found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively. However, they did not publish their findings; they only informed a small circle of mutual scientific friends in Paris about it.[7]

Three years later, in 1657, the Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory. In this book, he considered the problem of points and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens also extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players). In this sense, this book can be seen as the first successful attempt at laying down the foundations of the theory of probability.

In the foreword to his book, Huygens wrote: "It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs." (cited by Edwards (2002)). Thus, Huygens learned about de Méré's Problem in 1655 during his visit to France; later on in 1656 from his correspondence with Carcavi he learned that his method was essentially the same as Pascal's; so that before his book went to press in 1657 he knew about Pascal's priority in this subject.

Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes: "That my Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure me in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal Chance of gaining them, my Expectation is worth a+b/2." More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:

… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.

The use of the letter E to denote expected value goes back to W.A. Whitworth in 1901,[8] who used a script E. The symbol has become popular since for English writers it meant "Expectation", for Germans "Erwartungswert", for Spanish "Esperanza matemática" and for French "Espérance mathématique".[9]
