Infinite Expected Value and the St. Petersburg Paradox

Infinite Expected Values

The other day I was working on a physics problem set and started to think about expected values. Recall that if we have a probability density function p(x), the expected value (or mean) of the distribution is:

\langle x \rangle = \int_{-\infty}^{\infty} x \, p(x) \, dx

So this is useful, because it tells us that if we sample many times, on average we're going to get ⟨x⟩. If p(x) represents, say, a grade distribution, and we compute ⟨x⟩, we get the average score for the class.

Now, I've heard people say, "Look, half the class is going to be below average." Of course, this is only true when the mean equals the median: if most of the class gets a 50% and one person gets a 90%, everyone with a 50% is below average. It at least seems fairly intuitive that at least one person has to be at or above average. But we can generalize the problem so that even this isn't the case!

So I decided to find a valid probability density function with an infinite expected value. All we require of a probability density function is that it is non-negative and integrates to 1 over the reals. There's probably a systematic way to come up with arbitrary functions like this, but a simple example is p(x) = 1/x^2 over the range [1, ∞). The integral is

\int_1^{\infty} \frac{1}{x^2} \, dx = \left. -\frac{1}{x}\right|_1^{\infty} = 1

and the mean is

\int_1^{\infty} x \cdot \frac{1}{x^2} \, dx = \int_1^{\infty} \frac{dx}{x} = \left. \log x\right|_1^{\infty} = \infty

Huh. Here’s a probability density function where every value we can pick from it is less than the mean - and it’s not even a weird function!
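We can even watch this happen numerically. Here is a quick sketch (my own illustration, not from the original post) that samples from p(x) = 1/x^2 by inverse transform sampling: the CDF is F(x) = 1 − 1/x, so a uniform draw u maps to x = 1/(1 − u).

```python
import random

def sample(rng: random.Random) -> float:
    """Draw from p(x) = 1/x**2 on [1, inf) via inverse transform sampling.

    The CDF is F(x) = 1 - 1/x, so inverting u = F(x) gives x = 1/(1 - u).
    """
    u = rng.random()  # uniform on [0, 1)
    return 1.0 / (1.0 - u)

rng = random.Random(0)
for n in (100, 10_000, 1_000_000):
    mean = sum(sample(rng) for _ in range(n)) / n
    print(f"sample mean over {n:>9,} draws: {mean:,.1f}")
```

No matter how many draws you take, the sample mean never settles down: every so often an enormous sample drags it upward, because no finite mean exists for it to converge to.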

Off to the Casino

“Ok,” you say, “this is an interesting mathematical curiosity, but surely you can’t invent a real-life situation where this matters.”

Ah, but I can. Suppose I invite you to a game of chance:

Keep flipping a coin until you get tails.

I'll pay you 2^n dollars, where n is the number of the flip on which your first tails appears.

So if you flip tails right away, I'll give you 2 dollars; a head and then a tails, 4 dollars; two heads and then a tails, 8 dollars; and so on.

How much would you be willing to pay to play this game? If you’re a gamblin’ man, you know that you should play a game when the expected value is positive. The expected value here is discrete instead of continuous, so it is equal to:

\left(\sum_{n=1}^{\infty} 2^n \frac{1}{2^n} \right) - c

where c is the cost of the game, 2^n is the reward when your first tails comes on flip n, and 1/2^n is the probability of that happening. We can sum this very easily, since every term is 2^n · (1/2^n) = 1, and we get:

-c + 1 + 1 + 1 + \ldots

So no matter what you pay to play this game, you always come out ahead! This can't be right! Would you pay 5 dollars to play this game? 10 dollars? 100 dollars? Obviously not. This is known as the St. Petersburg Paradox, and was apparently first analyzed by Daniel Bernoulli. I have no idea why it's named after St. Petersburg.
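You can get a feel for both the divergence and its slowness by simulating the game (a sketch of mine; the function name is an invention). The average payoff over many plays keeps creeping upward rather than converging:

```python
import random

def play(rng: random.Random) -> int:
    """Play one round: flip until tails; if the first tails is on flip n, pay 2**n."""
    n = 1
    while rng.random() < 0.5:  # this flip came up heads, so flip again
        n += 1
    return 2 ** n

rng = random.Random(0)
total, games = 0, 1_000_000
for _ in range(games):
    total += play(rng)
print(f"average payoff over {games:,} games: {total / games:.2f}")
```

The average grows roughly like the logarithm of the number of games played, since seeing n heads in a row takes on the order of 2^n plays.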

So this result is pretty disturbing. We have this perfectly rational method that we use to decide when to play the lottery, and somehow it goes drastically wrong here! Does this mean that we should go down to the corner store and buy a bunch of scratch-off tickets? I sure hope not.

Fixes for Bernoulli’s Paradox

A lot of people have tried to come up with other ways of determining the value of a game that give us a finite expected value. Many of these are on the Wikipedia page, and I’ll summarize a few interesting ones here.

Decreasing Marginal Value of Money

This takes advantage of the fact that the difference between a 10,000 dollar salary and a 20,000 dollar salary is much greater than the difference between a 100,000 dollar salary and a 110,000 dollar salary, for example.

Bernoulli suggested that instead of using the dollar amount as the utility of money, we use the log of the dollar amount. This certainly solves our problem, since instead of summing terms of 2^n/2^n = 1 we sum log_2(2^n)/2^n = n/2^n for our expected utility, and this converges to 2, a very finite value.
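The convergence is easy to check numerically (assuming, as the base-2 bookkeeping above does, a utility of log_2 of the payoff):

```python
# Expected utility under the log fix: the payoff 2**n has utility
# log2(2**n) = n and occurs with probability 2**-n, so we sum n / 2**n.
expected_utility = sum(n / 2 ** n for n in range(1, 200))
print(expected_utility)  # converges to 2
```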

This seems pretty good, but there are two really big problems with this.

First of all, the choice of the logarithm as the utility of money is arbitrary. Why not the square root, or some other slowly growing function? Any particular choice needs its own justification.

Even more importantly, this only appears to solve our problem! For any monotonically increasing, unbounded value function, we can find a probability function with a finite integral that decays just slowly enough that the expected value is still infinite. We can escape this by saying that any amount greater than, say, a trillion dollars is worth exactly as much as a trillion dollars, that is, that the utility of money is bounded, which might be a reasonable statement, but I don't really think so.
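For instance (an illustrative counter-game of my own, against log_2 utility): offer 2^(2^n) dollars with probability 2^-n. The utility of the n-th payoff is log_2(2^(2^n)) = 2^n, so each term of the expected-utility sum is again 2^n · 2^-n = 1, and we're right back where we started:

```python
# Counter-game to the log-utility fix: pay 2**(2**n) dollars with probability
# 2**-n. We never need to form the astronomically large payoff itself; its
# log2 utility is exactly 2**n, so each term of the expected-utility sum is 1.
def partial_expected_utility(terms: int) -> float:
    return sum((2 ** n) * (2.0 ** -n) for n in range(1, terms + 1))

print(partial_expected_utility(10))  # prints 10.0; grows without bound
```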

So let’s move on. Next, we have

Ignoring Unlikely Events

This is basically the other side of the first idea: instead of saying that the value of the reward grows more slowly than we expect, we say that sufficiently unlikely events should be weighted as even less likely than they really are, perhaps discounting a 1-in-a-million event entirely. This falls prey to the same pitfalls as before, except I'm a little more willing to approximate a 1-in-a-trillion event as having zero likelihood than I am to take money as having zero marginal utility at some point. Still, I don't like this family of solutions.

Back to Reality

So maybe you’ve decided that the marginal utility of money does not decrease for you, and you’re willing to believe in those really rare events. Well, good for you. Let’s play. You flip a heads… then another heads… then a few more, then a tails. “Where’s my two hundred bucks,” you demand. “Sorry, all I’ve got in my pocket is this twenty. Looks like you’ll have to pay for the pizza this time around.”

If we're going to consider this as a real game, we have to ask who's on the other side of the table. This has really unfortunate consequences for anyone playing this game, since it means the sum gets cut off once the payoff 2^n exceeds w, the wealth of the dealer, which happens around n = log_2 w. If we do our sum again, our expected value is now roughly

\log_2 (w) - c

so even if you're playing against Bill Gates and his estimated net worth of 61 billion dollars, you can only expect to come out ahead if the game costs less than about 36 dollars.
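As a sanity check, we can compute the expected payoff against a dealer with finite wealth exactly (a sketch; capping each prize at min(2^n, w) is the only change to the game):

```python
def expected_payoff(wealth: float) -> float:
    """Expected payoff when every prize is capped at the dealer's wealth.

    Terms with 2**n <= wealth contribute 2**n * (1/2**n) = 1 dollar each;
    the remaining outcomes all pay out the full wealth, forming a
    geometric tail.
    """
    total, n = 0.0, 1
    while 2 ** n <= wealth:
        total += 1.0  # 2**n * (1/2**n)
        n += 1
    total += wealth / 2 ** (n - 1)  # sum of wealth * 2**-m for m >= n
    return total

print(expected_payoff(61e9))  # roughly 36.8: log2(61e9) ≈ 35.8 plus the capped tail
```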

Does This Actually Matter?

Maybe! What all this means is that if there’s a bet where there’s a small chance of coming out spectacularly ahead, but a very big chance of losing a little, things might not be as good as they seem. Maybe this is one way to look at things in Silicon Valley these days, but maybe this last part is just my attempt to start flame wars on the internet.