Suppose I want to estimate my chances of winning the lottery by buying a ticket every day. That is, I want to do a pure Monte Carlo estimate of my probability of winning. How long will it take before I have an estimate that’s within 10% of the true value?
There’s a big NY state lottery for which there is a 1 in 300M chance of winning the jackpot. Back of the envelope, getting an estimate within 10% of the true value of 1/300M will take many millions of years.
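Here is the back-of-envelope in code: a minimal sketch, assuming "within 10% of the true value" means the Monte Carlo standard error $\sqrt{p(1-p)/N}$ drops below $0.1p$, and one ticket per day.

```python
# Back-of-envelope: how many daily draws until the Monte Carlo standard
# error sqrt(p(1-p)/N) shrinks below 10% of the true value p?
p = 1 / 300e6  # 1-in-300M chance of winning the jackpot

# Require sqrt(p(1-p)/N) <= 0.1 * p  =>  N >= (1 - p) / (0.01 * p)
n_draws = (1 - p) / (0.01 * p)
years = n_draws / 365  # one ticket per day

print(f"draws needed: {n_draws:.2e}")  # ~3e10 draws
print(f"years needed: {years:.2e}")    # ~8e7 years
```

Roughly 30 billion draws, i.e., on the order of eighty million years of daily ticket buying.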
Didn’t you say Monte Carlo only took a hundred draws?
What’s going on? The draws are independent with finite mean and variance, so we have a central limit theorem. The advice around these parts has been that we can get by with tens or hundreds of Monte Carlo draws. With a hundred draws, the standard error on our estimate is one tenth of the standard deviation of the variable whose expectation is being estimated.
With the lottery, if you run a few hundred draws, your estimate is almost certainly going to be exactly zero. Did we break the CLT? Nope. Zero has the right absolute error properties. It’s within 1/300M of the true answer after all! But it has terrible relative error properties; its relative error after a lifetime of playing the lottery is basically total.
The moral of the story is that error bounds on Monte Carlo estimates of expectations are absolute, not relative.
The draws are Bernoulli with a $p$ chance of success, so the standard error of the Monte Carlo estimator $\hat{p} = \frac{1}{N}\sum_{i=1}^{N} x_i$ is going to be the square root of its variance, $\sqrt{p(1-p)/N}$, for $N$ draws and a $p$ probability of winning.
Extra credit: Sequential decision theory
How long would it take to convince yourself that playing the lottery has an expected negative return if tickets cost $1, there’s a 1/300M chance of winning, and the payout is $100M?
Although no slot machines are involved, this is the simplest framing of a so-called “bandit” problem. More sophisticated problems would involve several lotteries with generalized payout schedules that might be stateful or non-stationary.
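One crude way to size the extra-credit question: a sketch, assuming we declare ourselves convinced once a two-standard-error confidence interval on the mean net payoff per ticket excludes zero. The stopping rule is an assumption for illustration, not a proper sequential test.

```python
import math

p = 1 / 300e6   # chance of winning
payout = 100e6  # jackpot
cost = 1.0      # ticket price

mean_net = p * payout - cost               # -2/3 dollars per ticket
var_net = p * payout**2 - (p * payout)**2  # ~3.3e7, dominated by the jackpot term
sd_net = math.sqrt(var_net)

# Stop when 2 * sd_net / sqrt(N) < |mean_net|:
n_tickets = (2 * sd_net / abs(mean_net)) ** 2
print(f"mean net per ticket: {mean_net:.3f}")
print(f"tickets needed: {n_tickets:.2e}")  # ~3e8
print(f"years at one per day: {n_tickets / 365:.2e}")
```

The rare, enormous payout makes the per-ticket variance huge, so even though the true mean is a comfortably negative $-2/3$, it takes on the order of 300 million tickets for the naive confidence interval to exclude zero.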