In economics, the foundation of any approach to decision-making under uncertainty is expected utility theory, which quantifies risk aversion by establishing a correspondence with the diminishing marginal utility of wealth.
Those familiar with the basics of utility can skip to the end of this post, but should stay tuned for the upcoming parts, where I will build upon these fundamentals in ways which are less common in abstract theoretical models, but which are specifically well-suited for practical poker applications.
Motivation
The common heuristic approach to decision-making in poker is to make decisions that maximize expected value, with volatility ("variance", as it is usually not-quite-accurately termed) treated as an unquantified afterthought, managed through rules of thumb, if at all. People understand that less risk is preferable to more risk as long as expected value remains the same, but there is usually little consideration given to quantifying the value of risk relative to expected value.
How much expected value should a decision-maker be willing to give up in order to reduce the variance of a random payoff by a certain amount? More generally, how do rational decision-makers value the tradeoff between expectation and risk?
General Utility Functions
Preferences over different levels of wealth are quantified by assigning each person or entity a utility function, which maps a level of wealth to the overall personal satisfaction derived from that wealth. The usual assumptions on a utility function are that it is:
- Increasing — Everyone prefers more money to less money.
- Continuous — There's no specific amount of wealth that is suddenly much more preferable to a slightly smaller amount of wealth.
- Concave — The slope of the function is decreasing. As one has more wealth, an additional dollar is less valuable, e.g. a poor person is much happier finding $100 than a millionaire is. This is equivalent to the individual being risk-averse, rather than risk-neutral or risk-seeking.
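The link between concavity and risk aversion can be checked numerically: Jensen's inequality says that, for a concave U, E[U(X)] ≤ U(E[X]), so a sure payment of the gamble's expected value is weakly preferred to the gamble itself. A minimal sketch, assuming a square-root utility purely for illustration:

```python
import math

def u(w):
    """A concave, increasing utility function (square root, for illustration)."""
    return math.sqrt(w)

# A 50/50 gamble between $0 and $1,000,000.
outcomes, probs = [0.0, 1_000_000.0], [0.5, 0.5]

expected_wealth = sum(p * w for p, w in zip(probs, outcomes))      # E[X] = 500,000
expected_utility = sum(p * u(w) for p, w in zip(probs, outcomes))  # E[U(X)] = 500

# Jensen's inequality for concave u: the utility of the sure expected
# wealth exceeds the expected utility of the gamble.
assert expected_utility <= u(expected_wealth)  # 500 <= ~707.1
```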
Isoelastic Utility
One basic example is the isoelastic utility function, given by

U(w) = w^(1−ρ), for 0 ≤ ρ < 1.
Notice that, for ρ=0, this is simply the identity function, which represents no diminishing marginal utility of wealth and no aversion to risk. As ρ increases, the marginal utility of wealth becomes more diminishing, so ρ can be seen as a parameterization of risk aversion. A higher value of ρ means a higher aversion to risk.
For ρ=0.5, this function (U(w) = √w) looks like this:

[plot of the isoelastic utility function with ρ = 0.5]
This function satisfies all of the desired properties. Though the scale of this plot does not indicate it well, the function always lies below the identity function, so this utility function can be thought of as a means of "discounting" wealth in a way that accounts for the diminishing marginal utility of wealth. Note, however, that it is not necessary that the scale of the function match that of the wealth; we shall see that the particular values taken by the utility function are irrelevant for decision-making, as they get mapped back into dollars after accounting for the different random payoffs of an opportunity.
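Assuming the power form U(w) = w^(1−ρ), which is the form consistent with the worked certainty-equivalent example later in this post, the function is a one-liner to evaluate numerically:

```python
def isoelastic(w, rho):
    """Isoelastic utility, assuming the power form U(w) = w**(1 - rho), 0 <= rho < 1."""
    return w ** (1.0 - rho)

print(isoelastic(250_000, 0.0))  # 250000.0 -- rho = 0 is the identity (risk-neutral)
print(isoelastic(250_000, 0.5))  # 500.0    -- rho = 0.5 is the square root
```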
The isoelastic utility function is said to represent constant relative risk aversion (CRRA): the individual's aversion to risking a given fraction of his wealth is the same at every wealth level, so his tolerance for risk in dollar terms grows in proportion to his wealth. With higher wealth, he is less averse to risking any fixed dollar amount. This is a desirable property and is generally fairly consistent with real-life decisions and the rules of thumb that most poker players use in managing bankroll requirements as they move up in stakes.
Exponential Utility
Another simple example is the exponential utility function, given by

U(w) = 1 − e^(−cw), for some c > 0.
For c=1/150000, this function looks like this:

[plot of the exponential utility function with c = 1/150000]
The exponential utility function is said to represent constant absolute risk aversion (CARA), as the individual's aversion to risk is always constant regardless of his wealth. In practice, few people would exhibit constant absolute risk aversion, as we should expect most rational individuals' risk aversion to decrease as wealth increases, though perhaps not according to the proportional scale of the CRRA utility function.
The exponential utility function is bounded from above, but that does not mean that an individual with this utility function has any upper bound to the amount of wealth he prefers. We will see, however, that this does make the individual less likely to take risks for large amounts of money.
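Both claims, boundedness from above and strict monotonicity, can be seen numerically. A sketch assuming the form U(w) = 1 − e^(−cw), which is the form consistent with the certainty-equivalent figures later in this post:

```python
import math

def exponential_utility(w, c):
    """Exponential (CARA) utility, assuming the form U(w) = 1 - exp(-c * w)."""
    return 1.0 - math.exp(-c * w)

c = 1.0 / 150_000

# Utility is bounded above by 1: each additional million adds less and less.
print(exponential_utility(1_000_000, c))    # ~0.9987
print(exponential_utility(10_000_000, c))   # 1.0 to floating-point precision

# But it is still strictly increasing: more wealth is always preferred.
assert exponential_utility(10_000_000, c) > exponential_utility(1_000_000, c)
```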
Utility and Risk Aversion
Let's say an individual who has a net worth of $500,000 and isoelastic utility with ρ=0.5 (defined on his net worth) is given the opportunity to bet all $500,000 on the flip of a fair coin, receiving a payoff of $1,000,000 if it comes up heads and being broke if it comes up tails. What is the value to him of taking the bet? The expected value of the amount of wealth he will have after taking the bet is clearly $500,000, but the expected utility of this random payoff is given by

E[U] = ½·U($1,000,000) + ½·U($0) = ½·√1,000,000 + ½·√0 = 500.
Since the utility function is continuous and increasing, there is a unique dollar value, known as the certainty equivalent, that yields the same expected utility as any random payoff. It is the unique solution CE of the equation:

U(CE) = E[U], which here is √CE = 500.
Here, the certainty equivalent is $250,000. So while a completely risk-neutral individual should be indifferent between betting his $500,000 net worth on this flip or not, the risk-averse individual with these particular preferences would rather have $250,000 for certain than bet his $500,000 on the flip. Since having $500,000 for certain is even better than having $250,000 for certain, the risk-averse individual of course passes on this opportunity. He would only be willing to spend his net worth on a 50/50 chance at having either $1,000,000 or $0 if his net worth were less than $250,000.
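The arithmetic above can be reproduced by inverting the utility function. With U(w) = √w (the ρ = 0.5 case assumed in this example), the inverse is simply U⁻¹(u) = u²:

```python
import math

def u(w):              # isoelastic utility with rho = 0.5
    return math.sqrt(w)

def u_inverse(util):   # inverse of the square root on [0, infinity)
    return util ** 2

# 50/50 flip of the $500,000 net worth: $1,000,000 on heads, $0 on tails.
expected_utility = 0.5 * u(1_000_000) + 0.5 * u(0)

# The certainty equivalent is the sure amount with the same utility.
certainty_equivalent = u_inverse(expected_utility)
print(certainty_equivalent)   # 250000.0
```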
To get a feel for the practical implications of each of these two basic forms of utility functions, we can look at the certainty equivalents for similar situations of betting one's net worth on a coin flip, for varying values of net worth. The 1st column is the payoff for winning the coin flip (twice the net worth), the 2nd column is the certainty-equivalent value for the individual with isoelastic utility (with parameter ρ=0.5), and the 3rd column is the certainty-equivalent value for the individual with exponential utility (with parameter c=1/150000):

[table of payoffs and the certainty equivalents under each utility function]
So, for example, an individual with exponential utility (with parameter c=1/150000) would only be willing to spend $4,916.68 on a 50/50 chance of winning $10,000.
A few simple observations:
- For isoelastic utility, the certainty equivalent is always a fixed percentage of the expected value of the coinflip. This is true regardless of what we set the parameter ρ equal to. So an individual with isoelastic utility is willing to bet his entire net worth on any weighted coinflip with fixed probability of winning (or on any 50/50 coinflip with a fixed percentage overlay, as in the example here), regardless of his wealth. This is unlikely to reflect any real person's preferences for such opportunities, but might be OK when we consider situations where the bet is for less than one's net worth.
- The certainty equivalents under exponential utility decrease significantly when more money is at risk. While the certainty equivalents in the table above for the smaller flips seem to be roughly in line with what most well-bankrolled poker players (with CARA utility, one's net worth relative to the bet size does not matter) would be willing to pay for these coinflips, most would likely be willing to pay more for the $1M flip. This disparity can't be rectified by playing with the parameter c; if we reduce c enough that the player would be willing to pay something somewhat closer to $500,000 for the $1M flip, then the certainty equivalents for the smaller flips become extremely close to the pure expected values.
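A table like the one above can be regenerated numerically. The payoff rows below are illustrative choices (the original rows are not reproduced here), but the certainty-equivalent formulas follow directly from the two assumed utility forms, U(w) = w^(1−ρ) and U(w) = 1 − e^(−cw):

```python
import math

RHO, C = 0.5, 1.0 / 150_000

def ce_isoelastic(payoff):
    """CE of a 50/50 flip between `payoff` and 0 under U(w) = w**(1 - RHO)."""
    eu = 0.5 * payoff ** (1.0 - RHO)    # expected utility of the flip
    return eu ** (1.0 / (1.0 - RHO))    # invert the power utility

def ce_exponential(payoff):
    """CE of the same flip under U(w) = 1 - exp(-C * w)."""
    eu = 0.5 * (1.0 - math.exp(-C * payoff))  # expected utility of the flip
    return -math.log(1.0 - eu) / C            # invert the exponential utility

print(f"{'Payoff':>12} {'CE (isoelastic)':>16} {'CE (exponential)':>17}")
for payoff in (10_000, 100_000, 1_000_000):
    print(f"{payoff:>12,} {ce_isoelastic(payoff):>16,.2f} {ce_exponential(payoff):>17,.2f}")
```

This reproduces both observations: under isoelastic utility with ρ = 0.5, the certainty equivalent is always exactly 25% of the payoff (half the expected value) at every stake, while under exponential utility the $10,000 flip is worth $4,916.68 but the $1,000,000 flip is worth a far smaller fraction of its payoff.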
These two utility functions are the most commonly used in mathematical models due to their desirable analytical properties. For the purposes of making practical poker decisions, however, the discrete-time nature of poker opportunities makes it unlikely that the methods of calculus would lead to nice analytical solutions in many models anyway, so we should be fine with choosing any admissible utility function that can be evaluated numerically. If we look at more practical situations where only a portion of one's net worth is at risk, we might be able to find a good fit with the isoelastic utility function, or we might be better served by building some sort of ugly-but-practical "hybrid" utility function.
Eventually, we'll use the methods of utility functions to look at the following questions:
- When players have practical and tax-conscious utility preferences, how much effective rake are we really paying for our chance at the glory of the WSOP Main Event title?
- What sort of approximate hand-by-hand utility functions should the Loose Cannon on the PokerStars Big Game have?
- How can a backer and a player formulate a split of a payoff in a way which is optimal for each of their personal risk preferences?
- Full Tilt takes $1 out of the pot if you want to run it twice; under what conditions would we prefer to pay this fee to reduce volatility?
- Does whether or not we would take a certain risk ever depend on how many opportunities we will be given to play that game? In particular, is it a fallacy to manage the risk in a unique opportunity differently because we are unable to "reach the long run" with it?
But first, coming up next...



