## Thursday, March 31, 2011

### Is it ever rational to pass up on a unique risk because you "can't reach the long run"?

Yes, as it turns out.

More generally, optimal game/stake selection does depend on what future investment opportunities will be available to you. The premise of the question is a little misleading, because, in fact, whether or not to take ANY risky opportunity today (even in choosing a game to play regularly and "reach the long run" with) depends on how many times you intend to play that game. The intuition, however, is clearest when thinking about rare opportunities with abnormally high risk.

For example, is it possible that it's "not worth" playing the WSOP Main Event because it is so high-variance and only occurs once a year (so you "can't reach the long run"), but that the exact same tournament would be worth playing if it occurred more often, such as once a month? It can be tough to reason through this with intuition alone, but it turns out to be pretty easy to show as a consequence of rational risk aversion, progressive tax rates, or both.

An example of a unique opportunity

Let's take the usual example of a "typical" poker player with our usual after-tax utility function, \$80k net worth prior to this year, and \$40k income on the year (assumed to be all from poker, otherwise tax effects will shift).

Suppose this player were to come across the unique, one-time opportunity on December 31st to risk \$10k on a weighted coin flip with a 51% probability of winning \$10k and a 49% probability of losing \$10k. A +EV opportunity, but a significant percentage of the small-stakes grinder's bankroll. What to do?

If he passes on the opportunity, his total after-tax utility payoff for the year is the utility of his \$80k in existing wealth plus what remains of his \$40k in income after paying taxes. If he takes the opportunity, his expected after-tax utility is 0.51 * util(\$80k + tax(\$50k)) plus 0.49 * util(\$80k + tax(\$30k)).
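
This comparison can be sketched in a few lines. The post's exact utility and tax functions aren't reproduced here, so the CRRA utility below (with the ρ=0.8 level used later in the post) and the two-bracket tax schedule are illustrative stand-ins, not the real model:

```python
RHO = 0.8

def util(wealth):
    """CRRA utility: u(w) = w^(1 - rho) / (1 - rho)."""
    return wealth ** (1 - RHO) / (1 - RHO)

def after_tax(income):
    """Hypothetical two-bracket tax: 15% up to $35k, 30% above,
    and no deduction for a net loss."""
    if income <= 0:
        return income
    tax = 0.15 * min(income, 35_000) + 0.30 * max(income - 35_000, 0)
    return income - tax

WEALTH, YTD = 80_000, 40_000   # the post's example player

eu_pass = util(WEALTH + after_tax(YTD))
eu_take = (0.51 * util(WEALTH + after_tax(YTD + 10_000))
           + 0.49 * util(WEALTH + after_tax(YTD - 10_000)))

print(eu_take > eu_pass)   # -> False: he declines the flip at $40k YTD
```

Under these stand-in assumptions the flip is declined at \$40k of year-to-date winnings, consistent with the chart's conclusion; the exact cutoffs depend on the real utility and tax functions.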

Which of these two outcomes is higher and whether or not the player should take this risky opportunity will depend on both his risk preferences and his tax bracket. We can look at his optimal decision for each different possible value of his year-to-date poker winnings:

The rows denote his current year-to-date poker winnings and the columns denote how many opportunities he will have available to take this coin flip (in this example, only 1).

A red box denotes that he should pass on the opportunity because taking it reduces his expected after-tax utility. A blue box denotes that he should take the opportunity.

Here, since his year-to-date poker winnings are \$40k, he should pass on the opportunity. He would need to be up \$100k on the year for it to become a profitable opportunity after adjusting for risk aversion and taxes. We see that it again becomes unprofitable in a higher region, when he has \$170k or \$180k, which we will discuss later.

If the opportunity is not unique

We modify the above situation by making it so that the player will have the option (but not requirement) to take this same coin flip opportunity up to 30 times. For example, maybe it's December 1st, and he knows that he will have the option of taking this coin flip opportunity once per day for the rest of the year.

This chart is simply an expansion of the first one, with additional columns added. The leftmost column agrees with the first chart, and it treats the case where the player will have only one option to take the opportunity. From that, we can calculate the next column over, where he will have 2 separate opportunities to take the coin flip.

If he passes on the opportunity when there are 2 days left, his year-to-date winnings will stay the same, and his expected utility will be the same as his expected utility at that bankroll level of \$40k for when there is 1 day left (as he will again have the option of taking or passing on the opportunity). If he takes the opportunity when there are 2 days left, his expected utility will be equal to 0.51 times the expected utility of having year-to-date winnings of \$50k with 1 opportunity left, plus 0.49 times the expected utility of having year-to-date winnings of \$30k with 1 opportunity left. In this manner, we can "work backwards" iteratively from the known case of 1 day left to find his optimal decisions for all previous days.
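
The "work backwards" recursion above is just backward induction, and can be sketched directly. The utility and tax functions here are illustrative stand-ins (CRRA with ρ=0.8 and a flat 20% tax on positive income), not the post's exact model:

```python
from functools import lru_cache

WEALTH = 80_000      # net worth prior to this year
STEP = 10_000        # size of the coin flip
P_WIN = 0.51

def util(w):
    """CRRA utility, rho = 0.8; ruin is treated as infinitely bad."""
    return float("-inf") if w <= 0 else w ** 0.2 / 0.2

def after_tax(income):
    """Stand-in flat 20% tax on positive income, no deduction for a loss."""
    return income - max(income, 0) * 0.20

@lru_cache(maxsize=None)
def expected_utility(ytd, flips_left):
    """EU of playing optimally with `flips_left` optional flips remaining."""
    if flips_left == 0:
        return util(WEALTH + after_tax(ytd))
    eu_pass = expected_utility(ytd, flips_left - 1)
    eu_take = (P_WIN * expected_utility(ytd + STEP, flips_left - 1)
               + (1 - P_WIN) * expected_utility(ytd - STEP, flips_left - 1))
    return max(eu_pass, eu_take)

def should_take(ytd, flips_left):
    """One cell of the chart: take the flip at this state?"""
    eu_pass = expected_utility(ytd, flips_left - 1)
    eu_take = (P_WIN * expected_utility(ytd + STEP, flips_left - 1)
               + (1 - P_WIN) * expected_utility(ytd - STEP, flips_left - 1))
    return eu_take > eu_pass

print(should_take(40_000, 1))    # -> False under these stand-in assumptions
```

Each `should_take(ytd, flips_left)` call corresponds to one box of the chart, and memoization makes the 30-column version essentially free to compute.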

The black boxes denote regions which are impossible to reach. Since the player's initial year-to-date winnings are \$40k with 30 days left, he'll never be able to reach, say, year-to-date winnings of \$100k with 29 days left. He'll also never reach year-to-date winnings of -\$80k, because that would mean he had taken the opportunity when his year-to-date winnings were -\$70k, but the player would never be able to profitably risk the last \$10k of his net worth on any sort of uncertain outcome.

Before we discuss the results, let's first look at what's causing the different shapes between the red and blue regions. Is the particular form of his decision strategy being shaped by risk aversion, tax effects, or both?

Without risk aversion or taxes

If we set our player's risk aversion to zero and remove the effect of taxes, then, as we might expect, he always takes the opportunity:

Without risk aversion or tax effects, all that matters is whether or not the opportunity has a positive expected value. It always does, so he always takes it.

With risk aversion, without taxes

If we return the player's risk aversion level to ρ=0.8 but still ignore taxes, then he starts to pass on the opportunity when his bankroll is too small to afford the risk:

Here, we see that, with only 1 chance at the opportunity, the player will need year-to-date winnings of \$130k or more in order for the coin flip to be profitable after adjusting for risk.

If he had only \$120k, then he should pass on the coin flip if it were a unique opportunity. However, with more than 1 option to take the coin flip, he should take it when his year-to-date winnings are \$120k! This is because, while the immediate expected utility of taking the coin flip might be slightly negative, the player gets additional benefits to expected utility in the future when he happens to win the coin flip and can then take a profitable opportunity the next day. This turns out to be enough to make his expected utility of taking the opportunity positive relative to passing on it.

We see some other effects at the \$110k and \$100k levels; if there will be enough options to take the same opportunity in the future, that can be enough to turn an unprofitable opportunity into a profitable one.
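
For the one-shot, no-tax case, the break-even point can be found by a direct search. Assuming the utility is plain CRRA over total wealth with ρ=0.8, the search lands on the chart's \$130k figure; with a different utility function the cutoff would shift:

```python
BASE_WEALTH = 80_000

def util(w):
    return w ** 0.2 / 0.2        # CRRA, rho = 0.8

def take_flip(wealth):
    """True if the 51/49 flip for $10k raises expected utility."""
    return (0.51 * util(wealth + 10_000) + 0.49 * util(wealth - 10_000)
            > util(wealth))

# smallest year-to-date winnings (in $10k steps) at which the flip is taken
ytd_threshold = next(ytd for ytd in range(0, 1_000_000, 10_000)
                     if take_flip(BASE_WEALTH + ytd))
print(ytd_threshold)             # -> 130000
```

Since CRRA utility has decreasing absolute risk aversion, a fixed-size \$10k flip becomes acceptable once and stays acceptable at every higher wealth, so a single threshold fully describes the one-shot column.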

With taxes, without risk aversion

If we go back to ignoring risk aversion but add in the effects of federal (and the negligible effects of NJ state) income tax, we see some different shapes in the chart:

Without risk aversion, the only factor that would keep the player from taking this +EV coin flip would be adverse tax consequences. Here, with 1 opportunity left (the leftmost column), we see a few regions where the player will pass on the coin flip. These turn out to be the regions where his tax bracket would change based on the outcome of the opportunity.

For example, the player passes on the coin flip when his year-to-date winnings are \$0, because he gets no tax deduction if he loses and hits -\$10k, but will have to pay some income tax if he wins and hits +\$10k, which turns out to be enough to turn the +EV opportunity into a -EV opportunity after taxes.

The effect is similar at \$10k, \$30k, \$40k, and \$170k; a win or loss at any of these points will cause a significant jump in the individual's marginal federal tax rate. This is a great illustration of how progressive taxes create additional risk aversion when a risky opportunity has the possibility of changing one's tax bracket (cough cough).
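
The \$0 case can be checked directly with no risk aversion at all. The 15% rate below is a stand-in for the player's first-bracket marginal rate, not the actual 2011 schedule:

```python
p_win, p_lose = 0.51, 0.49
rate = 0.15                      # hypothetical marginal tax rate on a win

ev_pretax = p_win * 10_000 - p_lose * 10_000
# a win is taxed, but a net loss gets no deduction:
ev_posttax = p_win * 10_000 * (1 - rate) - p_lose * 10_000

print(round(ev_pretax))    # -> 200
print(round(ev_posttax))   # -> -565
```

The asymmetry alone (taxed wins, non-deductible losses) is enough to flip the sign of the expected value at this income level.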

Just as in the previous case, these tax bracket risks are mitigated when there will be additional options to take the opportunity. With enough time left before the end of the year, the player will take the opportunity regardless of his level of wealth. Essentially, the probability of ending up on the threshold of a different tax bracket gets lower the further out from December 31st you go, and the fundamental +EV nature of the opportunity outweighs this risk if there's enough time left.

With both risk aversion and taxes

Combining both risk aversion and taxes, we get back to the first full chart, where both "shapes" of red regions can be seen together:

A few observations:
• While a pass (red) can turn into a take (blue) when additional options are added (when moving to the right on the chart), the opposite can never happen. Since the additional opportunities are not mandatory, the player could simply blindly pass on some number of them and then be able to realize the full expected utility of the point to the left on the chart. So additional time until utility realization (year-end in our model) can only result in an increase in willingness to take on risk.
• Tax effects are only eliminated entirely when the player's annual income (poker or otherwise) is high enough that the risky opportunity could never move him out of the highest tax bracket. Otherwise, unless the risky opportunity has a large enough pure expected value, risks will never be taken near the boundaries of tax brackets.
• The fact that the opportunities are optional does matter. For example, while the player takes the opportunity at a year-to-date winnings level of \$90k when there will be 9 total options to take the opportunity available to him, he would not accept the opportunity if he had to be locked in to taking the coin flip all 9 times. If he had to commit to taking all 9 flips, it turns out that he would need \$110k to take that opportunity. The optionality lets him quit in the middle if he ends up losing too much. Strategic options always have nonnegative value, and that is as true in this decision theory problem as it is in game theory.
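
The value of optionality in the last bullet can be verified directly: the expected utility with optional flips is never below the expected utility when locked into all of them. CRRA with ρ=0.8 and no taxes here, so the wealth levels are illustrative rather than the post's exact \$90k/\$110k cutoffs:

```python
from functools import lru_cache

def util(w):
    """CRRA, rho = 0.8; ruin is infinitely bad."""
    return float("-inf") if w <= 0 else w ** 0.2 / 0.2

@lru_cache(maxsize=None)
def eu_optional(wealth, n):
    """May take or pass each of the n remaining flips."""
    if n == 0:
        return util(wealth)
    take = (0.51 * eu_optional(wealth + 10_000, n - 1)
            + 0.49 * eu_optional(wealth - 10_000, n - 1))
    return max(eu_optional(wealth, n - 1), take)

@lru_cache(maxsize=None)
def eu_committed(wealth, n):
    """Must take all n remaining flips."""
    if n == 0:
        return util(wealth)
    return (0.51 * eu_committed(wealth + 10_000, n - 1)
            + 0.49 * eu_committed(wealth - 10_000, n - 1))

w = 170_000   # e.g. $80k base wealth plus $90k year-to-date winnings
print(eu_optional(w, 9) >= eu_committed(w, 9))   # -> True: options never hurt
```

The inequality holds at every state by induction, since the committed policy is one of the policies available to the optional player.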

Conclusions

Keep in mind that the implications about unique situations here are about situations with a uniquely high mean/variance tradeoff compared to normal play. All that matters is the shape of the probability distribution, not the particular nature of the opportunity.

For example, if you play \$100 heads-up sit-and-goes for a living where you win 55% of the time and lose 45% of the time (ignore rake), and you happened to come across a one-off investment opportunity (perhaps a prop bet) where you could have a 55% chance of winning \$100 and a 45% chance of losing \$100, that's exactly the same as your usual heads-up sit-and-go, and you should take it. It doesn't matter that this particular opportunity will only happen once; the laws of probability only care about the distribution of the payoffs, and this will "reach the long run" along with your usual results. It doesn't matter that you "can't reach the long run" with a unique opportunity if you'll be able to reach it with other opportunities with similar (or riskier) payoff distributions.

While coding this algorithm in more complicated cases becomes more difficult, there are some powerful prospects for expansion of this model:
• The discrete nature of this model, while it would be only an approximation for a continuous-time problem in traditional finance, is actually a perfect fit for any poker situation, where the number of sessions or risky opportunities will always be discrete.
• The same approach can be used to compare any number of different possible games. For example, the player could choose among 4 options: playing his regular medium-stakes game, a lower-stakes game (higher mean relative to variance), a higher-stakes game (lower mean relative to variance), or not playing at all. Expanding the number of opportunities to a larger number, such as 365 (playing one session each day for a year), would give a very powerful dynamic stake selection model. Rather than just working off of bankroll "rules of thumb", this model would optimize adaptively based on the running total of year-to-date winnings and provide strong risk- and tax-adjusted mathematical guidelines for decisions such as when to move up, when to "take a shot" in a particularly soft game, when to move down after significant losses, etc.
• Similarly, this model can answer the question of at what pot size it becomes beneficial to pay \$1 to run it twice. Each discrete occurrence of a pot over a certain threshold size can be plugged into this model under each of the two choices (running it twice or not) and the results compared.
• The model could also be expanded to capture randomness in other external investments, such as the player's stock portfolio.

This work so far is quite preliminary and mostly just an illustration of what sort of results a full model would produce. I hope to code the more complicated framework with multiple game choices and normally-distributed payoffs in the future — a program that could execute the complete model on an ongoing basis would be very practical.
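
A minimal sketch of that stake-selection recursion, with made-up two-outcome games standing in for real session distributions (every number below is hypothetical, chosen only to illustrate the structure):

```python
from functools import lru_cache

# (win probability, amount won, amount lost): hypothetical game menu
GAMES = {
    "low stakes":  (0.56, 1_000, 1_000),
    "mid stakes":  (0.54, 5_000, 5_000),
    "high stakes": (0.52, 20_000, 20_000),
    "sit out":     (1.00, 0, 0),
}

def util(w):
    return w ** 0.2 / 0.2      # CRRA, rho = 0.8

@lru_cache(maxsize=None)
def eu(wealth, days_left):
    """EU of picking the best game (or sitting out) each remaining day."""
    if wealth <= 0:
        return float("-inf")   # treat ruin as terminal
    if days_left == 0:
        return util(wealth)
    return max(p * eu(wealth + win, days_left - 1)
               + (1 - p) * eu(wealth - lose, days_left - 1)
               for p, win, lose in GAMES.values())

def best_game(wealth, days_left):
    """Which game to play today, given wealth and the days remaining."""
    return max(GAMES, key=lambda g:
               GAMES[g][0] * eu(wealth + GAMES[g][1], days_left - 1)
               + (1 - GAMES[g][0]) * eu(wealth - GAMES[g][2], days_left - 1))

print(best_game(50_000, 30))
```

Swapping the two-outcome games for discretized normal payoff distributions gives the full model described above; the recursion itself is unchanged.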

## Monday, March 14, 2011

### Levels of Randomness: Beyond G-Bucks

\$1/\$2 heads-up cash game with no rake. For some reason, our opponent is down to \$9 before the start of the hand. We are in the small blind. We are dealt the J♦2♥ and move all-in. Our opponent holds the Q♣Q♦ and calls. The board runs out J♥8♥2♠Q♥A♥. We drag the \$18 pot.

Did either player "get lucky" in this hand?

Some would say that we got lucky to beat pocket Queens with a garbage hand. Others might say that the Queens got lucky to outdraw us on the turn, or maybe we got lucky to redraw them on the river. Still others might say that our opponent was lucky to wake up with a hand as strong as Queens when he got shoved on for 4.5 big blinds.

More prudently, are any such perspectives on "luck" and any of the different levels of randomness in a hand useful and deserving of a strategic player's focus? If so, which is most important?

Most approaches use expectations conditioned on some chosen level of early-street random events and player-controlled moves to average out the "luck of the draw" that remains after this conditioning. This is most easily seen through the first few levels, which are well-established and quickly understood by most students of poker. These perspectives are useful, but there's nothing stopping us from going to higher levels when the situation allows for it.

Levels of Randomness
| Level | Title | Averages out over |
|-------|-------|-------------------|
| 0 | Results | Nothing |
| 1 | Sklansky Bucks | Cards dealt after all players are all-in |
| 1.5 | Galfond Bucks | … and one player's range |
| 2 | Range vs. Range | … as well as the other player's range |
| 3 | Strategy vs. Strategy | All cards dealt and lines taken |
| 4 | Strategy vs. Distribution | … and our Bayesian inference of our opponents' strategies |

Level 0: Results

If we condition on everything (in our example, all of the cards, from the holecards to the river), then we're just looking at the results of the hand. A beginning player operating on this level of understanding would conclude that we played the example hand correctly because we went on to win it and achieved the best possible result (+\$9). He would have lost sleep trying to figure out what he did wrong if he had instead lost the hand.

Such novices have always been deservedly and universally mocked by all levels of barely-competent poker players. We don't need to spend much time here... we're all better than Level 0.

Level 0.5: Any nonsense in between

Any perspective in between Level 0 and Level 1, such as one that notices that the final outdraw occurred on the river, is pointless. There is no information to be gleaned from the order in which cards happen to fall after all players are all-in, at least none that has any practical or outcome-based use. Do your emotional, fallacy-driven human brain a favor and just minimize the table once you've gotten all-in. You'll see if you won or lost when the table pops back up, and sweating the details is a distraction at best and an impediment to your performance at worst.

Level 1: Hand vs. Hand, a.k.a. All-in Adjusted EV, a.k.a. Sklansky Bucks

David Sklansky's Fundamental Theorem of Poker, introduced in his book The Theory of Poker, states that players need not worry about what cards happen to fall after the relevant action in the hand has concluded; as long as we play our hand in a way that turns out to be the same way we would have played it if we knew our opponent's cards, we win. If we can make them play their particular hand in a way that they would not have if they could have seen our cards, we also win. The all-in luck averages out in the long run.

This level conditions on all cards that have been dealt prior to all players becoming all-in and averages over only the remaining cards to be dealt out. For our particular example hand, our all-in equity with J♦2♥ against Q♣Q♦ is 11.27%. Therefore we received a Level 1 outcome of -\$6.97, worse than that of just taking the -\$1 of folding our small blind. We lost "Sklansky Bucks" in this hand, and we made a "Fundamental Theorem of Poker mistake" in our example hand because had we seen our opponent's cards, we would have folded preflop.
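
The Level 1 number is just equity times the pot, minus what we put in. Using the post's 11.27% equity figure:

```python
equity = 0.1127          # Jd2h vs QcQd, from the post
pot, risked = 18, 9      # $9 from each player

level1_outcome = equity * pot - risked
print(round(level1_outcome, 2))   # -> -6.97 "Sklansky Bucks"
```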

To term this a "mistake" in any practical sense is absurd; shoving J♦2♥ for 4.5 big blinds is part of the optimal strategy, given that any move other than all-in or fold in this subgame would be suboptimal (see The Mathematics of Poker or any other table of short-stacked heads-up NL Nash equilibria).

So, while it is a useful guideline for lifting the thinking of beginners out of the hell of Level 0 and for illustrating that all-in adjusted EV is an unbiased estimator of winnings, the Fundamental Theorem of Poker has few practical applications in most poker situations. Instead, we must go deeper...

Level 1.5: Range vs. Hand (Galfond Bucks)

Phil Galfond expanded Sklansky's idea into a much more practically-useful measure in his 2007 G-Bucks article. Galfond, in pioneering the idea of rangewise thinking, explained that, since, from our perspective, our opponent has a probability distribution of hands in any given spot, we can make assumptions about his hand range and average the results over those hands as well as the remaining board cards. Its applications occur more often than those of any Level 1 measure, since it applies to river calls in hands where players did NOT get all-in.

However, since Level 1.5 randomness requires assumptions to be made about our opponent's ranges (or our own, when looking at a decision from the opponent's perspective), it's not something that can be easily coded and evaluated as a variance-reduction measure over a database of hands. Computational simplicity ended at Level 1.

Going back to our example hand, if both our opponent and ourselves are playing the Nash equilibrium for 4.5 big blind heads-up push-or-fold poker (a reasonable assumption), then he will be calling our shove with
[22+,A2s+,K2s+,Q2s+,J2s+,T2s+,95s+,85s+,75s+,65s,54s,A2o+,K2o+,Q2o+,J3o+,T6o+,97o+,87o]
which is about 67.7% of hands, ignoring card removal effects. Our equity with J♦2♥ against this range is 36.83%, so we received a Level 1.5 outcome of -\$2.37, worse than that of just taking the -\$1 of folding our small blind. So, since our opponent happened to have a calling hand, we lost "Galfond Bucks" in this hand as well as "Sklansky Bucks".

Note that this level of randomness ignores the fact that our opponent will be folding to our shove 32.3% of the time, which is enough to make our shove profitable and part of the optimal strategy.
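
The Level 1.5 number works the same way with range equity, and the fold frequency can be layered on top to see why the shove is profitable overall. Numbers from the post, card removal ignored:

```python
call_freq = 0.677                 # Nash calling frequency vs our shove
equity_vs_calling_range = 0.3683  # Jd2h vs that range

called_ev = equity_vs_calling_range * 18 - 9     # net when called
print(round(called_ev, 2))                       # -> -2.37 "Galfond Bucks"

# The full shove EV also credits the 32.3% folds (we pick up the $2 blind):
shove_ev = (1 - call_freq) * 2 + call_freq * called_ev
print(round(shove_ev, 2))    # -> -0.96, better than the -$1 of open-folding
```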

While Level 1.5 randomness is as far as one might be able to go for solving complicated poker decisions within a given hand, we can go further for situations as simple as our 4.5 big blind heads-up push-or-fold spot.

Level 2: Range vs. Range

Galfond's measure exists with the aim of guiding practical decisions for a given player's specific cards. While this is sufficient for making in-hand decisions, if we want to produce a more general measure of randomness, we can expand to looking at the ranges of both players, rather than just one. Moreover, thinking about our ranges away from the table, rather than approaching each new situation on a hand-by-hand basis as we go, helps us balance our overall strategies.

In particular, we know each player's entire optimal range for when 4.5 big blinds are shoved and called heads-up, so we might as well condition on those ranges and average out over even the players' particular hands from those ranges.

When we know all players' ranges in a given spot, the particular cards that they end up holding are as irrelevant as the particular card that comes on the river when they were already all-in.

To put it another way, I don't really care that you happened to wake up with Queens.

The optimal 4.5 big blind shoving range is
[22+,A2s+,K2s+,Q2s+,J2s+,T2s+,93s+,84s+,74s+,64s+,53s+,A2o+,K2o+,Q2o+,J2o+,T6o+,96o+,86o+,76o]
which has 49.03% equity against the noted calling range our opponent will employ. Our Level 2 outcome of -\$0.17 captures the fact that, when we get all-in with these stack sizes, we could each have any of the hands in our ranges.

At least for situations as simple and solvable as this one, we're achieving additional variance reduction and thus better accuracy by taking a Level 2 measure of outcomes. This approach is also consistent with proper strategic rangewise thinking in this spot — no need to invest any emotional or rational energy into the fact that we got called when we had one of the worst hands in our range, as we knew it was correct within our overall strategy to include that hand.

Level 3: Strategy vs. Strategy

All the levels so far have implicitly conditioned on the line of action that occurred in the hand, that is, they only "work" after particular player moves were made which led to an all-in or a river call.

In our example hand, all previous levels have ignored the probability that our opponent folds to our shove. Since we know the exact ranges that our players get all-in with, we also know the complete strategies that each player employs: get all-in with the hands in that range, fold the others. Here, a "strategy" for a particular poker situation is a massive list of probability distributions of ALL possible moves a player will make in EVERY possible poker situation over ALL streets. 4.5 big blind heads-up push-or-fold poker is one of the few games simple enough where a complete strategy can fit onto one sheet of paper.

The range we are shoving is about 73.2% of all starting hands. So the Level 3 outcome of \$0.12 (= .732 * [.677 * -\$0.17 + .323 * \$2] + .268 * -\$1) for the subgame where we are on the button with 4.5 big blinds is what actually captures the value of being dealt a hand of poker in that spot when players are employing optimal strategies. At this level, we could verify optimality of each range by verifying that any other range produces a lesser result.
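
Levels 2 and 3 can be reproduced the same way from the post's range numbers (card removal ignored):

```python
pot, risked = 18, 9
shove_freq, call_freq = 0.732, 0.677
range_vs_range_equity = 0.4903

level2 = range_vs_range_equity * pot - risked    # range vs range, all-in
print(round(level2, 2))                          # -> -0.17

# Level 3 weights every line: shove-and-called, shove-and-fold, open-fold
level3 = (shove_freq * (call_freq * level2 + (1 - call_freq) * 2)
          + (1 - shove_freq) * (-1))
print(round(level3, 2))                          # -> 0.12
```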

When we know all players' strategies and thus know how often each particular line of action gets taken, even the particular lines and actions taken are as irrelevant as the cards to come when all-in.

More generally, this level of randomness lets us "work backwards" from all-in or river situations where Level 1.5 randomness applies. For example, we may have reached a river decision in a hand where we made a call that was Level 1.5 correct (positive in "Galfond Bucks" expectation), but the strategies on prior streets that led us to that river spot with the ranges we had may or may not have been winning strategies. Expanding our focus to strategies rather than river calls or all-in moves is the only way to evaluate early street play.

Level 4: Strategy vs. Inferred Strategy Distribution

When we leave the realm of simplified poker games with easy Nash equilibria and enter a world where our opponents make mistakes, we may not be able to assume that our opponent follows a single exact strategy, especially when we are not very familiar with the player. Instead, we can assume that our opponent will be employing a particular strategy from a range of strategies. Just as we don't put our opponents on one specific hand in poker situations, we should allow for the fact that their strategies will be either dynamic or impossible to completely infer.

We could expand our example hand by adding the possibility that our opponent may not be playing optimally. For example, given our assessment of his play so far, we could estimate that there's a 50% chance that he is a smart player employing the Nash equilibrium strategy and a 50% chance that he's playing some suboptimal strategy that includes perhaps folding in the BB too often. Then, under the assumption of these particular probabilities, the profitability of every move we make will be the average of its profitability against the Nash equilibrium and against the suboptimal strategy. Maximizing our Level 4 payoff will involve determining the optimal exploitive strategy based on our Bayesian inference about our opponent's strategy — and the range of strategies we put him on will be updated constantly as we learn more about our opponent and/or he modifies his strategy. Of course, all poker players do this, informally. It's the basis of any strategic decision at any part of a poker hand.
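
The averaging step can be sketched numerically. The "nit" strategy below (an over-folder who calls only 40% of hands, against whose calling range our shoving range is assumed to have 45% equity) is made up purely for illustration:

```python
def shove_range_ev(call_freq, equity_when_called):
    """EV of our shoving range: folds win the $2 blind, calls go to equity."""
    return (1 - call_freq) * 2 + call_freq * (equity_when_called * 18 - 9)

ev_vs_nash = shove_range_ev(0.677, 0.4903)   # the post's range-vs-range numbers
ev_vs_nit = shove_range_ev(0.40, 0.45)       # hypothetical over-folder

# Level 4: weight each opponent model by our Bayesian inference (50/50 here)
level4_ev = 0.5 * ev_vs_nash + 0.5 * ev_vs_nit
print(round(level4_ev, 2))   # -> 0.68 under these assumptions
```

As expected, mixing in a probability of facing the exploitable opponent raises the EV of shoving above its value against the pure Nash strategy.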

We won't bother trying to calculate this one for the example hand here. In fact, once we make more complicated assumptions on a wider range of different opponent strategy profiles, we are well beyond the realm of computational tractability by the time we reach this level. Nonetheless, if we were the theoretical infinitely-knowledgeable and rational player, this would be what we would do in practice.

Recall that there are "Level 1/Fundamental Theorem of Poker mistakes" that are clearly not play mistakes, and that Level 1.5 randomness expanded upon Level 1 randomness in a way that eliminated many of the most obvious examples of such "false mistakes". Level 4 randomness expands upon Level 3 randomness in the same way: a well-reasoned strategy choice that happens to be thwarted by an unexpected strategy from the opponent was not necessarily a real mistake, just as failing to fold our J♦2♥ when our opponent held Q♣Q♦ was not a real mistake. We can't know our opponent's exact strategy, just as we can't know his exact holecards. We can only make strategically-optimal inferences about each.

Conclusions

Note that each level of randomness is an unbiased estimator of the same (Level 0) true results. That is, just as all-in adjusted EV (Level 1) is an unbiased estimator of results, so are the higher levels of randomness. In a shove-or-fold situation with optimal strategies, the Level 3 result is a constant number which is the exact expected value of the per-hand results. In particular, when applying variance reduction techniques to play in a specific situation such as 4.5 big blind heads-up push-or-fold poker when both you and your opponent are playing the Nash equilibrium, you will get a more accurate measure of the true expected value of your play by computing the Level 3 results.
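
The unbiasedness claim is easy to check by simulation for the called-shove spot (equity 11.27%, \$18 pot, \$9 risked): the raw results average out to the constant Level 1 number but carry all of the variance:

```python
import random, statistics

random.seed(7)                       # deterministic for reproducibility
equity, pot, risked = 0.1127, 18, 9
level1 = equity * pot - risked       # the constant -$6.97 Level 1 "result"

results = [(pot - risked) if random.random() < equity else -risked
           for _ in range(200_000)]

print(round(statistics.mean(results), 2))   # close to level1 (about -6.97)
print(round(statistics.pstdev(results), 2)) # large spread in raw results
```

Every trial's Level 1 "result" is exactly the same constant, so conditioning on the all-in removes that entire layer of variance while leaving the mean untouched.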

From each level of randomness, we can look at the type of "luck" associated with it. In our example hand, we were:
• Level 1 lucky, as we managed to win an all-in where we were an underdog,
• Level 1.5 unlucky, as we ran into a hand near the top of our opponent's range,
• Level 2 unlucky, as not only did we run into the top of our opponent's range, we did so with the bottom of our range,
• Level 3 and Level 4 neutral, as we assumed our opponent was playing the Nash equilibrium with probability 1, and we ourselves played the Nash equilibrium.
Since each successive level adds extra information from the previous level into what it averages out over, the types of "luck" are additive.

It's worth noting that, in any all-in situation, whatever the outcome is, at least one player will have been Level 1 lucky. In this hand, it was us.

There is some elegance in the fact that Level 1 luck, the most salient and blatant form of luck, is the one that players have the least control over. Conversely, Level 4 luck is under full control of the players as it is entirely dependent on the players' relative skills at any given moment in time.

So maybe we did "get lucky" to win in that particular hand. The Level 1 luck is, in some sense, the foundation level of any hand involving an all-in. Does that make it the most important type of luck? In some ways, yes, but in many ways, no.

Each level of luck seems to offer its own lesson about avoiding various degrees of results-oriented thinking:

• Level 1 randomness tells us not to second-guess ourselves just because we happen to lose an all-in confrontation.
• Level 1.5 randomness tells us not to second-guess ourselves just because we happened to run into the top of our opponent's range.
• Level 2 randomness tells us not to second-guess ourselves just because we happened to run our well-crafted, profitable range into a perhaps suboptimal range from our opponent, e.g. if our opponent chose to bluff-catch too light with a range that matched up poorly against our value betting and bluffing ranges. (The differences between Level 1.5 and Level 2 are small, but they should exist in spots where we employ mixed strategies.)
• Level 3 randomness tells us not to second-guess ourselves just because we happened to choose an overall strategy which matched up poorly against our opponent's particular strategy, as long as we made our strategy choice in an informed way.
• and, finally...

Level 4 randomness tells us that we should only second-guess ourselves when we chose an overall strategy which was suboptimal relative to the distribution of strategies we could optimally infer our opponent to have.

Level 1 luck is the type of luck that most players have developed the ability to ignore by training themselves not to be results-oriented.

Then, once we're ignoring Level 1 luck, we can start to ignore Level 2 luck and be confident in our overall strategies, even when we run the bottom of our range into the top of our opponent's range.

Once we've achieved indifference to Level 2 luck, we're on the final stretch towards conditioning ourselves to ignore Level 3 and Level 4 luck, and then we've reached nirvana. Complete emotional and strategic neutrality to all instrumental randomness in the game. Complete focus only on making optimal inferences about our opponent's play and executing optimal exploitive strategies. Nobody got lucky in the example hand.

A lofty goal, to be sure, but worthy of pondering and a fine target to shoot for. Even if you don't get all the way there, if you can get past Level 1.5, you'll be further along than most, and you'll be thinking more strategically about the game.

## Thursday, March 3, 2011

### Christie vetoes NJ internet gambling bill — my summary and thoughts

It's been a long time since the New Jersey internet gambling bill S490 passed with an 85% majority through its final legislative committee. In fact, it's been the maximum amount of time. Governor Chris Christie had 45 days to act on the bill by signing or vetoing it, which meant he had until February 24th to decide... or so we thought. After what felt like an eternity of waiting, shortly before February 24th, we all learned that some weird exception surrounding the congressional calendar actually gave him until March 3rd to make his move on the bill.

Earlier today, Christie vetoed the bill. His press release, linking to his official statement to the NJ Senate on this matter, is available here.

Originally, the veto was reported as a conditional veto, and is still recorded as such on the website of the NJ legislature, but the more recent reports are saying that it was upgraded to an absolute veto. This article captures the drama surrounding the nature of this veto.

Christie's decision

It's hard to blame Christie for running his time bank all the way down on this decision. He had many factors to consider, all of which were the topic of much speculation and discussion while we waited for his decision.

Reasons for Christie to sign:
• Internet gambling would generate tax revenue and jobs for the state, and quite a bit of both (a BIG consideration for NJ right now).
• Internet gambling would lead to additional revitalization of Atlantic City in general.
• Domestic licenses on internet gambling would provide repatriation of revenues currently going to perceived "evil offshore operators".
• Winning the "race" by getting NJ's internet gambling network established before a future federal internet gambling bill could work out better for the state's coffers than simply waiting to opt into a federal scheme.

Reasons for Christie to veto:
• Caesar's Entertainment lobbied heavily against this bill, presumably believing that its own interests are better-served by an eventual federal bill.
• Many citizens oppose internet gambling, perhaps as many as 67% of them, according to a recent poll, which I thought was rather surprising. While other polls show overall support for internet gambling, I think it's fair to say that even those people who support gambling in general may not always support internet gambling, especially when concerned with a potential proliferation of gambling in public areas, such as non-casino businesses using internet cafes as effective gambling devices.
• There are various complex legal concerns over whether or not the intrastate framework of S490 would be acceptable under federal gambling laws. In particular, Christie has ties to the federal Department of Justice, the entity which goes against court rulings by insisting that online poker is covered by the ancient anti-sports-betting Wire Act and attacking payment processors accordingly. If the DOJ doesn't want there to be any internet gambling anywhere, for whatever reason, Christie might side with them.
• The NFL doesn't want it to pass and Christie may have owed them a favor... a little conspiratorial, but it is definitely the case that the NFL is inconceivably powerful and that the NFL is inconceivably opposed to all internet gambling, even poker-only initiatives.
• "Competition" from online gambling could reduce the business of brick-and-mortar Atlantic City casinos. I would be shocked if the net effect of AC-based internet gambling wasn't positive for all of Atlantic City, but I think this idea was out there.
• There are rumors that Christie may want to run for president soon. The official Republican Party platform opposes internet gambling in all of its forms, so it might hurt his chances at the nomination to be known as the first U.S. political figure to "legalize" internet gambling.
• Finally, the reason that I didn't consider until Christie gave it as a reason in his veto: there is a potential legal conflict between this bill and the state constitution. The NJ Constitution allows for casino gambling to take place only in Atlantic City. The internet gambling bill addressed this by defining play between a player anywhere in the state and a business with servers in Atlantic City as play that "took place in Atlantic City". Christie found this reasoning dubious and suggested that a public referendum might be necessary to modify the state constitution to allow for statewide internet gambling.

Reasons which were, at best, taking a backseat to all of the above, and at worst, not even under consideration:
• Whether or not adults should have the right to be able to use the internet to engage in the same financial activities and/or gambling games that they can legally partake in through brick-and-mortar establishments
• Effects on competitive poker players, that silly little bunch... don't they realize that their activity is "just gambling" and exists only for producing corporate and government revenue?

What's next?

A conditional veto of this bill was expected to be only a temporary setback, perhaps enough to make NJ lose the race to be the first state to license internet gambling, but still only a short delay of the inevitable. The most recent news surrounding this absolute veto, however, makes it seem likely that nothing will happen with this bill for a while.

Though the NJ legislature is able to override a governor's veto with a 2/3 majority, Senator Raymond Lesniak, the sponsor of the bill, has said that he will not attempt to do so. Despite its original passage with a sufficiently-large majority, in light of Christie's veto, New Jersey Republicans are no longer expected to support the bill. Lesniak also seems pessimistic about the chances of any new "fixed" legislation for NJ internet gambling, given the Governor's attitude. His quotes in this article make it seem like any second attempt at getting this bill into law is not likely to come to fruition anytime soon.

Though the conflicting reports of today seem to be settling on the above, I can't help but observe that all of the incentives which compelled this bill in the first place are still there. New Jersey badly needs revenue, and it does not seem that Christie is fundamentally opposed to the notion of licensed internet gambling in NJ. If the positive incentives for such a bill were able to economically overcome the opposing forces of Caesar's and the NFL, it seems that a compromise could be reached and a few legal tweaks could be quickly made to the bill to make it palatable to the governor.

Most likely, though, we're looking at no action for a while. If a public referendum is indeed needed, I believe that can't occur until November at the earliest.

My Thoughts

As all of the information surrounding the bill developed, I was pretty close to indifferent between the bill's passage and its failure, which is a sad thing to say about a bill that would have let my own state create the first explicitly legal online poker in the country. If the bill didn't criminalize unlicensed operators in a manner that might have forced international poker-only sites to stop serving NJ residents, I would have been a strong supporter of this bill. As it was, where passage of the bill would possibly mean that I'd have to move out of state to play online poker with a reasonably-sized pool of players, I had to personally lean towards opposing the bill.

S490 would have been a great bill for gamblers, but potentially a bad-to-terrible bill for serious poker players. I feel that the considerations of poker players were dwarfed by the considerations of the broader gambling industry, and I can't help but feel disregarded in the wake of the political mess surrounding its veto.

The bill infringed on the rights of competitive strategy gamers by explicitly including poker in the scope of its protectionist regulations at the expense of global competition. No gambling game is negatively impacted by only being able to serve residents of a small region, but limiting poker to only New Jerseyans would possibly mean the end of the availability of sufficiently liquid online games for competitive players. This is the cost of using the public's perception of poker as "gambling" as a vehicle to attempt to secure rights for our game. Granted, the poker-only federal Reid bill also limited the player pool to U.S. players only, but it at least had ambitions to expand to global markets within several years, which is a relatively short amount of time in the world of government.

Also discouraging to me was that the discourse and incentives surrounding Christie's decision on the bill included so many different political factors that the rights of poker players likely held little to no value in his decision. This is the cost of selling legislatures on licensing and regulating internet poker through arguments primarily focused on tax revenues. Poker does not have nearly enough political allies who legitimately care about the rights of adults to participate in competitive games.

I'm also worried about the result if an eventual NJ internet gambling bill were to go to a public referendum.

Even if the bill were poker-only, I would worry that the majority of the public would vote against poker based on misguided moral beliefs or misinformation/ignorance about the game of poker. For example, to be generous, if 10% of citizens play poker at least semi-seriously, and another 20% are close enough to such people to even partially understand why poker is good, that leaves 70% of people whose perception of poker may be based on the indoctrinated belief that "gambling" is unbeatable, degenerate, or sinful. Each person in this 70% gets a vote worth just as much as those in the 10%, regardless of how incorrect their beliefs are or, more unfortunately, how irrelevant the outcome of the vote is to their personal lives. I guess this might be a failure of the democratic process in general, but it seems particularly dangerous with something as widely and fiercely misunderstood as the game of poker.
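The back-of-envelope referendum math above can be made concrete with a quick weighted-average calculation. The group sizes match the 10%/20%/70% split from the paragraph, but the per-group "yes" rates are purely illustrative assumptions of mine, not poll data:

```python
# Hypothetical referendum model: three voter groups, each with an assumed
# share of the electorate and an assumed probability of voting "yes" on a
# poker-only bill. All numbers here are illustrative, not real polling.
groups = {
    "plays poker semi-seriously": (0.10, 0.90),  # (share, P(yes))
    "close to a poker player":    (0.20, 0.60),
    "no connection to poker":     (0.70, 0.30),
}

# Expected "yes" share is the share-weighted average of the group rates.
yes_share = sum(share * p_yes for share, p_yes in groups.values())
print(f"Expected 'yes' vote: {yes_share:.1%}")  # prints: Expected 'yes' vote: 42.0%
```

Under these assumed rates, even overwhelming support among people who know the game gets swamped by the 70% group, which is exactly the worry: the measure loses 42-58 despite near-unanimous backing from everyone the outcome actually affects.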

Add in the fact that any future NJ internet gambling bill is still likely to treat all "gambling" instead of only poker and it becomes a lot easier to lose that vote.

I would passionately lobby the public to vote for a good poker-only bill, but I honestly might not even bother asking my friends and family to support a broader bill. While I personally side with the rights of well-minded adults to annihilate their wealth through the methods of their choosing, I don't feel strongly enough about this to try to change the mind of somebody who wants to limit the existence of any form of slot machines at all costs, and I would not be comfortable conflating these issues with the game of poker.

So while I'm glad that I get to continue enjoying the long-term-unstable status quo of online poker for the time being, the stress of this long wait and its overly-politicized result makes me even more eager and desperate for the time when internet poker will be treated in a logical and rational manner by the U.S.

Might be a while.