Texas Hold’em, Afghanistan, City Planning, and Topology

William Cabell
Sep 20, 2023


How do I win at poker? I’m down two buy-ins, the fish (or am I the fish?) has just taken my all-in post-flop straight-flush-draw semi-bluff as a challenge to his honor and decided to call it with pocket threes, and instead of me getting my ten or my spade, he has just hit his set on the river.¹ In situations like this, I find myself pondering the answer to this question. Now, I can’t divulge the details of my own particular poker strategies, but Internet poker websites will happily show you the optimal way to play poker, complete with calculated odds and the provably proper course of action to take at any stage of the hand, for any cards, in any position. Should you go research this, you will see that as with many other types of games, mathematics can tell you exactly how to win at poker.

A poker hand is an example of what the intuitively named field of game theory calls a finite game. The important components of a finite game are that it has a finite number of players², a finite number of turns, a set of possible moves, and a way to determine a winner from any given sequence of moves. The prisoner’s dilemma is the best-known mathematical example of a finite game, but most things that count colloquially as “games” also fall under this definition: chess, checkers, Pokémon battles, and so on. The goals of studying a game typically involve enumerating possible outcomes or classes of outcomes and determining winning strategies for one player or another. In some cases (tic-tac-toe, for example) there is no winning strategy, as either player can prevent the other from winning. In other cases, like Connect Four, the first player has a sequence of moves, chosen in response to the second player’s moves, that guarantees a win. As in the example of a poker hand, finite games can also be played in circumstances where players have only partial information. This informs the strategies, which may become probabilistic in nature (e.g., choose move A 80% of the time and move B 20% of the time). Strategies like this are called mixed strategies. Once we find a winning strategy for a finite game, we say that we have solved that game. Games with partial information can also be solved, but in that case the solution is optimal only in expectation; paradoxically, it may not guarantee victory on any particular play. To understand how this might be the case, see the poker hand that sparked this discussion.
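
To make the idea of a mixed strategy concrete, here is a small sketch (my own illustration, not drawn from any poker solver) that computes the optimal mix for the row player in a two-move, zero-sum game using the standard closed-form solution for 2×2 games. The payoff numbers are those of matching pennies and have nothing to do with actual poker odds.

```python
from fractions import Fraction

def solve_2x2_zero_sum(a, b, c, d):
    """Optimal mixed strategy for the row player of a 2x2 zero-sum game.

    Payoff matrix (row player's winnings):
                 col 1   col 2
        row 1:     a       b
        row 2:     c       d

    Assumes the game has no saddle point, so the best play is a genuine
    mixed strategy: play row 1 with probability p = (d - c) / (a - b - c + d),
    which makes the column player indifferent between their two moves.
    """
    denom = a - b - c + d
    if denom == 0:
        raise ValueError("degenerate game: solve by pure strategies instead")
    p = Fraction(d - c, denom)               # probability of playing row 1
    value = Fraction(a * d - b * c, denom)   # expected payoff to the row player
    return p, value

# Matching pennies: both players show a coin; the row player wins on a match.
p, value = solve_2x2_zero_sum(1, -1, -1, 1)
print(p, value)  # 1/2 0 -> mix 50/50, and the game is fair
```

A mix like the 80/20 split mentioned above arises in exactly the same way, just from a less symmetric payoff matrix.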

What if we want to model a whole game of poker, and not just a single hand? After all, if I only have enough money for one buy-in, it might not be to my benefit to play the optimal strategies on every single hand if they are too risky — but then again, maybe it would be. How can we know? A cash game of poker never ends; in other words, it doesn’t fit the definition of a finite game. So, we have to turn to a different concept to model it.

Infinite games are different from finite games in that the number of turns is infinite (typically countably infinite, of length ω). Because of this, the idea of a “winning strategy” changes. In an infinite game, a winning strategy is not one that achieves any victory condition — in chess, capturing the king, or in a single hand of poker, getting the higher-ranked set of cards — but a strategy that guarantees that the player can continue to play. In an infinite game, a risky strategy is usually far less attractive than a slower, more conservative one. Applications for these types of games are as varied as applications for finite games, and often even more relevant to problems beyond the playful domains of board and video games.
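
A crude way to see why variance matters more than expectation in a game that never ends is to simulate it. The sketch below is an invented illustration with arbitrary numbers rather than real poker odds: every bet has the same positive edge, but the players wager different fractions of their bankroll, and only the conservative bettor reliably stays in the game.

```python
import random

def survives(bet_fraction, rounds=5_000, bankroll=100.0, win_prob=0.55):
    """Play a long run of even-money gambles with a small edge.

    Each round, wager `bet_fraction` of the current bankroll and win it with
    probability `win_prob` (so every single bet has positive expected value).
    Returns True if the bankroll never drops below 1.0, i.e. the player is
    never forced out of the game. All parameters are illustrative.
    """
    for _ in range(rounds):
        stake = bankroll * bet_fraction
        bankroll += stake if random.random() < win_prob else -stake
        if bankroll < 1.0:
            return False
    return True

trials = 1_000
for frac in (0.05, 0.50, 0.95):
    rate = sum(survives(frac) for _ in range(trials)) / trials
    print(f"betting {frac:.0%} of the bankroll each round -> survived {rate:.0%} of runs")
```

Each bet is favorable in isolation, yet the heavy bettors almost always go broke eventually; in the infinite game, the slow, conservative strategy is the winning one.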

Applications

Sometimes it is difficult to see how abstract mathematical ideas offer any real utility. Throughout mathematical history, we often find gaps of hundreds of years before new results enable new science or technology. Take, for example, the exceedingly abstract field of number theory: started by the ancient Greeks, blown open by Euler in the 1700s, and refined by many through the 1800s and 1900s, it was widely known for millennia as one of the least applicable fields in mathematics. As late as 1940, the number theorist G.H. Hardy wrote, “No one has yet discovered any warlike purpose to be served by the theory of numbers … and it seems very unlikely that anyone will do so for many years.” But, as if in response to Hardy, Alan Turing, working in secret as a British cryptanalyst at the same time, catalyzed a revolution in cryptography that would kick number theory down to the ignominy of real-world application in the company of linear algebra and analysis. Game theory, on the other hand, is an outlier in this history. John von Neumann and Oskar Morgenstern published the first comprehensive study of game theory in 1944 with Theory of Games and Economic Behavior, and within a decade it had transformed the study of strategy, providing a new formalism that immediately began informing international policy. Mathematicians, economists, and strategists have continued to advance the field in the decades since, and novel applications for it continue to appear each year.

One of the first well-known uses of game theory was the American Cold War-era strategy of nuclear deterrence. Infamously, it was modeled on a finite game that ended, in the worst case, in mutually assured destruction. In the classical western military tradition, wars follow a predictable sequence of cause, conflict, and resolution, and therefore fit this model well. But in recent history, the United States has experienced several military conflicts that have not followed this classical trajectory. Vietnam and Afghanistan are obvious examples, but current conflicts with Russia and China, conflicts that have not escalated to traditional war but nonetheless require the allocation of military assets, also fall into this category. To address this discrepancy, the RAND Corporation published a report in 2022 that explores replacing the five-phase “Joint Phases of Conflict” model, a formalized version of the classical western idea of war, with an iterative four-step model meant to adapt to the changing rules and goals of enduring conflicts.³ The rationale given for this change in philosophy is that contemporary international conflict is not a punctuated series of finite games but a single infinite game. Shorter, more decentralized planning cycles allow for better adaptability and resilience to failure, which is exactly what an infinite game rewards.⁴ It remains to be seen how this model will affect long-term strategic thinking in the United States, but it certainly appears to be a superior planning framework for protracted conflict.

The infinite-game model has also been applied to fields far from military strategy. In 2019, the traffic engineer Charles Marohn published Strong Towns: A Bottom-Up Revolution to Rebuild American Prosperity, in which he prescribes a shift in American city planning away from automobile- and single-family-oriented development patterns and towards cities that are denser, mixed-use, and supportive of a variety of transportation modes.⁵ His justification rests on a model of a city as an infinite game. Business management books grounded in models of finite games abound, and their ideas about growth have influenced city planning philosophy in ways that Marohn argues are illogical. Businesses have clear criteria for success and, in general, a much less grave set of consequences should they fail. A city, on the other hand, has no win condition. For a city planner, there is no final target at which the city can be sold or shut down. Instead, a city is an infinite game, where the winning strategies are those that allow the city to keep surviving for as long as possible. Once again, the best strategies for such a game are rooted in adaptability and resilience: prescribing cookie-cutter development patterns built around a single mode of transportation with a complex supply chain, and financing that development with external private investment whose profitability is predicated on continued economic growth, is not a resilient strategy. More traditional cities, composed of neighborhoods that are accessible by a variety of transportation modes and that rely on small, local businesses, are far more self-sufficient, and will survive better, when adverse macroeconomic circumstances inevitably arrive.

Of Mathematical Interest

These applications are interesting for their variety, but they are not particularly faithful to the underlying formalism of infinite games. They are the less rigorous (but more creative) kind of mathematical application, where the essence of the idea is plucked from the definitions and massaged and abstracted to fit more ambiguous circumstances (and, typically, an argument that their author already wanted to make). It would be difficult to deduce any universal theorems for modern warfare from a RAND Corporation report or to calculate any numerical mixed-strategy equilibrium based on the differential geometry of payoff matrices in the case of city planning policies. But infinite games are of interest to the formal mathematician as well. As one example, the Borel determinacy theorem illustrates a deep connection between game theory and topology using surprisingly elementary constructions.

Consider a general infinite game with two players, I and II. Let A denote the set of possible moves; then A^ω, the set of all infinite sequences of elements of A, denotes the set of possible games. Let W ⊆ A^ω be the subset of games that player I wins; player II wins any game in A^ω \ W. The following helps to visualize one such game, with moves aᵢ ∈ A:

(Figure: a visualization of one such game. Source: Borel determinacy theorem, Wikipedia.)

A strategy for one of the players is a function f that maps finite sequences of elements of A (positions in the game) to elements of A: given the moves made so far, it dictates the player’s next move. A strategy for player I accepts even-length sequences (player I moves when an even number of moves have already been made); a strategy for player II accepts odd-length sequences. A winning strategy is one that guarantees a win for its player no matter how the opponent plays, and at most one player can have one.
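
As a toy illustration of this formalism (my own example, not taken from the sources cited here), the sketch below takes A = {0, 1}, represents each strategy as a Python function from the finite sequence of moves played so far to a next move, and generates a finite prefix of the resulting infinite game. The payoff set W = {games whose first two moves agree} is decided after two moves, and player II’s obvious winning strategy is to mismatch.

```python
from typing import Callable, Sequence, Tuple

Move = int                                      # here A = {0, 1}
Strategy = Callable[[Tuple[Move, ...]], Move]   # finite position -> next move

def play_prefix(strategy_I: Strategy, strategy_II: Strategy, n: int) -> Tuple[Move, ...]:
    """Generate the first n moves of the (infinite) game.

    Player I moves at even-length positions and player II at odd-length
    positions, exactly as in the definition above.
    """
    position: Tuple[Move, ...] = ()
    for _ in range(n):
        mover = strategy_I if len(position) % 2 == 0 else strategy_II
        position += (mover(position),)
    return position

# W = {games p : p[0] == p[1]}. Membership is settled after two moves, so W is
# both open and closed in the product topology discussed below.
def player_I_wins(play: Sequence[Move]) -> bool:
    return play[0] == play[1]

always_zero: Strategy = lambda pos: 0        # a (poor) strategy for player I
mismatch: Strategy = lambda pos: 1 - pos[0]  # a winning strategy for player II

p = play_prefix(always_zero, mismatch, 6)
print(p, "player I wins:", player_I_wins(p))  # (0, 1, 0, 1, 0, 1) player I wins: False
```

Whatever strategy player I substitutes for always_zero, the mismatch strategy keeps the resulting game out of W, which is exactly what it means for player II to have a winning strategy for this particular payoff set.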

The natural question for such a game is whether either player has a winning strategy. The rules of the game are encapsulated in the subset W of winning games, which in this construction is typically called the “payoff set”, so W must be our main object of study. If either player has a winning strategy, the set W (and the game) is called “determined”. Our question, then, becomes: under what conditions is W determined, and, when it is, which player wins?

There are examples of undetermined games, but there are no easy examples. The only examples of undetermined payoff sets that I can find rely on the axiom of choice for their proof, and are therefore not constructed examples but merely existence results.⁶ Still, undetermined games do exist, which makes the earlier question worth asking: under what conditions is W determined? Surprisingly, we turn to topology for an answer.

Endow the set A with the discrete topology and A^ω with the corresponding product topology. Consider a closed payoff set X. If player II has a winning strategy, the matter is settled, so suppose that player II does not have a winning strategy for X; in other words, player II has no way to force the resulting game p to satisfy p ∉ X. The key observation is that player I can preserve this state of affairs move by move: from any position where player II has no winning strategy, player I can choose a move that leads to another such position (if every available move handed player II a winning strategy, player II would already have one), and on player II’s turns the property is maintained automatically, since a winning strategy available after some reply would already be a winning strategy before it. Call player I’s resulting strategy s*, and let p be a game in which player I follows s*. At every stage n, the intermediate position p|ₙ = (p₀, p₁, …, pₙ₋₁) is one from which player II cannot force a win. In particular, the basic open neighborhood of p determined by p|ₙ, namely {x ∈ A^ω : xᵢ = pᵢ for all i < n}, must intersect X, since otherwise every continuation of p|ₙ would lie outside X and player II would trivially win from there. These neighborhoods form a neighborhood base at p (their intersection over all n is the singleton {p}), so every neighborhood of p meets X; that is, p lies in the closure of X. Since X is closed, p ∈ X; in other words, player I wins with strategy s*.
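
In more compact notation, the last step of this argument is simply the observation that a point whose every basic neighborhood meets a closed set must belong to that set (a restatement of the reasoning above, not an addition to it):

```latex
% Basic open neighborhood of the game p determined by its first n moves:
[p|_n] = \{\, x \in A^{\omega} : x_i = p_i \text{ for all } i < n \,\}

% Under player I's strategy s^*, every such neighborhood meets X, and the
% [p|_n] form a neighborhood base at p (their intersection is \{p\}), so
\forall n \in \omega : [p|_n] \cap X \neq \emptyset
  \;\Longrightarrow\; p \in \overline{X} = X
  \quad (\text{since } X \text{ is closed}).
```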

This argument, from Mycielski⁷, can be repeated, mutatis mutandis, with an open payoff set. We arrive at the following theorem, proven by David Gale and F. M. Stewart in 1953: if a payoff set is open or closed with respect to the product topology on A^ω, then it is determined. Donald Martin improved on this result in 1975, showing that if a payoff set is a Borel set, then it is determined. His result is commonly referred to as the Borel determinacy theorem.
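
Stated formally in the notation above (my paraphrase of the standard statements):

```latex
% Gale-Stewart (1953): open and closed games are determined.
\textbf{Theorem.} If $W \subseteq A^{\omega}$ is open or closed in the product
topology (with $A$ discrete), then the game with payoff set $W$ is determined.

% Martin (1975): the Borel determinacy theorem.
\textbf{Theorem.} If $W \subseteq A^{\omega}$ is Borel, then the game with
payoff set $W$ is determined.
```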

The Borel determinacy theorem is just one of many connections between game theory and topology: in 1987, in commemoration of the 50th anniversary of the Banach-Mazur game, Rastislav Telgársky published a paper that surveys nearly two dozen games that have been used to study topology.⁸ Since infinite games carry heavy set-theoretic machinery and often rely on non-constructive methods, they are also of great interest in set theory and related fields.

Game theory is a young and lively field that continues to provide new research results and wide-ranging applications. Infinite games, in particular, offer a useful nuance to the traditional conception of a winning strategy. However, despite their mathematical power, they offer me meager solace when I walk away from the cash game after losing my third buy-in — especially because I know that, mathematically, I should have won. But that’s just the way it goes sometimes.

Footnotes

  1. In the context of Texas hold ’em, this is an example of annoyingly poor luck that results in a boneheaded play beating a reasonable one. Should the hand be replayed with a shuffled deck a large number of times, the reasonable player would win (substantially), but of course each hand is only played once in an actual game, and thus there is some random variance in the outcome.
  2. Canonically, there are two players in a finite game. But for many applications, this can be extended to any finite number of players by increasing the dimension of the model.
  3. Frank, Aaron B. and Elizabeth M. Bartels, eds., Adaptive Engagement for Undergoverned Spaces: Concepts, Challenges, and Prospects for New Approaches. Santa Monica, CA: RAND Corporation, 2022. https://www.rand.org/pubs/research_reports/RRA1275-1.html.
  4. Reading the report, you may notice that the analysts don’t appear to fully understand the nuances of their own infinite-game argument and, as a result, many of their recommendations are logically disconnected from it. Instead, they seem to be grounding their reasoning in their observations of modern conflict. So, their conclusions are likely still valuable, but their derivations of them from the principles of infinite games are largely invalid. Still, it is interesting that they chose to use an infinite game as a conceptual framework.
  5. Marohn, Charles L. Strong Towns: A Bottom-Up Revolution to Rebuild American Prosperity. Wiley, 2019.
  6. Mycielski, Jan. “Games with perfect information.” Handbook of Game Theory With Economic Applications 1 (1992): p. 46. https://www2.math.upenn.edu/~ted/210F10/References/GameTheory/Chapter3copy.pdf. See Proposition 3.1 for an example and this topology blog for a construction of the Bernstein sets used in the example in the context of Banach-Mazur games.
  7. Ibid., p. 46.
  8. Telgársky, Rastislav. “Topological Games: On the 50th Anniversary of the Banach-Mazur Game.” Rocky Mountain Journal of Mathematics 17, No. 2, Spring 1987. http://www.telgarsky.com/1987-RMJM-Telgarsky-Topological-Games.pdf.
