The Dominant Strategy of Life
Game theory has a clean answer to "what is the optimal move." The harder question is what you are optimizing for — and whether you have correctly identified the game.
There is a question game theory never quite answers.
Not "what is the optimal move?" — that one has solutions.
The deeper question: what are you optimizing for?
The setup
In repeated games, cooperation emerges. Defection wins any single round. But in iterated Prisoner's Dilemmas with an indefinite horizon, where players do not know which round is the last, Tit-for-Tat and its gentler variants consistently outperform pure defection over time. (With a known, finite number of rounds, defection unravels everything by backward induction; uncertainty about the end is part of the result.)
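The claim is easy to reproduce. Below is a minimal sketch, not any canonical tournament code; the payoff values (the standard T=5, R=3, P=1, S=0) and the 95% continuation probability are illustrative assumptions.

```python
import random

# PAYOFF[(my_move, their_move)] = (my_payoff, their_payoff),
# using the standard Prisoner's Dilemma values T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, continue_prob=0.95, seed=0):
    """Play until a random draw ends the match (uncertain horizon)."""
    rng = random.Random(seed)
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    while True:
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
        if rng.random() > continue_prob:  # ~5% chance each round is the last
            return score_a, score_b
```

Head-to-head, Always-Defect beats Tit-for-Tat by exactly one temptation round's margin; but a pair of Tit-for-Tats collects the mutual-cooperation payoff every round, so over a long enough match the reciprocators' totals pull ahead.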
This is well-documented. It's cited in evolutionary biology, economics, mechanism design.
What's less often said is what it actually implies.
What it implies
Most people read "cooperation wins in the long run" as a moral lesson dressed in game theory clothing. A scientific endorsement of being nice.
That's a misreading.
What it actually says is this: cooperation is the dominant strategy under specific conditions — infinite or uncertain time horizons, identifiable players, memory, reputation propagation, and a wide enough gap between the payoff for mutual cooperation and the payoff for mutual defection.
Change the conditions. Change the result.
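There is a standard way to make "specific conditions" precise, which the essay does not spell out. With temptation payoff \(T\), reward \(R\), and punishment \(P\) (where \(T > R > P\)), and a per-round continuation probability \(\delta\), a player facing a grim-trigger opponent compares cooperating forever with defecting once and being punished ever after:

```latex
\underbrace{\frac{R}{1-\delta}}_{\text{cooperate forever}}
\;\geq\;
\underbrace{T + \frac{\delta P}{1-\delta}}_{\text{defect once, then punished}}
\quad\Longleftrightarrow\quad
\delta \;\geq\; \frac{T-R}{T-P}
```

With the standard payoffs \((T, R, P) = (5, 3, 1)\), the threshold is \(\delta \geq 0.5\): cooperation holds only when the odds of another round are at least even. Lower \(\delta\) below the threshold and defection becomes the rational move. Same players, same payoffs, different horizon, different answer.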
Defection dominates in anonymous one-shot games. That's not a failure of ethics. It's math.
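"It's math" can be checked mechanically. A sketch, again with illustrative standard payoffs: in the one-shot game, whatever the other player does, defection pays strictly more.

```python
# One-shot Prisoner's Dilemma: PAYOFF[(my_move, their_move)] is my payoff.
# Standard illustrative values, not taken from the essay.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def strictly_dominates(a, b, moves=("C", "D")):
    # Move a strictly dominates move b if it pays more
    # against every possible move by the opponent.
    return all(PAYOFF[(a, m)] > PAYOFF[(b, m)] for m in moves)
```

No ethics anywhere in that function: with no future rounds, no identity, and no reputation, the dominance check comes out the same every time.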
The conditions of your life
Here is the question worth sitting with.
Which game are you actually playing?
Most people in modern environments are playing iterated games with largely fixed players — family, professional networks, institutions, communities — where reputation propagates and time horizons are long. Under those conditions, the math favors cooperation, patience, and reciprocity.
Some people are playing one-shot games with anonymous counterparties. Under those conditions, the math is different.
The strategic error is not defecting when you should cooperate, or cooperating when you should defect. The strategic error is misidentifying the game.
The harder part
Game theory models agents with fixed utility functions. Clean preferences, stable over time.
Real humans aren't like that.
Your utility function is partially constructed by the moves you make. Repeated defection doesn't just win or lose points — it reshapes what you want. What you're willing to see. Who you become comfortable being.
This is what the models don't capture. The payoff matrix changes you.
Defect in small ways for long enough — cut corners, signal false things, treat people as fungible — and the person doing the calculation in round n is no longer the person who started in round 1.
This is not a moral argument. It's a stability argument. An identity coherence argument.
The strategy you execute at scale becomes indistinguishable from character.
Dominant strategy
In classical game theory, a dominant strategy is one that outperforms all alternatives regardless of what others do.
Life has no such strategy. The optimal move is always conditional on the game, the players, the time horizon, the iteration structure, the memory bandwidth of the environment.
What does exist is something more like a meta-strategy: accurately identify the game you're in, maintain the flexibility to switch between modes as conditions change, and do not let your strategy calcify into a fixed behavior pattern that stops tracking reality.
The one thing that consistently underperforms across nearly all conditions: operating on autopilot, running the cached strategy from five years ago, in a game whose structure has quietly changed.
Epilogue
Evolution solved this problem over millions of iterations.
You have one life, roughly 80 years, maybe a few hundred high-stakes games.
The margin for model error is not small.
Calibrate accordingly.