Every week, some small piece of AI competence gets demoted from skill to default.

A prompt trick becomes a button. A careful context-packing routine matters less when the window expands. A five-step workflow becomes a native agent behavior. A model-specific workaround keeps working, but now feels like maintaining a footbridge beside a road that did not exist when you built it.

The work was not fake. It saved time. It clarified judgment. It made clumsy things possible. It also aged in public.

AI progress now depreciates unevenly. Some of what you build becomes scaffolding for the next thing. Some becomes trivia about a tool state that lasted three months. You usually cannot tell which kind of work you are doing while you are doing it.

“Arms race” is close, but not quite right. An arms race imagines speed along a visible track. The AI pressure is stranger: people are building conversion systems before they know which outputs will convert into future value.

Heavy Eurogames describe this better than most of the usual tech metaphors. Their drama is rarely a single decisive attack. It is the slow construction of a conversion system. A cube becomes a resource. A resource buys a card. A card changes the value of a city, a route, a worker, a province. Early moves matter because they commit you to a theory of the later game.

AI work has acquired that shape. Notes, prompts, scripts, evals, memory systems, datasets, workflows, distribution, taste, trust, and reputation are all plausible pieces in a position. They are not points by themselves. Their value depends on the scoring rule that emerges later: what models make cheap, what organizations still need, what customers trust, what platforms absorb, what law permits, what markets reward, and what human judgment still improves.

The multiplier is not yet drawn.

The board converts moves into scores

A beginner sees a turn as a turn: take wood, build road, draw card, place worker. A stronger player sees the chain. This resource is two turns away from a card. This card changes the value of a city. This city matters only if the later scoring condition makes cities matter. The board is full of objects that look concrete but are really claims about future conversion rates.

Serious AI use already works this way. The naive form is prompt in, answer out. The stronger form is a setup: notes that can be retrieved, examples that calibrate the model, code that makes repeated actions cheap, review habits that catch fluent nonsense, and enough taste to know when the output sounds better than it is. Context and persistence and the decision before the decision make the same shift: away from “how do I prompt this model?” and toward “what working situation am I creating around it?”

In a Eurogame, the uncertainty is bounded. You may misread the strategy, but the contingencies exist somewhere: printed on a card, inferable from the rulebook, discovered through repeated play, discussed on BoardGameGeek, compressed into a strategy guide. The rules rarely change because a publisher shipped a stronger model on Tuesday.

AI removes that comfort. The value of today's setup depends on future capabilities, future prices, future defaults, future bottlenecks, future law, and future expectations. Better productivity now is only half the problem. The other half is knowing which parts of the setup will still matter when the next scoring card appears.

The anxiety is not speed alone. It is building under uncertain conversion-to-score.

The ends of eras

In Brass: Birmingham, the board has a memory with an expiration date. The game moves through a Canal Era and a Rail Era. At the end of the first era, level-1 industries are swept off the board, while higher-level industries survive. Some early infrastructure scores and disappears. Some carries forward (Brass: Birmingham rules summary, Order of Gamers, v1.2).

That distinction applies at every scale: individual users, organizations, and the laboratories developing the models themselves all have to ask what survives the next era.

Some AI work is level-1 industry. It is valuable because of today's awkwardness, but fragile when the awkwardness goes away. Elaborate prompt chains matter when the model cannot plan reliably. Manual context-packing matters when the window is too small. Glue code matters when the platform has not yet absorbed the obvious integration. None of that work is stupid. It may be exactly right for the current era. But a workflow that felt sophisticated in March can become bookkeeping in May.

The older infrastructure in Brass is not simply “wrong.” It can score. It can fund the next move. A player can spend a long time profitably trading in the old era. Many current AI workflows will work the same way. They will keep paying. They will keep finding buyers. In a less competitive setting, players may thrive for years with the old cards.

In a more competitive game, holding onto old cards converts to a loss.

Someone who becomes excellent at coaxing today's interface into behaving may be doing useful work. Someone who turns repeated work into owned context, domain examples, review standards, better judgment, and stronger switching ability may be building something that survives the next release. Both can be making progress. Only one may be laying rail-era track.

“Build durable assets” is still too quick as a conclusion. A coding script can become a checkbox. A memory system can turn out to be over-engineered once the context window lengthens. The label “durable” is itself a bet on a future scoring rule.

Multiplier effects

In Through the Ages, late events can make one kind of earlier investment matter more than another (Through the Ages rules summary, UltraBoardGames). In Concordia, final scoring does not simply ask how much stuff you have. It asks how your cards convert what you have. A card of one deity may score houses in certain cities; another may score colonists; another may score money. The same board can be worth different amounts depending on the scoring cards attached to it (Concordia official rules, Rio Grande Games).

Multipliers are not always bonuses. A multiplier can also be a discount. If you built heavily toward a category that scores weakly, the pieces do not vanish. They just convert poorly.

Before ChatGPT, many forms of white-collar competence were valuable because they were scarce. A person who could write acceptable copy, turn requirements into code, summarize a messy document, or produce a polished first draft owned a useful differentiator. Those abilities still cash out. But the multiplier on generic production has fallen because the supply of generic production has exploded.

In other words, AI is not merely multiplying the value of pre-existing virtues. It is also applying fractional multipliers to skills and investments that used to convert more cleanly. “I can summarize” does not score the same way when summarization is a default feature. A multiplier can level a field by making a scarce action cheap. It can also widen the gap if the scarce resource moves somewhere only a few players can reach.

Productivity gains can therefore feel strangely unsatisfying. Doubling output is not doubling score if the scoring card now says raw output is worth a fraction of what it used to be. You can be faster, better equipped, and more productive while the market value of the thing you learned to produce is falling underneath you.

When a move changes the board

In a multi-player game, your action may do more than improve your own position. It can change the board state other players inherit. You take the last worker space. You unlock a technology. You flood a market. You trigger an event. What was a choice for you becomes a constraint for someone else.

A frontier lab releases a cheaper, more capable model. For the lab, that is a move to improve position, attract usage, justify capital spending, and force competitors to respond. For a firm, the same release becomes a new action space: adopt it, ignore it, reorganize around it, or explain why not. For an individual worker, the same release may arrive as a rule change: AI skills required, hiring slowed, entry-level role redesigned, the same team expected to produce more.

The clock changes behavior before the outcome is settled. Jasmine Sun reported a Silicon Valley mood in which founders, engineers, and executives talk as if the window for human economic leverage may be closing: build wealth now, get inside the winning side now, automate before competitors automate you (Jasmine Sun, New York Times Opinion, April 30, 2026). Once people believe that, model releases become labor-market events. Labs aim benchmarks at economically recognizable work. Executives slow hiring or cut roles because they do not want to be the firm holding stale cards. Workers race toward AI-adjacent positions not because they know the end state, but because waiting can feel like passing a turn in a game whose remaining turns seem finite yet unknown.

One player's move becomes another player's game mechanic. A model benchmark is a public score for a lab, a procurement signal for a firm, and a threat or opportunity for a worker. A cheaper model is margin for one actor, budget relief for another, and a new baseline expectation for a third.

The practical danger is lost turns. A lab can miss a model cycle and still raise again, or fail outright. A firm can absorb AI well, waste money on it, be acquired, or disappear. A worker can gain leverage, lose leverage, or find that the entry rung into a field has been removed. “The game changed” is not an abstraction when it changes which actions a player is allowed to take next.

The pressure is to improve your position while preserving access to the next round.

When the scoreboard stops telling you who is winning

Many games give players a public track: a marker moves, a leaderboard updates, everyone can see who is ahead. Public tracks are useful because they make progress legible. They are also misleading when legible progress is only weakly predictive of the final outcome. The public track may show similar standing while a hidden imbalance sits in the decisive factors: the conversion rate, the hidden objective, the scarce resource, the timing of the next cash-out, the card no one else noticed.

AI benchmarks are visible scoring tracks. They are not meaningless, but they lose strategic value when they saturate or when top players cluster. Stanford's 2026 AI Index reports that capability is outpacing the benchmarks designed to measure it; Humanity's Last Exam improved by 30 percentage points in one year, and some evaluations intended to remain challenging for years saturated in months (Stanford HAI, “Technical Performance,” 2026 AI Index Report). The same chapter reports top-model convergence on Arena ratings, jaggedness across tasks, and agents that advanced sharply while still failing roughly one in three structured attempts (Stanford HAI, “Technical Performance,” 2026 AI Index Report, chapter-level model and agent details).

Price is another visible track. A move that once cost too much to repeat can become cheap enough to spam. Epoch's analysis of LLM inference prices found rapid but uneven declines: across selected benchmark thresholds, the price of reaching fixed performance levels fell between 9x and 900x per year, with a median of 50x per year (Cottier, Snodin, Owen, and Adamczewski, Epoch AI, March 12, 2025). Anthropic's Claude Haiku 4.5 announcement made the same rule change concrete: performance that had recently been near the frontier became available at roughly one-third the cost and more than twice the speed of a model from five months earlier (Anthropic, “Introducing Claude Haiku 4.5,” October 15, 2025).

Those price changes alter the action economy. If a move becomes cheap, players do more of it: more drafts, more experiments, more agents, more background automation. The constraint moves to whatever still has not become cheap: deciding what is worth asking, checking whether it worked, integrating the result, taking responsibility for consequences.

Cheaper AI matters because it changes what a move is worth. If everyone can advance on the visible benchmark track, advantage moves to cost, reliability, latency, domain fit, and trust. If everyone can generate plausible output, advantage moves to deciding which output matters. If everyone can launch agents, advantage moves to designing tasks, evaluating results, and owning the context those agents use.

You can keep optimizing for the public marker after the game has shifted to a private conversion rate.

Which turns buy future turns?

Some turns improve your future turns. Some turns cash out what you can already do.

The first kind is carry-forward work. It creates context, examples, review standards, reusable code, better judgment, clearer positioning, or a distribution path that makes the next output easier to trust. After carry-forward work, the next turn is cheaper, better, or more legible because of what happened before.

The second kind is cash-out work. It turns the current setup into an output: a product, a course, a client deliverable, a publication, a hire, a sale, a launch. The cash-out may matter enormously. It may pay rent. It may validate the setup. But it is not the same thing as making the next turn cheaper.

The difference is concrete. Writing a prompt that produces this week's report may be cash-out work. Turning the report's failures into reusable examples, a review checklist, and a better source bundle is carry-forward work. Building a small tool to survive one awkward interface may be cash-out work. Designing the tool so it can be discarded when the platform changes may be carry-forward work because it preserves switching ability.

Both kinds of work are necessary, and both can be misread.

One mistake is not playing. You wait for the tools to stabilize, and while you wait, other people learn what kinds of moves exist. Another is mistaking absolute productivity for relative position. You double your output, feel the improvement, and miss that the table doubled too. A third is treating cash-out work as if it were carry-forward work: accumulating subscriptions, mastering brittle tool tricks, shipping generic AI-flavored artifacts, or positioning yourself as an “AI expert” on the same visible axis everyone else just discovered. Yet another is cultivating a position you never cash in: publishing without products, taste without distribution, judgment without legibility. A setup that never produces an output eventually decays because nothing in the world tests whether its judgments were right.

Run four checks:

  1. Which of my current activities makes later turns cheaper or better?
  2. Which activities only cash in the current state of the setup?
  3. Which activities am I misclassifying because their visible output flatters me?
  4. Which axis am I competing on: the public axis where the multiplier is collapsing, or a less visible axis where my specific stack converts?

The value of the metaphor is not that it names a tidy list of mistakes. It gives you a way to evaluate decisions when the field is competitive, the timeline is contested, and the scoring rule is still hidden. A move can be productive and still put you on the wrong axis. A move can look quieter and still preserve better future turns.

The short run can be a lifetime

The short run matters because people build careers inside it.

Sun reports that many people building the technology fear workers will lose economic leverage faster than society can adapt (Jasmine Sun, New York Times Opinion, April 30, 2026). The most extreme version of that fear imagines a permanent underclass. The immediate version is narrower and still severe: career ladders can break before the aggregate economy decides what replaced them.

Stanford's Digital Economy Lab has reported evidence consistent with this early-ladder damage. Its “Canaries in the Coal Mine” work found a relative employment decline for early-career workers in the most AI-exposed occupations, while more experienced workers in the same occupations remained more stable (Brynjolfsson, Chandar, and Chen, Stanford Digital Economy Lab). The exact causal story will be debated. It should be. But the pattern matters because it fits the game mechanic: the early turns are where compounding is supposed to begin, and those are the turns most easily removed.

The Oxford economist Carl Benedikt Frey, quoted in that essay, put the human time scale plainly: “the short run can be a lifetime” (Jasmine Sun, New York Times Opinion, April 30, 2026). Aggregate curves can hide that. A profession may eventually reorganize. New jobs may eventually appear. A firm may eventually learn to use AI productively. But if the entry rung disappears while you are trying to step on it, the long run is not the one you get to live.

In a Eurogame, early moves matter because they determine what later moves are available. The current AI landscape produces dread for the same reason. Each turn can be both meaningful and provisional. You can do new things every round. You can get more from the same budget. You can build workflows that would have felt impossible a year ago. And still, you may be building on a track whose multiplier is falling, while the track that would have carried you forward is being claimed elsewhere.

The work still matters, but productive moves buy different futures. Ask what the move leaves behind: owned context, better judgment, easier switching, or an output that tests the setup in the world. Ask where it places you: on a track where your particular stack converts, or on the track where everyone is piling in because the marker is visible.

The multiplier is not yet drawn. The work is to keep building without pretending it is.
