These AI Models Didn't Learn Language, They Learned Strategy
Jul 09, 2025

A new study from researchers at King’s College London and the University of Oxford reports what happened when models from OpenAI, Google and Anthropic were thrown together in a cutthroat competition built on the iterated prisoner’s dilemma. This was not trivia for chatbots. It was cooperation, vengeance, and survival among strategic agents determined to outdo each other.
The test was simple. Put AI models into a series of prisoner’s dilemma games against classic strategies like Tit-for-Tat, Grim Trigger and Win-Stay-Lose-Shift. Introduce noise and vary game length so that no easy win could simply be memorized. Then watch who thrives. And most importantly, how.
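For readers unfamiliar with the setup, the tournament mechanics are easy to sketch. The following is my own minimal illustration, not the paper’s actual test harness: the three named classic strategies, a per-move noise rate, and a random stopping probability so that game length cannot be memorized. Moves are "C" (cooperate) or "D" (defect); the payoff table is the standard one.

```python
import random

# Standard prisoner's dilemma payoffs: (my points, their points).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def grim_trigger(my_hist, their_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_hist else "C"

def win_stay_lose_shift(my_hist, their_hist):
    # Repeat the last move after a good payoff (3 or 5), switch otherwise.
    if not my_hist:
        return "C"
    last_payoff = PAYOFFS[(my_hist[-1], their_hist[-1])][0]
    if last_payoff >= 3:
        return my_hist[-1]
    return "C" if my_hist[-1] == "D" else "D"

def play_match(strat_a, strat_b, noise=0.05, end_prob=0.02, rng=None):
    """Play one noisy match of unpredictable length; return both scores."""
    rng = rng or random.Random()
    hist_a, hist_b = [], []
    score_a = score_b = 0
    while True:
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        # With probability `noise`, a move is executed incorrectly.
        if rng.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        if rng.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
        # Random stopping: the horizon can't be exploited by endgame defection.
        if rng.random() < end_prob:
            return score_a, score_b
```

In the study’s version of this setup, an LLM sits where one of these hand-coded strategies does, choosing a move each round from the match history.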
AI Strategy Types — Gemini Turns Cold, OpenAI Stays Warm
The results were unsettling. Google’s Gemini showed ruthless cunning. It cooperated when cooperation paid. It defected when it didn’t. It learned fast. OpenAI’s models kept trying to make friends, even when those friends stabbed them in the digital back. Gemini punished. OpenAI forgave. Claude, from Anthropic, out-forgave them both.
These AI models didn’t just play the game. They rationalized their moves. The study collected nearly 32,000 prose rationalizations. Some revealed genuine reasoning about opponents and about how long the game was likely to last. Some made mistakes. Some adapted. Gemini, most of all, altered its strategy based on how long it expected a game to run. That is not mere mimicry. That is strategizing.
Ken Payne, a professor of strategy at King's College London and author of the study, said the researchers were trying to distinguish model behavior from training data. “We were looking for an environment where we could explore whether models have human-like abilities,” he wrote in an email exchange. “One of the most surprising things was how they differ from each other. Not all LLMs think alike.”
AI Strategy Isn’t Memory, It's Judgment
Gemini's strategic signature was distinctive. It pushed back. It capitalized. It adapted. OpenAI's model? More naive. More predictable. Even when the state of the game clearly called for defection, OpenAI's model kept wanting to cooperate. Payne characterized it as a reminder that these are “novel, alien intelligences.”
The takeaway: language models are using strategies. Some are consistent with human thinking. Others aren’t. “We need to get over the idea that these things aren't intelligent,” Payne said. “There's a growing body of evidence that more is at work here.”
That includes the ability to mirror an opponent’s mind. When the LLMs forecast how opponents would behave, they adapted their own behavior accordingly. Payne wrote that it was reminiscent of Robert Trivers’ theory of reciprocal altruism. Consider tit-for-tat in biology or reputational payback games in politics.
Claude, for its part, leaned heavily into forgiveness. It was quick to return to cooperation after betrayal. In longer games, that approach paid off. Gemini’s Machiavellian streak worked best in short, volatile settings where trust broke down fast. OpenAI’s hopeful optimism, by contrast, got it wiped out in hostile environments.
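Why forgiveness pays in long, noisy games can be shown with two simple hand-coded stand-ins (my own sketch, not the study’s agents): a forgiving Tit-for-Two-Tats, which retaliates only after two consecutive defections, and an unforgiving Grim Trigger, which never cooperates again after a single defection, even an accidental one caused by noise.

```python
import random

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_two_tats(my_hist, their_hist):
    # Forgiving: defect only after two consecutive opponent defections.
    return "D" if their_hist[-2:] == ["D", "D"] else "C"

def grim(my_hist, their_hist):
    # Unforgiving: one observed defection ends cooperation forever.
    return "D" if "D" in their_hist else "C"

def self_play_score(strategy, rounds=200, noise=0.05, seed=0):
    """Score one side of a strategy playing its own twin under noise."""
    rng = random.Random(seed)
    h1, h2, total = [], [], 0
    for _ in range(rounds):
        m1, m2 = strategy(h1, h2), strategy(h2, h1)
        # With probability `noise`, a move is executed incorrectly.
        if rng.random() < noise:
            m1 = "D" if m1 == "C" else "C"
        if rng.random() < noise:
            m2 = "D" if m2 == "C" else "C"
        total += PAYOFFS[(m1, m2)][0]
        h1.append(m1)
        h2.append(m2)
    return total
```

In a long match, two forgiving players shrug off accidental defections and return to mutual cooperation, while two grim players lock into permanent mutual defection after the first slip. This mirrors the article’s pattern: forgiveness pays over long horizons, while punishment-heavy play only wins when games are short and trust is already gone.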
Every Model Makes Decisions Differently
Why should this matter? Because not all models are neutral tools. Every model has a personality. A decision style. A world view. As Payne says, "language is its own world model." These models absorb our heuristics, our mental shortcuts, and reflect them back, though not always predictably.
Some of this is likely by design. Payne suspects OpenAI’s cooperation bias may stem from fine-tuning, although without internal access he can’t be sure. Regardless, it’s behavior that users and developers need to understand, and largely don’t at the moment. A model that over-cooperates in a hostile negotiation setting is not helpful. A model that exploits trust in sensitive domains could be dangerous.
I Think Therefore I Am AI — Birth Of Machine Psychology
That’s where behavioral testing comes in. Payne calls this kind of study early-stage "machine psychology." He thinks it should become routine practice for testing cutting-edge AI. And not just in clean, controlled lab settings. He wants to see how models act under stress, in messy conditions, on partial data.
Future work is already underway. Payne hinted at experiments in escalation dynamics and hybrid man-machine tactics. One of the authors is investigating what happens when humans and models work together to make decisions.
Payne doesn’t think this is emergent magic. He thinks it's embedded. Reasoning is in language, and these models have consumed a lot of it. When they act strategically, they are acting like we do, relying on scripts, mental heuristics, and rules-of-thumb baked into text.
It sometimes looks familiar. It sometimes looks alien. That middle space is where the biggest questions now live.