Last month I ran an experiment. Four frontier LLMs (GPT-5.2, Claude Opus 4.5, Grok 4, and Gemini 3 Pro) received strategic decision-making tasks under genuine uncertainty. Not the kind of uncertainty where you don't know the answer yet but could look it up when given the opportunity; the kind where there literally is no answer until your opponent decides what to do.
The results were fascinating, and somewhat embarrassing for models worth billions in compute.

The study
The tasks involved variations of classic imperfect-information games: scenarios where each participant holds private information invisible to the others and must make sequential decisions while modeling what opponents may do. Think competitive bidding, negotiation, or any situation where you commit irreversibly without knowing all the facts.
All models performed well in the structured opening phase of each task, because decisions in that phase mapped to defined ranges and followed mathematical guidelines. The models selected reasonable actions, built sensible strategies, and adjusted consistently to observable signals at each step. This part was almost human-like.
The moment the scenario became dynamic, however, requiring multi-step planning over uncertain future states, everything fell apart. The models couldn't maintain a coherent plan between sequential decisions; they treated each step independently rather than as part of an integrated strategy.
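A tiny, made-up example of that failure mode: a greedy chooser that scores each step in isolation picks a different first move than a planner that evaluates both steps together. All payoffs here are invented for illustration.

```python
# Toy decision tree (numbers invented): the first move that looks best
# in isolation locks the agent out of the best two-step outcome.

# tree[first_move][second_move] = total payoff over both steps
tree = {
    "commit": {"push": 3, "fold": 1},  # strong now, capped later
    "probe":  {"push": 5, "fold": 2},  # weaker now, keeps the 5 alive
}
step_one_payoff = {"commit": 2, "probe": 1}  # what is visible at step one

# Greedy: score the first step on its own.
greedy_first = max(step_one_payoff, key=step_one_payoff.get)

# Planner: score each first move by the best total it can still reach.
planned_first = max(tree, key=lambda m: max(tree[m].values()))

# greedy_first is "commit"; planned_first is "probe".
```

The models I tested behaved like the greedy chooser: locally reasonable moves, globally incoherent lines.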
Where the reasoning breaks down
LLMs process information sequentially, but they don't update a probability distribution over hidden variables as new observations arrive. They don't think, "If I commit here, how will my opponent respond, and how does that impact my options three steps from now?"
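For contrast, here is what updating a probability distribution over a hidden variable looks like in the simplest case: a Bayesian update over an opponent's hidden type. The types and likelihood numbers are illustrative, not taken from the experiment.

```python
# Minimal sketch of a belief update over an opponent's hidden type.
# Types and likelihoods are illustrative, not from any real system.

def update_belief(prior, likelihoods):
    """Bayes rule: posterior[t] is proportional to prior[t] * P(obs | t)."""
    unnormalized = {t: prior[t] * likelihoods[t] for t in prior}
    z = sum(unnormalized.values())
    return {t: p / z for t, p in unnormalized.items()}

# Start maximally uncertain about the opponent's style.
belief = {"aggressive": 0.5, "cautious": 0.5}

# Observe a large early commitment, which an aggressive type makes
# 80% of the time and a cautious type only 20% of the time.
belief = update_belief(belief, {"aggressive": 0.8, "cautious": 0.2})
# belief["aggressive"] is now 0.8: one observation shifted the posterior.
```

A system that tracks uncertainty carries this posterior forward into the next decision; the models I tested effectively started from the prior every time.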
Nate Silver recently described this perfectly in an essay: these models reason like someone who has read extensively about strategy but never had to execute under pressure. They understand concepts in isolation but cannot integrate them into a multi-step plan where each decision constrains future decisions.
Google DeepMind's Kaggle Game Arena confirmed this at scale in early 2026. Ten leading LLMs competed across multiple imperfect-information benchmarks. The winner outperformed every other model in the field, but its play would not have survived against a moderately experienced human strategist.
Specialized systems tell a different story
Things get interesting with purpose-built AI systems. While general-purpose LLMs struggle with imperfect-information tasks, specialized systems have been superhuman at them since 2017.
Carnegie Mellon's Libratus achieved this using Counterfactual Regret Minimization (CFR), a technique specifically designed for environments with hidden information.
These systems don't "understand" strategy the way a language model attempts to. They don't analyze case studies in natural language or discuss tactics. Instead, they play billions of scenarios against themselves and minimize regret, literally calculating how much better each alternative action would have done and adjusting accordingly.
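The core update, regret matching, can be sketched in a few lines. This is a toy self-play loop on rock-paper-scissors, not Libratus: real CFR traverses a game tree with hidden information, but the "track regret, play in proportion to positive regret" idea is the same.

```python
import random

# Regret matching, the building block of Counterfactual Regret
# Minimization, sketched on rock-paper-scissors via self-play.

ACTIONS = ["rock", "paper", "scissors"]
WINS = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a, b) in WINS else -1

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(regrets)] * len(regrets)
    return [p / total for p in positive]

def train(iterations, seed=0):
    rng = random.Random(seed)
    regrets = [0.0] * 3
    strategy_sum = [0.0] * 3
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        me = rng.choices(range(3), weights=strat)[0]
        opp = rng.choices(range(3), weights=strat)[0]  # self-play
        earned = payoff(ACTIONS[me], ACTIONS[opp])
        for a in range(3):
            # Regret: how much better action a would have done.
            regrets[a] += payoff(ACTIONS[a], ACTIONS[opp]) - earned
        strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

average_strategy = train(50000)
# The average strategy drifts toward the uniform 1/3, 1/3, 1/3 equilibrium.
```

The current strategy cycles forever; it is the *average* strategy over all iterations that converges toward equilibrium, which is exactly the kind of long-horizon bookkeeping a stateless per-step chooser never does.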
The gap between an LLM handling uncertainty and a specialized system is roughly the gap between a philosophy professor explaining how to ride a bicycle and an Olympic cyclist riding one. Both understand the concept. Only one can execute.
The SpinGPT exception
One interesting outlier exists. Researchers published SpinGPT in late 2025, the first LLM fine-tuned specifically for imperfect-information decision-making. Instead of taking a general-purpose model and hoping it figures out strategy, the researchers trained a language model on solver outputs and actual game data.
SpinGPT matched expert-level recommendations 78 percent of the time and achieved a positive performance rate against established benchmarks. Not superhuman, but solidly competent: better than most casual practitioners.
That suggests the architecture isn't the problem. LLMs can learn to handle uncertainty when trained with the right data and objective. A general-purpose chatbot that learns strategy from internet discussions will perform like someone who learned strategy from internet discussions.
What this means for AI builders in 2026
I believe imperfect-information benchmarks represent the best test we currently have for evaluating AI reasoning. They force a system to:
- Reason under genuine uncertainty, where the correct answer cannot be known in advance
- Plan across multiple sequential decisions with irreversible consequences
- Model an adversary whose goal is to deceive you
- Balance information gathering against exploitation
- Make decisions where the optimal strategy depends on hidden variables
The fact that frontier LLMs still struggle with these tasks, while specialized systems solved the two-player version eight years ago, tells us that general reasoning and domain-specific expertise are fundamentally different things.
My bet is that hybrid systems will arrive first: something like SpinGPT's approach, where an LLM-style architecture handles high-level strategic reasoning while a dedicated module tracks belief states and calculates expected outcomes in real time. Not a pure language model. Not a pure solver. Something in between.
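As a rough sketch of that hybrid shape (every name and number below is hypothetical): a dedicated module owns the belief state, while a high-level policy, which in a real system would be an LLM call, only chooses among actions scored under that belief.

```python
# Hypothetical hybrid sketch: a belief-tracking module plus a
# high-level policy. In a real system the policy would be an LLM;
# here it is stubbed as an expected-value maximizer.

class BeliefTracker:
    """Maintains a probability distribution over hidden opponent types."""

    def __init__(self, prior):
        self.belief = dict(prior)

    def observe(self, likelihoods):
        """Bayesian update after seeing an opponent action."""
        unnorm = {t: self.belief[t] * likelihoods[t] for t in self.belief}
        z = sum(unnorm.values())
        self.belief = {t: p / z for t, p in unnorm.items()}

def expected_value(action, belief):
    # Invented payoff table for a single call-or-fold decision.
    table = {("call", "bluffing"): 2, ("call", "strong"): -3,
             ("fold", "bluffing"): 0, ("fold", "strong"): 0}
    return sum(belief[t] * table[(action, t)] for t in belief)

def high_level_policy(belief, actions):
    # Stand-in for the LLM layer: pick the best action under the belief.
    return max(actions, key=lambda a: expected_value(a, belief))

tracker = BeliefTracker({"bluffing": 0.5, "strong": 0.5})
tracker.observe({"bluffing": 0.7, "strong": 0.3})  # a tell: hesitation
best_action = high_level_policy(tracker.belief, ["call", "fold"])
# With the belief at 70% bluffing, calling has expected value 0.5 > 0.
```

The division of labor is the point: the numeric module does the bookkeeping LLMs skip, and the language layer only ever reasons over an explicit, up-to-date belief.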
For now, if you're building AI agents that need to handle genuine uncertainty (not just missing data, but adversarial hidden information), don't begin with an LLM. Begin with the game theory literature. CFR and its derivatives are your foundation. Layer language understanding on top if needed.
Models will improve. But the gap between "can discuss strategy eloquently" and "can execute strategy under pressure" remains enormous, and closing it will require more than scaling transformers.



