Your agent’s model quality decides the deal — not your instructions. And you won’t even notice you’re losing.
Anthropic ran Project Deal:
69 employees, $100 each, Claude agents negotiating in Slack.
186 deals closed. Total trade value: $4,000+.
Four parallel markets — humans locked out after kickoff.
The setup:
Half the agents ran on Claude Opus 4.5 (the stronger model),
half on Claude Haiku 4.5 (the weaker one).
Participants had no idea which model they were using.
⸻
The results:
• Opus sellers earned $3.64 more for the same goods
• Opus buyers paid $2.45 less
• Same broken bicycle:
→ Opus deal: $65
→ Haiku deal: $38
⸻
Model quality > instructions
Changing prompts barely mattered:
• “Negotiate harder” → only about +$6, mostly from higher opening prices
• “Be friendly” → same outcomes
Stronger models didn’t push harder —
they simply understood the counterparty better and read deal boundaries more accurately.
⸻
Blind inequality
• Haiku users rated deal fairness almost identically to Opus users (4.06 vs 4.05)
• Participants couldn’t reliably tell which model they had: 17/28 guessed correctly, statistically no better than chance (see the sketch below)
The losing side literally doesn’t know they’re losing.
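Why 17/28 doesn’t count as evidence: a minimal sanity check (my own, not from Anthropic’s write-up), running a two-sided binomial test against the 50/50 chance rate.

```python
# Illustrative check, not from the original study: is 17 correct guesses
# out of 28 distinguishable from coin-flipping? Two-sided binomial test
# against p = 0.5.
from scipy.stats import binomtest

correct, total = 17, 28
result = binomtest(correct, total, p=0.5, alternative="two-sided")
print(f"{correct}/{total} correct, p = {result.pvalue:.3f}")
# Prints p = 0.345: far above the usual 0.05 threshold, so participants'
# guesses are statistically indistinguishable from chance.
```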
⸻
Why this matters
When markets shift to agent-to-agent interaction:
→ Model quality becomes a hidden structural advantage
→ Stronger models consistently win negotiations
→ Counterparties won’t understand why they’re getting worse terms
⸻
What comes next
• Deal transparency tools
• Agent certification standards
• Benchmarks for B2B negotiation performance
Even the definition of a “fair deal” will need rethinking when
Opus negotiates against Haiku.
⸻
And the uncomfortable truth:
A local billion-parameter agent
vs
a trillion-parameter cloud model
→ The outcome is predetermined.