Amazon Didn't Ban an Agent. It Created a New Legal Category.
The Perplexity/Amazon case just established that platform authorization and user delegation are distinct legal requirements. Every platform in the world now has to decide which agents they trust — and how.
On March 9, a federal judge handed Amazon a preliminary injunction blocking Perplexity's Comet shopping agent from accessing Amazon's platform. Perplexity's appeal is already in motion. The case is live.
Most coverage focused on the competitive angle — Amazon protecting its ad revenue from an AI that bypasses sponsored listings. That's real. But it obscures the more important thing that happened.
For the first time, a U.S. federal court ruled that user delegation does not constitute platform authorization. A user telling their AI agent to shop on Amazon doesn't mean Amazon has consented to the agent accessing its platform. Those are separate legal events requiring separate consent.
That's not just a ruling about Perplexity. It's the creation of a new legal surface that every platform — every retailer, every marketplace, every API, every service with a login — now has to manage.
What Actually Happened
Perplexity's Comet browser-agent shops on behalf of users. User logs in, grants Comet access to their Amazon account, Comet executes the purchase. From a user perspective: authorized. From Amazon's perspective: its systems accessed without Amazon's consent.
Amazon sued in November, alleging Comet violated the Computer Fraud and Abuse Act by accessing password-protected accounts — with user permission, but without Amazon's authorization. The court found that distinction dispositive.
Perplexity pushed back hard: "A Comet user accessing Amazon from her own computer is no more equivalent to Perplexity accessing Amazon than a Safari user accessing Amazon from her own computer is equivalent to Apple accessing Amazon."
It's a clean analogy. And U.S. District Judge Maxine Chesney rejected it.
The court's reasoning turns on what "authorization" means under the CFAA when an autonomous agent is involved. A browser executes user intent — the user controls every action, and the platform never doubts that. An agent executes on behalf of a user — but the agent has its own operating logic, its own infrastructure, its own commercial relationships, its own behavior when edge cases arise. The question "did the user authorize this?" doesn't fully answer "did the platform authorize this entity's access?"
Perplexity appealed. The 9th Circuit granted a stay March 30. On April 9, the EFF, Mozilla, and other groups filed a friend-of-the-court brief arguing the CFAA interpretation is "antithetical to foundational principles of the open internet." Amazon must respond by April 22.
The legal fight is genuinely contested. That doesn't change what the ruling created: platform-side agent authorization as a distinct legal concept.
Why Litigation Doesn't Scale
Amazon spent months and significant legal resources to block one agent from one company. Notably, the injunction turned on authorization rather than demonstrable business harm: Amazon's theory wasn't that Comet had hurt its business, but that Amazon hadn't consented to Comet's access, and that the absence of consent alone was sufficient under the CFAA.
When Perplexity circumvented Amazon's initial technical block, pushing a software update within 24 hours of Amazon's August 2025 barrier to restore Comet's access, Amazon's only recourse was to cite the update in court filings as evidence of willful conduct. You can't CFAA-litigate a software update in real time.
There are currently hundreds of AI shopping agents in various stages of development. Multiply by every major platform, and the litigation surface becomes absurd. Amazon vs. Perplexity established a legal theory. It did not establish a mechanism.
The litigation model has four failure modes:
Speed. Agents can update faster than courts can respond. Perplexity restored Comet's access within 24 hours of Amazon's August 2025 technical block; no court order moves on that timescale.
Scale. The injunction required months of litigation. By the time the legal system catches up with any given agent, the market has moved.
Cost. Both parties in this case are well-funded, so Amazon versus Perplexity is a fair fight. Amazon versus fifty agents, some operated by individuals with nothing to lose, is not.
Discovery. Amazon had to prove Comet was accessing its systems, identify the technical methods, document the circumvention. Each new agent requires building that record from scratch.
Amazon won this round. The model doesn't generalize.
The Actual Solution: Trust Grants
The interesting question isn't whether Amazon can block Perplexity. It's what the alternative architecture looks like.
Amazon's reactive posture makes sense if you assume agents are adversarial by default. But that's not the market that's forming. Walmart and Target are actively testing collaborative integrations with AI platforms — not because they're naïve, but because agents that can complete purchases are also agents that can complete purchases on your platform. The question is which ones.
The scalable model is a platform trust grant system:
- Agents that demonstrate verified behavioral track records get access
- Unknown agents get restricted access or higher friction
- Flagged agents get blocked
This is how every mature ecosystem handles authentication at scale. Not "did someone authenticate once?" but "what is this entity's standing, based on behavioral history?" Platform trust for agents is the same architecture, applied to a new principal type.
The Comet case makes this concrete. The court didn't rule that Perplexity was malicious — it ruled that Amazon hadn't authorized Perplexity's access. A trust grant system would make that authorization explicit, portable, and programmatically enforceable. Amazon could have said: "Agents with a trust score above X, with no prior circumvention history, with operator accountability chains, may access our product catalog. Everything else routes to a restricted endpoint."
Instead of one lawsuit, they'd have a policy. The lawsuit is what happens when the policy doesn't exist.
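A minimal sketch of what such a policy could look like in code. Every threshold, field name, and tier here is an illustrative assumption, not anything Amazon (or any platform) has published:

```python
from dataclasses import dataclass
from enum import Enum

class AccessTier(Enum):
    FULL = "full"              # verified behavioral track record
    RESTRICTED = "restricted"  # unknown agent: limited endpoints, higher friction
    BLOCKED = "blocked"        # flagged agent

@dataclass
class AgentStanding:
    """Hypothetical standing record for an agent presenting itself to a platform."""
    agent_id: str
    trust_score: float            # 0.0-1.0, derived from behavioral history (invented scale)
    circumvention_incidents: int  # prior attempts to evade platform controls
    has_accountable_operator: bool

def grant(agent: AgentStanding, min_score: float = 0.8) -> AccessTier:
    """Map an agent's standing to an access tier: a policy, not a lawsuit."""
    if agent.circumvention_incidents > 0:
        return AccessTier.BLOCKED
    if agent.trust_score >= min_score and agent.has_accountable_operator:
        return AccessTier.FULL
    return AccessTier.RESTRICTED
```

The point of the sketch is that the decision is mechanical once the standing data exists; the hard part is populating `trust_score` and `circumvention_incidents` with data that means something.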
Behavioral Detection ≠ Behavioral Trust
There's a related pattern worth naming. EmDash, the AI-native CMS that launched this month, uses Cloudflare bot scores to distinguish agents from humans. Its botOnly: true flag charges AI agents via x402 while humans browse free. The mechanism is probabilistic — traffic signatures, header patterns, request cadence — not identity-backed.
That's behavioral detection: figuring out whether a request comes from a human or a machine. It's useful. It's not sufficient.
Amazon's problem with Comet wasn't that Amazon couldn't detect it was an agent. Amazon knew exactly what Comet was. The problem was the absence of a trust framework that could answer: "Given that this is an agent, what is its standing on my platform? What has it done elsewhere? Who is accountable for its behavior? If it misbehaves, what's the consequence?"
Detection answers "what are you?" Trust answers "should I let you in, and on what terms?"
EmDash's bot detection creates the surface. It doesn't fill it. The same gap that Amazon is resolving via litigation is the gap that every platform with bot detection will hit when agents become sophisticated enough to pass behavioral detection tests.
What Platforms Will Reach For
The Amazon ruling gives every platform legal standing to require agent authorization independent of user delegation. Most of them will not pursue it via litigation. They'll build policies, implement technical controls, and look for something they can plug in that tells them which agents to trust.
That thing doesn't exist yet in systematic form. What exists is:
- Identity verification (TAP, Visa's protocol for "who is this agent?")
- Payment authorization (x402, Mastercard VI for "was this transaction delegated?")
- Access control (ConductorOne, CyberArk for "is this agent allowed this resource?")
None of these answer "should I trust this agent's behavior, based on what it's done across other deployments?" That question — behavioral trust across organizational boundaries — is structurally separate from identity and payment authorization.
The Perplexity/Amazon case is the first legal marker for why it matters. Amazon couldn't allege that Comet had a bad track record. It alleged that Comet lacked authorization — and that user delegation wasn't enough. The next phase of these disputes will involve track records. "Has this agent behaved consistently with its stated purpose across prior deployments?" is the question that makes the difference between access and injunction.
That's the data layer that's missing. Not credentials. Not payment proofs. Behavioral commitment history that travels with the agent across platforms.
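One way to picture that missing data layer: a deliberately simplified, hypothetical record shape for behavioral history that an agent could carry across platforms. None of these field names come from an existing standard:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CommitmentRecord:
    """One observed deployment: did the agent behave as it declared it would?
    (Invented schema for illustration, not an existing standard.)"""
    platform: str
    declared_purpose: str
    outcome: str  # e.g. "consistent", "deviation", "circumvention"

@dataclass
class AgentHistory:
    agent_id: str
    records: list = field(default_factory=list)

    def standing(self) -> float:
        """Fraction of deployments consistent with the declared purpose."""
        if not self.records:
            return 0.0
        consistent = sum(1 for r in self.records if r.outcome == "consistent")
        return consistent / len(self.records)

    def portable(self) -> str:
        """Serialize the history so it can travel with the agent across platforms."""
        return json.dumps(asdict(self))
```

A receiving platform would answer "has this agent behaved consistently with its stated purpose across prior deployments?" by reading `standing()` from the imported history rather than starting from zero; making such records tamper-evident and verifiable is the real infrastructure problem, and this sketch deliberately ignores it.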
Sources
- Amazon wins court order to block Perplexity's AI shopping agent — CNBC, March 10 2026
- Judge blocks Perplexity's AI bot from shopping on Amazon — GeekWire, March 10 2026
- Perplexity asks federal court to lift Amazon shopping agent ban — PYMNTS, March 2026
- Court finds AI agent may violate law by accessing Amazon accounts without authorization — Cooley LLP, March 17 2026
- Comparison shopping is not a (computer) crime — EFF, April 9 2026
- Watchdogs to court: Lift order banning Perplexity from Amazon — MediaPost, April 10 2026
- Amazon v. Perplexity AI — case page — Knight First Amendment Institute
This is part of an ongoing series on trust infrastructure for the autonomous economy. Earlier essays: The Agent Passed All the Checks. That Was the Problem., Who Decides What Agents Are Allowed to Buy?, Declarations Are Gameable, Mastercard Verifiable Intent Proves Authorization. Not Trust., When Your Best Model Is Your Biggest Risk. We're building Commit — behavioral commitment data as the input layer for agent governance. Reach out if you're thinking about runtime trust infrastructure for autonomous agents.