AI recommendations lack accountability. Reviews are fake. Content is generated. The commitment layer is missing.
The problem
When ChatGPT tells you a restaurant is excellent, it's drawing on reviews that may be fabricated, ratings that may be gamed, and content generated at scale. There is no accountability chain from recommendation to reality.
AI has made content infinitely cheap. Any signal that can be expressed in words can be faked. The question is: what cannot?
The thesis
PageRank worked because hyperlinks were costly acts — a website owner putting their reputation behind another page was a meaningful signal. That signal was hard to fake in 1998. In 2026, it is easy.
But there is a category of human action that remains structurally hard to fake: commitment. A person who visits the same restaurant twelve times in thirty days. A company with twelve years of profitable operation. A customer who has purchased from the same supplier across three different economic cycles.
These are behavioral signals rooted in real cost — time, money, attention, reputation on the line. No language model can manufacture them at scale without bearing the actual cost.
"When content becomes free, commitment becomes scarce. The commitment layer is what remains hard to fake."
Commit captures, aggregates, and surfaces these signals — so AI recommendations, search results, and trust scores are grounded in reality, not manufactured consensus.
Think of it as the trust layer that should have been built alongside the information layer — but wasn't, because we didn't need it until now.
Three curves converged in early 2026. AI search is wrong about local businesses a third of the time — the trust problem is acute. Zero-knowledge proofs hit production (zkTLS: 3M verifications, zero fraud). And proof of personhood reached scale (World ID: 18M verified humans; eIDAS 2.0 mandates wallets for 450M Europeans by year-end).
Each component existed in isolation. The integration — behavioral proofs from verified humans, consumed by AI systems — is what nobody has built.
What we're building
AI agents and recommendation systems query a simple API: how many real humans committed to this, and how deeply? Instead of scraped reviews and gamed ratings, they get behavioral signals rooted in real cost — time, money, sustained engagement.
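A minimal sketch of what such a query might return, with a toy scoring rule. Every field name and weight here is an illustrative assumption, not a published API:

```python
# Hypothetical sketch: the endpoint shape, field names, and scoring
# weights are illustrative assumptions, not a published Commit API.
from dataclasses import dataclass

@dataclass
class CommitmentSignal:
    verified_humans: int      # distinct proof-of-personhood holders
    repeat_rate: float        # fraction who came back at least once
    avg_tenure_days: float    # mean span between first and last commitment

def depth_score(sig: CommitmentSignal) -> float:
    """Toy trust score: breadth (how many humans) weighted by depth
    (how sustained their commitment was)."""
    breadth = min(sig.verified_humans / 100, 1.0)
    depth = sig.repeat_rate * min(sig.avg_tenure_days / 365, 1.0)
    return round(breadth * (0.4 + 0.6 * depth), 3)

# A business with sustained repeat visitors outscores one with the same
# headcount of one-off visits, because depth multiplies breadth.
loyal = CommitmentSignal(verified_humans=250, repeat_rate=0.8, avg_tenure_days=365)
churn = CommitmentSignal(verified_humans=250, repeat_rate=0.05, avg_tenure_days=14)
print(depth_score(loyal), depth_score(churn))
```

The design point the sketch makes: headcount alone is gameable, so the score is dominated by the sustained-engagement term, the part that carries real cost.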
When ChatGPT, Perplexity, or Claude recommends a business, the extension surfaces what's real: years of operation, financial health, repeat visitor rate, all verified from public records and anonymous behavioral data. Useful from the first install.
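As an illustration of the record such an extension could surface for one business, here is a hypothetical shape; every field name and the example values are assumptions:

```python
# Hypothetical shape of a surfaced trust record; all field names and
# values are illustrative assumptions, not the extension's real schema.
from dataclasses import dataclass

@dataclass
class BusinessTrustCard:
    name: str
    years_operating: int        # from public registry filings
    financially_healthy: bool   # e.g. solvent per last filed accounts
    repeat_visitor_rate: float  # from anonymous behavioral data

    def summary(self) -> str:
        health = "healthy" if self.financially_healthy else "at risk"
        return (f"{self.name}: {self.years_operating} yrs operating, "
                f"{health}, {self.repeat_visitor_rate:.0%} repeat visitors")

card = BusinessTrustCard("Cafe Luna", 12, True, 0.62)
print(card.summary())
```

Note that the first two fields are checkable against public records today, while the repeat-visitor rate is the behavioral signal the protocol exists to supply.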
A privacy-preserving protocol for contributing behavioral commitments anonymously. Zero-knowledge proofs let anyone prove they committed — without revealing who they are. The foundation for trust infrastructure that can't be gamed.
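To make the idea concrete, here is a deliberately simplified sketch of the commitment building block. A real deployment would use zero-knowledge proofs (such as zkTLS) so even the opening stays hidden; this hash-based version only illustrates binding to a claim without revealing identity, plus a nullifier so one person cannot be counted twice:

```python
# Simplified illustration only. A real protocol would use zero-knowledge
# proofs (e.g. zkTLS) so the opening itself stays hidden; this shows the
# weaker hash-commitment primitive plus a nullifier for double-counting.
import hashlib
import secrets

def commit(claim: str, secret: bytes) -> str:
    """Binding, hiding commitment to a behavioral claim."""
    return hashlib.sha256(secret + claim.encode()).hexdigest()

def verify(commitment: str, claim: str, secret: bytes) -> bool:
    """Check that an opened (claim, secret) pair matches the commitment."""
    return commit(claim, secret) == commitment

def nullifier(secret: bytes, scope: str) -> str:
    """Same secret + same counting scope -> same tag, so duplicate
    submissions collide without ever revealing who submitted them."""
    return hashlib.sha256(secret + b"|" + scope.encode()).hexdigest()

secret = secrets.token_bytes(32)            # known only to the visitor
c = commit("visited:cafe-luna:2026-02", secret)
assert verify(c, "visited:cafe-luna:2026-02", secret)
assert not verify(c, "visited:cafe-luna:2026-03", secret)
# Two submissions from the same person in one scope are detectable:
assert nullifier(secret, "cafe-luna:2026-02") == nullifier(secret, "cafe-luna:2026-02")
```

The nullifier is what makes the aggregate hard to game: an attacker can generate commitments cheaply, but inflating the count requires distinct verified humans, each bearing the underlying cost.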
Reputation can be manufactured. Reviews can be bought. But repeat purchases, staked capital, and sustained behavioral patterns require real cost.
Any system that substitutes opinion for commitment will be gamed. We're building the alternative.
Where Commit sits
Applications: AI agents, recommendation engines, autonomous commerce. They need trust data. They don't produce it.
Trust (L4): Commitment signals aggregated across sessions, operators, and time. Not "is this agent authenticated?" but "has this agent earned trust?"
Payments (L3): x402 (23 Foundation members including Visa, Mastercard, Stripe) standardized agent payments. Mastercard Verifiable Intent proves delegation. The easier payment gets, the more critical governance becomes.
Identity (L2): Visa TAP, eIDAS 2.0, World ID. Agent authentication and proof of personhood. A necessary foundation, but not sufficient for trust.
L2 and L3 are being built by Visa, Mastercard, the EU, and the Linux Foundation. L4, the behavioral trust layer between identity and application, is the gap. That's where we operate.
Foundation layer
Registry data — years of operation, financial health, regulatory status — is the verifiable foundation. It tells you a business has skin in the game: capital committed, years survived, filings maintained.
The full picture requires behavioral data: repeat customers, return rates, sustained engagement. That layer is what we're building. The extension shows where it starts. The protocol defines where it ends.