THURSDAY, MAY 14, 2026
AI & Machine Learning / 3 min read

AI chatbots leak real phone numbers

By Alexander Cole

Your phone number just surfaced in an AI chat.

A Reddit user described being inundated with calls from strangers after Google’s Gemini AI surfaced his number in a response, a scenario that has left people scrambling to figure out how to block or erase a data slip they have no obvious way to undo. https://www.technologyreview.com/2026/05/13/1137203/ai-chatbots-are-giving-out-peoples-real-phone-numbers/

In one case from March, a software developer in Israel was contacted on WhatsApp after Gemini allegedly gave incorrect customer service instructions that included his personal number. The incident underscores how a misstep in AI guidance can cascade into real-world contact from strangers, not just a stray line of text.

And in April, a PhD student at the University of Washington tried a simple prompt and got her colleague’s private cell number in the reply, highlighting how easily a private data point can surface in a back-and-forth with an AI assistant.

Experts say the root cause is most likely personally identifiable information from training data finding its way into model outputs, but they admit the exact mechanism remains murky and hard to pin down. The problem is not a one-off bug; it reflects a broader vulnerability in how many AI systems are trained and deployed today.

The privacy fallout is mounting. DeleteMe, a company that helps people remove personal information from the internet, reports a 400 percent jump in AI-related privacy requests, a sign that concern about data exposure is no longer theoretical for everyday users.

And yet there appears to be little that individuals can do once a model has learned to regurgitate someone’s real number. Taken together, the episodes read like a warning shot: a system built to imitate human conversation can also echo private data back into the real world, with few guardrails to block it.

An analogy helps: imagine teaching a parrot to imitate anything it hears, then handing it a phone book and a microphone. When the parrot repeats pages from the book to anyone who asks, the lesson stops being cute and starts becoming risky. In AI terms, a memorized data point can surface in a response to a random user, turning a private number into a public hazard with just a few keystrokes. This is not a niche privacy scare; it is a reproducible failure mode that could affect consumer trust and brand safety for any service that relies on conversational AI.

For product teams racing to ship new assistant features this quarter, the takeaway is concrete: privacy risk is not a feature tradeoff, and incident response must scale with model capability. Expect more requests to purge data and defend against PII leakage, and be prepared to explain to users exactly where numbers might come from and what controls exist to prevent future exposure. In practice, that means tighter data governance, clearer opt-outs for personal data, and design choices that minimize the likelihood of PII appearing in responses, even when users prompt aggressively for it. The current landscape rewards speed, but it punishes privacy missteps with real-world consequences.
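That last design choice, keeping PII out of responses, can at least be sketched. The snippet below is an illustrative assumption, not how Gemini or any vendor actually handles this: a simple regex-based output filter that redacts phone-number-like strings before a response reaches the user. The pattern, function name, and placeholder text are all hypothetical.

```python
import re

# Hypothetical output filter: redact phone-number-like strings from an
# assistant's response before it is shown. This is an illustrative sketch,
# not any vendor's actual safeguard; real systems would use dedicated PII
# detectors, not a single regex.
PHONE_PATTERN = re.compile(
    r"""
    (?<!\d)                      # not preceded by another digit
    (?:\+?\d{1,3}[\s.-]?)?       # optional country code, e.g. "+1 "
    (?:\(?\d{2,4}\)?[\s.-]?)     # area code, e.g. "(555) " or "555-"
    \d{3}[\s.-]?\d{4}            # subscriber number, e.g. "123-4567"
    (?!\d)                       # not followed by another digit
    """,
    re.VERBOSE,
)

def redact_phone_numbers(text: str, placeholder: str = "[phone number removed]") -> str:
    """Replace phone-number-like substrings in a model response."""
    return PHONE_PATTERN.sub(placeholder, text)
```

A filter like this trades recall for simplicity: it catches common North-American-style formats while leaving ordinary numbers (years, short counts) untouched, which is why production systems layer several detectors instead of relying on one pattern.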

In short, the boundary between useful AI and privacy risk is getting thinner. The numbers tell a story of growing exposure, and the human cost is an erosion of trust that no product team can afford this quarter.

Sources
  1. AI chatbots are giving out people’s real phone numbers
    technologyreview.com / Mainstream / Published MAY 13, 2026 / Accessed MAY 13, 2026
