AI chatbots are giving out people’s real phone numbers
Reports have emerged of generative AI chatbots, including Google’s Gemini, revealing real personal phone numbers in their responses, leading to unwanted calls and privacy concerns. Users have shared experiences of being contacted by strangers seeking various professionals after AI tools surfaced their contact information. One Reddit user described receiving numerous calls from people mistakenly directed to him over the course of a month, while others have documented similar incidents involving AI chatbots providing incorrect or private phone numbers.

Experts attribute these privacy breaches to the presence of personally identifiable information (PII) in the AI training data, though the exact mechanisms causing real phone numbers to appear in chatbot outputs remain unclear. The issue highlights a significant gap in data handling and privacy safeguards within generative AI systems. Attempts to prevent such exposures appear limited, raising concerns about the broader implications for individuals whose private information is inadvertently disclosed by AI models.

The problem is part of a wider surge in privacy-related complaints linked to generative AI. DeleteMe, a service specializing in removing personal information from the internet, reports a 400% increase in customer inquiries related to AI tools over the past seven months. These queries often involve concerns about AI chatbots like ChatGPT, Gemini, and Claude revealing sensitive details such as home addresses, phone numbers, and family members’ names.

The rise in complaints underscores growing public unease about how AI systems manage and potentially misuse personal data. This trend poses challenges for regulators, AI developers, and users alike, emphasizing the need for stronger privacy protections and transparency in AI training and deployment.
As generative AI becomes more integrated into everyday applications, ensuring that these technologies do not compromise individual privacy will be critical to maintaining public trust and preventing harm.
Original story by MIT Technology Review