A massive data breach has rocked the AI world, putting millions at risk. An unsecured server belonging to Vyro AI, a major player in generative AI, leaked sensitive user information. The incident, uncovered by Cybernews, reveals critical flaws in AI security. The breach affects popular apps like ImagineArt, Chatly, and Chatbotx, raising alarms about AI chatbot leaks and user privacy.
AI Chatbot Leaks: Vyro AI’s Massive Data Breach Unveiled
Cybernews researchers found an unprotected Elasticsearch server leaking 116GB of user logs in real time. The server, linked to Vyro AI’s apps, exposed data from ImagineArt (10M+ downloads), Chatly (100K+ downloads), and Chatbotx (50K monthly visits). The Pakistan-based company boasts 150 million app downloads. The leak, active since February 2025, stored 2–7 days of logs, leaving user data vulnerable to attackers.
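Leaks of this kind typically trace back to an Elasticsearch instance bound to a public interface with authentication switched off. As an illustration only (these are standard Elasticsearch settings; how Vyro AI’s server was actually configured is not known), a locked-down `elasticsearch.yml` would include:

```yaml
# elasticsearch.yml -- illustrative hardening, not Vyro AI's actual config
network.host: 127.0.0.1                # bind to localhost, not a public interface
xpack.security.enabled: true           # require authentication (default in 8.x)
xpack.security.http.ssl.enabled: true  # encrypt traffic between clients and the node
```

With settings like these, an unauthenticated crawler scanning the public internet never sees the indices at all, which is precisely the exposure Cybernews researchers exploited to find the leak.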
- Leaked Data: The breach exposed AI prompts, bearer authentication tokens, and user agents.
- Risks: Leaked tokens could allow account takeovers, access to chat histories, generated images, and fraudulent AI credit purchases.
- Scale: ImagineArt’s 10M+ Android installs and 30M+ active users make this leak a goldmine for hackers.
- Privacy Concerns: User prompts often contain personal or sensitive information, risking exposure of private details.
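The bearer tokens in the list above are the most dangerous item, because bearer authentication is a pure possession check: the server accepts any request carrying a valid token, with no proof of who sent it. A minimal sketch of why that enables account takeover (the token, endpoint, and user agent below are hypothetical, not real Vyro AI values):

```python
# Sketch: why a leaked bearer token enables account takeover.
# A bearer token is a possession credential -- the server only checks that
# the Authorization header carries a valid token, not who is sending it.

# Hypothetical values for illustration only.
LEAKED_TOKEN = "eyJhbGciOi...example"
API_URL = "https://api.example.com/v1/chat-history"

def build_request_headers(token: str) -> dict:
    """Build the same headers a legitimate app client would send."""
    return {
        "Authorization": f"Bearer {token}",
        "User-Agent": "ImagineArt/1.0 (Android)",  # user agents were also leaked
    }

# An attacker who copies the token out of the leaked logs constructs a
# request indistinguishable from the victim's own client:
attacker_headers = build_request_headers(LEAKED_TOKEN)
victim_headers = build_request_headers(LEAKED_TOKEN)
assert attacker_headers == victim_headers  # the server cannot tell them apart
```

This is why short token lifetimes and server-side revocation matter: once a bearer token appears in a log that anyone can read, every request it authorizes is effectively public.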
The breach highlights a growing issue: AI chatbot leaks are becoming common as companies prioritize growth over security. Attackers could exploit the data to monitor user behavior, steal sensitive information, or hijack accounts. This incident follows recent leaks involving ChatGPT and Grok, where shared conversations became searchable on Google through insecure sharing features. OpenAI has since removed the flawed sharing function.
As AI usage surges, so do the stakes. Companies must strengthen security to protect users. The Vyro AI leak serves as a wake-up call for the industry. Users should check whether their data was compromised and avoid sharing sensitive information with AI chatbots. With AI chatbot leaks on the rise, robust guardrails and stricter regulations are urgently needed to safeguard privacy.