Vyro AI Leak Reveals Poor Cyber Hygiene

Summary

Vyro AI accidentally exposed around 116GB of sensitive user data across three products: ImagineArt, Chatly and Chatbotx. Cybernews researchers found the dataset had been indexed by IoT search engines in February, suggesting it may have been discoverable for months. The exposed information included user prompts, bearer authentication tokens and user agents; the logs came from both production and development environments and covered roughly two days to one week at any one time.

Exposed tokens are a major concern because they can enable account hijacking, grant access to chat histories and generated images, or be abused to purchase AI tokens. Vyro, the Pakistan-based company at the centre of the incident, has over 150 million app downloads across its portfolio, with Chatly alone exceeding 100,000 downloads.

Key Points

  1. 116GB of user data leaked from Vyro AI apps ImagineArt, Chatly and Chatbotx, including prompts, bearer tokens and user agents.
  2. The dataset was indexed by IoT search engines in February, so it may have been exposed for months before discovery.
  3. Exposed bearer tokens could allow attackers to hijack accounts, access full chat histories and generated assets, or buy AI tokens fraudulently.
  4. The leak included both production and development logs, covering short windows (about two days to a week) but sufficient for meaningful compromise.
  5. Broader research (Harmonic Security) shows many users and enterprises submit sensitive files and prompts to GenAI tools: in one sample, 22% of files and roughly 4% of prompts contained sensitive data.
  6. Immediate mitigations: revoke exposed tokens, monitor GenAI usage, educate staff on safe prompt behaviour, and favour self‑hosted LLMs or strict policies when using third‑party models.
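The article does not describe Vyro's logging pipeline, but the root cause (bearer tokens sitting in plaintext logs) points at one concrete fix from point 6: scrub credentials before log lines are ever stored or shipped. Below is a minimal, hypothetical sketch of that idea in Python; the regex and helper name are illustrative assumptions, not Vyro's actual code.

```python
import re

# Hypothetical helper: strip bearer token values out of a log line before
# it is written, so a leaked or indexed log store does not expose live
# credentials. The pattern matches "Bearer <token>" case-insensitively.
BEARER_RE = re.compile(r"(Bearer\s+)[A-Za-z0-9\-._~+/]+=*", re.IGNORECASE)

def redact(line: str) -> str:
    """Replace any bearer token value with a fixed placeholder."""
    return BEARER_RE.sub(r"\1[REDACTED]", line)

print(redact('POST /v1/chat auth="Bearer eyJhbGciOiJIUzI1NiJ9.payload.sig"'))
```

A filter like this belongs at the logging layer (e.g. a formatter or log-shipping hook), so developers cannot forget to apply it per call site; pair it with short token lifetimes so anything that does leak expires quickly.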

Why should I read this?

Look — this is exactly the kind of avoidable mess that keeps security folks up at night. Tokens leaked, logs left wide open, and months of potential exposure. If you or your teams use chatbots or third‑party AI tools, this is a quick wake‑up call to fix token handling, tighten log access and stop pasting secrets into prompts.

Context and relevance

This incident feeds into a bigger pattern: organisations rushing to adopt GenAI without commensurate security controls. As enterprises push sensitive data into AI services, failures in basic hygiene (storage, access controls, logging) translate directly into compromise. Regulators, risk teams and security ops should treat these incidents as indicative of systemic gaps around AI usage policies and infrastructure configuration.

Author’s take

Punchy and plain: if you care about protecting customer data, treat GenAI as an extension of your attack surface. Train users, lock down tokens, monitor what’s sent to models and consider self‑hosted options where possible. This one’s worth acting on, not just reading.

Source

Source: https://www.darkreading.com/cyberattacks-data-breaches/vyro-ai-leak-cyber-hygiene
