Researchers have discovered a new way to steal user data using images. This method hides harmful instructions in pictures processed by AI systems. When these images are downscaled, the hidden prompts can become visible.
The technique was created by Kikimora Morozova and Suha Sabi Hussain from Trail of Bits. They built on ideas from a 2020 study by TU Braunschweig in Germany, which looked at similar image-scaling attacks.
Here’s how it works. When users upload images to AI platforms, the images are often made smaller for efficiency. Different resampling methods, like nearest neighbor or bicubic, can introduce artifacts. These artifacts might reveal hidden patterns, allowing harmful messages to surface in a downscaled image. In their example, a dark area in an image becomes red, causing black text to appear when it’s resized.
The AI then treats this text as valid instructions, leading to potential data leaks. For instance, the researchers managed to extract Google Calendar data to an email address without alerting the user.
This attack isn’t just limited to one AI model. The researchers confirmed its feasibility against several systems, including:
- Google Gemini CLI
- Vertex AI Studio
- Gemini’s web interface and API
- Google Assistant on Android
- Genspark
Given its potential, the threat may extend beyond these tested tools. To help demonstrate their findings, they developed Anamorpher, an open-source tool that creates images for each mentioned downscaling method.
To counteract this threat, the researchers recommend limiting the size of uploaded images. If downscaling is necessary, users should see a preview before their data is processed. Additionally, explicit confirmation from users should be required for sensitive actions, especially when images contain text.
For a stronger defense, they emphasize the need for secure design patterns in AI systems to prevent prompt injection attacks. This connects to recent discussions within the tech community about improving AI security. According to a June paper detailing design patterns to enhance AI resilience, systematic approaches will be vital to address such vulnerabilities.
Interestingly, a recent report noted that almost half of all environments experienced password breaches, indicating a growing threat landscape. As such data theft techniques evolve, staying informed and implementing strong defenses is more crucial than ever.

