Security flaws have been found in popular AI and machine learning libraries used to build and load models hosted on Hugging Face. The affected libraries, developed by major companies including Nvidia, Salesforce, and Apple, allow malicious code to hide in model metadata. That code can execute automatically the moment a contaminated file is loaded.
What Are the Vulnerabilities?
The libraries in question (NeMo, Uni2TS, and FlexTok) rely on the Hydra library for configuration management. The core issue lies in Hydra's instantiate() function, which imports and calls whatever Python object a config's _target_ field names; when that config comes from untrusted model metadata, the function becomes a vehicle for remote code execution. Palo Alto Networks' Unit 42 discovered these flaws and alerted the library maintainers. While no attacks have been reported in the wild, the risks are significant: as Curtis Carmony of Unit 42 noted, attackers could exploit these weaknesses simply by modifying existing models.
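The mechanism is easy to demonstrate. Here is a minimal, self-contained sketch (not code from any of the affected libraries) of how hydra.utils.instantiate() resolves a config into a live Python object, and why that is dangerous when the config is attacker-controlled:

```python
from hydra.utils import instantiate

# Benign use: _target_ names any importable callable; Hydra imports
# it and calls it with the remaining keys as keyword arguments.
cfg = {"_target_": "datetime.timedelta", "seconds": 30}
delta = instantiate(cfg)
print(delta)  # 0:00:30

# The same resolution logic, fed untrusted metadata, becomes code
# execution: _target_ can just as easily name os.system, with the
# shell command supplied via Hydra's _args_ key.
malicious = {"_target_": "os.system", "_args_": ["echo pwned"]}
# instantiate(malicious)  # would run the shell command
```

Nothing here exploits a bug in Hydra itself; instantiate() is working as designed. The vulnerabilities arise when a library forwards configuration it pulled from an untrusted file into this function.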
The Bigger Picture
More than 100 open source libraries integrate with Hugging Face, and around 50 of them use Hydra, a large attack surface that raises the odds of flaws like these being exploited. Security researchers have increasingly stressed the importance of hardening AI platforms against supply-chain attacks; one recent report indicated that 86% of organizations treated AI security as a priority in 2023.
How the Vulnerabilities Could Be Exploited
The exploitation path is alarmingly simple. An attacker could publish a modified copy of a popular model with harmful metadata embedded in its configuration. Because Hugging Face's scanners do not flag metadata as unsafe, such a tampered model would raise no warning.
NeMo, for example, passes the configuration embedded in its checkpoint files to Hydra without sanitizing it, letting an attacker trigger remote code execution (RCE) the moment a compromised file is loaded, not merely downloaded. The disclosure has drawn heightened scrutiny to how these libraries deserialize the data they ingest.
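Until every dependency is patched, one defensive pattern is to allowlist which targets an untrusted config may instantiate. The sketch below is hypothetical (it is not a fix shipped by any of the vendors, and the allowed target names are placeholders for whatever your own stack legitimately uses):

```python
from hydra.utils import instantiate

# Hypothetical allowlist: only the targets you expect to see in
# legitimate model configs. Adjust for your own stack.
ALLOWED_TARGETS = {
    "torch.optim.Adam",
    "torch.optim.lr_scheduler.CosineAnnealingLR",
}

def safe_instantiate(cfg: dict):
    """Instantiate a config only if its _target_ is explicitly trusted."""
    target = cfg.get("_target_")
    if target not in ALLOWED_TARGETS:
        raise ValueError(f"refusing untrusted _target_: {target!r}")
    return instantiate(cfg)
```

A production version would also need to walk nested config nodes, since instantiate() resolves _target_ keys recursively, but the principle is the same: never hand an unvetted target name to Hydra.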
Steps Taken by Companies
In response to these vulnerabilities, Nvidia issued a CVE to track the issue and patched NeMo. Salesforce also released a fix for Uni2TS, while Apple updated FlexTok to reduce risks. These companies are now emphasizing safer coding practices and actively monitoring for unauthorized access.
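Users can check whether the patched packages are installed locally. A quick sketch (the package names below are assumptions; consult each vendor's advisory for the exact distribution name and fixed version):

```python
# Print the locally installed version of each affected package, if any.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("nemo_toolkit", "uni2ts", "flextok"):  # names assumed
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```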
User Reactions
The AI and tech communities have been vocal about these issues on social media. Many users expressed concerns over the safety of downloading models from Hugging Face after learning about the vulnerabilities. This discourse highlights a growing awareness of cybersecurity in AI and its implications for developers and end-users.
Conclusion
In light of these findings, it is clear that as AI tooling evolves, so do the risks that come with it. Developers should treat model files and their metadata as untrusted input, vet the libraries they depend on, and keep them patched. Maintaining those habits is critical to preserving trust in AI systems.
For further information on these vulnerabilities and security practices, check out Palo Alto Networks’ report.

