
Hugging Face Removes Malicious AI Models, Strengthens Security Measures

Two malicious AI models slipped through Hugging Face's security net. Now, the platform is beefing up its defenses to protect users.



Hugging Face, a renowned AI model hub, recently faced a security challenge: researchers discovered two malicious models hosted on the platform, which have since been removed.

The malicious models were stored in PyTorch format but compressed with 7z, so they could not be opened by PyTorch's default loading function. This non-standard packaging allowed them to slip past the platform's security scanning mechanisms, which never flagged them as 'unsafe'.

The models exploited a novel malware distribution technique based on Pickle file serialization. Pickle is Python's built-in module for serializing and deserializing objects, and it is widely used to store machine learning model data. Loading Pickle files from untrusted sources is risky, however, because deserialization can execute arbitrary code. In this case, the models contained deliberately broken Pickle files, suggesting they were proof-of-concept models rather than active attacks.
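To illustrate why untrusted Pickle files are dangerous, consider this harmless, self-contained sketch (the `Payload` class and the benign `eval` call stand in for real attacker code such as a shell command or downloader): simply deserializing the bytes is enough to run whatever callable the file names.

```python
import pickle

class Payload:
    """Any object can specify, via __reduce__, a callable that
    pickle will invoke during deserialization."""
    def __reduce__(self):
        # pickle stores (callable, args) and calls it on load;
        # eval("2 + 2") stands in for arbitrary attacker code
        return (eval, ("2 + 2",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # merely loading the bytes runs eval
print(result)  # 4 -- the attacker's code already executed
```

This is why scanners like Picklescan inspect model files before anyone calls `pickle.loads` on them.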

Hugging Face's security auditing tool, Picklescan, initially failed to detect these threats inside broken Pickle files. Upon discovery, the company swiftly removed the malicious models and updated Picklescan to handle such files.

The security flaws were discovered by NCC Group and Reversing Labs, who notified Hugging Face about the issue.

In response to the discovery, Hugging Face removed the problematic models and enhanced Picklescan to better detect threats in broken Pickle files. Users are advised to remain vigilant when loading machine learning models from untrusted sources.
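Scanners in the spirit of Picklescan work by walking a Pickle file's opcode stream for dangerous imports without ever deserializing it. The sketch below shows the idea using only the standard library; the `imported_globals` helper and the `UNSAFE` list are illustrative simplifications, not Picklescan's actual implementation (a real scanner also tracks memo opcodes and a much larger denylist).

```python
import pickle
import pickletools

def imported_globals(blob: bytes) -> list[str]:
    """List the module.name references a pickle resolves on load,
    by walking its opcodes with pickletools -- no deserialization."""
    names, strings = [], []  # strings: recent string constants, for STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(blob):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":        # protocols 0-3: "module name" pair
            names.append(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL":  # protocols 4+: module/name from stack
            names.append(f"{strings[-2]}.{strings[-1]}")
    return names

# Illustrative denylist of callables no model file should reference.
UNSAFE = {"builtins.eval", "builtins.exec", "os.system", "subprocess.Popen"}

class Payload:
    def __reduce__(self):
        return (eval, ("0",))  # harmless stand-in for attacker code

blob = pickle.dumps(Payload())
hits = [name for name in imported_globals(blob) if name in UNSAFE]
print(hits)  # ['builtins.eval'] -- flagged without ever calling pickle.loads
```

Because the scan operates on raw bytes, it can reject a file up front; the gap exploited here was that a 7z-compressed or deliberately broken file can confuse such tooling, which is what the Picklescan update addressed.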
