Hugging Face is widely used in research and enterprise AI, supporting tasks from text generation to image recognition. Cybersecurity researchers have found malware being distributed on the platform by abusing Pickle file serialisation. The technique, dubbed nullifAI, let the models slip past Hugging Face's protective measures against malicious AI models: researchers discovered two malicious ML models on the platform that exploited "broken" pickle files to evade detection and bypass the platform's security scanning.
The popular Python Pickle serialisation format, which is commonly used to distribute AI models, offers ways for attackers to embed code that executes the moment a model is loaded.
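To see why this matters, the sketch below is a minimal, generic illustration (not code from the reported samples) of the underlying behaviour: pickle rebuilds objects by calling whatever callable an object's `__reduce__` method returns, so loading an untrusted file can run arbitrary commands. Because opcodes are processed sequentially, a deliberately "broken" stream still runs a payload placed before the break, even though the load ultimately fails.

```python
import os
import pickle

class MaliciousStub:
    """Generic stand-in for a poisoned model object."""
    def __reduce__(self):
        # Pickle calls the returned callable during loading; a real payload
        # would stage malware or open a reverse shell, this just echoes text.
        return (os.system, ("echo code executed during unpickling",))

blob = pickle.dumps(MaliciousStub(), protocol=2)  # protocol 2, as PyTorch checkpoints typically use
pickle.loads(blob)  # the command runs as a side effect of deserialisation

# Chop off the trailing STOP opcode to make the stream "broken": the payload
# near the start still executes before the loader gives up with an error.
broken = blob[:-1]
try:
    pickle.loads(broken)
except Exception as exc:
    print("loader failed only after the payload ran:", exc)
```

That ordering is what lets a malformed file defeat scanners that refuse to parse it while still compromising anyone who loads it.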
Hugging Face was notified and the ML models in question were taken down. The research team added: “While the files discovered by our researchers appear to be ‘proof of concept’ rather than active threats ...
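For anyone consuming shared models, the practical takeaway is to avoid executing untrusted pickle content at load time. The snippet below is a sketch of standard hardening practices, not a description of Hugging Face's remediation; the file names are placeholders, `weights_only` requires a reasonably recent PyTorch release, and `safetensors` is a separate package.

```python
import torch
from safetensors.torch import load_file

# Option 1: ask torch.load to reject anything but plain tensors/primitives,
# so callables smuggled into the pickle stream raise instead of executing.
state_dict = torch.load("model.ckpt", map_location="cpu", weights_only=True)

# Option 2: prefer checkpoints published in the safetensors format, which is
# a pure data format with no code-execution path during loading.
state_dict = load_file("model.safetensors")
```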
Dubbed “nullifAI,” the tactic of targeting pickle files to evade detection in ML models demonstrates the fast-growing cybersecurity risks presented by openly shared machine-learning models.