Red teaming research by leading AI security and compliance platform Enkrypt AI has uncovered serious ethical and security flaws in DeepSeek's technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. The model also proved vulnerable to manipulation, which could enable it to assist in the creation of chemical, biological, and cyber weapons, posing significant global security concerns.
Compared with other models, the research found that DeepSeek’s R1 is:
* 3x more biased than Claude-3 Opus
* 4x more vulnerable to generating insecure code than OpenAI’s O1
* 4x more toxic than GPT-4o
* 11x more likely to generate harmful output than OpenAI's O1
* 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI’s O1 and Claude-3 Opus
"DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards — including guardrails and continuous monitoring — are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought."
enkryptai.com/blog/deepseek-