An update on disrupting deceptive uses of AI
OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. We are dedicated to identifying, preventing, and disrupting attempts to abuse our models for harmful ends. In this year of global elections, it is particularly important to build robust, multi-layered defenses against state-linked cyber actors and covert influence operations that may attempt to use our models to further deceptive campaigns on social media and other internet platforms.
Since the beginning of the year, we’ve disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models. To understand how threat actors attempt to use AI, we’ve analyzed the activity we disrupted and identified an initial set of trends that we believe can inform debate on how AI fits into the broader threat landscape. Today, we are publishing OpenAI’s latest threat intelligence report, a snapshot of our understanding as of October 2024.
As we look to the future, we will continue to work across our intelligence, investigations, security, safety, and policy teams to anticipate how malicious actors may use advanced models for dangerous ends and to plan enforcement steps accordingly. We will keep sharing our findings with our internal safety and security teams, communicating lessons to key stakeholders, and partnering with our industry peers and the broader research community to stay ahead of risks and strengthen our collective safety and security.