AI-Cyber Workstream

The Frontier Model Forum’s AI-Cyber workstream advances the safety and security of leading AI models and systems in the cybersecurity domain. While frontier AI promises significant advances in computing and automation, it also risks amplifying existing cyber threats or introducing novel ones. Understanding and mitigating these risks through robust technical safeguards is an urgent challenge for the field.

The workstream aims to develop a shared understanding of AI-Cyber threat models, safety evaluations, and mitigation measures, along with common approaches to capability and risk thresholds in the cyber domain. Effectively managing cybersecurity risks requires greater coordination around technically grounded best practices, as well as continued research into innovative safety methods and approaches. This includes developing protocols for assessing AI systems’ potential cyber capabilities, establishing guidelines for responsible development, and creating robust testing frameworks.

The AI-Cyber workstream draws on the deep cybersecurity expertise of the Forum’s member firms while maintaining strong collaboration with external technical experts. The Forum actively engages with leading researchers and practitioners across network security, cryptography, threat detection, and incident response, alongside experts in AI development and global security. By combining insights from academia, industry, and government, the workstream integrates cutting-edge technical understanding from diverse sectors and disciplines.

The Forum maintains a strong commitment to information-sharing and transparency. However, because publications on AI and cyber threats can introduce significant security risks and information hazards, all materials undergo rigorous expert review and careful deliberation before publication. This ensures responsible disclosure while advancing the field’s understanding of AI cybersecurity challenges and solutions.