
Frontier Model Forum: Advancing frontier AI safety and security
The Frontier Model Forum draws on the technical and operational expertise of its member companies to ensure that the most advanced AI systems remain safe and secure, so that they can meet society’s most pressing needs.

Technical Report: Frontier Capability Assessments
Frontier capability assessments are procedures conducted on frontier models to determine whether the models have capabilities that could increase risks to public safety and security, such as by facilitating the development of chemical, biological, radiological, or nuclear (CBRN) weapons, enabling advanced cyber threats, or exhibiting certain categories of advanced autonomous behavior.
This report discusses emerging industry practices for implementing frontier capability assessments. As the science of these assessments is rapidly advancing, this overview represents a snapshot of current practices.
Core objectives of the Forum
The Frontier Model Forum is committed to turning this vision into action. We recognize the importance of safe and secure AI development, and we work to advance it through the following objectives.
Advancing AI safety research
Advance research that promotes the responsible development of frontier models, minimizes risks, and enables independent, standardized evaluations of capabilities and safety.
Identifying best practices
Establish best practices for frontier AI safety and security, and develop shared understanding about threat models, evaluations, thresholds and mitigations for key risks to public safety and security.
Collaborating across sectors
Work across academia, civil society, industry and government to advance solutions to public safety and security risks of frontier AI.
Sharing information
Facilitate information-sharing about unique challenges to frontier AI safety and security.
Join us in turning these objectives into reality, as we shape the future of safe and secure AI.