Head of AI Safety
US Preferred | Remote-Friendly (Select Locations)
We are actively looking for a Head of AI Safety to oversee our growing AI safety portfolio and to push forward industry best practices on frontier AI safety.
As Head of AI Safety, you will manage our workstreams on safety evaluations and safety frameworks, as well as our work related to chemical, biological, radiological and nuclear (CBRN) risks. You will be responsible for ensuring the regular delivery of rigorous and scientifically informed outputs across our AI safety workstreams, and for driving forward consensus viewpoints on threat models, safety evaluations, capability and risk thresholds, and risk mitigations within our AI safety portfolio.
As Head of AI Safety, you will be adept at fostering collaboration, monitoring and synthesizing AI safety research, and managing multiple projects simultaneously. This will include organizing and facilitating workshops with domain experts from our member firms, as well as overseeing the drafting, revising, and publication of best practice guidelines and research briefs.
About the FMF
The Frontier Model Forum is an industry non-profit dedicated to the safe development and deployment of frontier AI models. By drawing on the technical and operational expertise of our members, we aim to (1) identify best practices for frontier AI safety and support the development of frontier AI safety standards, (2) advance AI safety research for frontier models, and (3) facilitate information sharing about frontier AI safety among government, academia, civil society and industry.
At the Frontier Model Forum (FMF), we value diversity of experience, knowledge, backgrounds and perspectives, harnessing these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Key responsibilities
- Design and execute AI safety workstreams, working with Forum leadership to develop relevant workshops and outputs
- Act as a key partner to Forum leadership, helping to inform and shape the Forum’s strategy for AI safety
- Organize, moderate and lead multiple AI safety working groups, workstreams, and other research-oriented initiatives
- Oversee the development of issue briefs for publication on the FMF website and/or circulation among member firms and key stakeholders and partners
- Independently represent the FMF’s strategy and narrative with external AI safety stakeholders and partners
- Evaluate the success of all AI safety workstreams and workshops against their aims, goals and objectives
Additional responsibilities
- Lead on and coordinate expert workshops, liaising with key experts and researchers from FMF firms and non-member organizations
- Balance making progress on the long-term scientific objectives of the FMF with regular short-term technical deliverables (e.g., workshops, memos, and guidelines)
- Conduct informational conversations with research teams in member firms and with external stakeholders to identify emerging safety practices
- Facilitate conversations, meetings and opportunities for internal and external collaboration on AI safety workstreams
- Monitor AI safety literature and research, staying abreast of key developments in capabilities evaluations, risk assessments, interpretability, and related topics
- Work with the Executive Director to identify opportunities to develop best practices, guidelines, and other public goods
About You
You may be a good fit for Head of AI Safety if you:
- Have a clear passion for advancing frontier AI safety and actively seek high-impact opportunities to push the field forward
- Have significant experience in program management and driving forward collaborative projects
- Have strong writing and editorial skills, and are adept at drafting, revising, and finalizing collaborative research documents
- Have a clear track record of managing collaborative writing projects from conception to publication
- Have strong communication skills, both written and verbal, and an ability to develop constructive relationships with key partners and stakeholders
- Thrive at organizing, facilitating, and synthesizing expert workshops and research convenings
- Have experience supporting teams and leadership in fast-paced and constantly changing environments
You may be a strong candidate if you:
- Have a PhD or other advanced degree in a STEM field or computational social science
- Have extensive experience carrying out or documenting safety evaluations of advanced general-purpose AI models, including automated benchmarks, red-teaming exercises, and uplift studies
- Have extensive familiarity and experience with natural language processing, computer vision, causal reasoning, and/or multimodal models
To Apply
Please send a cover letter and a resume to careers@frontiermodelforum.org. Applications will be considered on a rolling basis.