About the Team
The Interpretability team studies internal representations of deep learning models.
We are interested in using representations to understand model behavior, and in engineering models to have more understandable representations.
We are particularly interested in applying our understanding to ensure the safety of powerful AI systems.
Our working style is collaborative and curiosity-driven.
About the Role
OpenAI is seeking a researcher passionate about understanding deep networks, with a strong background in engineering, quantitative reasoning, and the research process.
You will develop and carry out a research plan in mechanistic interpretability, in close collaboration with a highly motivated team.
You will play a critical role in helping OpenAI ensure future models remain safe even as they grow in capability.
This work will have a significant impact on our goal of building and deploying safe AGI.
In this role, you will:
Develop and publish research on techniques for understanding representations of deep networks.
Engineer infrastructure for studying model internals at scale.
Collaborate across teams to work on projects that OpenAI is uniquely suited to pursue.
Guide research directions toward demonstrable usefulness and/or long-term scalability.
You might thrive in this role if you:
Are excited about ensuring AGI benefits all of humanity, and are aligned with OpenAI's charter.
Show enthusiasm for long-term AI safety, and have thought deeply about technical paths to safe AGI.
Bring experience in AI safety, mechanistic interpretability, or a closely related discipline.
Hold a Ph.D. or have research experience in computer science, machine learning, or a related field.
Thrive in environments involving large-scale AI systems, and are excited to make use of OpenAI’s unique resources in this area.
Possess 2+ years of research engineering experience and proficiency in Python or similar languages.
Are deeply curious.