Courses
Curricula and reading lists for self-led learning at all levels
Communities
Groups based online and locally around the world
Projects
Online initiatives seeking your volunteer help – all skill levels welcome
Jobs
A list of all current open positions in AI safety
Events & training
Gatherings and training programs, both online and in-person
Funders
Sources of financial support for AI safety projects
Map
An overview of the organizations, programs, and projects in the AI safety space
AI safety guide
A comprehensive introduction to AI safety, written and curated by our team and affiliates
Donation guide
How to donate most effectively given the funds you have available
Speak to an advisor
Organizations offering guidance calls to help you discover how best to contribute to AI safety
Stay informed
Newsletters, podcasts, books etc. to help you learn more and keep up to date
AI Safety Awareness Foundation (AISAF)
Volunteer organization dedicated to raising awareness about modern AI, highlighting its benefits and risks, and letting the public know how they can help – mainly through workshops.
Category
Advocacy
Collective Action for Existential Safety (CAES)
Nonprofit aiming to catalyze collective effort towards reducing existential risk, including through an extensive action list for individuals, organizations, and nations.
Existential Risk Observatory (ERO)
Informing the public debate on existential risks, on the basis that awareness is the first step to reducing those risks.
Global AI Moratorium (GAIM)
Calling on policymakers to implement a global moratorium on large AI training runs until alignment is solved.
PauseAI
Campaign group aiming to convince governments to pause AI development – through public outreach, engaging with decision-makers and organizing protests.
The Midas Project (TMP)
Watchdog nonprofit monitoring tech companies, countering corporate propaganda, raising awareness about corner-cutting, and advocating for the responsible development of AI.
Future of Life Institute (FLI)
Steering transformative technology towards benefitting life and away from extreme large-scale risks through outreach, policy advocacy, grantmaking, and event organisation.
Advocacy, Governance, Funding
AI Frontiers
Platform from the Center for AI Safety (CAIS) posting articles written by experts from a wide range of fields discussing the impacts of AI.
Blog
AI Prospects
Blog by Eric Drexler on AI prospects and their surprising implications for technology, economics, environmental concerns, and military affairs.
AI Safety Takes
Blog by AI safety researcher Daniel Paleka discussing AI safety news, with posts roughly every two months.
Astral Codex Ten (ACX)
Blog covering many topics, including reasoning, science, psychiatry, medicine, ethics, genetics, AI, economics, and politics. Includes book summaries and commentary on AI safety.
Bounded Regret
Blog on AI safety by Jacob Steinhardt, a UC Berkeley statistics professor, analysing risks, forecasting future breakthroughs, and discussing alignment strategies.
Cold Takes
Blog about transformative AI, futurism, research, ethics, philanthropy etc. by Holden Karnofsky. Includes the "Most Important Century" post series.
Don't Worry about the Vase
Blog by Zvi Mowshowitz on various topics including AI, often with detailed analysis, personal insights, and a rationalist perspective.
Miles's Substack
Blog from ex-OpenAI (now independent) AI policy researcher Miles Brundage on the rapid evolution of AI and the urgent need for thoughtful governance.
Obsolete
Publication by Garrison Lovely on the intersection of capitalism, geopolitics, and AI. Posts about once or twice a month.
The Power Law
Top forecaster Peter Wildeford forecasts the future and discusses AI, national security, innovation, emerging technology, and the powers – real and metaphorical – that shape the world.
AISafety.com: Stay informed
The AI safety space is changing rapidly. This directory of key information sources can help you keep up to date with the latest developments.
Blog, Newsletter, Podcast, Video
DeepSeek
Chinese capabilities lab developing and releasing open-weights large language models. Created DeepSeek-R1.
Capabilities research
xAI
Capabilities lab led by Elon Musk with the mission of advancing our collective understanding of the universe. Created Grok.
Obelisk
Team of researchers aiming to engineer AGI by pursuing an exploratory approach heavily inspired by cognitive science and neuroscience.
Capabilities research, Conceptual research
Anthropic
Research lab focusing on LLM alignment, particularly interpretability. Featuring Chris Olah, Jack Clark, and Dario Amodei. Created Claude.
Capabilities research, Empirical research
Cyborgism
A strategy for accelerating alignment research by using human-in-the-loop systems which empower human agency rather than outsource it.
Google DeepMind
London-based AI capabilities lab with a strong safety team, led by Demis Hassabis. Created AlphaGo, AlphaFold, and Gemini.
Gray Swan
For-profit company building tools that automatically assess the risks of AI models, while also developing its own models aiming to provide best-in-class safety and security.
OpenAI
San Francisco-based capabilities lab and creator of ChatGPT, led by Sam Altman. Over the course of 2024, roughly half of its then-employed AI safety researchers left the company.
Safe Superintelligence Inc. (SSI)
Research lab founded by Ilya Sutskever comprised of a small team of engineers and researchers working towards building a safe superintelligence.
80,000 Hours Career Guide
Regularly-updated article with motivation and advice around pursuing a career in AI safety.
Career support
80,000 Hours Job Board
Curated list of job posts around the world tackling pressing problems, including AI safety. Also has a newsletter.
AI Safety Google Group
Information about how to get into technical research, including updates on academic posts, grad school, and training programs.
AI Safety Quest
Helps people navigate the AI safety space with a welcoming human touch, offering personalized guidance and fostering collaborative study and project groups.
AISafety.com: Speak to an advisor
Directory of advisors offering free guidance calls to help you discover how best to contribute to AI safety, tailored to your skills and interests.
Effective Thesis
Empowering students to use their theses as a pathway to impact, including in AI safety. Lists research topic ideas and runs a fellowship coaching people working on them.
High Impact Professionals (HIP)
Supporting working professionals to maximize their positive impact through their talent directory and Impact Accelerator Program.
How to pursue a career in technical AI alignment
A guide written for people who are familiar with the arguments for the importance of AI alignment and are considering pursuing a career working on it.
Successif
Helping professionals transition to high-impact work by performing market research on impactful jobs and providing career mentoring, opportunity matching, and professional training.
Upgradable
Applied research lab helping existential safety advocates to systematically optimize their lives and work.
Alignment Research Center (ARC)
Research organization trying to understand how to formalize mechanistic explanations of neural network behavior.
Conceptual research
Alignment of Complex Systems Research Group (ACS)
Studying questions about multi-agent systems composed of humans and advanced AI. Based at Charles University, Prague.
Arbital
Wiki on AI alignment theory, mostly written by Eliezer Yudkowsky. Includes foundational concepts, open problems, and proposed solutions.
Center for Human-Compatible AI (CHAI)
Developing the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems. Led by Stuart Russell at UC Berkeley.
Dr. Roman V. Yampolskiy
Professor at the University of Louisville with a background in cybersecurity, and author of over 100 publications – including two books on AI safety.
Dylan Hadfield-Menell
Assistant professor at MIT working on agent alignment. Runs the Algorithmic Alignment Group.
John Wentworth
Independent alignment researcher working on selection theorems, abstraction, and agency.
Modeling Cooperation
Conducting research on the long-term future, focused on improving cooperation amid competition in the development of transformative AI.
Orthogonal
Formal alignment organization led by Tamsin Leake, focused on agent foundations. Also has a public Discord server.
Steve Byrnes's Brain-Like AGI Safety
Brain-inspired framework using insights from neuroscience and model-based reinforcement learning to guide the design of aligned AGI systems.
Team Shard
A small team of independent researchers trying to find reward functions which reliably instill certain values in agents.
Association for Long Term Existence and Resilience (ALTER)
Israeli research and advocacy nonprofit working to investigate, demonstrate, and foster useful ways to safeguard and improve the future of humanity.
Conceptual research, Advocacy, Governance
Softmax
Research organization dedicated to developing a theory of "organic alignment" to foster adaptive, non-hierarchical cooperation between humans and digital agents.
Conceptual research, Empirical research
AE Studio
Large team pursuing 'Neglected Approaches' to alignment, tackling the problem from multiple, often overlooked angles in both technical and policy domains.
Conceptual research, Empirical research, Governance
Formation Research
Nonprofit aiming to reduce lock-in risks by researching fundamental lock-in dynamics and power concentration.
Frontier AI Research (FAIR)
Argentine nonprofit conducting both theoretical and empirical research to advance frontier AI safety as a sociotechnical challenge.
MIT Algorithmic Alignment Group
Working towards better conceptual understanding, algorithmic techniques, and policies to make AI safer and more socially beneficial.
AI Alignment Forum
Hub for researchers to discuss all ideas related to AI safety. Discussion ranges from technical models of agency to the strategic landscape, and everything in between.
Conceptual research, Empirical research, Governance, Strategy
Center on Long-Term Risk (CLR)
Research, grants and community-building around AI safety, focused on conflict scenarios as well as technical and philosophical aspects of cooperation.
Conceptual research, Strategy, Funding
Aligned AI
Oxford-based startup attempting to use mathematical and theoretical techniques to achieve safe off-distribution generalization.
Empirical research
Apollo Research
Aiming to detect deception by designing AI model evaluations and conducting interpretability research to better understand frontier models. Also provides guidance to policymakers.
Cavendish Labs
AI safety (and pandemic prevention) research community based in a small town in Vermont, USA.
Conjecture
Alignment startup born out of EleutherAI, employing a “Cognitive Emulation” approach to build controllable large language models and tackle core AI safety challenges.
EquiStamp
Providing a platform that allows companies and individuals to evaluate the capabilities of AI models, so they know how much those models can be trusted.
Fund for Alignment Research (FAR.AI)
Ensuring AI systems are trustworthy and beneficial to society by incubating and accelerating research agendas too resource-intensive for academia but not yet ready for commercialisation.
Krueger AI Safety Lab (KASL)
AI safety research group at the University of Cambridge, led by David Krueger. Part of the Computational and Biological Learning Lab.
Meaning Alignment Institute (MAI)
Research organization applying their expertise in meaning and human values to AI alignment and post-AGI futures.
Model Evaluation & Threat Research (METR)
Researches, develops, and runs cutting-edge tests of AI capabilities, including broad autonomous capabilities and the ability of AI systems to conduct AI R&D.
NYU Alignment Research Group (ARG)
Group of researchers at New York University doing empirical work with language models aiming to address longer-term concerns about the impacts of deploying highly-capable AI systems.
Ought
Product-driven research lab developing mechanisms for delegating high-quality reasoning to ML systems. Built Elicit, an AI assistant for researchers and academics.
Redwood Research
Nonprofit researching interpretability and alignment. Also consults governments and AI labs on AI safety practices.
Timaeus
Research nonprofit using singular learning theory to develop the science of understanding how training data determines model behavior.
Transluce
Nonprofit research lab building open source, scalable, AI-driven tools to understand and analyze AI systems and steer them in the public interest.
University of Cambridge Computational and Biological Learning Lab (CBL)
Research group using engineering approaches to understand the brain and to develop artificial learning systems.
AI Objectives Institute (AOI)
Nonprofit research lab building AI tools to defend and enhance human agency – by researching and experimenting with novel AI capabilities.
Empirical research, Capabilities research
EleutherAI
Open-source research lab focused on interpretability and alignment. Operates primarily through a public Discord server, where research is discussed and projects are coordinated.
DeepMind Safety Research
Blog from the Google DeepMind safety team discussing research ideas about building AI safely and responsibly.
Empirical research, Conceptual research
Center for AI Safety (CAIS)
San Francisco-based nonprofit conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Empirical research, Conceptual research, Advocacy
Center for Long-Term Cybersecurity (CLTC)
UC Berkeley research center bridging academic research and practical policy needs in order to anticipate and address emerging cybersecurity challenges.
Empirical research, Governance, Strategy
AI Futures Project
Small research group forecasting the future of AI. Created 'AI 2027', a detailed forecast scenario projecting the development of artificial superintelligence.
Forecasting
Forecasting Research Institute (FRI)
Advancing the science of forecasting for the public good by working with policymakers and nonprofits to design practical forecasting tools, and test them in large experiments.
Manifold Markets
Prediction market platform on many topics using play money (called "mana"). Includes markets on AI and AI safety.
Metaculus
Well-calibrated forecasting platform covering a wide range of topics, including AI and AI safety.
Quantified Uncertainty Research Institute (QURI)
Advancing forecasting and epistemics to improve the long-term future of humanity. Produces research and software.
Transformative Futures Institute (TFI)
Exploring the use of underutilized foresight methods and tools in order to better anticipate societal-scale risks from AI.
Epoch AI
Research institute investigating key trends and questions that will shape the trajectory and governance of AI.
Forecasting, Strategy
AE Grants
Empowering innovators and scientists to increase human agency by creating the next generation of responsible AI. Providing support, resources, and open-source software.
Funding
AI Risk Mitigation (ARM) Fund
Aiming to reduce catastrophic risks from advanced AI through grants towards technical research, policy, and training programs for new researchers.
AI2050
Philanthropic initiative supporting researchers working on key opportunities and hard problems that are critical to get right for society to benefit from AI. Proposals by invite only.
AISafety.com: Donation Guide
Regularly-updated guide on how to donate most effectively with the funding and time you have available.
AISafety.com: Funders
Comprehensive and up-to-date directory of sources of financial support for AI safety projects, ranging from grant programs to venture capitalists.
An Overview of the AI Safety Funding Situation
An analysis of the main funding sources in AI safety over time, useful for gaining a better understanding of what opportunities exist in the space.
Center on Long-Term Risk (CLR): Fund
Supports projects and individuals aiming to address worst-case suffering risks from the development and deployment of advanced AI systems.
Cooperative AI Foundation (CAIF)
Charity foundation backed by a large philanthropic commitment supporting research into improving cooperative intelligence of advanced AI.
Donations List Website
Minimalistic site focused on understanding donations, donors, donees, and the thoughts and discussions that go alongside donation decisions.
EA Infrastructure Fund (EAIF)
Aiming to increase the impact of effective altruism projects by increasing their access to talent, capital, and knowledge.
Ergo Impact
Helping major donors find, fund, and scale the most promising solutions to the world’s most pressing problems.
Foresight Institute: Funding
Funding projects in 1) automating research and forecasting, 2) security technologies, 3) neurotech, and 4) safe multipolar human-AI scenarios.
Future of Life Foundation (FLF)
Accelerator aiming to steer transformative technology towards benefiting life and away from extreme large-scale risks.
Future of Life Institute (FLI): Fellowships
Fellowships include PhD and postdoctoral fellowships in technical AI safety, and a PhD fellowship in US-China AI governance.
GiveWiki
Crowdsourced charity evaluation and philanthropic networking platform that lets funders, donors, and scouts discover and support high-impact projects.
Giving What We Can (GWWC)
Community of donors who have pledged to donate a significant portion of their income to highly effective charities, including those in AI safety.
Lightspeed Grants
Fast funding for projects that have a chance of substantially changing humanity's future trajectory for the better. Last funding round was 2023.
Lionheart Ventures
VC firm investing in ethical founders developing transformative technologies that have the potential to impact humanity on a meaningful scale.
Long-Term Future Fund (LTFF)
Making grants addressing global catastrophic risks, promoting longtermism, and otherwise increasing the likelihood that future generations will flourish.
Longview Philanthropy
Devises and executes bespoke giving strategies for major donors, working with them at every stage of their giving journey.
Macroscopic Ventures
Swiss nonprofit focused on reducing suffering risks, including that posed by catastrophic AI misuse and conflict. Previously known as Polaris Ventures.
Manifund
Marketplace for new charities, including in AI safety. Find impactful projects, buy impact certificates, and weigh in on what gets funded.
Meta Charity Funders
Network of donors funding charitable projects that work one level removed from direct impact, often cross-cutting between cause areas.
Mythos Ventures
Aiming to empower founders building a radically better world with safe AI systems by investing in ambitious teams with defensible strategies that can scale to post-AGI.
Nonlinear Network
Funder network for AI existential risk reduction. Applications are shared with donors, who then reach out if they are interested.
Open Philanthropy (OP)
The largest funder in the existential risk space, backed by Cari Tuna and Dustin Moskovitz, co-founder of Facebook and Asana.
Saving Humanity from Homo Sapiens (SHfHS)
Small organization with a long history of finding the people doing the best work to prevent human-created existential risks and financially supporting them.
Survival and Flourishing Fund (SFF)
The second largest funder in AI safety, using an algorithm and meeting procedure called “The S-process” to allocate grants.
The Navigation Fund
Offers grants to high-impact organizations and projects that are taking bold action and making significant changes.
U.S. National Science Foundation (NSF): Safe Learning-Enabled Systems
Funds research into the design and implementation of learning-enabled systems in which safety is ensured with high levels of confidence.
AI Governance & Safety Canada (AIGS Canada)
Nonpartisan nonprofit and community of people across Canada, working to ensure that advanced AI is safe and beneficial for all.
Governance
AI Policy Institute (AIPI)
Channeling public concern into effective regulation by engaging with policymakers, media, and the public to ensure AI is developed responsibly and transparently.
Beijing Institute of AI Safety and Governance (Beijing-AISI)
R&D institution dedicated to developing AI safety and governance frameworks to provide a safe foundation for AI innovation and applications.
Center for Security and Emerging Technology (CSET)
Georgetown University think tank providing decision-makers with data-driven analysis on the security implications of emerging technologies.
Centre for Future Generations (CFG)
Brussels think tank focused on helping governments anticipate and responsibly govern the societal impacts of rapid technological change.
Centre for the Governance of AI (GovAI)
AI governance research group at Oxford, producing research tailored towards decision-makers and running career development programmes.
European AI Office
Established within the European Commission as the centre of AI expertise, playing a key role in implementing the AI Act.
Institute for Law & AI (LawAI)
Think tank researching and advising on the legal challenges posed by AI, premised on the idea that sound legal analysis will promote security, welfare, and the rule of law.
SaferAI
French nonprofit working to incentivize responsible AI practices through policy recommendations, research, and risk assessment tools.
Simon Institute for Longterm Governance
Geneva-based think tank working to foster international cooperation in mitigating catastrophic risks from AI.
The Future Society (TFS)
Nonprofit based in the US and Europe aiming to define, design, and deploy projects that address institutional barriers in AI governance.
U.S. AI Safety Institute (USAISI)
US government organization working to advance the science, practice, and adoption of AI safety across the spectrum of risks.
UK AI Security Institute (UK AISI)
UK government organisation conducting research and building infrastructure to test the safety of advanced AI and measure its impacts. Also working to shape global policy.
Vista Institute for AI Policy
Promoting informed policymaking to navigate emerging challenges from AI through research, knowledge-sharing, and skill building.
Center for AI Policy (CAIP)
Nonpartisan research organization developing policy and conducting advocacy to mitigate catastrophic risks from AI.
Governance, Advocacy
ControlAI
Nonprofit fighting to keep humanity in control of AI by developing policy and conducting public outreach.
International AI Governance Alliance (IAIGA)
Nonprofit dedicated to establishing an independent global organization capable of effectively mitigating extinction risks from AI and fairly distributing its economic benefits to all.
Machine Intelligence Research Institute (MIRI)
The original AI safety technical research organization, co-founded by Eliezer Yudkowsky. Now focusing on policy and public outreach.
Governance, Advocacy, Conceptual research
Effective Institutions Project
Advisory and research organization focused on improving the way institutions make decisions on critical global challenges.
Governance, Strategy
Institute for AI Policy and Strategy (IAPS)
Research and field-building organization focusing on policy and standards, compute governance, and international governance and China.
International Association for Safe & Ethical AI (IASEAI)
Nonprofit aiming to ensure that AI systems operate safely and ethically, and to shape policy, promote research, and build understanding and community around this goal.
AI Policy Weekly
Weekly newsletter from the Center for AI Policy (CAIP). Each issue explores three key AI policy developments for professionals in the field.
Newsletter
AI Safety Events & Training
Weekly newsletter listing newly-announced AI safety events and training programs, both online and in-person.
AI Safety Newsletter
Newsletter published every few weeks discussing developments in AI and AI safety. No technical background required.
AI Safety in China
Newsletter from Concordia AI, a Beijing-based social enterprise, providing updates on AI safety developments in China.
Import AI
Weekly developments in AI research (including governance) written by Jack Clark, co-founder of Anthropic.
Transformer
Weekly briefing and occasional analyses of what matters in AI and AI policy. Written by Shakeel Hashim, ex-news editor at The Economist.
AI Alignment Awards
Ran research paper/essay-writing contests to advance alignment. Last round was 2023.
No longer active
AI Safety Communications Centre (AISCC)
Connected journalists to AI safety experts and resources.
AI Safety Hub (ASH)
Supported individuals pursuing a career in AI safety by running AI Safety Hub Labs, where participants complete their first research project. Now run by LASR Labs.
AI Safety Ideas
Crowdsourced repository of possible research projects and testable hypotheses, run by Apart Research.
AI Safety Support (AISS) Newsletter
Listed opportunities in AI alignment. Dropped due to lack of a maintainer.
Campaign for AI Safety (CAS)
Increased public understanding of AI safety and called for strong laws to stop the development of dangerous and overly powerful AI. Now merged with the Existential Risk Observatory.
From AI to ZI
Blog on AI safety work by a PhD mathematician and AI safety researcher, Robert Huben. Currently semi-dormant.
Future of Humanity Institute (FHI)
Longtermist/x-risk research organization led by Nick Bostrom at the University of Oxford. Shut down after university politics prevented it from fundraising.
ML & AI Safety Updates
Weekly podcast, YouTube and newsletter with updates on AI safety. Dropped due to organizational reprioritization.
Machine Learning for Alignment Bootcamp (MLAB)
Bootcamp aimed at teaching ML relevant to doing alignment research. Run by Redwood Research for two iterations in 2022.
OpenBook
Database of grants in effective altruism. No longer maintained.
Preamble Windfall Foundation
Funding organization aiming to minimize the risk of AI systems. Died for reasons unknown to us.
Stop AGI
Website communicating the risks of god-like AI to the public and offering proposals on preventing its development.
Superlinear Prizes
Decentralized bounty platform for existential risk reduction and other effective altruist cause areas.
generative.ink
The blog of janus the GPT cyborg. Last post was published in 2023.
80,000 Hours Podcast
In-depth conversations about the world’s most pressing problems (including AI safety) and what you can do to help solve them.
Podcast
AI X-risk Research Podcast (AXRP)
Interviews with (mostly technical) AI safety researchers about their research, aiming to get a sense of why it was written and how it might reduce existential risk from AI.
Dwarkesh Podcast
Well-researched interviews with influential intellectuals going in-depth on AI, technology, and their broader societal implications. Hosted by Dwarkesh Patel.
For Humanity
Podcast by John Sherman, a Peabody and Emmy Award-winning former investigative journalist, aiming to speak to regular, non-tech people about the existential threat AGI poses.
Future of Life Institute (FLI) Podcast
Interviews with existential risk researchers, policy experts, philosophers, and a range of other influential thinkers.
The Cognitive Revolution
Biweekly podcast where host Nathan Labenz interviews AI innovators and thinkers, diving into the transformative impact AI will likely have in the near future.
AI Safety Asia (ASIA)
Platform for connecting junior researchers and seasoned civil servants from Southeast Asia with senior AI safety researchers from developed countries.
Research support
Alignment Ecosystem Development (AED)
Building and maintaining key online resources for the AI safety community, including AISafety.com. Volunteers welcome.
Ashgro
Providing fiscal sponsorship to AI safety projects, saving them time and allowing them to access more philanthropic funding.
Berkeley Existential Risk Initiative (BERI)
Providing flexible funding and operations support to university research groups working on existential risk, enabling projects otherwise hindered by university administration.
Catalyze Impact
Incubating early-stage AI safety research organizations. The program involves co-founder matching, mentorship, and seed funding, culminating in an in-person building phase.
Centre for Enabling EA Learning & Research (CEEALAR aka EA Hotel)
Free or subsidised accommodation and board in Blackpool, UK, for people working on/transitioning to working on global catastrophic risks.
China AI Development and Safety Network
Network of industry, academia, and research institutions within China, representing China in conducting exchanges and cooperation with AI research institutions worldwide.
Constellation
Center for collaborative research in AI safety, supporting promising work through fellowships, workshops, and hosting individuals and teams.
European Network for AI Safety (ENAIS)
Community of researchers and policymakers from over 13 countries across Europe, united in their efforts to advance AI safety.
Future Matters
Provides strategy consulting services to clients trying to advance AI safety through policy, politics, coalitions or social movements.
Impact Ops
Providing consultancy and hands-on support to help high-impact organizations upgrade their operations.
Lightcone Infrastructure
Nonprofit maintaining LessWrong, the Alignment Forum, and Lighthaven (an event space in Berkeley, USA).
London Initiative for Safe AI (LISA)
Coworking space hosting organizations (including BlueDot Impact, Apollo, Leap Labs), acceleration programs (including MATS, ARENA), and independent researchers.
Nonlinear
"Means-neutral" AI safety organization, doing miscellaneous work including offering bounties on small-to-large AI safety projects and running a funder network.
Safe AI Forum (SAIF)
Fostering responsible governance of AI to reduce catastrophic risks through shared understanding and collaboration among key global actors.
Third Opinion
Helping concerned individuals working at the frontier of AI get expert opinions on their questions, anonymously and securely.
Apart Research
Non-profit AI safety research lab hosting open-to-all research sprints, publishing papers, and incubating talented researchers to make AI safe and beneficial for humanity.
Research support, Career support
Arkose
AI safety field-building nonprofit. Runs support programs facilitating technical research, does outreach, and curates educational resources.
Arcadia Impact
Runs various projects aimed at education, skill development, and creating pathways into impactful careers.
Research support, Training and education
AI Digest
Concise visual digest of important trends in AI, grounded in concrete examples of what AI models can do right now.
Resource
AI Safety Map Anki Deck
Flashcards for helping to learn and memorize the main organizations, projects, and programs currently operating in the AI safety space.
AI Safety Support (AISS)
Field-building organization now chiefly serving as the home for an extensive resources list called Lots of Links.
AI Timeline
Visual overview of the major events in AI over the last decade, from cultural trends to technical advancements.
AI Watch
Database tracking people, organisations, and “products” in the AI safety community, serving as a reference for positions, affiliations, and related data.
AISafety.com
Hub for key resources for the AI safety community, including directories of courses, jobs, upcoming events and training programs etc.
Effective Altruism Domains
Directory of domains freely available to be used for high-impact projects, including those contributing to AI safety.
Effective Altruism Forum
Forum on doing good as effectively as possible, including AI safety. Also has a podcast featuring text-to-speech narrations of top posts.
Governance Map
Cartoon map displaying some key organizations, projects, and policies in the global AI governance ecosystem.
LessWrong
Online forum dedicated to improving human reasoning, containing a lot of AI safety content. Also has a podcast featuring text-to-speech narrations of top posts.
AI Lab Watch
Collects actions for frontier AI labs to avert extreme risks from AI, then evaluates particular labs accordingly.
Strategy
AI-Plans
Contributable compendium of alignment plans, ranked and scored alongside their problems. Runs regular hackathons.
Center for AI Risk Management & Alignment (CARMA)
Conducting interdisciplinary research supporting global AI risk management. Also produces policy and technical research.
Center for Long-term AI (CLAI)
Interdisciplinary research organization based in China exploring contemporary and long-term impacts of AI on society and ecology.
Centre for the Study of Existential Risk (CSER)
Interdisciplinary research centre at the University of Cambridge dedicated to the study and mitigation of existential risks.
Forethought Research
Small research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems.
Global Catastrophic Risk Institute (GCRI)
Small think tank developing solutions for reducing existential risk by leveraging both scholarship and the demands of real-world decision-making.
Global Partnership on AI (GPAI)
International initiative with 44 member countries working to implement human-centric, safe, secure, and trustworthy AI embodied in the principles of the OECD Recommendation on AI.
Global Priorities Institute (GPI)
University of Oxford research center conducting foundational research to inform the decision-making of individuals and institutions seeking to do as much good as possible.
Leverhulme Centre for the Future of Intelligence (CFI)
Interdisciplinary research centre at the University of Cambridge exploring the nature, ethics, and impact of AI.
Median Group
Research nonprofit working on models of past and future progress in AI, intelligence enhancement, and sociology related to existential risks.
Narrow Path
A series of proposals developed by ControlAI intended for action by policymakers in order for humanity to survive artificial superintelligence.
Partnership on AI (PAI)
Convening academic, civil society, industry, and media organizations to create solutions so that AI advances positive outcomes for people and society.
Rethink Priorities
Nonprofit researching solutions and strategies, mobilizing resources, and uncovering actionable insights to safeguard a flourishing present and future.
The Compendium
Living document aiming to present a coherent worldview explaining the race to AGI and extinction risks and what to do about them – in a way that is accessible to non-technical readers.
AI Governance and Safety Institute (AIGSI)
Aiming to improve institutional response to existential risk from future AI systems by conducting research and outreach, and developing educational materials.
Strategy, Advocacy
Convergence Analysis
Building a foundational series of sociotechnical reports on key AI scenarios and governance recommendations, and conducting AI awareness efforts to inform the general public.
AI Impacts
Answering decision-relevant questions about the future of AI, including through research, a wiki, and expert surveys. Run by MIRI.
Strategy, Forecasting
Center for Long-Term Resilience (CLTR)
Think tank aiming to transform global resilience to extreme risks by improving relevant governance, processes, and decision-making.
Strategy, Governance
AI Safety Camp (AISC)
3-month part-time online research program with mentorship, aimed at helping people who want to work on AI safety team up together on concrete projects.
Training and education
AI Safety Fundamentals (AISF)
Runs the standard introductory courses, each three months long and split into two tracks: Alignment and Governance. Also runs shorter intro courses.
AI Safety Hungary
Supports students and professionals in contributing to the safe development of AI, including through introductory seminars.
AI Safety Initiative at Georgia Tech (AISI)
Georgia Tech community hosting fellowships and research projects investigating open problems in AI safety, including specification, robustness, interpretability, and governance.
AI Safety Student Team (AISST)
Group of Harvard students conducting AI safety research and running fellowships, workshops, and reading groups.
AI Safety, Ethics and Society (AISES)
Course from the Center for AI Safety (CAIS) covering a wide range of risks while leveraging concepts and frameworks from existing research fields to analyze AI safety.
AISafety.com: Courses
Comprehensive, up-to-date directory of AI safety curricula and reading lists for self-led learning at all levels.
AISafety.info
Accessible guide to AI safety for those new to the space, in the form of a comprehensive FAQ and AI safety chatbot. Project of Rob Miles.
Alignment Research Engineer Accelerator (ARENA)
4–5 week ML engineering upskilling program, focusing on alignment. Aims to provide individuals with the skills, community, and confidence to contribute directly to technical AI safety.
Apart Sprints
Short hackathons and challenges, both online and in-person around the world, focused on important questions in AI safety.
Cambridge AI Safety Hub (CAISH)
Network of students and professionals in Cambridge conducting research, running educational and research programs, and creating a vibrant community of people with shared interests.
Cambridge Boston Alignment Initiative (CBAI)
Helping students get into AI safety research via upskilling programs and fellowships. Supports HAISST and MAIA.
Cambridge ERA:AI Fellowship
In-person, paid, 8-week summer research fellowship at the University of Cambridge for aspiring AI safety and governance researchers.
Center for Human-Compatible AI (CHAI): Internship
Internship at UC Berkeley aimed at training highly qualified postdoctoral researchers to carry out research in human-compatible AI.
Center on Long-Term Risk (CLR): Summer Research Fellowship
2-3 month summer research fellowship in London working on challenging research questions relevant to reducing suffering in the long-term future.
Foresight Fellowship
1-year program catalyzing collaboration among young scientists, engineers, and innovators working to advance technologies for the benefit of life.
Future Impact Group (FIG) Fellowship
Remote, part-time research opportunities in AI safety, policy, and philosophy. Also provides ongoing support, including coworking sessions, issue troubleshooting, and career guidance.
Global Challenges Project (GCP)
Intensive 3-day workshops for students to explore the foundational arguments around risks from advanced AI (and biotechnology).
Human-aligned AI Summer School
4-day program held in Prague, Czech Republic, teaching alignment research methodology through talks, workshops, and discussions.
Impact Academy: Global AI Safety Fellowship
Fully-funded research program connecting exceptional STEM researchers with full-time placement opportunities at AI safety labs and organizations.
London AI Safety Research (LASR) Labs
12-week technical research program aiming to assist individuals in transitioning to full-time careers in AI safety.
ML Alignment & Theory Scholars (MATS)
Research program connecting talented scholars with top mentors in AI safety. Involves 10 weeks onsite mentored research in Berkeley, and, if selected, 4 months extended research.
ML4Good
10-day intensive in-person bootcamps upskilling participants in technical AI safety research. Generally held in Europe and South America.
Mentorship for Alignment Research Students (MARS)
Research program connecting aspiring researchers with experienced mentors to conduct AI safety (technical or policy) research for 2–3 months.
Non-Trivial
Aiming to empower bright high schoolers to start solving the world's most pressing problems through various research programs.
Pivotal Research Fellowship
Annual 9-week program designed to enable promising researchers to produce impactful research and accelerate their careers in AI safety (or biosecurity).
Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS): Summer Research Fellowship
Pairs fellows from disciplines studying complex and intelligent behaviour in natural and social systems with mentors from AI alignment.
Stanford AI Alignment (SAIA)
Student group and research community under SERI. Accelerating students into AI safety careers, building the community at Stanford, and conducting research.
Stanford Existential Risk Initiative (SERI)
Working to preserve the future of humanity, including through research, courses, and fellowships at Stanford University.
Supervised Program for Alignment Research (SPAR)
Virtual, part-time research program offering early-career individuals and professionals the chance to engage in AI safety research for 3 months.
Talos Fellowship
7-month program enabling ambitious graduates to launch EU policy careers reducing risks from AI. Fellows participate in one of two tracks: training or placement.
UChicago Existential Risk Laboratory (XLab) Fellowship
10-week summer research fellowship giving undergraduate and graduate students the opportunity to produce high impact research on various emerging threats, including AI.
WhiteBox Research
Filipino nonprofit aiming to develop more AI interpretability and safety researchers, particularly in Southeast Asia.
Safe AI London (SAIL)
Supports individuals in London interested in working on AI safety by raising awareness of the risks, especially in universities, and providing high-quality resources and support.
Training and education, Research support
Centre pour la Sécurité de l'IA (CeSIA)
French AI safety nonprofit dedicated to education (including university courses and ML4Good bootcamps), advocacy (including events and publications), and research.
Training and education, Strategy, Advocacy
AI Explained
YouTuber discussing the latest AI developments as they happen, offering clear explanations and in-depth analysis of important research and events.
Video
AI Safety Videos
Comprehensive directory of AI safety video content, from beginner-friendly introductions to in-depth expert talks.
Rational Animations
Animated videos aiming to foster good thinking, promote altruistic causes, and help ensure humanity's future goes well – particularly regarding AI safety.
Robert Miles
YouTube channel featuring visually-rich explanations of AI safety concepts; highlighting potential risks, clarifying advanced topics, and advocating for responsible development.
Siliconversations
YouTube channel explaining (mostly) AI safety concepts through entertaining stickman videos.
The Inside View
Interviews with AI safety researchers, explainers, fictional stories of concrete threat models, and paper walk-throughs.