Over the past five years, the AI Safety Reading Group has read and discussed a broad selection of the AI safety literature. During this time, a community of engaged and competent participants has emerged. We are now expanding beyond the reading group to begin working toward practical contributions to solving the Alignment Problem.
Ajeya Cotra suggests that alignment work on large models should be genuinely useful. We take this to its logical conclusion: the work should result in a product. This makes AISafety.com something between a startup and a non-profit.
The object-level task AISafety.com will work on is aligning business processes with business interests. We expect alignment work to be very useful in this area, while the area itself provides a deep and fertile domain for alignment research.
The goal of AISafety.com is to contribute to solving the Alignment Problem, not to maximize shareholder value. Whenever these goals conflict, we will choose the Alignment Problem every time, though we envision that the two could be complementary.
More details will be forthcoming as the concept is refined.