Reading Group

The Reading Group meets weekly, usually on Wednesdays at 19:45 UTC. To join, add “soeren.elverlin” on Skype. Past and upcoming readings are listed below (dates in DD-MM-YYYY format).

| Title | Author(s) | Date |
|---|---|---|
| Age of Em (Chapter 27) | Robin Hanson | 27-09-2017 |
| Meditations on Moloch | Scott Alexander | 20-09-2017 |
| Incorrigibility in the CIRL Framework | Ryan Carey | 13-09-2017 |
| OpenAI Makes Humanity Less Safe | Ben Hoffman | 06-09-2017 |
| Open Problems Regarding Counterfactuals: An Introduction For Beginners | Alex Appel | 30-08-2017 |
| A Game-Theoretic Analysis of the Off-Switch Game | Tobias Wängberg et al. | 23-08-2017 |
| Benevolent Artificial Anti-Natalism | Thomas Metzinger | 16-08-2017 |
| Where the Falling Einstein Meets the Rising Mouse | Scott Alexander | 09-08-2017 |
| Superintelligence Risk Project | Jeff Kaufman | 03-08-2017 |
| Staring into the Singularity | Eliezer Yudkowsky | 26-07-2017 |
| Artificial Intelligence and the Future of Defense | Matthijs Maas et al. | 19-07-2017 |
| Prosaic AI Alignment | Paul Christiano | 12-07-2017 |
| A model of the Machine Intelligence Research Institute | Sindy Li | 05-07-2017 |
| Deep Reinforcement Learning from Human Preferences | Paul Christiano et al. | 28-06-2017 |
| –Holiday– | | 21-06-2017 |
| The Singularity: A Philosophical Analysis (2/2) | David J. Chalmers | 14-06-2017 |
| The Singularity: A Philosophical Analysis (1/2) | David J. Chalmers | 07-06-2017 |
| Why Tool AIs want to be Agent AIs | Gwern Branwen | 31-05-2017 |
| A Map: AGI Failure Modes and Levels | Alexey Turchin | 24-05-2017 |
| Neuralink and the Brain’s Magical Future | Tim Urban | 17-05-2017 |
| The Myth of Superhuman AI | Kevin Kelly | 10-05-2017 |
| Merging our brains with machines won’t stop the rise of the robots | Michael Milford | 03-05-2017 |
| Building Safe AI | Andrew Trask | 26-04-2017 |
| AGI Safety Solutions Map | Alexey Turchin | 19-04-2017 |
| Strong AI Isn’t Here Yet | Sarah Constantin | 12-04-2017 |
| Robotics: Ethics of artificial intelligence | Stuart Russell et al. | 05-04-2017 |
| Using machine learning to address AI risk | Jessica Taylor | 29-03-2017 |
| Racing to the Precipice: a Model of Artificial Intelligence Development | Armstrong et al. | 22-03-2017 |
| Politics is Upstream of AI | Raymond Brannen | 15-03-2017 |
| Coherent Extrapolated Volition | Eliezer Yudkowsky | 08-03-2017 |
| –Cancelled due to illness– | | 01-03-2017 |
| Towards Interactive Inverse Reinforcement Learning | Armstrong & Leike | 22-02-2017 |
| Notes from the Asilomar Conference on Beneficial AI | Scott Alexander | 15-02-2017 |
| My current take on the Paul-MIRI disagreement on alignability of messy AI | Jessica Taylor | 08-02-2017 |
| How feasible is the rapid development of Artificial Superintelligence? | Kaj Sotala | 01-02-2017 |
| Response to Cegłowski on superintelligence | Matthew Graves | 25-01-2017 |
| Disjunctive AI scenarios: Individual or collective takeoff? | Kaj Sotala | 18-01-2017 |
| Policy Desiderata in the Development of Machine Superintelligence | Nick Bostrom | 11-01-2017 |
| Concrete Problems in AI Safety | Dario Amodei et al. | 04-01-2017 |
| –Holiday– | | 28-12-2016 |
| A Wager on the Turing Test: Why I Think I Will Win | Ray Kurzweil | 21-12-2016 |
| Responses to Catastrophic AGI Risk: A Survey | Sotala & Yampolskiy | 14-12-2016 |
| Discussion of ‘Superintelligence: Paths, Dangers, Strategies’ | Neil Lawrence | 07-12-2016 |
| Davis on AI capability and motivation | Rob Bensinger | 30-11-2016 |
| Ethical guidelines for a Superintelligence | Ernest Davis | 22-11-2016 |
| Superintelligence: Chapter 15 | Nick Bostrom | 15-11-2016 |
| Superintelligence: Chapter 14 | Nick Bostrom | 09-11-2016 |
| Superintelligence: Chapter 11 | Nick Bostrom | 01-11-2016 |
| Superintelligence: Chapter 9 (2/2) | Nick Bostrom | 25-10-2016 |
| Superintelligence: Chapter 9 (1/2) | Nick Bostrom | 18-10-2016 |
| Superintelligence: Chapter 8 | Nick Bostrom | 11-10-2016 |
| Superintelligence: Chapter 7 | Nick Bostrom | 04-10-2016 |
| Superintelligence: Chapter 6 | Nick Bostrom | 27-09-2016 |
| Superintelligence: Chapter 5 | Nick Bostrom | 20-09-2016 |
| Taxonomy of Pathways to Dangerous Artificial Intelligence | Roman V. Yampolskiy | 13-09-2016 |
| Unethical Research: How to Create a Malevolent Artificial Intelligence | Roman V. Yampolskiy | 06-09-2016 |
| Superintelligence: Chapter 4 | Nick Bostrom | 30-08-2016 |
| Superintelligence: Chapter 3 | Nick Bostrom | 23-08-2016 |
| Superintelligence: Chapters 1–2 | Nick Bostrom | 16-08-2016 |
| Why I am skeptical of risks from AI | Alexander Kruel | 09-08-2016 |
| –Break due to family extension– | | 02-08-2016 |
| –Break due to family extension– | | 26-07-2016 |
| Intelligence Explosion FAQ | Luke Muehlhauser | 19-07-2016 |
| A toy model of the treacherous turn | Stuart Armstrong | 12-07-2016 |
| The Fable of the Dragon-Tyrant | Nick Bostrom | 05-07-2016 |
| The Fun Theory Sequence | Eliezer Yudkowsky | 28-06-2016 |
| Intelligence Explosion Microeconomics | Eliezer Yudkowsky | 21-06-2016 |
| Strategic Implications of Openness in AI Development | Nick Bostrom | 14-06-2016 |
| That Alien Message | Eliezer Yudkowsky | 07-06-2016 |
| The Value Learning Problem | Nate Soares | 31-05-2016 |
| Decisive Strategic Advantage without a Hard Takeoff | Kaj Sotala | 24-05-2016 |