Reading Group

The Reading Group meets weekly, usually on Wednesdays at 18:45 UTC. To join, add “soeren.elverlin” on Skype.

We usually start with small talk and a round of introductions, after which the host summarizes the paper for roughly 20 minutes. The summary is also uploaded to our YouTube channel. A discussion follows (of both the article and related topics), and finally we choose a paper to read the following week.

Also check out our Facebook Group.

MIRI’s Strategic Background Malo Bourgon 22-08-2018
The Malicious Use of AI Miles Brundage et al. 15-08-2018
The Learning-Theoretic AI Alignment Research Agenda Vadim Kosoy 09-08-2018
No Basic AI Drives and A Rebuttal to Omohundro’s ‘Basic A.I. Drives’ Alexander Kruel and Scott Jackisch 01-08-2018
The Basic AI Drives Stephen Omohundro 24-07-2018
AI and compute / Interpreting AI Compute trends Amodei et al., Ryan Carey 18-07-2018
Learning which reward to maximise Stuart Armstrong et al. 11-07-2018
AlphaGo Zero and the Foom Debate Eliezer Yudkowsky 04-07-2018
The Hanson-Yudkowsky AI-Foom Debate (2/2) Kaj Sotala 28-06-2018
The Hanson-Yudkowsky AI-Foom Debate (1/2) Kaj Sotala 20-06-2018
Taking AI Risk Seriously Andrew Critch 14-06-2018
Current thoughts on Paul Christiano’s research agenda Jessica Taylor 06-06-2018
Challenges to Christiano’s capability amplification proposal Eliezer Yudkowsky 30-05-2018
Long-term strategies for ending existential risk from fast takeoff Daniel Dewey 24-05-2018
Machines that Think Toby Walsh 16-05-2018
Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm David Silver et al. 10-05-2018
Iterated Distillation and Amplification Ajeya Cotra 02-05-2018
Deciphering China’s AI Dream Jeffrey Ding 24-04-2018
Why the Singularity is not a Singularity Edward Felten 18-04-2018
The Ethics of Artificial Intelligence Yudkowsky and Bostrom 22-03-2018
An Untrollable Mathematician Abram Demski 14-03-2018
Takeoff Speeds Paul Christiano 07-03-2018
We’re told to fear robots. But why do we think they’ll turn on us? Steven Pinker 01-03-2018
Cognitive Biases Potentially Affecting Judgment of Global Risks Eliezer Yudkowsky 21-02-2018
Goodhart Taxonomy Scott Garrabrant 13-02-2018
An AI Race for Strategic Advantage: Rhetoric and Risks Seán S. Ó hÉigeartaigh et al. 07-02-2018
Reply to Bostrom’s arguments for a hard takeoff Brian Tomasik 31-01-2018
Superintelligence as a Cause or Cure for Risks of Astronomical Suffering Kaj Sotala et al. 24-01-2018
Impossibility of deducing preferences and rationality from human policy Stuart Armstrong et al. 17-01-2018
On the Promotion of Safe and Socially Beneficial Artificial Intelligence Seth Baum 09-01-2018
Refuting Bostrom’s Superintelligence Argument Sebastian Benthall 03-01-2018
Logical Induction (1+7) Scott Garrabrant et al. 27-12-2017
Conceptual Confusions in Assessing AGI Chris Cooper 20-12-2017
Disjunctive Scenarios of Catastrophic AI Risk (2/2) Kaj Sotala 06-12-2017
Disjunctive Scenarios of Catastrophic AI Risk (1/2) Kaj Sotala 01-12-2017
Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence Alexey Turchin 22-11-2017
Good and safe uses of AI Oracles Stuart Armstrong 08-11-2017
Positively shaping the development of artificial intelligence Robert Wiblin 01-11-2017
There is no Fire Alarm for Artificial General Intelligence Eliezer Yudkowsky 25-10-2017
Fitting Values to Inconsistent Humans Stuart Armstrong 18-10-2017
Age of Em (Intelligence Explosion) Robin Hanson 11-10-2017
Age of Em (Chapter 27) Robin Hanson 27-09-2017
Meditations on Moloch Scott Alexander 20-09-2017
Incorrigibility in the CIRL Framework Ryan Carey 13-09-2017
OpenAI Makes Humanity Less Safe Ben Hoffman 06-09-2017
Open Problems Regarding Counterfactuals: An Introduction For Beginners Alex Appel 30-08-2017
A Game-Theoretic Analysis of the Off-Switch Game Tobias Wängberg et al. 23-08-2017
Benevolent Artificial Anti-Natalism Thomas Metzinger 16-08-2017
Where the Falling Einstein Meets the Rising Mouse Scott Alexander 09-08-2017
Superintelligence Risk Project Jeff Kaufman 03-08-2017
Staring into the Singularity Eliezer Yudkowsky 26-07-2017
Artificial Intelligence and the Future of Defense Matthijs Maas et al. 19-07-2017
Prosaic AI Alignment Paul Christiano 12-07-2017
A model of the Machine Intelligence Research Institute Sindy Li 05-07-2017
Deep Reinforcement Learning from Human Preferences Paul Christiano et al. 28-06-2017
–Holiday– 21-06-2017
The Singularity: A Philosophical Analysis (2/2) David J. Chalmers 14-06-2017
The Singularity: A Philosophical Analysis (1/2) David J. Chalmers 07-06-2017
Why Tool AIs want to be Agent AIs Gwern Branwen 31-05-2017
A Map: AGI Failure Modes and Levels Alexey Turchin 24-05-2017
Neuralink and the Brain’s Magical Future Tim Urban 17-05-2017
The Myth of Superhuman AI Kevin Kelly 10-05-2017
Merging our brains with machines won’t stop the rise of the robots Michael Milford 03-05-2017
Building Safe AI Andrew Trask 26-04-2017
AGI Safety Solutions Map Alexey Turchin 19-04-2017
Strong AI Isn’t Here Yet Sarah Constantin 12-04-2017
Robotics: Ethics of artificial intelligence Stuart Russell et al. 05-04-2017
Using machine learning to address AI risk Jessica Taylor 29-03-2017
Racing to the Precipice: a Model of Artificial Intelligence Development Armstrong et al. 22-03-2017
Politics is Upstream of AI Raymond Brannen 15-03-2017
Coherent Extrapolated Volition Eliezer Yudkowsky 08-03-2017
–Cancelled due to illness– 01-03-2017
Towards Interactive Inverse Reinforcement Learning Armstrong, Leike 22-02-2017
Notes from the Asilomar Conference on Beneficial AI Scott Alexander 15-02-2017
My current take on the Paul-MIRI disagreement on alignability of messy AI Jessica Taylor 08-02-2017
How feasible is the rapid development of Artificial Superintelligence? Kaj Sotala 01-02-2017
Response to Cegłowski on superintelligence Matthew Graves 25-01-2017
Disjunctive AI scenarios: Individual or collective takeoff? Kaj Sotala 18-01-2017
Policy Desiderata in the Development of Machine Superintelligence Nick Bostrom 11-01-2017
Concrete Problems in AI Safety Dario Amodei et al. 04-01-2017
–Holiday– 28-12-2016
A Wager on the Turing Test: Why I Think I Will Win Ray Kurzweil 21-12-2016
Responses to Catastrophic AGI Risk: A Survey Sotala, Yampolskiy 14-12-2016
Discussion of ‘Superintelligence: Paths, Dangers, Strategies’ Neil Lawrence 07-12-2016
Davis on AI capability and motivation Rob Bensinger 30-11-2016
Ethical guidelines for a Superintelligence Ernest Davis 22-11-2016
Superintelligence: Chapter 15 Nick Bostrom 15-11-2016
Superintelligence: Chapter 14 Nick Bostrom 09-11-2016
Superintelligence: Chapter 11 Nick Bostrom 01-11-2016
Superintelligence: Chapter 9 (2/2) Nick Bostrom 25-10-2016
Superintelligence: Chapter 9 (1/2) Nick Bostrom 18-10-2016
Superintelligence: Chapter 8 Nick Bostrom 11-10-2016
Superintelligence: Chapter 7 Nick Bostrom 04-10-2016
Superintelligence: Chapter 6 Nick Bostrom 27-09-2016
Superintelligence: Chapter 5 Nick Bostrom 20-09-2016
Taxonomy of Pathways to Dangerous Artificial Intelligence Roman V. Yampolskiy 13-09-2016
Unethical Research: How to Create a Malevolent Artificial Intelligence Roman V. Yampolskiy 06-09-2016
Superintelligence: Chapter 4 Nick Bostrom 30-08-2016
Superintelligence: Chapter 3 Nick Bostrom 23-08-2016
Superintelligence: Chapter 1+2 Nick Bostrom 16-08-2016
Why I am skeptical of risks from AI Alexander Kruel 09-08-2016
–Break due to family extension– 02-08-2016
–Break due to family extension– 26-07-2016
Intelligence Explosion FAQ Luke Muehlhauser 19-07-2016
A toy model of the treacherous turn Stuart Armstrong 12-07-2016
The Fable of the Dragon Tyrant Nick Bostrom 05-07-2016
The Fun Theory Sequence Eliezer Yudkowsky 28-06-2016
Intelligence Explosion Microeconomics Eliezer Yudkowsky 21-06-2016
Strategic Implications of Openness in AI Development Nick Bostrom 14-06-2016
That Alien Message Eliezer Yudkowsky 07-06-2016
The Value Learning Problem Nate Soares 31-05-2016
Decisive Strategic Advantage without a Hard Takeoff Kaj Sotala 24-05-2016