Reading Group

The AISafety.com Reading Group meets weekly, usually Wednesdays at 19:45 UTC. To join, add “soeren.elverlin” on Skype.

TITLE | AUTHOR | DATE | SLIDES | PRESENTATION
Age of Em (Chapter 27) | Robin Hanson | 27-09-2017 | https://www.dropbox.com/s/i50zb7iswpivbgm/Age%20of%20Em.pdf?dl=0 | –
Meditations on Moloch | Scott Alexander | 20-09-2017 | https://www.dropbox.com/s/nn0ep22szo2h9gs/Meditations_on_Moloch.pdf?dl=0 | https://youtu.be/YQ_2HFtXBDM
Incorrigibility in the CIRL Framework | Ryan Carey | 13-09-2017 | https://www.dropbox.com/s/yyryrngcs7qhsz5/Incorrigibility_In_CIRL.pdf?dl=0 | https://youtu.be/n2X1QKEUrt4
OpenAI Makes Humanity Less Safe | Ben Hoffman | 06-09-2017 | https://www.dropbox.com/s/v8ugc4uo5ds533b/OpenAI_Makes_Humanity_Less_Safe.pdf?dl=0 | https://youtu.be/nwh9ZR3yO2M
Open Problems Regarding Counterfactuals: An Introduction For Beginners | Alex Appel | 30-08-2017 | https://www.dropbox.com/s/0ztr9lwd9z8md2l/Counterfactuals.pdf?dl=0 | https://youtu.be/JqtJXr9QHkM
A Game-Theoretic Analysis of the Off-Switch Game | Tobias Wängberg et al. | 23-08-2017 | https://www.dropbox.com/s/n4rx49hq49m5bxi/Off-Switch-Game.pdf?dl=0 | https://youtu.be/8w2_cb6cBY0
Benevolent Artificial Anti-Natalism | Thomas Metzinger | 16-08-2017 | https://www.dropbox.com/s/6gfbt568tadnflo/Benevolent_Artificial_Anti-Natalism.pdf?dl=0 | https://youtu.be/Zjid5CgLaac
Where the Falling Einstein Meets the Rising Mouse | Scott Alexander | 09-08-2017 | https://www.dropbox.com/s/wu4pc3qc8zi13h0/Where_the_Falling_Einstein.pdf?dl=0 | https://youtu.be/oua1fMxYXvo
Superintelligence Risk Project | Jeff Kaufman | 03-08-2017 | https://www.dropbox.com/s/9jk00oohc912izx/Superintelligence_Risk_Project.pdf?dl=0 | https://youtu.be/vG8SuD66NLA
Staring into the Singularity | Eliezer Yudkowsky | 26-07-2017 | https://www.dropbox.com/s/f7g9mpcwk3qr9pk/Staring_Into_the_Singularity.pdf?dl=0 | https://youtu.be/qud4WvehRho
Artificial Intelligence and the Future of Defense | Matthijs Maas et al. | 19-07-2017 | https://www.dropbox.com/s/gxigsjyyd1thnb1/AI_and_the_Future_of_Defense.pdf?dl=0 | https://youtu.be/UO6Px7-AL4w
Prosaic AI Alignment | Paul Christiano | 12-07-2017 | https://www.dropbox.com/s/vlg3pb0pewx1w7r/Prosaic_AI_Alignment.pdf?dl=0 | https://youtu.be/YvBj620UPBg
A model of the Machine Intelligence Research Institute | Sindy Li | 05-07-2017 | https://www.dropbox.com/s/d2fxbnqyay104df/A_Model_of_MIRI.pdf?dl=0 | https://youtu.be/kNlU3kAB2ks
Deep Reinforcement Learning from Human Preferences | Paul Christiano et al. | 28-06-2017 | https://www.dropbox.com/s/ajajtd7fs3fhw8u/Deep_Reinforcement_Learning.pdf?dl=0 | https://youtu.be/3zK1kNremWA
–Holiday– | – | 21-06-2017 | – | –
The Singularity: A Philosophical Analysis (2/2) | David J. Chalmers | 14-06-2017 | https://www.dropbox.com/s/pan93bzvfroj58k/The_Singularity_2.pdf?dl=0 | –
The Singularity: A Philosophical Analysis (1/2) | David J. Chalmers | 07-06-2017 | https://www.dropbox.com/s/lu4qk2205htlku2/The_Singularity.pdf?dl=0 | https://youtu.be/U-0ZD9Irfw
Why Tool AIs want to be Agent AIs | Gwern Branwen | 31-05-2017 | https://www.dropbox.com/s/i9jvrj43r7xvocl/Tool_AIs.pdf?dl=0 | https://youtu.be/Tnnn6LtZGiQ
A Map: AGI Failure Modes and Levels | Alexey Turchin | 24-05-2017 | https://www.dropbox.com/s/to6qowvhh14wfut/AGI_Failure_modes.pdf?dl=0 | https://youtu.be/kBTNrprdKiU
Neuralink and the Brain’s Magical Future | Tim Urban | 17-05-2017 | https://www.dropbox.com/s/e00gsu629zkzl4b/Neuralink.pdf?dl=0 | https://youtu.be/9NpNzlCptJI
The Myth of a Superhuman AI | Kevin Kelly | 10-05-2017 | https://www.dropbox.com/s/00cnhpyndlo4jru/The_Myth_of_a_Superhuman_AI.pdf?dl=0 | https://youtu.be/WLSOmVXweSs
Merging our brains with machines won’t stop the rise of the robots | Michael Milford | 03-05-2017 | https://www.dropbox.com/s/og3pn5o7ofi101e/Humans_Merging_with_AI.pdf?dl=0 | https://youtu.be/Rgm6xMt54VA
Building Safe AI | Andrew Trask | 26-04-2017 | https://www.dropbox.com/s/3fnx251f9oiga8p/Building_Safe_AI.pdf?dl=0 | https://youtu.be/Ys-U-4vjRjw
AGI Safety Solutions Map | Alexey Turchin | 19-04-2017 | https://www.dropbox.com/s/ldyb7a32nd2089k/AGI_Safety_Solutions_Map.pdf?dl=0 | https://youtu.be/ZNSfUiXZwz0
Strong AI Isn’t Here Yet | Sarah Constantin | 12-04-2017 | https://www.dropbox.com/s/297amvxrl58wgil/Strong_AI_Isnt_Here_Yet.pdf?dl=0 | https://youtu.be/GpuQlJ3IHBM
Robotics: Ethics of artificial intelligence | Stuart Russell et al. | 05-04-2017 | https://www.dropbox.com/s/8t5o990d1hf7ew6/Robotics_Ethics_of_artificial_intelligence.pdf?dl=0 | https://youtu.be/z_WhxqCWJ4s
Using machine learning to address AI risk | Jessica Taylor | 29-03-2017 | https://www.dropbox.com/s/52k4u10f95c6fvb/Using_Machine_Learning.pdf?dl=0 | https://youtu.be/vXNi4L5PH0A
Racing to the Precipice: a Model of Artificial Intelligence Development | Armstrong et al. | 22-03-2017 | https://www.dropbox.com/s/2zybpfb667vy9tl/Racing_To_The_Precipice.pdf?dl=0 | –
Politics is Upstream of AI | Raymond Brannen | 15-03-2017 | https://www.dropbox.com/s/kvcyf4kwmqmlufx/Politics_Is_Upstreams_of_AI.pdf?dl=0 | –
Coherent Extrapolated Volition | Eliezer Yudkowsky | 08-03-2017 | https://www.dropbox.com/s/2jldifzkpc82rmk/Coherent_Extrapolated_Volition.pdf?dl=0 | –
–Cancelled due to illness– | – | 01-03-2017 | – | –
Towards Interactive Inverse Reinforcement Learning | Armstrong, Leike | 22-02-2017 | https://www.dropbox.com/s/ouom3qzx8aofulv/Towards_Interactive_Inverse_Reinforcement_Learning_.pdf?dl=0 | –
Notes from the Asilomar Conference on Beneficial AI | Scott Alexander | 15-02-2017 | https://www.dropbox.com/s/4ohpo4fpewwdz7q/Notes_from_the_Asilomar_Conference_on_Beneficial_AI.pdf?dl=0 | –
My current take on the Paul-MIRI disagreement on alignability of messy AI | Jessica Taylor | 08-02-2017 | https://www.dropbox.com/s/9jtu8njaloxucrv/My_Current_take_on_the_Paul_MIRI_disagreement.pdf?dl=0 | –
How feasible is the rapid development of Artificial Superintelligence? | Kaj Sotala | 01-02-2017 | https://www.dropbox.com/s/5u79rex6czszt23/How_Feasible_is_the_Rapid_Development_of_Artificial_Superintelligence.pdf?dl=0 | –
Response to Cegłowski on superintelligence | Matthew Graves | 25-01-2017 | https://www.dropbox.com/s/bzlw8mc7k1fs0ox/Response_to_Ceglowski.pdf?dl=0 | –
Disjunctive AI scenarios: Individual or collective takeoff? | Kaj Sotala | 18-01-2017 | https://www.dropbox.com/s/sdsm2mpaiq892o3/Individual_or_collective_takeoff.pdf?dl=0 | –
Policy Desiderata in the Development of Machine Superintelligence | Nick Bostrom | 11-01-2017 | https://www.dropbox.com/s/jt6w0fzli5b0vg1/Policy%20Desiderata.pdf?dl=0 | –
Concrete Problems in AI Safety | Dario Amodei et al. | 04-01-2017 | https://www.dropbox.com/s/wthme4pnhlipz2q/Concrete.pdf?dl=0 | –
–Holiday– | – | 28-12-2016 | – | –
A Wager on the Turing Test: Why I Think I Will Win | Ray Kurzweil | 21-12-2016 | https://www.dropbox.com/s/iurbqzyaq9tt69f/Kurzweil.pdf?dl=0 | –
Responses to Catastrophic AGI Risk: A Survey | Sotala, Yampolskiy | 14-12-2016 | https://www.dropbox.com/s/iywy8znxx8yn1xt/Responses%20to%20AI.pdf?dl=0 | –
Discussion of ‘Superintelligence: Paths, Dangers, Strategies’ | Neil Lawrence | 07-12-2016 | https://www.dropbox.com/s/pyhb55mz65bhe9m/Neil%20Lawrence%20-%20Future%20of%20AI.pdf?dl=0 | –
Davis on AI capability and motivation | Rob Bensinger | 30-11-2016 | https://www.dropbox.com/s/eatjziiqsj5bmmg/Rob%20Bensinger%20Reply%20to%20Ernest%20Davis.pdf?dl=0 | –
Ethical guidelines for a Superintelligence | Ernest Davis | 22-11-2016 | https://www.dropbox.com/s/7j14li21igzi5gx/Ethical%20Guidelines%20for%20a%20Superintelligence.pdf?dl=0 | –
Superintelligence: Chapter 15 | Nick Bostrom | 15-11-2016 | https://www.dropbox.com/s/5jsusue656rdf2r/15%20Crunch%20Time.pdf?dl=0 | –
Superintelligence: Chapter 14 | Nick Bostrom | 09-11-2016 | https://www.dropbox.com/s/l2myz5c7t3a6at9/14%20Science%20and%20Technology%20Strategy.pdf?dl=0 | –
Superintelligence: Chapter 11 | Nick Bostrom | 01-11-2016 | https://www.dropbox.com/s/vj9j5saz39ese5i/11%20Multipolar%20Scenarios.pdf?dl=0 | –
Superintelligence: Chapter 9 (2/2) | Nick Bostrom | 25-10-2016 | https://www.dropbox.com/s/ux66z2ujz9jgofe/9.%20Motivation%20Selection%20Methods.pdf?dl=0 | –
Superintelligence: Chapter 9 (1/2) | Nick Bostrom | 18-10-2016 | https://www.dropbox.com/s/0mgnqcq075vehfv/Capability%20Control%20Methods.pdf?dl=0 | –
Superintelligence: Chapter 8 | Nick Bostrom | 11-10-2016 | https://www.dropbox.com/s/ihj35vxbevfghal/Default%20doom.pdf?dl=0 | –
Superintelligence: Chapter 7 | Nick Bostrom | 04-10-2016 | https://www.dropbox.com/s/pps6di0pza7wvab/The%20superintelligent%20Will.pdf?dl=0 | –
Superintelligence: Chapter 6 | Nick Bostrom | 27-09-2016 | – | –
Superintelligence: Chapter 5 | Nick Bostrom | 20-09-2016 | – | –
Taxonomy of Pathways to Dangerous Artificial Intelligence | Roman V. Yampolskiy | 13-09-2016 | – | –
Unethical Research: How to Create a Malevolent Artificial Intelligence | Roman V. Yampolskiy | 06-09-2016 | – | –
Superintelligence: Chapter 4 | Nick Bostrom | 30-08-2016 | – | –
Superintelligence: Chapter 3 | Nick Bostrom | 23-08-2016 | – | –
Superintelligence: Chapter 1+2 | Nick Bostrom | 16-08-2016 | – | –
Why I am skeptical of risks from AI | Alexander Kruel | 09-08-2016 | – | –
–Break due to family expansion– | – | 02-08-2016 | – | –
–Break due to family expansion– | – | 26-07-2016 | – | –
Intelligence Explosion FAQ | Luke Muehlhauser | 19-07-2016 | – | –
A toy model of the treacherous turn | Stuart Armstrong | 12-07-2016 | – | –
The Fable of the Dragon Tyrant | Nick Bostrom | 05-07-2016 | – | –
The Fun Theory Sequence | Eliezer Yudkowsky | 28-06-2016 | – | –
Intelligence Explosion Microeconomics | Eliezer Yudkowsky | 21-06-2016 | – | –
Strategic Implications of Openness in AI Development | Nick Bostrom | 14-06-2016 | – | –
That Alien Message | Eliezer Yudkowsky | 07-06-2016 | – | –
The Value Learning Problem | Nate Soares | 31-05-2016 | – | –
Decisive Strategic Advantage without a Hard Takeoff | Kaj Sotala | 24-05-2016 | – | –