Nonlinear Computation In Linear Networks by OpenAI – Floating-point arithmetic is fundamentally nonlinear near the limit of machine precision. OpenAI managed to exploit these nonlinear effects with an evolutionary algorithm to achieve much better performance than a normal deep network on MNIST.
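The effect is easy to demonstrate directly. Here is a minimal sketch (my own illustration, not OpenAI’s code) of how float32 arithmetic breaks linearity near the denormal limit: scaling by 0.5 no longer distributes over addition.

```python
import numpy as np

d = np.float32(1.4e-45)      # smallest positive float32 denormal, 2**-149
half = np.float32(0.5)

# In exact arithmetic both expressions equal d, since 0.5*(d + d) = 0.5*d + 0.5*d.
print(half * (d + d))        # 1.4e-45: 0.5 * 2**-148 = 2**-149 is representable
print(half * d + half * d)   # 0.0: each 0.5 * 2**-149 underflows to zero first
```

A network built only from multiplications and additions is therefore not quite linear once activations are pushed down to this scale, which is the nonlinearity the evolutionary search can exploit.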
I submitted the following dissertation to the UK’s Open University as the final part of my work for an MA in philosophy. The course had involved study of philosophical questions relating to aspects of the self: personal identity, free will, moral responsibility, political philosophy, etc. The subject I chose for my dissertation, AGI, seemed tangential to these, but I was becoming increasingly fascinated by the topic, and it seemed that the advent of AGI would transform all of these questions. The subject-matter was accepted as appropriate for a dissertation, and in due course I got my MA.
The conclusion I argue for is simple enough: all the popular notions and analogies that come up in discussion of AGI are dangerously misleading and complacent, except the rather lurid idea of AGIs as potentially dangerous alien beings.
I’ve approached the subject solely through the sorts of informal analogies and images that popular discussion works with. I’m not competent to attempt analyses using formal logic; in any case, I happen to believe that such informal, metaphorical ways of thinking, for all their vagueness, are the most powerful and relevant in public debate. I hope that even the hard-core coders who visit aisafety.com will find something of interest here.
Superintelligence Risk Project: Conclusion by Jeff Kaufman – “I’m not convinced that AI risk should be highly prioritized, but I’m also not convinced that it shouldn’t. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development.” There are links to all the previous posts. The final write-up goes into some detail about MIRI’s research program and an alternative safety paradigm connected to OpenAI.
Learning To Model Other Minds by OpenAI – “We’re releasing an algorithm which accounts for the fact that other agents are learning too, and discovers self-interested yet collaborative strategies like tit-for-tat in the iterated prisoner’s dilemma.”
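For reference, here is a toy iterated prisoner’s dilemma with a hand-coded tit-for-tat strategy. This is a sketch of the game setting only, not the released algorithm (LOLA), which learns such reciprocal strategies rather than having them written in; the payoff numbers are the usual illustrative ones.

```python
# Payoff to the row player, indexed by (my move, opponent's move):
# T=3 > R=2 > P=1 > S=0, the standard prisoner's dilemma ordering.
PAYOFF = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=20):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_b)  # each player sees only the other's history
        b = strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (40, 40): sustained cooperation
print(play(tit_for_tat, always_defect))  # (19, 22): defection punished from round two
```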
Tegmark’s Book Of Foom by Robin Hanson – Tegmark’s recent book basically describes Yudkowsky’s intelligence explosion. Tegmark worries that the singularity might come soon, and that we need to have figured out the big philosophical issues by then. Hanson thinks Tegmark overestimates the generality of intelligence. The post also touches on AI weapons and regulation.
Ideological Engineering And Social Control: A Neglected Topic In AI Safety Research? by Geoffrey Miller (EA forum) – China is trying hard to develop advanced AI. A major goal is to use AI to monitor both physical space and social media. Suppressing wrongthink doesn’t require radically advanced AI.
Incorrigibility In CIRL by The MIRI Blog – Paper. Goal: incentivize a value-learning system to follow shutdown instructions. Demonstrates that some of the required assumptions are not stable with respect to model mis-specification (e.g. programmer error), and discusses weaker sets of assumptions, the difficulties they raise, and some simple strategies.
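To make the instability concrete, here is a toy numerical sketch in the spirit of the off-switch game that this line of work builds on. All the numbers, the Gaussian beliefs, and the small deferral cost are my own illustrative assumptions, not the paper’s model: a robot with well-calibrated uncertainty about utility prefers to defer to the human, while an overconfident, mis-specified one acts unilaterally even when acting is harmful.

```python
import numpy as np

rng = np.random.default_rng(0)

def option_values(u_samples, defer_cost=0.01):
    """Expected value of each option under the robot's belief over utility u.
    Deferring lets the human veto exactly the u < 0 cases, at a small cost."""
    u = np.asarray(u_samples)
    return {
        "act":   u.mean(),                               # act unilaterally
        "off":   0.0,                                    # shut down: utility 0
        "defer": np.maximum(u, 0.0).mean() - defer_cost, # wait for the human
    }

true_u = -1.0  # unknown to the robot: acting is actually harmful

beliefs = {
    # Well-specified: broad uncertainty that puts real mass on the true u.
    "well-specified": rng.normal(loc=0.5, scale=2.0, size=100_000),
    # Mis-specified: (wrongly) near-certain that the action helps.
    "mis-specified":  rng.normal(loc=0.5, scale=0.01, size=100_000),
}

for name, samples in beliefs.items():
    v = {k: round(val, 3) for k, val in option_values(samples).items()}
    print(name, v, "-> chooses", max(v, key=v.get))
# The mis-specified robot picks "act" and realizes true_u = -1: its incentive
# to respect the off switch was an artifact of its (wrong) prior.
```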