I submitted the following dissertation to the UK’s Open University as the final part of my work for an MA in philosophy. The course had involved the study of philosophical questions relating to aspects of the self: personal identity, free will, moral responsibility, political philosophy, and so on. The subject I chose for my dissertation, AGI, seemed tangential to these, but I was becoming increasingly fascinated by the topic, and it seemed that the advent of AGI would transform all these questions. The subject matter was accepted as appropriate for a dissertation, and in due course I got my MA.
The conclusion I argue for is simple enough: all the popular notions and analogies that come up in discussions of AGI are dangerously misleading and complacent, with one exception: the rather lurid idea of AGIs as potentially dangerous alien beings.
I’ve approached the subject solely through the sorts of informal analogies and images that popular discussion works with. I’m not competent to attempt analyses using formal logic; but in any case, I happen to believe that such informal, metaphorical ways of thinking, for all their vagueness, are the most powerful and relevant in public debate. I hope that even the hard-core coders who visit aisafety.com will find something of interest here.