For the last few weeks I’ve been listening to some of Nick Bostrom’s talks on YouTube. It has been an interesting journey.
It started with a video from one of the companies Google bought: DeepMind, a company with deep expertise in AI, and presumably interesting for Google and their robotics “moonshot”.
DeepMind was partly owned by Elon Musk before Google bought it, and Musk, it turns out, has made some remarkable claims about AI (or so they say?). One since-deleted message reportedly stated:
“The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most,” he wrote, adding: “Please note that I am normally super pro technology, and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.”
The statement that led me to Nick Bostrom was Musk’s message: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”
The great thing is that a large collection of Bostrom’s talks are available on YouTube, and for those of you interested in AI, philosophy and science, they are well worth watching:
The end of humanity: Nick Bostrom at TEDxOxford
Nick Bostrom - The SuperIntelligence Control Problem - Oxford Winter Intelligence
Nick Bostrom on the Fermi Paradox
Nick Bostrom on the Simulation Argument