I’ve always been fascinated by AI, probably because I’ve watched way too many sci-fi movies ever since I was a kid. Most movie predictions about the future did not become reality, but eventually some of them will.
As a programmer and a generally curious person, I’ve spent more than a hundred hours researching this topic, and I’ve come to the conclusion that AI will be extremely valuable in the next couple of years, solving problems no human ever could in a reasonable time frame.
Unfortunately, after these few fruitful years there is a very high chance that artificial general intelligence will wreak havoc on the world within the next 40 years. I’m not talking about something obvious, like what we’ve all seen in The Terminator movies. The real problem is so complex that even people working in the AI field tend to dismiss it as a non-issue.
For your convenience, I’ve selected three of the best sources of information on this topic, so you can get a grip on the problem in the least amount of time:
Are there any jobs that cannot be made obsolete by automation?
Why is AI an existential risk to humanity?
Wake Up Call
Fascinating In Depth Explanation
It might take two hours to read through these, but they are so well written that it is impossible to put them down halfway.
As a side note, it is worth mentioning that Elon Musk personally contacted the author, Tim Urban, after he wrote these articles, because he wanted Tim to write about Tesla, SpaceX, and the Mars mission.
I’ve talked with lots of people around me, and no one seemed to even care about any of this. That’s understandable, because at the moment there is nothing for people not directly involved in AI R&D to do.
So the real question is: will the relevant people understand this problem well enough, in time, and will they be able to figure out a way to avoid disaster?