Hundreds of the world’s brightest minds — engineers from Google and IBM, hedge fund quants, and Defense Department contractors building artificial intelligence — were gathered in rapt attention inside the auditorium of the San Francisco Masonic Temple atop Nob Hill. It was the first day of the seventh annual Singularity Summit, and Julia Galef, the president of the Center for Applied Rationality, was speaking onstage. On the screen behind her, Galef projected a giant image from the film Blade Runner: the replicant Roy, naked, his face stained with blood, cradling a white dove in his arms.
At this point in the movie, Roy is reaching the end of his short, pre-programmed life. “The poignancy of his death scene comes from the contrast between that bitter truth and the fact that he still feels his life has meaning, and for lack of a better word, he has a soul,” said Galef. “To me this is the situation we as humans have found ourselves in over the last century. Turns out we are survival machines created by ancient replicators, DNA, to produce as many copies of them as possible. This is the bitter pill that science has offered us in response to our questions about where we came from and what it all means.”
The Singularity Summit bills itself as the world’s premier event on robotics, artificial intelligence, and other emerging technologies. The attendees, who shelled out $795 for a two-day pass, are people whose careers depend on data, on empirical proof. Peter Norvig, Google’s Director of Research, discussed advances in probabilistic first-order logic. The Nobel Prize-winning economist Daniel Kahneman lectured on the finer points of heuristics and biases in human psychology. The PowerPoint presentations were full of math equations and complex charts. Yet time and again the conversation drifted towards the existential: the larger, unanswerable questions of life.
A really great article about the Singularity Summit, where some very smart people come together to discuss the possibility, and implications, of an artificial intelligence technological singularity. The article focuses on Ray Kurzweil, the most vocal and visible proponent of the merging of human beings with machine intelligences.
I’ve read a couple of Kurzweil’s books, and while there are some really interesting insights in them, I think in other areas he’s simply blinded by his own overwhelming desire not to die. A perfectly understandable desire, but it clouds his objectivity to a degree. To Kurzweil, the Singularity has to happen in the next several decades because that’s all the natural lifespan he has left, and if he’s to take advantage of its promise of greatly extended life—and perhaps true immortality—then that has to be the timeframe.
Personally, I believe a true AI is a long way off—it’s not going to be just a function of computational power, but something more ineffable, harder to define and quantify, and therefore much harder to program.
Read the full article at theverge.com.