“The rise of AI could be the worst or the best thing that has happened for humanity. AI could develop a will of its own,” said physicist Stephen Hawking.
“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”
“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy,” Hawking added.
“Perhaps we should all stop for a moment and focus not only on making our AI better and more successful, but also on the benefit of humanity,” Hawking said.
Who Gets to Determine?
Excuse me for asking, but who gets to determine, in advance, what is or what isn't a "benefit of humanity"?
Is it me, you, Goldman Sachs, President Trump, the EU, China, ISIS, or AI itself?
I can partially answer that question. It sure isn't me.
Mike "Mish" Shedlock