AI CEOs, experts worried about the “risk of extinction” from AI – Times of India
AI researchers, engineers, and CEOs have released a statement expressing their concern about the potential danger AI poses — that it could wipe out humanity — comparing the threat level to pandemics and nuclear war. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads their 22-word statement.
The statement was published by a San Francisco-based non-profit, the Center for AI Safety, and co-signed by prominent figures such as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, as well as Geoffrey Hinton and Yoshua Bengio — two of the three AI researchers who won the 2018 Turing Award, sometimes referred to as the “Nobel Prize of computing.”
“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI,” the preamble to the 22-word statement reads. “Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”
More voices for ‘control’ of AI
This statement adds to the ongoing and controversial discussion about the safety of artificial intelligence. At the start of this year, a group that included Steve Wozniak, Elon Musk, and more than 1,000 others wrote an open letter calling for a six-month “pause” in AI development. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter read.
Dan Hendrycks, the executive director of the Center for AI Safety, told the New York Times that the statement released today was intentionally kept simple to prevent disagreements. Hendrycks explained that they did not want to overwhelm people with too many potential solutions and dilute the message.
The statement was a way for people in the industry to express their concerns about the dangers of AI — concerns often misperceived as being held by only a few individuals. Hendrycks clarified that many people in the AI community privately share them.