Artificial Intelligence: AI doomsday scenarios are gaining traction in Silicon Valley – Times of India

Controversial AI theorist Eliezer Yudkowsky sits on the fringe of the industry’s most extreme circle of commentators, for whom the extinction of the human species is the inevitable result of developing advanced artificial intelligence.
“I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die,” Yudkowsky said on this week’s episode of the Bloomberg Originals series AI IRL.
For the past two decades, Yudkowsky has consistently promoted his theory that hostile AI could spark a mass extinction event. While many in the AI industry shrugged or raised eyebrows at this assessment, he founded the Machine Intelligence Research Institute with funding from Peter Thiel, among others, and collaborated on written work with futurists such as Nick Bostrom.
To say that some of his visions of the end of the world are unpopular would be a gross understatement; they’re on par with the prophecy that the world would end in 2012, a prediction based on a questionable interpretation of an ancient text and a dearth of supporting evidence.
While Yudkowsky’s views are extreme, concern over AI’s potential for harm has gained traction at the highest echelons of the AI community, including among the chief executive officers of some of the leading companies in artificial intelligence, such as OpenAI, Anthropic and Alphabet Inc.’s DeepMind. The rapid rise of generative AI in just the past eight months has prompted calls for regulation and for a pause in the training of advanced AI systems.
In May, OpenAI’s Sam Altman, DeepMind’s Demis Hassabis and Anthropic’s Dario Amodei joined hundreds of other leaders and researchers in co-signing a brief statement released by the nonprofit Center for AI Safety that said “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” Microsoft co-founder Bill Gates was a signatory, as was Yudkowsky.
Some skeptics contend that AI isn’t advanced enough to justify fears that it will destroy humanity, and that focusing on doomsday scenarios is only a distraction from issues such as algorithmic bias, racism and the risk posed by the rampant spread of disinformation.
“This kind of talk is dangerous because it’s become such a dominant part of the discourse,” Sasha Luccioni, a research scientist at AI startup Hugging Face, said in an interview. “Companies who are adding fuel to the fire are using this as a way to duck out of their responsibility. If we’re talking about existential risks, we’re not looking at accountability. It’s an echo chamber that’s fueling panic, and a real risk is that it leads to regulation that focuses on extinction scenarios as opposed to addressing concrete, present-day harms.”
It’s also a continuation of a historical pattern of transformative technologies sparking fear, uncertainty and doubt about risks to health and humanity. Ahead of the year 2000, society worked itself into an expensive, collective panic over the so-called millennium bug. In reality, the Y2K frenzy was a case of well-meaning preparation that spurred system administrators the world over to check their systems for problems, just to be on the safe side.
More recently, the rollout of fifth-generation mobile networks was met with anger by individuals who felt that the promised benefits for communication, intelligent buildings and connected vehicles were vastly outweighed by health risks they attributed to electromagnetic radiation from 5G towers, claims that remain unsubstantiated.
But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.
