
Google unit DeepMind tried—and failed—to win AI autonomy from parent

DeepMind told staff late last month that Google had called off talks over granting the unit more autonomy, according to people familiar with the matter. The end of the long-running negotiations, which hasn't previously been reported, is the latest example of how Google and other tech giants are trying to strengthen their control over the study and advancement of artificial intelligence. Earlier this month, Google unveiled plans to double the size of its team studying the ethics of artificial intelligence and to consolidate that research.

Google Chief Executive Sundar Pichai has called the technology key to the company’s future, and parent Alphabet Inc. has invested billions of dollars in AI. The technology, which handles tasks that were once the exclusive domain of humans and makes life more efficient at home and at work, has raised complex questions about the growing influence of computer algorithms in a wide range of public and private life.

Alphabet’s approach to AI is closely watched because the conglomerate is seen as an industry leader in sponsoring research and developing new applications for the technology. The nascent field has proved to be a challenge for Alphabet management at times as the company has dealt with controversies involving top researchers and executives. The technology also has attracted the attention of governments, such as the European Union, which has promised to regulate it.

Founded in 2010 and bought by Google in 2014, DeepMind specializes in building advanced AI systems to mimic the way human brains work, an approach known as deep learning. Its long-term goal is to build an advanced level of AI that can perform a range of cognitive tasks as well as any human. “Guided by safety and ethics, this invention could help society find answers to some of the world’s most pressing and fundamental scientific challenges,” DeepMind says on its website.

DeepMind’s founders had sought, among other ideas, a legal structure used by nonprofit groups, reasoning that the powerful artificial intelligence they were researching shouldn’t be controlled by a single corporate entity, according to people familiar with those plans.

On a video call last month with DeepMind staff, co-founder Demis Hassabis said the unit’s effort to negotiate a more autonomous corporate structure was over, according to people familiar with the matter. He also said DeepMind’s AI research and its application would be reviewed by an ethics board staffed mostly by senior Google executives.

Google said DeepMind co-founder Mustafa Suleyman, who moved to Google last year, sits on that oversight board. DeepMind said its chief operating officer, Lila Ibrahim, would also be joining the board, which reviews new projects and products for the company.

DeepMind’s leaders had talked with staff about securing more autonomy as far back as 2015, and its legal team was preparing for the new structure before the pandemic hit last year, according to people familiar with the matter. The founders hired an outside lawyer to help, while staff drafted ethical rules to guide the company’s separation and prevent its AI from being used in autonomous weapons or surveillance, according to people familiar with the matter. DeepMind leadership at one point proposed to Google a partial spinout, several people said.

According to people familiar with DeepMind’s plans, the proposed structure didn’t make financial sense for Alphabet given its total investment in the unit and its willingness to bankroll DeepMind.

Google bought the London-based startup for about $500 million. DeepMind has about 1,000 staff members, most of them researchers and engineers. In 2019, DeepMind’s pretax losses widened to £477 million, equivalent to about $660 million, according to the latest documents filed with the U.K.’s Companies House registry.

Google has grappled with the issue of AI oversight. In 2019, Google launched a high-profile, independent council to guide its AI-related work. A week later, it disbanded the council following an outpouring of protests about its makeup. Around the same time, Google disbanded another committee of independent reviewers who oversaw DeepMind’s work in healthcare.

Google vice president of engineering Marian Croak earlier this month unveiled plans to bolster AI ethics work. She told The Wall Street Journal’s Future of Everything Festival that the company had spent “many, many, many years of deep investigation around responsible AI.” She said the effort has been focused, but “somewhat diffused as well.”

Speaking generally, and not about DeepMind, Ms. Croak said she thought “if we could bring the teams together and have a stronger core center of expertise…we could have a much bigger impact.”

Google’s cloud-computing business uses DeepMind’s technology, but some of the unit’s biggest successes have been noncommercial. In 2016, a DeepMind computer made headlines when it beat the reigning human champion of Go, a Chinese board game with more than 180 times as many opening moves as chess.

