Updated News Around the World

ChatGPT’s parent OpenAI wants humans to regulate AI, proposes a regulatory body

ChatGPT maker OpenAI has proposed a new international body for regulating artificial intelligence (AI). The company, led by CEO Sam Altman, said that within the next ten years AI systems could exceed expert skill levels in most domains and carry out as much productive activity as one of today's largest corporations.

In a blog post on Monday, OpenAI explained its reasons for seeking the regulation of AI: “The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems.”

“We don’t yet know how to design such a mechanism, but we plan to experiment with its development,” OpenAI added.

The blog post was written by OpenAI co-founder and CEO Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever. It compares ‘superintelligence’ to nuclear energy and suggests the creation of an authority similar to the International Atomic Energy Agency to mitigate the risks of AI.

OpenAI’s plan for tackling the challenges posed by AI:

OpenAI proposed a three-point agenda to mitigate the risks of superintelligent AI systems of the future.

1) Coordination among AI makers: OpenAI’s blog post suggests that companies building AI systems, such as the makers of Bard, Bing and Anthropic’s models, should make a coordinated effort to ensure that the development of ‘superintelligence’ happens in a way that preserves safety and helps these systems integrate smoothly into society.

The ChatGPT maker has suggested two ways in which this coordination could take place: governments around the world could set up a regulatory system involving leading AI manufacturers, or these companies could agree among themselves to limit AI growth to a certain rate per year.

2) International regulatory body: OpenAI has suggested a new international body, much like the International Atomic Energy Agency, to mitigate the existential risks posed by superintelligent AI systems. As per OpenAI, the proposed body should have the authority to inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security.

3) Safer superintelligence: OpenAI says it is working on making artificial intelligence systems safer, more aligned with human values, and better at following human intent.

Catch all the Technology News and Updates on Live Mint.
