Google, OpenAI And Others Just Made A Huge AI Safety Promise To The White House – SlashGear
The focus is on safety, transparency, security, and trust. The companies invited to the White House briefing agreed to internal as well as external testing of their AI tools by experts before those tools enter the public domain. The AI labs have also promised that they won't sit on threats posed by artificial intelligence, committing to share them with industry experts, academics, and civil society.
On the security front, the companies have pledged to create internal safeguards and to release their AI models only after rigorous testing. To minimize cybersecurity risks, they have also agreed to let independent experts examine their products and to maintain an open pathway for reporting vulnerabilities.
Another notable piece of the commitment is that AI labs will report their "AI systems' capabilities, limitations, and areas of appropriate and inappropriate use." This is crucial, because current-gen AI systems have well-known issues with accuracy and multiple forms of bias.
Finally, the AI tool-makers have also agreed to dedicate effort and resources to developing AI in a way that benefits society rather than harming it, including directing AI toward problems like the climate crisis and cancer research.
AI experts and industry stakeholders have already signed pledges supporting responsible AI development, and this marks another ethical step forward for the field.