Microsoft’s Brad Smith raises concern about deep fakes generated by AI
While addressing the question of how to regulate artificial intelligence, Microsoft President Brad Smith has raised concerns about deep fakes: realistic-looking but false content generated by AI.
Smith stressed the need to take steps to protect against the alteration of legitimate content.
“We’re going to have to address the issues around deep fakes. We’re going to have to address in particular what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians,” Smith said in a speech in Washington on Thursday.
“We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”
Smith also called for licensing for the most critical forms of AI with “obligations to protect security, physical security, cybersecurity, national security.”
“We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country’s export control requirements,” he added.
Last week, Sam Altman, CEO of OpenAI, the startup behind ChatGPT, told a Senate panel in his first appearance before Congress that the use of AI to interfere with election integrity is a “significant area of concern”, adding that it needs regulation.
OpenAI is backed by Microsoft.
Altman also called for global cooperation on AI and incentives for safety compliance.
Smith argued in a blog post issued on Thursday that people needed to be held accountable for any problems caused by AI. He urged lawmakers to ensure that safety brakes be put on AI used to control the electric grid, water supply and other critical infrastructure, so that humans remain in control.
He also urged the use of a “Know Your Customer”-style system for developers of powerful AI models to keep tabs on how their technology is used and to inform the public of what content AI is creating so they can identify faked videos.
For weeks, lawmakers in Washington have struggled with what laws to pass to control AI even as companies large and small have raced to bring increasingly versatile AI to market.
Some proposals being considered on Capitol Hill would focus on AI that may put people’s lives or livelihoods at risk, like in medicine and finance. Others are pushing for rules to ensure AI is not used to discriminate or violate civil rights.
(With inputs from Reuters)