Meet Microsoft’s new AI tool that can imitate your voice in three seconds – Times of India

Microsoft recently unveiled VALL-E, an artificial intelligence (AI) tool that can replicate a person’s voice from a three-second sample. It is claimed to reproduce any voice, including the speaker’s tone and emotion, in the recording it creates.
According to a report by Windows Central, the AI tool was trained on 60,000 hours of English speech data and uses short clips of specific voices to generate content. The report also notes that while some of its recordings sound natural, others still sound robotic or machine-generated.
It is also reported that, if supplied with a larger sample set, VALL-E may be able to produce more realistic output.
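VALL-E itself is not publicly available, so its exact workflow cannot be shown. As a rough illustration of the same zero-shot idea (a few seconds of reference audio plus target text), the sketch below uses the open-source Coqui TTS library instead; the model name, file paths and arguments are assumptions based on Coqui’s documented usage and are not anything Microsoft ships.

```python
# Illustrative zero-shot voice cloning with the open-source Coqui TTS library
# (NOT VALL-E, which Microsoft has not released). Assumes `pip install TTS`.
from TTS.api import TTS

# Multilingual zero-shot model that conditions on a short reference clip,
# loosely analogous to VALL-E's three-second enrollment sample.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Hello, this sentence is spoken in the cloned voice.",
    speaker_wav="reference_clip.wav",  # a few seconds of the target speaker (hypothetical path)
    language="en",
    file_path="cloned_output.wav",
)
```

The point of the sketch is only that modern zero-shot systems need very little enrollment audio, which is what makes both the legitimate uses and the abuses described below possible.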

Implications of VALL-E
While VALL-E has many positive use cases, such as in media and audio production, it also poses threats. For example, people could use VALL-E to make spam calls sound genuine and deceive unsuspecting users. Politicians or other public figures could also be impersonated, and the tool poses a security risk wherever a voice password is used for authentication.
Furthermore, VALL-E could put voice artists who work on films and audiobooks out of a job. For now, VALL-E is not generally available, and Microsoft has published an ethics statement on its use.
Concerns related to ChatGPT
Similar job-related concerns were expressed soon after OpenAI’s ChatGPT became an overnight sensation following its launch last year.

Recently, Chester Wisniewski, principal research scientist at Sophos, said that as ChatGPT continues to fascinate the online world, its security implications cannot be ignored.
“ChatGPT is an interesting experiment at the moment, but its wider availability certainly appears to present new challenges. I have been playing with it since its public availability in November of 2022 and it is quite easy to convince it to assist with creating very convincing phishing lures and responding in a conversational way that could advance romance scams and business email compromise attacks. OpenAI seems to be trying to limit the high risk activities from abusing its use, but the cat is now out of the bag,” said Wisniewski.
“Today the biggest risk is to English speaking populations, but it is likely only a matter of time before it is available to generate believable text in most commonly spoken languages of the world. We have reached a stage where humans are unlikely to be able to discern machine-generated prose from human-written prose in casual conversations with those we are not intimately familiar with, which will require security filters to aid in preventing humans from being victimised,” the scientist said.
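Wisniewski’s point about security filters can be made concrete with a toy example. The sketch below is an assumption of ours, not anything Sophos or OpenAI ships: it scores a passage’s perplexity under the small public GPT-2 model via the Hugging Face transformers library, and treats unusually low perplexity as weak evidence that text is machine-generated and worth routing to human review. Real filters are far more sophisticated, and this heuristic is easy to evade.

```python
# Toy "security filter" heuristic: score text perplexity under GPT-2.
# Low perplexity is weak evidence of machine-generated text.
# Assumes `pip install torch transformers`; this is an illustration, not a product.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2 perplexity of `text` (lower = more 'model-like')."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

# Hypothetical threshold: flag very fluent messages for human review.
message = "Dear customer, your account requires immediate verification."
if perplexity(message) < 40.0:
    print("Low perplexity: possibly machine-generated, route to review.")
```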
