
Fighting AI With AI: Google, Microsoft, OpenAI, Anthropic join hands to tackle dangerous algorithms

Some of the top AI studios have vowed to come together to fight the dangers of AI. These include OpenAI, Google, Microsoft and Anthropic. The kicker in all this? They plan to fight the dangers of AI using AI.

The development of Artificial Intelligence (AI) has brought remarkable progress and opportunities across various sectors. However, it is undeniable that this advancement also carries significant security risks.

While governing bodies are striving to establish regulations for AI safety, the primary responsibility lies with the pioneering AI companies themselves. A joint effort has been initiated by industry giants Anthropic, Google, Microsoft, and OpenAI, known as the Frontier Model Forum.

The Frontier Model Forum and its Mission
The Frontier Model Forum is an industry-led organization with a focused mission: ensuring the safe and cautious development of AI, particularly in the context of frontier models. These frontier models represent large-scale machine-learning models that surpass current capabilities, possess a wide range of abilities, and hold a significant potential impact on society.


To achieve its objectives, the Forum plans to establish an advisory committee, develop a charter, and secure funding. Its work will be grounded in four core pillars:

The Forum aims to make substantial contributions to ongoing AI safety research. By fostering collaboration and knowledge sharing among member organizations, they intend to identify and address potential security vulnerabilities in frontier models.

Creating standardized best practices is essential for the responsible deployment of frontier models. The Forum will diligently work towards establishing guidelines that AI companies can adhere to, ensuring the safe and ethical use of these powerful AI tools.

Collaboration with various stakeholders is crucial to building a safe and beneficial AI landscape. The Forum seeks to closely work with policymakers, academics, civil society, and other companies to align efforts and address the multifaceted challenges posed by AI development.

Fighting AI Using AI
The Forum aims to promote the development of AI technologies that can effectively address society’s greatest challenges. By fostering responsible and safe AI practices, the potential positive impacts on areas like healthcare, climate change, and education can be harnessed for the greater good.

The Forum’s members are dedicated to focusing on the first three objectives over the next year. The initiative’s announcement highlighted the criteria for membership, emphasizing the importance of a track record in developing frontier models and a strong commitment to ensuring their safety.

The Forum firmly believes that AI companies, especially those working on powerful models, need to unite and establish a common ground to advance safety practices thoughtfully and adaptably.

OpenAI’s vice president of global affairs, Anna Makanju, stressed the urgency of this work and expressed confidence in the Forum’s ability to act swiftly and effectively in pushing AI safety boundaries.

Issues with The Frontier Model Forum
However, some voices in the AI community, like Dr Leslie Kanthan, CEO and co-founder of TurinTech, have raised concerns about the Forum’s representation. They suggest that it lacks participation from major open-source entities like HuggingFace and Meta.

Dr Kanthan believes it’s crucial to broaden the participant pool to include AI ethics leaders, researchers, legislators, and regulators to ensure a balanced representation. This inclusivity would help avoid the risk of big tech companies creating self-serving rules that may exclude startups. Additionally, Dr Kanthan points out that the Forum’s primary focus on the threats posed by more potent AI diverts attention from other pressing regulatory issues like copyright, data protection, and privacy.

This industry collaboration among leaders follows a recent safety agreement established between the White House and top AI companies, some of which are involved in the Frontier Model Forum’s formation. The safety agreement commits to subjecting AI systems to tests to identify and prevent harmful behaviour, as well as implementing watermarks on AI-generated content to ensure accountability and traceability.
