Google-backed Anthropic launches ChatGPT rival chatbot Claude
In a bid to challenge OpenAI’s dominance of the artificial intelligence space, Google-backed Anthropic has released its AI chatbot Claude. Anthropic was co-founded by ex-OpenAI employees Dario and Daniela Amodei and recently received $400 million in funding from Google.
In a blog post announcing the launch of Claude, Anthropic said, “Claude is a next-generation AI assistant based on Anthropic’s research into training helpful, honest, and harmless AI systems.”
“Claude is capable of a wide variety of conversational and text processing tasks while maintaining a high degree of reliability and predictability,” the Google-backed startup added.
Claude can handle many of the same tasks as ChatGPT, such as writing blog posts, summarising text, replying to emails, and coding. However, Anthropic claims Claude is “less likely to produce harmful outputs, easier to converse with, and more steerable”. The chatbot’s tone, personality, and behaviour can also be tweaked to match the needs of users.
Like ChatGPT, Claude cannot access the internet and has been trained on data only up to the spring of 2021.
However, Anthropic has taken a more principled approach to training its chatbot, which it calls ‘Constitutional AI’. Claude is trained against a set of guiding principles that help it avoid potentially harmful topics and even recognise its own biases.
Before the launch, Anthropic partnered with various companies to test out its technology. Most prominently, Quora is using Claude to power its AI chatbot app, while DuckDuckGo is using Claude in combination with ChatGPT to summarise text from Wikipedia. The productivity app Notion likewise uses Anthropic’s technology to power Notion AI.
Claude also suffers from many of the same problems as ChatGPT and Microsoft’s Bing Chat, such as hallucination and users being able to bypass the chatbot’s safety features with clever prompts.
News agency Bloomberg quoted Anthropic CEO Dario Amodei as acknowledging that, like other language models, the chatbot can sometimes make things up. He said, “I don’t want to say all the problems have been solved…I think all of these models, including ours, they sometimes hallucinate, they sometimes make things up.”