Google Bard: Google lists 5 known limitations of LLM-based AI chatbots like Bard
In the blog post announcing that access to Bard is opening to a limited number of users in the UK and the US, Google also listed known limitations of large language model (LLM)-based interfaces like Bard and ChatGPT.
Google has focussed on five areas that it continues to work on: accuracy, bias, persona, false positives and false negatives, and vulnerability. Here are some of the limitations of Bard, as per Google.
Bard Accuracy
Google said that Bard is trained to generate responses that are relevant to the context and in line with users’ intent. In spite of that, Bard can sometimes generate responses that contain inaccurate or misleading information while presenting it confidently and convincingly.
Google explained that the underlying mechanism of an LLM is to predict the next word or sequence of words; as a result, the models are not yet fully capable of distinguishing between accurate and inaccurate information.
Bard Bias
All LLMs use training data, including from publicly available sources, which reflects a diversity of perspectives and opinions. Because of this, gaps, biases and stereotypes in the training data can result in a model reflecting them in its responses.
“We continue to research how to use this data in a way that ensures that an LLM’s response incorporates a wide range of viewpoints, while preventing offensive responses,” Google said.
For subjective topics, such as politics, Google said that Bard is designed to provide users with multiple perspectives. This is because Bard cannot verify such prompts/inputs against primary-source facts or well-established expert consensus.
Persona in Bard
Google highlighted that Bard might, at times, generate responses that seem to suggest it has opinions or emotions, since it has been trained on language that people use to reflect the human experience.
To keep that in check, Google said that it has developed a set of guidelines around how Bard might represent itself and continues to fine-tune the model to provide objective, neutral responses.
False positives/negatives by Bard
Google said that it has put in place a set of technical guardrails that prevent Bard from returning problematic responses to prompts it’s not yet trained to address, such as harmful or offensive content. However, Bard can sometimes misapply these guardrails, producing “false positives” and “false negatives.”
In the case of a “false positive,” Bard might not provide a response to a reasonable prompt, misinterpreting it as inappropriate. In the case of a “false negative,” Bard might generate an inappropriate response despite the guardrails in place.
Vulnerability to adversarial prompting
Google said that it expects users to test the limits of what Bard can do and attempt to break Bard’s protections – similar to what the company itself has done in the run-up to opening up limited access.
Google wants to use that data to refine the Bard model, “especially in these early days,” so that it can prevent the AI chatbot from outputting problematic or sensitive information. That’s also why Google has said that users must be 18 or older to try it.