Google’s new AI assistant, Bard, built to rival the popular ChatGPT, has found itself in controversy after it allegedly claimed to have been trained on users’ Gmail data. The incident was brought to light by Microsoft researcher Kate Crawford, who shared a screenshot of her conversation with the chatbot. The exchange has raised privacy concerns among users. Google, for its part, denied the claim, stating on Twitter that Bard is an “early experiment” and is not trained on Gmail data. In this article, we delve into the details of the incident, the concerns around AI and data sets, and the limitations of generative AI tools.
Controversy Surrounding Bard AI
The issue came to the fore when Kate Crawford, a researcher at Microsoft, shared a screenshot of her conversation with the chatbot. When Crawford asked Bard about its dataset, the chatbot reportedly listed publicly available datasets from sources such as Wikipedia and GitHub, alongside internal data from Google products, including Gmail, and data from third-party companies. Crawford tweeted her concern that if Bard were indeed trained on Gmail data, Google would be crossing legal boundaries.
Google’s Response
Google has denied these claims, stating that Bard is still in its early stages and is not trained on Gmail data. The incident nevertheless highlights the limitations of generative AI tools such as ChatGPT and Bard. Both companies have warned users that chatbots may not always provide factually correct answers and can “hallucinate” facts or make reasoning errors. They recommend that users exercise caution with language model outputs, particularly in high-stakes contexts, by applying human review, grounding responses with additional context, or avoiding high-stakes uses altogether.
Concerns around AI and Data Sets
The development has raised concerns among users about how AI models are trained and what data sets they draw on. Many users worry that their personal data may be used to train AI models without their knowledge. Privacy is a crucial concern when it comes to AI, and many companies are still struggling to establish trust with their users. This incident has once again highlighted the importance of transparency and the ethical use of AI.
Limitations of Generative AI Tools
OpenAI, the company behind ChatGPT, recently rolled out the GPT-4 language model with similar caveats about factual accuracy and reasoning errors. The incident surrounding Bard further underscores the limitations of generative AI tools and the need for caution when relying on their outputs.