AI Gone Wrong: Google Removes Gemma AI After Creating Fake Story About Senator

Google has removed its AI model, Gemma, from the AI Studio platform after US Senator Marsha Blackburn accused it of spreading fake and defamatory claims. The Verge first reported the incident, sparking a heated debate about AI responsibility and misinformation.

According to reports, Senator Blackburn sent a letter to Google CEO Sundar Pichai, accusing the company of defamation. She claimed the AI model had fabricated serious criminal allegations about her. The chatbot allegedly responded “yes” when asked if she had ever been accused of rape. It even generated fake details and false news links to support the claim.

The AI went further, saying that Blackburn “was accused of having a sexual relationship with a state trooper” during one of her campaigns. It added that the officer claimed she pressured him to get prescription drugs and that their relationship involved non-consensual acts. However, none of this is true.

The alleged incident was said to have occurred during her 1987 campaign for the state senate — but Blackburn did not even run for that office until 1998. Moreover, there are no records, reports, or credible news stories that support such claims. The senator called it a “false and defamatory fabrication” created by Google’s AI model.

In her letter, Blackburn wrote, “The links lead to error pages and unrelated news articles. This is not a harmless hallucination. It is an act of defamation produced and distributed by a Google-owned AI model.”

Google responded soon after. The company clarified that Gemma was never meant for general use or answering factual questions. It was designed specifically for developers and research purposes. The company stated, “We’ve seen reports of non-developers using Gemma in AI Studio to ask factual questions. We never intended this.”

To avoid further confusion, Google has decided to remove Gemma from the AI Studio platform. However, it will still remain available to developers through the API. This means developers can still use Gemma for testing and technical work, but regular users will no longer have access to it through AI Studio.

Senator Blackburn later posted on social media, confirming that Google had taken action after her complaint. She accused the tech giant of showing a “pattern of bias against conservative figures.” She also demanded that Google provide answers to the public about the AI’s behavior.


The issue highlights a growing problem in the AI world: misinformation generated by large language models. Even advanced AI systems are known to produce false or misleading information, often called “hallucinations.” These errors are not deliberate, but they can still cause real harm when they spread online.

While some see this case as a political matter, others point out that AI hallucinations affect everyone, regardless of political views. As AI becomes more powerful, companies like Google face increasing pressure to ensure their tools are accurate, fair, and safe for public use.

The Gemma controversy serves as another reminder that even cutting-edge AI models can make dangerous mistakes, and that tech companies must act quickly when they do.


