Google Confirms Attackers Used Over 100,000 Prompts in Attempt to Clone Gemini

Google has revealed that its flagship artificial intelligence chatbot, Gemini, has been targeted by attackers attempting to copy its capabilities through large-scale prompting campaigns. According to the company, some groups have submitted thousands of queries to the system, with one campaign exceeding 100,000 prompts before it was detected.
In a report released on Thursday, Google said it has faced a growing number of what are known as “distillation attacks.” These attacks involve repeatedly asking a chatbot carefully crafted questions to extract information about how it functions internally. The goal is to uncover patterns, logic, and reasoning methods that power the system. Google described the activity as “model extraction,” where attackers probe the AI in an effort to replicate or improve their own models.
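The extraction process the report describes can be illustrated with a toy sketch: an attacker repeatedly queries a black-box "teacher" model, records its responses, and fits a stand-in "student" on those prompt/response pairs. Everything below is hypothetical and drastically simplified; real distillation attacks target LLM APIs with tens of thousands of crafted prompts and train a neural model on the collected outputs.

```python
def teacher(prompt: str) -> str:
    """Stand-in for a proprietary black-box model (hypothetical)."""
    # Toy behavior: a simple hidden rule the attacker wants to learn.
    return "positive" if "good" in prompt else "negative"

def collect_pairs(prompts):
    """Attacker's data-gathering loop: query the teacher en masse."""
    return [(p, teacher(p)) for p in prompts]

class Student:
    """A trivial 'clone' fit only on the teacher's observed outputs."""
    def __init__(self):
        self.memory = {}

    def fit(self, pairs):
        for prompt, response in pairs:
            self.memory[prompt] = response

    def predict(self, prompt):
        if prompt in self.memory:
            return self.memory[prompt]
        # Fall back to the most common observed label for unseen prompts.
        labels = list(self.memory.values())
        return max(set(labels), key=labels.count) if labels else ""

# The attacker never sees the teacher's internals, only its answers.
probes = ["good product", "bad service", "good support", "slow delivery"]
student = Student()
student.fit(collect_pairs(probes))
print(student.predict("good product"))  # → positive
```

The point of the sketch is the asymmetry the article highlights: because the teacher answers any query, enough queries let an outsider approximate its behavior without ever accessing its weights or training data.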
The company believes that most of the activity is commercially motivated. Rather than individual hackers, Google suspects that private companies and researchers are behind many of the attempts, likely seeking a competitive edge in the fast-moving AI industry. While Google said the activity appears to come from different parts of the world, it declined to provide specific details about those involved.
John Hultquist, chief analyst at Google’s Threat Intelligence Group, said the attacks on Gemini may signal what lies ahead for other organizations developing artificial intelligence tools. He suggested that large technology firms like Google may be the first to face such threats. Smaller companies with custom-built AI systems could soon encounter similar risks.
Google considers distillation attacks to be a form of intellectual property theft. Major technology companies have invested billions of dollars in developing advanced AI systems such as large language models (LLMs). The internal structures, training techniques, and reasoning capabilities of these systems are treated as highly valuable proprietary assets.
Despite efforts to detect and block suspicious activity, AI chatbots that are accessible online remain inherently vulnerable. Because these systems respond to user queries, they can be tested repeatedly by anyone with internet access. This open design makes it challenging to completely prevent extraction attempts.
The report noted that many of the prompts used in the attacks were specifically designed to uncover how Gemini "reasons," that is, how it processes and evaluates information before generating responses. By studying the outputs from large numbers of prompts, attackers may try to reverse-engineer elements of the model's decision-making process.
The issue is not unique to Google. Other AI developers have also raised concerns about distillation. Last year, OpenAI accused a Chinese rival of attempting similar tactics to improve its own models.
Hultquist warned that as more businesses create customized AI systems trained on sensitive or proprietary data, the risks could increase. For example, if a company trains a language model on decades of confidential business strategies, repeated probing could potentially expose valuable insights.
As competition in artificial intelligence intensifies, protecting these systems from model extraction is becoming a growing priority for technology firms worldwide.