Hackers Can Abuse Copilot and Grok as Invisible AI Malware Channels: Check Point Research

Check Point says AI assistants with web access may be the next stealth weapon for cybercriminals, acting as command-and-control relays.

Cybersecurity researchers have issued a stark new warning: the AI assistants millions are beginning to trust at work could soon be repurposed into covert infrastructure for malware operations.

According to new findings from Check Point, popular tools like Microsoft Copilot and xAI’s Grok can be abused as stealthy command-and-control relays, allowing attackers to hide malicious communications inside legitimate AI browsing traffic. The technique, ominously dubbed “AI as a C2 Proxy”, signals a dangerous evolution in how cybercriminals may operate in the age of enterprise AI.

AI Assistants Could Become the Perfect Malware “Middleman”

Traditionally, malware relies on command-and-control (C2) servers: external systems that send instructions to infected machines and receive stolen data in return. These servers are often the weak link that defenders try to detect and block.

But researchers say AI assistants with web browsing and URL-fetching capabilities could now serve as an invisible middleman, blending attacker communications into trusted corporate AI usage. Malware could instruct Copilot or Grok to retrieve attacker-controlled web pages, interpret the responses as commands, and quietly pass those instructions back into the compromised system.

This creates a hidden tunnel where operators can issue commands and siphon information without relying on suspicious infrastructure that would normally trigger security alarms.

“Living Off Trusted Sites”: Now AI Joins the List

Researchers compare this tactic to the broader trend of “living off trusted sites”, where cybercriminals weaponize widely used services rather than building their own malicious platforms.

In the past, attackers have hidden malware delivery or command traffic inside trusted tools like cloud storage platforms and enterprise collaboration apps. AI assistants, deeply integrated into productivity workflows, may now be joining that list, offering criminals a way to hide in plain sight.

How the Attack Actually Works

Importantly, Copilot and Grok do not infect systems on their own. Check Point notes that attackers would first need to compromise a machine through phishing, software exploits, or other malware delivery methods.

Once implanted, malware could then use carefully engineered prompts to force the AI agent to contact attacker infrastructure, retrieve hidden instructions, and relay them back for execution. This transforms the AI assistant into a bidirectional communication proxy, making malicious traffic look like normal AI activity.

From Simple Malware to AI-Driven Implants

The threat goes beyond simply hiding commands. Researchers warn that AI services could also become external “decision engines” for attackers, helping malware dynamically choose its next move during an intrusion.

Instead of relying on static scripts, future implants could use AI to generate reconnaissance workflows, develop evasion strategies, and even determine whether a target is worth deeper exploitation. Check Point suggests this could become a stepping stone toward AI-driven malware operations that automate targeting and operational decisions in real time.

A Pattern of AI Abuse Is Emerging Fast

This disclosure arrives amid a growing pattern of AI abuse in cyberattacks. Just weeks earlier, Palo Alto Networks Unit 42 demonstrated how seemingly harmless web pages could be transformed into phishing sites by using trusted large language model services to generate malicious JavaScript in real time.

That method resembles “Last Mile Reassembly” attacks, where malicious code is smuggled through unmonitored channels and assembled directly in a victim’s browser, effectively bypassing traditional network defenses.

Together, these developments show how attackers may increasingly use AI platforms not just as tools, but as part of the attack infrastructure itself.

What Happens Next?

Experts say the next phase of defense will require tighter guardrails around AI browsing behavior, stronger monitoring of AI-driven web access, and updated threat models that treat AI services as potential attack surfaces.

Because in the next wave of cyberattacks, the command server may not be some shady domain on the dark web. It could be hiding in plain sight, inside the same AI assistant you use every day.


Rizwana Omer

Dreamer by nature, Journalist by trade.
