North Korea and Iran Using Generative AI for Hacking, Microsoft Says

Microsoft reported that US adversaries, chiefly Iran and North Korea and, to a lesser extent, Russia and China, are beginning to use generative AI for offensive cyber operations. Working with its business partner OpenAI, Microsoft said it detected and disrupted these threats as they emerged.

While the techniques observed are still at an early stage and not especially novel, Microsoft said it was important to expose them publicly: US rivals are using large language models to expand their ability to breach networks and conduct influence operations, and that makes them a cybersecurity threat worth tracking.


Cybersecurity firms have long used machine learning for defence, chiefly to detect anomalous behaviour in networks. But criminals and offensive hackers use the technology as well, and the introduction of large language models, led by OpenAI’s ChatGPT, has escalated that technological competition.
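To illustrate the defensive use described above, the sketch below trains an unsupervised anomaly detector on network-flow features. It is a minimal example assuming scikit-learn; the feature set, numeric values, and contamination rate are invented for illustration and are not drawn from Microsoft’s report.

```python
# Minimal sketch of ML-based network anomaly detection (the defensive
# use case described above). All features and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "normal" flows: bytes sent, bytes received, duration in seconds.
normal_flows = rng.normal(loc=[5_000, 20_000, 2.0],
                          scale=[1_000, 4_000, 0.5],
                          size=(1_000, 3))

# Fit an unsupervised detector on traffic assumed to be benign.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# Score unseen flows; predict() returns -1 for anomalies, 1 for inliers.
new_flows = np.array([
    [5_200, 19_500, 2.1],     # resembles ordinary traffic
    [900_000, 1_200, 45.0],   # huge upload on a long-lived connection
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    print(flow.tolist(), "ANOMALOUS" if label == -1 else "normal")
```

In practice a defender would train on real flow records rather than synthetic data, but the principle is the same: model what normal traffic looks like, then flag deviations such as an unusually large outbound transfer.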

Microsoft, which has invested heavily in OpenAI, released the announcement alongside a report warning that generative AI is expected to enhance malicious social engineering, enabling more sophisticated deepfakes and voice cloning. That is a particular threat to democracy in a year when more than 50 countries will hold elections, since it can magnify the reach of disinformation campaigns.

Microsoft provided examples of how US adversaries have utilized generative AI:

  • North Korea’s Kimsuky group has used these models to research foreign think tanks and generate content for spear-phishing campaigns.
  • Iran’s Revolutionary Guard has used large language models to assist with social engineering and to troubleshoot software errors.
  • Russia’s GRU military intelligence unit, Fancy Bear, has used these models to research satellite and radar technologies related to the war in Ukraine.
  • China’s cyber-espionage groups, Aquatic Panda and Maverick Panda, have explored how large language models might augment their technical operations.

OpenAI said that its current GPT-4 model chatbot offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with non-AI-powered tools. Cybersecurity researchers, however, expect that to change.

Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, has stressed the need to address the security implications of artificial intelligence, naming it alongside China as a defining challenge.

Critics have raised concerns about the hasty public release of large language models such as ChatGPT, arguing that security was treated as an afterthought. They contend that more effort should go into building the models securely in the first place, rather than shipping defensive tools to patch vulnerabilities after the fact.

