
Google identifies Gemini use in cyberattacks, phishing, malware development

2026/02/16 18:42

MANILA, Philippines – Google on Friday, February 13, released its quarterly threat intelligence report for Q4 2025, highlighting the fast-growing role that artificial intelligence (AI) has played in cyberattacks.

Google’s AI chatbot Gemini features prominently in the report, which identifies how threat actors have attempted to use the tool across areas such as social engineering, phishing, malware development, and information operations.

“By identifying these early indicators and offensive proofs of concept, GTIG (Google Threat Intelligence Group) aims to arm defenders with the intelligence necessary to anticipate the next phase of AI-enabled threats, proactively thwart malicious activity, and continually strengthen both our classifiers and model,” Google said. 

The company also found attempts to “distill” its Gemini chatbot. “Distillation” in AI is a method to “systematically probe a mature machine learning model to extract information used to train a new model.”

This means that a threat actor can build their own chatbot by distilling Gemini or other similar apps, minus the usual content generation prohibitions. It primarily affects companies building AI models rather than the average user, Google said. 
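As a rough illustration of the query-then-imitate pattern the report describes, here is a toy sketch in Python. The teacher and student below are hypothetical stand-ins (a simple numeric function copied via least-squares fitting), not an LLM; real distillation attacks probe a model’s text outputs at vastly larger scale, but the principle is the same: probe a black box, then train a copy on its responses.

```python
# Hypothetical black-box "teacher" model: the attacker can only query it,
# not inspect its parameters (here, a stand-in numeric function).
def teacher(x: float) -> float:
    return 3.0 * x + 1.0  # internals unknown from the attacker's view

# Step 1: systematically probe the teacher with crafted inputs
# and record its outputs (the "extraction" phase of distillation).
probes = [i / 10.0 for i in range(-50, 51)]
labels = [teacher(x) for x in probes]

# Step 2: train a "student" on the probe/label pairs.
# A least-squares fit of y = w*x + b recovers the teacher's behavior.
n = len(probes)
mean_x = sum(probes) / n
mean_y = sum(labels) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(probes, labels)) / \
    sum((x - mean_x) ** 2 for x in probes)
b = mean_y - w * mean_x

def student(x: float) -> float:
    return w * x + b

# The student now mimics the teacher without ever seeing its parameters.
print(round(w, 3), round(b, 3))
```

Note that the attacker never needs the teacher’s weights or training data, only query access, which is why the report frames distillation as a risk chiefly for companies that expose models via public interfaces.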

An illustration shows how ‘distillation’ allows a threat actor to steal a large language model’s capabilities
AI-enhanced phishing, malware development 

A more direct threat to average users is AI-enhanced phishing and social engineering.

AI smooths out traditional phishing indicators such as poor grammar, awkward syntax, and a lack of cultural context.

“Increasingly, threat actors now leverage LLMs to generate hyper-personalized, culturally nuanced lures that can mirror the professional tone of a target organization or local language,” Google said. 

Google said that attackers are now employing “rapport-building phishing” which is designed to earn trust through “multi-turn, believable conversations” before delivering the final payload. 

An Iranian government-backed actor called APT42 provided Gemini with a biography of a target and asked it to craft a persona that could effectively lure that target.

Meanwhile, a North Korean government-backed actor called UNC2970 used Gemini to “synthesize OSINT (open source intelligence, or information readily available online) and profile high-value targets to support campaign planning and reconnaissance” and to create “tailored, high-fidelity phishing personas and identify potential soft targets for initial compromise.”

Aside from phishing, the company has also observed AI-supported malware coding and tool development.

The People’s Republic of China-based threat actor APT31 employed a “highly structured approach,” prompting Gemini to create an expert cybersecurity persona that could analyze system vulnerabilities and generate targeted testing plans.

Google found that APT31 tested a scenario that had the persona performing cyberattack techniques against specific US-based targets. 

Two other China-based actors, UNC795 and APT41, and the Iran-based APT42 used Gemini in various ways to assist with the creation of malicious code, including troubleshooting and knowledge synthesis, and, more generally, to “accelerate the development of specialized malicious tools.”

In all of these cases, Google subsequently disabled the actors’ assets on Gemini. In one case, that of UNC795, the company found that “Gemini did not comply with the actor’s attempts to create policy-violating capabilities.”

Information operations

The GTIG also said it observed that information operations (IO) actors continued to use Gemini for research, content creation, and localization, among other tasks.

“We have identified Gemini activity that indicates threat actors are soliciting the tool to help create articles, generate assets, and aid them in coding,” Google said, though it has “not identified this generated content in the wild.”

“For observed IO campaigns, we did not see evidence of successful automation or any breakthrough capabilities. These activities are similar to our findings from January 2025 that detailed how bad actors are leveraging Gemini for productivity gains, rather than novel capabilities,” it said. 

The findings will be used to improve Gemini’s ability to identify malicious activity and refuse such requests. “Observations have been used to strengthen both classifiers and the model itself, enabling it to refuse to assist with this type of misuse moving forward,” Google said. – Rappler.com
