The research offers a new window into the scale of potentially illicit use cases for thousands of open-source large language model deployments

Open-source AI models vulnerable to criminal misuse, researchers warn

2026/01/30 15:15

Hackers and other criminals can easily commandeer computers running open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating security risks and vulnerabilities, researchers said on Thursday, January 29.

Hackers could target the computers running the LLMs and direct them to carry out spam operations, phishing content creation, or disinformation campaigns, evading platform security protocols, the researchers said.

The research, carried out jointly by cybersecurity companies SentinelOne and Censys over the course of 293 days and shared exclusively with Reuters, offers a new window into the scale of potentially illicit use cases for thousands of open-source LLM deployments. These include hacking, hate speech and harassment, violent or gore content, personal data theft, scams or fraud, and in some cases child sexual abuse material, the researchers said.

Thousands of open-source LLM variants exist, but a significant portion of those on internet-accessible hosts are variants of models such as Meta’s Llama and Google DeepMind’s Gemma, according to the researchers. And while some open-source models include guardrails, the researchers identified hundreds of instances where those guardrails had been explicitly removed.

AI industry conversations about security controls are “ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. Guerrero-Saade likened the situation to an “iceberg” that is not being properly accounted for across the industry and open-source community.

Study examines system prompts

The research analyzed publicly accessible open-source LLMs deployed through Ollama, a tool that lets people and organizations run their own versions of various large language models.

The researchers were able to see system prompts, which are the instructions that dictate how the model behaves, in roughly a quarter of the LLMs they observed. Of those, they determined that 7.5% could potentially enable harmful activity.

Roughly 30% of the hosts observed by the researchers are operating out of China, and about 20% in the US.

Rachel Adams, the CEO and founder of the Global Center on AI Governance, said in an email that once open models are released, responsibility for what happens next becomes shared across the ecosystem, including the originating labs.

“Labs are not responsible for every downstream misuse (which are hard to anticipate), but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance, particularly given uneven global enforcement capacity,” Adams said.

A spokesperson for Meta declined to respond to questions about developers’ responsibilities for addressing concerns around downstream abuse of open-source models and how concerns might be reported, but noted the company’s Llama Protection tools for Llama developers and its Meta Llama Responsible Use Guide.

Microsoft AI Red Team Lead Ram Shankar Siva Kumar said in an email that Microsoft believes open-source models “play an important role” in a variety of areas, but, “at the same time, we are clear‑eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards.”

Microsoft performs pre-release evaluations, including processes to assess “risks for internet-exposed, self-hosted, and tool-calling scenarios, where misuse can be high,” he said. The company also monitors for emerging threats and misuse patterns. “Ultimately, responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams.”

Ollama did not respond to a request for comment. Alphabet’s Google and Anthropic did not respond to questions. – Rappler.com
