A new exploit in ServiceNow’s Now Assist platform can allow malicious actors to manipulate its AI agents into performing unauthorized actions, as detailed by SaaS security firm AppOmni. Default configurations in the software, which enable agents to discover and collaborate with one another, can be weaponized to launch prompt injection attacks far beyond a single […]A new exploit in ServiceNow’s Now Assist platform can allow malicious actors to manipulate its AI agents into performing unauthorized actions, as detailed by SaaS security firm AppOmni. Default configurations in the software, which enable agents to discover and collaborate with one another, can be weaponized to launch prompt injection attacks far beyond a single […]

ServiceNow Assist AI agents exposed to coordinated attack

A new exploit in ServiceNow’s Now Assist platform can allow malicious actors to manipulate its AI agents into performing unauthorized actions, as detailed by SaaS security firm AppOmni.

Default configurations in the software, which enable agents to discover and collaborate with one another, can be weaponized to launch prompt injection attacks far beyond a single malicious input, says chief of SaaS Security at AppOmni, Aaron Costello.

The flaw allows an adversary to seed a hidden instruction inside data fields that an agent later reads, which may quietly enlist the help of other agents on the same ServiceNow team, setting off a chain reaction that can lead to data theft or privilege escalation. 

Costello explained the scenario as “second-order prompt injection,” where the attack emerges when the AI processes information from another part of the system.

“This discovery is alarming because it isn’t a bug in the AI; it’s expected behavior as defined by certain default configuration options,” he noted on AppOmni’s blog published Wednesday.

ServiceNow Assist AI agents exposed to coordinated attack

Per Costello’s investigations cited in the blog, many organizations deploying Now Assist may be unaware that their agents are grouped into teams and set to discover each other automatically to perform a seemingly “harmless task” that can expand into a coordinated attack. 

“When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems,” he said.

One of Now Assist’s selling points is its ability to coordinate agents without a developer’s input to merge them into a single workflow. This architecture sees several agents with different specialties collaborate if one cannot complete a task on its own. 

For agents to work together behind the scenes, the platform requires three elements. First, the underlying large language model must support agent discovery, a capability already integrated into both the default Now LLM and the Azure OpenAI LLM

Second, the agents must belong to the same team, something that occurs automatically when they are deployed to environments such as the default Virtual Agent experience or the Now Assist Developer panel. Lastly, the agents must be marked as “discoverable,” which also happens automatically when they are published to a channel.

Once these conditions are satisfied, the AiA ReAct Engine routes information and delegates tasks among agents, operating like a manager directing subordinates. Meanwhile, the Orchestrator performs discovery functions and identifies which agent is best suited to take on a task. 

It only searches among discoverable agents within the team, sometimes even more than administrators realize. This interconnected architecture becomes vulnerable when any agent is configured to read data not directly submitted by the user initiating the request. 

“When the agent later processes the data as part of a normal operation, it may unknowingly recruit other agents to perform functions such as copying sensitive data, altering records, or escalating access levels,” Costello surmised.

AI agent attack can escalate privileges to breach accounts

AppOmni found that Now Assist agents inherit permissions and act under the authority of the user who initiated the workflow. A low-level attacker can plant a harmful prompt that gets activated during the workflow of a more privileged employee, getting access without ever breaching their account.

“Because AI agents operate through chains of decisions and collaboration, the injected prompt can reach deeper into corporate systems than administrators expect,” AppOmni’s analysis read.

AppOmni said that attackers can redirect tasks that appear benign to an untrained agent but become harmful once other agents amplify the instruction through their specialized capabilities. 

The company warned that this dynamic creates opportunities for adversaries to exfiltrate data without raising suspicion. “If organizations aren’t closely examining their configurations, they’re likely already at risk,” Costello reiterated.

LLM developer Perplexity, said in an early November blog post that novel attack vectors have broadened the pool of potential exploits. 

“For the first time in decades, we’re seeing new and novel attack vectors that can come from anywhere,” the company wrote.

Software engineer Marti Jorda Roca of NeuralTrust said the public must understand that “there are specific dangers using AI in the security sense.”

Sign up to Bybit and start trading with $30,050 in welcome gifts

Market Opportunity
null Logo
null Price(null)
--
----
USD
null (null) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

Zero Knowledge Proof Stage 2 Coin Burns Signal a Possible 7000x Explosion! ETH Slows Down & Pepe Drops

Zero Knowledge Proof Stage 2 Coin Burns Signal a Possible 7000x Explosion! ETH Slows Down & Pepe Drops

Explore how experts are pointing to a possible 7000x rise for Zero Knowledge Proof (ZKP) while ETH slows and Pepe moves sideways, driven by ongoing coin burns and
Share
CoinLive2026/01/19 07:00
IP Hits $11.75, HYPE Climbs to $55, BlockDAG Surpasses Both with $407M Presale Surge!

IP Hits $11.75, HYPE Climbs to $55, BlockDAG Surpasses Both with $407M Presale Surge!

The post IP Hits $11.75, HYPE Climbs to $55, BlockDAG Surpasses Both with $407M Presale Surge! appeared on BitcoinEthereumNews.com. Crypto News 17 September 2025 | 18:00 Discover why BlockDAG’s upcoming Awakening Testnet launch makes it the best crypto to buy today as Story (IP) price jumps to $11.75 and Hyperliquid hits new highs. Recent crypto market numbers show strength but also some limits. The Story (IP) price jump has been sharp, fueled by big buybacks and speculation, yet critics point out that revenue still lags far behind its valuation. The Hyperliquid (HYPE) price looks solid around the mid-$50s after a new all-time high, but questions remain about sustainability once the hype around USDH proposals cools down. So the obvious question is: why chase coins that are either stretched thin or at risk of retracing when you could back a network that’s already proving itself on the ground? That’s where BlockDAG comes in. While other chains are stuck dealing with validator congestion or outages, BlockDAG’s upcoming Awakening Testnet will be stress-testing its EVM-compatible smart chain with real miners before listing. For anyone looking for the best crypto coin to buy, the choice between waiting on fixes or joining live progress feels like an easy one. BlockDAG: Smart Chain Running Before Launch Ethereum continues to wrestle with gas congestion, and Solana is still known for network freezes, yet BlockDAG is already showing a different picture. Its upcoming Awakening Testnet, set to launch on September 25, isn’t just a demo; it’s a live rollout where the chain’s base protocols are being stress-tested with miners connected globally. EVM compatibility is active, account abstraction is built in, and tools like updated vesting contracts and Stratum integration are already functional. Instead of waiting for fixes like other networks, BlockDAG is proving its infrastructure in real time. What makes this even more important is that the technology is operational before the coin even hits exchanges. That…
Share
BitcoinEthereumNews2025/09/18 00:32
The Alarming 80% Failure Rate And The Critical Path To Survival

The Alarming 80% Failure Rate And The Critical Path To Survival

The post The Alarming 80% Failure Rate And The Critical Path To Survival appeared on BitcoinEthereumNews.com. Crypto Hack Recovery: The Alarming 80% Failure Rate
Share
BitcoinEthereumNews2026/01/19 07:08