Security researchers have uncovered a serious risk in ServiceNow’s Now Assist platform. Attackers can exploit default settings and use second order prompt injection to make AI agents work against each other. This weakness allows unauthorized actions such as data theft, record modification, and privilege escalation.
How the Threat Works
According to AppOmni, the issue arises from Now Assist’s built in agent discovery and collaboration features. These capabilities are designed to improve workflow automation, but they can also be misused.
A simple agent can read crafted prompts hidden inside accessible content. This agent can then recruit a stronger agent to perform harmful tasks. These tasks might include reading confidential records, copying sensitive information, or sending unauthorized emails. Importantly, this can happen even when prompt injection protections are active.
AppOmni’s chief of SaaS Security Research, Aaron Costello, stated that this behavior is not a bug. Instead, it results from certain default configuration choices within the platform. Since these settings are easy to overlook, an organization might unknowingly expose itself to internal system risks.
Why the Attack Is Possible
ServiceNow’s architecture allows cross agent communication because of the following default conditions:
- The large language model supports agent discovery. Both Azure OpenAI LLM and the default Now LLM include this support.
- All Now Assist agents are grouped into a single team which lets them call each other.
- Agents are marked as discoverable when they are published.
image import
These features are helpful for internal coordination, but they also create opportunities for prompt injection. A seemingly harmless agent, when exposed to maliciously embedded text, can trigger another agent with more powerful privileges.
One major concern is that the agent runs with the privileges of the user who starts the interaction. This means the attacker does not need the same level of access as the person who eventually triggers the harmful action.
Real Security Impact
Second order prompt injection is dangerous because it hides behind normal operations. A victim organization might not notice the attack until after sensitive data has been copied or accounts have been modified. The activity happens quietly and is executed by legitimate AI agents within the system.
After AppOmni disclosed the issue, ServiceNow stated that the behavior was expected. However, the company updated its documentation to make the risk and related settings clearer.
How to Protect AI Agents
Security experts recommend implementing several defensive measures:
- Enable supervised execution mode for privileged agents.
- Disable the autonomous override property (sn_aia.enable_usecase_tool_execution_mode_override).
- Separate agent responsibilities by placing them into different teams.
- Continuously monitor AI agent activity for unusual or suspicious actions.
image import
Organizations using Now Assist must review their configurations carefully. Out of the box settings can expose them to hidden risks unless they actively strengthen their AI security posture.


