AI Security

Researchers Disclose Reprompt Attack Enabling One-Click Data Exfiltration From Microsoft Copilot

Cybersecurity researchers have uncovered a new attack technique named Reprompt that allows threat actors to silently extract sensitive information from AI chatbots such as Microsoft Copilot with just a single click. The attack operates without requiring plugins, user interaction, or visible prompts, creating a serious blind spot for enterprise security controls. According to Varonis security researcher Dolev […]

Researchers Disclose Reprompt Attack Enabling One-Click Data Exfiltration From Microsoft Copilot Read More »

ServiceNow Fixes Critical AI Platform Flaw Enabling Unauthenticated User Impersonation

ServiceNow has disclosed and patched a critical security vulnerability in its artificial intelligence platform that could have allowed unauthenticated attackers to impersonate legitimate users and perform actions on their behalf. The flaw, tracked as CVE-2025-12420 and rated 9.3 on the CVSS scale, affects components within the ServiceNow AI ecosystem. The vulnerability has been named BodySnatcher

ServiceNow Fixes Critical AI Platform Flaw Enabling Unauthenticated User Impersonation Read More »

Featured Chrome Extension Caught Intercepting Millions of Users AI Chats

A browser extension carrying a “Featured” badge on Google Chrome has been discovered quietly collecting artificial intelligence chat conversations from millions of users. The extension, installed by more than six million people, was observed intercepting prompts and responses from popular AI platforms without clear user awareness. Security researchers revealed that the extension, Urban VPN Proxy,

Featured Chrome Extension Caught Intercepting Millions of Users AI Chats Read More »

Google Introduces Layered Chrome Defenses to Stop Indirect Prompt Injection Threats

Google has expanded the security framework of Chrome after adding agentic AI features to the browser. The company unveiled a new series of defenses designed to reduce the risk of indirect prompt injections that may occur when an AI agent interacts with untrusted web content. The most notable addition is the User Alignment Critic, a

Google Introduces Layered Chrome Defenses to Stop Indirect Prompt Injection Threats Read More »

Researchers Find More Than 30 Flaws in AI Coding Tools Allowing Data Theft and RCE Attacks

Security analysts have uncovered more than 30 vulnerabilities across several artificial intelligence powered Integrated Development Environments that blend prompt injection weaknesses with trusted development features. These issues enable information theft and remote code execution. The combined flaws have been named IDEsaster by security researcher Ari Marzouk, also known as MaccariTA. The findings affect a wide

Researchers Find More Than 30 Flaws in AI Coding Tools Allowing Data Theft and RCE Attacks Read More »

Picklescan Bugs Let Malicious PyTorch Models Bypass Scans and Run Unauthorized Code

A set of three serious vulnerabilities has been uncovered in Picklescan, an open source security tool created by Matthieu Maitre, designed to inspect Python pickle files and detect dangerous behavior before any code is executed. These flaws make it possible for attackers to hide harmful commands inside PyTorch models and completely bypass the scanner, posing

Picklescan Bugs Let Malicious PyTorch Models Bypass Scans and Run Unauthorized Code Read More »

Malicious npm Package Uses Hidden Prompt and Script to Bypass AI Security Tools

Cybersecurity researchers have uncovered a malicious npm package designed to manipulate AI-driven security scanners and steal sensitive data. The package, eslint-plugin-unicorn-ts-2, pretends to be a TypeScript extension of the popular ESLint plugin. It was published in February 2024 by a user named “hamburgerisland” and has been downloaded nearly 19,000 times. The package is still available.

Malicious npm Package Uses Hidden Prompt and Script to Bypass AI Security Tools Read More »

Chinese DeepSeek R1 AI Produces Insecure Code When Prompts Reference Tibet or Uyghurs

A new investigation by CrowdStrike has uncovered that DeepSeek R1, a reasoning model developed by the Chinese company DeepSeek, generates significantly more insecure code when prompts include topics considered politically sensitive by China. The researchers noted that the model introduces severe security flaws up to fifty percent more frequently whenever such trigger terms appear. Sensitive

Chinese DeepSeek R1 AI Produces Insecure Code When Prompts Reference Tibet or Uyghurs Read More »

ServiceNow AI Agents Can Be Manipulated to Work Against Each Other Through Second Order Prompts

Security researchers have uncovered a serious risk in ServiceNow’s Now Assist platform. Attackers can exploit default settings and use second order prompt injection to make AI agents work against each other. This weakness allows unauthorized actions such as data theft, record modification, and privilege escalation. How the Threat Works According to AppOmni, the issue arises

ServiceNow AI Agents Can Be Manipulated to Work Against Each Other Through Second Order Prompts Read More »

Researchers Discover Critical AI Bugs Affecting Meta, Nvidia, and Microsoft Inference Frameworks

Cybersecurity researchers have identified critical remote code execution (RCE) vulnerabilities impacting major AI inference frameworks, including those maintained by Meta, Nvidia, Microsoft, and open-source projects like vLLM and SGLang. These flaws, collectively termed the ShadowMQ pattern, stem from unsafe deserialization of Python objects over ZeroMQ (ZMQ) sockets. Root Cause: Unsafe Deserialization According to Avi Lumelsky

Researchers Discover Critical AI Bugs Affecting Meta, Nvidia, and Microsoft Inference Frameworks Read More »