AI Security

Microsoft Identifies “Summarize with AI” Prompts Manipulating Chatbot Recommendations

Microsoft has identified a new tactic used by legitimate businesses to influence artificial intelligence chatbot responses through so-called “Summarize with AI” buttons embedded on websites. The technique mirrors traditional search engine optimization abuse but targets AI systems instead of search rankings. The research, conducted by the Microsoft Defender Security Research Team, describes the method as AI Recommendation […]

Microsoft Identifies “Summarize with AI” Prompts Manipulating Chatbot Recommendations Read More »

Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens

Cybersecurity researchers have identified a case in which an information-stealing malware successfully extracted sensitive configuration files linked to OpenClaw, the open-source AI agent platform previously known as Clawdbot and Moltbot. According to researchers at Hudson Rock, the incident represents a turning point in infostealer evolution. Instead of focusing solely on browser credentials, threat actors are now harvesting

Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens Read More »

Google Reports State-Backed Hackers Leveraging Gemini AI for Reconnaissance and Attack Support

Google has reported that the North Korea-linked threat actor UNC2970 is using its generative AI model Gemini for reconnaissance, highlighting a growing trend of hacking groups weaponizing AI to accelerate cyber attack operations. These capabilities include information gathering, model extraction, and enhancing attack efficiency. According to the Google Threat Intelligence Group (GTIG), UNC2970 leveraged Gemini

Google Reports State-Backed Hackers Leveraging Gemini AI for Reconnaissance and Attack Support Read More »

North Korea-Linked UNC1069 Uses AI Lures to Target Cryptocurrency Organizations

The North Korea-associated threat group UNC1069 has intensified its cyber operations against the cryptocurrency sector, leveraging advanced social engineering and artificial intelligence techniques to compromise Windows and macOS systems. The campaign is primarily designed to extract sensitive credentials and enable large-scale financial theft. According to findings from Google Mandiant researchers Ross Inman and Adrian Hernandez, the operation

North Korea-Linked UNC1069 Uses AI Lures to Target Cryptocurrency Organizations Read More »

Microsoft Builds a Scanner to Identify Backdoors in Open-Weight Large Language Models

Microsoft has introduced a lightweight security scanner designed to detect hidden backdoors in open-weight large language models (LLMs), aiming to strengthen trust and safety across artificial intelligence systems. According to Microsoft’s AI Security team, the scanner relies on three observable behavioral signals that can reliably indicate whether a model has been compromised, while keeping false

Microsoft Builds a Scanner to Identify Backdoors in Open-Weight Large Language Models Read More »

Researchers Uncover Chrome Extensions Exploiting Affiliate Links and Stealing ChatGPT Access

Cybersecurity researchers have discovered a cluster of malicious Google Chrome extensions designed to hijack affiliate links, exfiltrate user data, and steal OpenAI ChatGPT authentication tokens. These extensions exploit the trust users place in popular e-commerce and AI-related browser tools to gain persistent access to sensitive information. Amazon Ads Blocker and Affiliate Hijacking One notable extension, Amazon

Researchers Uncover Chrome Extensions Exploiting Affiliate Links and Stealing ChatGPT Access Read More »

Researchers Discover 175,000 Publicly Exposed Ollama AI Servers Across 130 Countries

Cybersecurity researchers have uncovered a large scale exposure of artificial intelligence infrastructure after identifying more than 175,000 publicly accessible Ollama AI servers operating across 130 countries. The findings come from a joint investigation conducted by SentinelOne SentinelLABS and Censys, which highlights the rapid growth of unmanaged AI compute environments on the public internet. According to

Researchers Discover 175,000 Publicly Exposed Ollama AI Servers Across 130 Countries Read More »

VoidLink Linux Malware Framework Created with AI Assistance Hits 88,000 Lines of Code

Cybersecurity researchers have uncovered new details about a highly advanced Linux malware framework known as VoidLink, revealing that the project was likely developed by a single threat actor using artificial intelligence assistance. The findings suggest a major shift in how sophisticated malware can now be created with limited human resources. According to a detailed analysis released

VoidLink Linux Malware Framework Created with AI Assistance Hits 88,000 Lines of Code Read More »

Chainlit AI Framework Vulnerabilities Enable Data Theft via File Read and SSRF Bugs

Security researchers have disclosed high-severity vulnerabilities in the popular open-source AI framework Chainlit that could allow attackers to steal sensitive data and potentially move laterally inside affected environments. The issues were identified by Zafran Security and collectively named ChainLeak. According to the researchers, the flaws can be abused to leak cloud API keys, access sensitive server files, and perform server-side

Chainlit AI Framework Vulnerabilities Enable Data Theft via File Read and SSRF Bugs Read More »

Google Gemini Prompt Injection Flaw Exposes Private Calendar Data Through Malicious Invites

Cybersecurity researchers have uncovered a security vulnerability that abused indirect prompt injection techniques against Google Gemini, allowing attackers to bypass authorization safeguards and misuse Google Calendar as a covert data exfiltration channel. According to Miggo Security’s Head of Research, Liad Eliyahu, the flaw enabled attackers to evade Google Calendar privacy controls by embedding a hidden

Google Gemini Prompt Injection Flaw Exposes Private Calendar Data Through Malicious Invites Read More »