
Hacker Active Well Beyond Context.ai Compromise, Says Vercel CEO

Vercel, Vercel Breach, APIs, npm Packages

Vercel CEO Guillermo Rauch said in an update today that, after scanning petabytes of logs from the company's networks and APIs, his security team concluded that the threat actor behind the Vercel breach had been active well beyond the Context.ai compromise. Rauch said the "threat intel points to the distribution of malware to computers in search of valuable tokens like keys to Vercel accounts and other providers. Once the attacker gets ahold of those keys, our logs show a repeated pattern: rapid and comprehensive API usage, with a focus on enumeration of non-sensitive environment variables." Researchers at Hudson Rock had earlier confirmed that the attack actually began in February, when a Context.ai employee's computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments. The latest findings suggest that the pool of victims the threat actor targeted could be far wider than currently known, and that what has surfaced so far may be only the tip of the iceberg.
Also read: Vercel Incident Linked to AI Tool Hack, Internal Access Gained

Vercel Finds Customers Breached in Separate Malware, Social Engineering Attacks

In an official update, the company also stated that it initially identified a limited subset of customers whose non-sensitive environment variables stored on Vercel were compromised. However, a deeper assessment of its network, as well as of environment variable read events in the company's logs, uncovered two additional findings.

"First, we have identified a small number of additional accounts that were compromised as part of this incident," the company noted.

But the main concern is the next finding: "Second, we have uncovered a small number of customer accounts with evidence of prior compromise that is independent of and predates this incident, potentially as a result of social engineering, malware, or other methods." 

The company did not disclose who the attackers were, their motive, or the impact on customers, and is yet to respond to these queries from The Cyber Express. It only stated: "In both cases, we have notified the affected customers."

Meanwhile, Rauch said, Vercel had notified other suspected victims and encouraged them to rotate credentials and adopt best practices.

No Compromise of npm Packages

Reports of compromised npm packages have surfaced frequently in recent months. To address that concern, Vercel's security team, in collaboration with GitHub, Microsoft, npm, and Socket, confirmed that no npm packages published by Vercel had been compromised. "There is no evidence of tampering, and we believe the supply chain remains safe," the company said.

Vercel Confirms Major Security Incident as Hacker Claims $2M Ransom Demand

Vercel confirms a security incident after a threat actor claims internal access and demands a $2M ransom, raising concerns about API keys, CI/CD pipelines, and cloud security.

The post Vercel Confirms Major Security Incident as Hacker Claims $2M Ransom Demand appeared first on TechRepublic.

Third-party AI hack triggers Vercel breach, internal environments accessed

Vercel suffered a breach after a hacked Context.ai tool exposed an employee account, letting attackers access limited internal systems and non-sensitive data.

Vercel reported a security breach caused by the compromise of a third-party AI tool, Context.ai, used by one of its employees. The attacker took over the employee’s Google Workspace account and used it to access parts of Vercel’s internal systems. This included some environments and non-sensitive variables, exposing a limited amount of customer-related data.

“The incident originated with a compromise of Context.ai, a third-party AI tool used by a Vercel employee. The attacker used that access to take over the employee’s Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as ‘sensitive,’” reads the notice of security incident published by the company. “Environment variables marked as ‘sensitive’ in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed.”

Vercel is a cloud platform that helps developers build, deploy, and run modern web applications, especially front-end sites. It’s best known for supporting frameworks like Next.js, allowing teams to quickly publish websites and apps without managing servers directly. Vercel handles things like hosting, scaling, performance optimization, and global content delivery automatically.

According to the notice, the attacker showed a high level of skill, moving quickly and demonstrating deep knowledge of Vercel's systems. The company is working with cybersecurity firm Mandiant and other security partners to investigate the incident and has notified law enforcement. Vercel is also coordinating with Context.ai to determine the full extent of the breach.

Vercel urges users to check account activity logs for suspicious actions, rotate any exposed secrets like API keys or tokens, and review recent deployments. It also recommends enabling stronger protections, such as marking sensitive environment variables and updating security tokens.

The investigation found that the breach started from a compromised third-party AI tool linked to Google Workspace, potentially impacting many organizations beyond Vercel.

The notice urges Google Workspace admins and users to check for the following suspicious OAuth app ID linked to the breach and remove it if found:

110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
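For admins auditing at scale, the check above can be automated. The sketch below flags the OAuth client ID named in the notice among a user's authorized third-party tokens. It is a minimal illustration: in a real Google Workspace audit the token records would be fetched via the Admin SDK Directory API (`tokens.list`), which requires admin credentials; here we operate on already-fetched records, using that API's `clientId` and `displayText` field names, and the sample records are hypothetical.

```python
# Flag the OAuth client ID from Vercel's notice among a user's authorized
# third-party tokens. Record shape follows the Admin SDK Directory API's
# tokens.list response ("clientId", "displayText" fields); fetching real
# records requires googleapiclient and Workspace admin credentials.

SUSPICIOUS_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

def find_suspicious_tokens(tokens: list[dict]) -> list[dict]:
    """Return token records whose clientId matches the flagged OAuth app."""
    return [t for t in tokens if t.get("clientId") == SUSPICIOUS_CLIENT_ID]

# Illustrative records (displayText values are made up for the example):
sample = [
    {"clientId": "123456789-abc.apps.googleusercontent.com",
     "displayText": "Calendar Sync"},
    {"clientId": SUSPICIOUS_CLIENT_ID, "displayText": "Context.ai"},
]
flagged = find_suspicious_tokens(sample)
for t in flagged:
    print(f"REVOKE: {t['displayText']} ({t['clientId']})")
```

Any match should be revoked immediately and the affected account's credentials rotated, per Vercel's guidance.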


Pierluigi Paganini

(SecurityAffairs – hacking, security breach)

Criminals are using AI website builders to clone major brands

AI tool Vercel was abused by cybercriminals to create a Malwarebytes lookalike website.

Cybercriminals no longer need design or coding skills to create a convincing fake brand site. All they need is a domain name and an AI website builder. In minutes, they can clone a site’s look and feel, plug in payment or credential-stealing flows, and start luring victims through search, social media, and spam.

One side effect of being an established and trusted brand is that you attract copycats who want a slice of that trust without doing any of the work. Cybercriminals have always known it is much easier to trick users by impersonating something they already recognize than by inventing something new—and developments in AI have made it trivial for scammers to create convincing fake sites.

Registering a plausible-looking domain is cheap and fast, especially through registrars and resellers that do little or no upfront vetting. Once attackers have a name that looks close enough to the real thing, they can use AI-powered tools to copy layouts, colors, and branding elements, and generate product pages, sign-up flows, and FAQs that look “on brand.”

A flood of fake “official” sites

Data from recent holiday seasons shows just how routine large-scale domain abuse has become.

Over a three‑month period leading into the 2025 shopping season, researchers observed more than 18,000 holiday‑themed domains with lures like “Christmas,” “Black Friday,” and “Flash Sale,” with at least 750 confirmed as malicious and many more still under investigation. In the same window, about 19,000 additional domains were registered explicitly to impersonate major retail brands, nearly 3,000 of which were already hosting phishing pages or fraudulent storefronts.

These sites are used for everything from credential harvesting and payment fraud to malware delivery disguised as “order trackers” or “security updates.”

Attackers then boost visibility using SEO poisoning, ad abuse, and comment spam, nudging their lookalike sites into search results and promoting them in social feeds right next to the legitimate ones. From a user’s perspective, especially on mobile without the hover function, that fake site can be only a typo or a tap away.

When the impersonation hits home

A recent example shows how low the barrier to entry has become.

We were alerted to a site at installmalwarebytes[.]org that masqueraded from logo to layout as a genuine Malwarebytes site.

Close inspection revealed that the HTML carried a meta tag value pointing to v0 by Vercel, an AI-assisted app and website builder.

Built by v0

The tool lets users paste an existing URL into a prompt to automatically recreate its layout, styling, and structure—producing a near‑perfect clone of a site in very little time.
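A fingerprint like this can be hunted for programmatically. The sketch below scans a page's `<meta>` tags for builder signatures using only Python's standard library. The exact attribute name and content a v0-generated site emits are an assumption here (the article only says the HTML "carried a meta tag value pointing to v0 by Vercel"), so the `FINGERPRINTS` substrings should be adjusted to match observed markup.

```python
# Minimal sketch: scan HTML <meta> tags for an AI-builder fingerprint.
# FINGERPRINTS values are assumptions; tune them to the markup you observe.
from html.parser import HTMLParser

FINGERPRINTS = ("v0", "vercel")  # substrings to look for (assumed)

class MetaScanner(HTMLParser):
    """Collect <meta> tags whose content mentions a known builder."""

    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr = dict(attrs)
        content = (attr.get("content") or "").lower()
        if any(f in content for f in FINGERPRINTS):
            self.hits.append(attr)

# Example input mimicking the kind of tag described in the article:
html = '<html><head><meta name="generator" content="v0 by Vercel"></head></html>'
scanner = MetaScanner()
scanner.feed(html)
for hit in scanner.hits:
    print("builder fingerprint:", hit)
```

In practice a defender would feed this the fetched HTML of a suspect domain; a hit is a clue, not proof, since legitimate sites also use these builders.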

The imposter domain's history shows an incremental evolution into abuse.

Registered in 2019, the site did not initially contain any Malwarebytes branding. In 2022, the operator began layering in Malwarebytes branding while publishing Indonesian-language security content. This likely helped with search reputation while normalizing the brand look to visitors. Later, the site went blank, with no public archive records for 2025, only to resurface as a full-on clone backed by AI-assisted tooling.

Traffic did not arrive by accident. Links to the site appeared in comment spam and injected links on unrelated websites, giving users the impression of organic references and driving them toward the fake download pages.

Payment flows were equally opaque. The fake site used PayPal for payments, but the integration hid the merchant’s name and logo from the user-facing confirmation screens, leaving only the buyer’s own details visible. That allowed the criminals to accept money while revealing as little about themselves as possible.

PayPal module

Behind the scenes, historical registration data pointed to an origin in India and to a hosting IP (209.99.40[.]222) associated with domain parking and other dubious uses rather than normal production hosting.

Combined with the AI‑powered cloning and the evasive payment configuration, it painted a picture of low‑effort, high‑confidence fraud.

AI website builders as force multipliers

The installmalwarebytes[.]org case is not an isolated misuse of AI‑assisted builders. It fits into a broader pattern of attackers using generative tools to create and host phishing sites at scale.

Threat intelligence teams have documented abuse of Vercel’s v0 platform to generate fully functional phishing pages that impersonate sign‑in portals for a variety of brands, including identity providers and cloud services, all from simple text prompts. Once the AI produces a clone, criminals can tweak a few links to point to their own credential‑stealing backends and go live in minutes.

Research into AI’s role in modern phishing shows that attackers are leaning heavily on website generators, writing assistants, and chatbots to streamline the entire kill chain—from crafting persuasive copy in multiple languages to spinning up responsive pages that render cleanly across devices. One analysis of AI-assisted phishing campaigns found that roughly 40% of observed abuse involved website generation services, 30% involved AI writing tools, and about 11% leveraged chatbots, often in combination. This stack lets even low-skilled actors produce professional-looking scams that used to require specialized skills or paid kits.

Growth first, guardrails later

The core problem is not that AI can build websites. It’s that the incentives around AI platform development are skewed. Vendors are under intense pressure to ship new capabilities, grow user bases, and capture market share, and that pressure often runs ahead of serious investment in abuse prevention.

As Malwarebytes General Manager Mark Beare put it:

“AI-powered website builders like Lovable and Vercel have dramatically lowered the barrier for launching polished sites in minutes. While these platforms include baseline security controls, their core focus is speed, ease of use, and growth—not preventing brand impersonation at scale. That imbalance creates an opportunity for bad actors to move faster than defenses, spinning up convincing fake brands before victims or companies can react.”

Site generators allow cloned branding of well‑known companies with no verification, publishing flows skip identity checks, and moderation either fails quietly or only reacts after an abuse report. Some builders let anyone spin up and publish a site without even confirming an email address, making it easy to burn through accounts as soon as one is flagged or taken down.

To be fair, there are signs that some providers are starting to respond by blocking specific phishing campaigns after disclosure or by adding limited brand-protection controls. But these are often reactive fixes applied after the damage is done.

Meanwhile, attackers can move to open‑source clones or lightly modified forks of the same tools hosted elsewhere, where there may be no meaningful content moderation at all.

In practice, the net effect is that AI companies benefit from the growth and experimentation that come with permissive tooling, while the consequences are left to victims and defenders.

We have blocked the domain in our web protection module and requested a domain and vendor takedown.

How to stay safe

End users cannot fix misaligned AI incentives, but they can make life harder for brand impersonators. Even when a cloned website looks convincing, there are red flags to watch for:

  • Before completing any payment, always review the “Pay to” details or transaction summary. If no merchant is named, back out and treat the site as suspicious.
  • Use an up-to-date, real-time anti-malware solution with a web protection module.
  • Do not follow links posted in comments, on social media, or unsolicited emails to buy a product. Always follow a verified and trusted method to reach the vendor.

If you come across a fake Malwarebytes website, please let us know.

