Visualização de leitura

Simplifying AWS defense with Microsoft Sentinel UEBA

With the expansion of Microsoft Sentinel UEBA (User and Entity Behavior Analytics) into new data sources, spanning multi-cloud (AWS, GCP), identity providers (Okta), and authentication logs (Microsoft Defender for Endpoint DeviceLogon, Microsoft Entra ID Managed Identity, Service Principal sign-ins), defenders can now detect behavioral anomalies across hybrid environments from a single place.

We’ve also expanded AWS coverage with more anomalies, enrichments and insights, so CloudTrail events now arrive with more built-in context at ingestion time. This lets defenders triage suspicious activity faster without building and maintaining large baselines in KQL.

Many defenders analyze CloudTrail activity using thresholds or historical patterns to identify unusual behavior. In dynamic cloud environments, interpreting this activity can be challenging without additional behavioral context.

Microsoft Sentinel UEBA shifts the burden away from query authors by enriching raw AWS logs with simple binary insights (true/false) derived from user, activity, and device behavior patterns – such as first-time geography, uncommon ISP, unusual action, and abnormal operation volume. Detection authors can stack these binary signals or combine them with built-in UEBA anomalies to surface attacker behavior that would otherwise blend into routine CloudTrail activity.

In this post, you’ll learn how binary feature stacking works, how UEBA baselines AWS identities (human and non-human), and how to use UEBA enrichments and built-in anomalies to strengthen AWS detections and triage.

Defenders investigating AWS activity often rely on raw CloudTrail logs, static thresholds, or manually-engineered baselines to differentiate between normal operational patterns and adversary behavior. While CloudTrail captures rich activity data, defenders often need behavioral context – such as historical usage patterns, geography, and device signals – to distinguish routine operations from suspicious behavior. This is where Microsoft Sentinel UEBA adds value.

Microsoft Sentinel UEBA enriches raw AWS logs with simple, binary behavioral insights (true/false) derived from baseline user, peer, and device behavior patterns – such as first-time geography, uncommon ISP, unusual action, and abnormal operation volume. These clear binary signals help establish behavioral context and inform investigation and detection decisions. This post refers to this approach as binary feature stacking.

Under the hood: The tables

Microsoft Sentinel UEBA surfaces AWS behavioral context in two tables: BehaviorAnalytics and Anomalies.

BehaviorAnalytics table

The BehaviorAnalytics table is the primary investigation surface for UEBA-enriched AWS activity. EventSource field identifies the log source (for example, AWSCloudTrail), ActivityType maps to service level AWS EventSource (for example, S3, KMS, or IAM), and ActionType captures the AWS API name (for example, ConsoleLogin or CreateUser). Use these fields to filter and scope specific categories of AWS activity.

Figure 1: BehaviorAnalytics table schema.

UEBA provides enrichments in three dynamic fields (UserInsights, DeviceInsights and ActivityInsights)– most importantly ActivityInsights, a JSON property bag that contains the binary behavioral features used for baseline-driven profiling. These enrichments are calculated at the user and tenant (AWS AccountId) level, as well as the activity level (for example, uncommon high volume of operations). Each enrichment uses a different baseline window, ranging from 7 days to 180 days.

This data is always available for hunting, even if no alert is fired. Each record includes key fields from the original CloudTrail event alongside enrichments derived from user, activity, and device behavior. The full list of available enrichments and their baseline lookback periods is documented in Entity enrichments – dynamic fields.

Anomalies table

The Anomalies table contains outputs from Microsoft’s pre-trained anomaly detection machine learning models. Six built-in anomalies are currently available for AWS. For more information about these anomalies, see: Anomalies detected by the Microsoft Sentinel machine learning engine.

Figure 2: Anomalies table schema.

Each anomaly record includes MITRE ATT&CK mappings, behavioral enrichments, an AnomalyScore, and AnomalyReasons, which explains why an event was flagged as an anomaly.

Here’s an example of an AWS IAM Privilege Modification anomaly. In this case, the CreateLoginProfile API was invoked from a previously unseen user agent in a new country. The annotated screenshot illustrates how the anomaly is displayed and how the AnomalyReasons dynamic field provides binary insights that help investigation. In addition to FirstTimeUserPerformedAction and FirstTimeUserConnectedFromCountry, the BrowserUncommonlyUsedInTenant feature indicates a new user agent (Apache-HttpClient/UNAVAILABLE (Java/21.0.9)) not commonly seen in the tenant.

Figure 3: AWS IAM Privilege Modification anomaly.

The Defender portal also surfaces UEBA anomalies on user entity pages and incident graphs.

This example highlights the Top UEBA anomalies section in an incident graph, where an Anomalous Logon event is displayed with an anomaly score of 0.8 for the account entity cloudinfra-admin.

Figure 4: Top UEBA anomalies on an incident graph in the Defender portal,

You can run built-in queries directly from incident graphs by selecting Go Hunt  All User anomalies queries for immediate context-driven hunting based on UEBA outcomes. For more details, see UEBA integration with Microsoft Sentinel workflows.

Figure 5: Hunt for all user anomalies from an incident graph in the Defender portal.

Traditional vs. new approach

Let’s look at a classic AWS scenario: Unusual anomalous AWS logons. You want to detect a user’s sign in from an unknown location compared to its historical sign-in patterns.

The hard way: Raw log analytics

CloudTrail telemetry can be analyzed using historical baselines and enrichment logic to understand behavioral patterns such as first‑time sign‑ins from new locations. UEBA complements this approach by providing pre‑computed behavioral indicators that can accelerate investigation workflows.

Here is the KQL example on raw log showing necessary operations to add behavioral context.

KQL Code Snippet:

// The "Hard Way" - baseline-heavy console sign-in analytics
let baselineStart = ago(14d);
let baselineEnd   = ago(1h);
let userBaseline =    AWSCloudTrail
    | where TimeGenerated between (baselineStart .. baselineEnd)
    | where EventName == "ConsoleLogin" and isempty(ErrorCode)
    | where isnotempty(SourceIpAddress)
    | extend geo = geo_info_from_ip_address(SourceIpAddress)
    | extend Country = tostring(geo["country"])
    | where isnotempty(Country)
    | summarize HistoricalCountries = make_set(Country) by UserIdentityPrincipalid;
AWSCloudTrail
| where TimeGenerated > ago(1h)
| where EventName == "ConsoleLogin" and isempty(ErrorCode)
| where isnotempty(SourceIpAddress)
| extend geo = geo_info_from_ip_address(SourceIpAddress)
| extend Country = tostring(geo["country"])
| where isnotempty(Country)
| join kind=leftouter (userBaseline) on UserIdentityPrincipalid
| extend FirstTimeUserConnectedFromCountry =    iif(isempty(HistoricalCountries) or not(set_has_element(HistoricalCountries, Country)), true, false)
| where FirstTimeUserConnectedFromCountry == true

The problem: This query is computationally expensive, hard to read, and requires you to manually enrich IP addresses with location data. Accurately mapping IP addresses to ASN and ISP often requires additional enrichments and up to date lookup databases. Because different user behaviors have different levels of variability, static thresholds and manually engineered baselines can still produce false positives or low-value alerts, especially in dynamic environments.

The smart way: Binary feature stacking

With Microsoft Sentinel UEBA, the profiling engine has already done the heavy lifting. It learns the user’s sign-in patterns, peer commonality, and tenant-wide behavioral patterns, and outputs the result to the BehaviorAnalytics table as a set of pre-calculated binary features (true/false flags).

KQL Code Snippet:

// The "Smart Way" - leveraging binary features
BehaviorAnalytics
| where ActionType == "ConsoleLogin" and ActivityType == "signin.amazonaws.com" 
// The Binary Features
| where ActivityInsights.FirstTimeUserConnectedFromCountry == True
and ActivityInsights.CountryUncommonlyConnectedFromInTenant == True and ActivityInsights.FirstTimeConnectionViaISPInTenant == True

The advantages:

  1. Readability: It takes just three lines of code to express a complex idea with UEBA features.
  2. Context: You’re not just looking at uncommon sign ins. You’re stacking user-level and tenant-level indicators – such as location data (FirstTimeUserConnectedFromCountry) and uncommon ISP usage (FirstTimeConnectionViaISPInTenant) – to get a more accurate representation of suspicious behaviors relative to historical patterns.
  3. Stability: You don’t manage the baseline, lookback, and thresholds in your query. The Microsoft Sentinel UEBA ML engine maintains these automatically with baseline windows that vary by enrichment (ranging from 7 to 180 days).

By relying on these binary features, detection authors stop writing code to discover behavioral signals and instead use UEBA features to express detection intent and how to respond based on severity.

Now let’s look at how these same signals appear during investigation and triage.

Real-world attack scenarios: Microsoft Sentinel UEBA in action

The table below summarizes four attack scenarios using a consistent set of fields:

  • Scenario: The threat pattern and where it fits in the kill chain.
  • The attack: What the adversary is attempting to do (high-level behavior).
  • Common log view: How the activity appears in raw CloudTrail when reviewed event-by-event.
  • UEBA signals (binary features): BehaviorAnalytics binary features that provide behavioral context, along with the InvestigationPriority score (0-10) used to tune the severity of deviations.
  • Built-in anomaly surfaced: Names of built-in Microsoft Sentinel UEBA anomalies you can pivot to during triage, including AnomalyScore (0–1) and AnomalyReasons in the Anomalies table.

Together, these scenarios illustrate how raw CloudTrail events provide foundational visibility into AWS activity. Combining this telemetry with behavioral enrichment from Sentinel UEBA can improve the interpretability of events during investigation. The same building blocks—successful sign-ins, IAM changes, Secrets or KMS access, and S3 reads—can represent either normal administration or active intrusion.

By combining CloudTrail activity with Sentinel UEBA enrichments in BehaviorAnalytics, defenders can stack multiple high-value signals to hunt for activity patterns that match attacker tradecraft.

This context accelerates investigations by making it easier to explain why an action is suspicious and to pivot directly to correlated entries in the Anomalies table, including risk scores and reasons. For detection engineers, UEBA signals also act as stable building blocks—simplifying KQL, reducing alert noise, and hardening detections over time.

Note: The UEBA signals column lists examples of relevant binary features, not the exact logic that triggers an anomaly. Anomalies are generated by ML models and don’t map one-to-one to individual features. Use AnomalyReasons in the Anomalies table to understand why a specific anomaly was flagged.

Attack scenarios

ScenarioThe attackCommon log viewUEBA signals (binary features)Built-in anomaly surfaced
Initial Access (Federated / SAML Session Hijack)An attacker gains access to a federated identity session – for example, through a compromised identity provider (IdP) – and uses a SAML or EXTERNAL_IDP flow to perform actions the user rarely performs, from a new location and at an unusual pace.CloudTrail shows federated authentication activity (UserAuthentication / EXTERNAL_IDP, for example, Okta) followed by successful API calls under an assumed role session; each event is valid in isolation.FirstTimeUserConnectedFromCountry = True

ISPUncommonlyUsedInTenant = True

ActionUncommonlyPerformedByUser = True

ActionUncommonlyPerformedInTenant = True
UEBA Anomalous Federated or SAML Identity Activity in AwsCloudTrail
Initial Access and PersistenceAn attacker compromises a developer’s access keys and logs in (for example, through uncommon user agent) to create a backdoor user.CloudTrail shows a successful ConsoleLogin via SDK or CLI user agent and subsequent IAM action, such as CreateUser, all of which are valid API calls without behavioral context.FirstTimeUserConnectedFromCountry = True

BrowserUncommonlyUsedInTenant = True

ActionUncommonlyPerformedByUser = True (CreateUser)

ActionUncommonlyPerformedInTenant = True
Examples: UEBA Anomalous Logon in AwsCloudTrail; UEBA Anomalous IAM Privilege Modification in AwsCloudTrail
Credential Access & Collection (Secrets / KMS Key Discovery)After establishing a foothold with valid credentials, an attacker queries Secrets Manager and KMS to list keys and retrieve secret values, often starting with discovery (ListSecrets/ListKeys) then access (GetSecretValue), sometimes at unusually high frequency.CloudTrail shows a GetSecretValue, ListSecrets, or ListKeys activity which can look like legitimate automation and make static allowlists and thresholds brittle.FirstTimeUserPerformedAction = True

ActionUncommonlyPerformedInTenant = True

UncommonHighVolumeOfOperations = True

ISPUncommonlyUsedInTenant = True
UEBA Anomalous Secret or KMS Key Access in AwsCloudTrail
Data Exfiltration (the “low-and-slow” S3 drain)A compromised admin account performs a burst of repeated S3 GetObject operations—representing a high volume of similar operations within the same service—often targeting multiple objects or prefixes in quick succession to stage data for exfiltration while staying below traditional volume thresholds.If S3 data events are enabled, CloudTrail shows a high frequency of GetObject API calls across multiple objects or buckets in a short time window. Each request appears legitimate in isolation, and overall data transfer may remain below static thresholds, making the activity difficult to detect using traditional methods.UncommonHighVolumeOfOperations = True

CountryUncommonlyPerformedInTenant = True

ActionUncommonlyPerformedByUser = True (S3 GetObject)

ISPUncommonlyUsedInTenant = True
UEBA Anomalous Data Transfer from Amazon S3

Table 1: Examples of Microsoft Sentinel UEBA enrichments in real-world attack scenarios

Built-in Microsoft Sentinel UEBA anomaly MITRE ATT&CK coverage

The visual below illustrates how Microsoft Sentinel UEBA’s AWS anomaly coverage maps across multiple stages of the kill chain:

Together, these anomaly detections provide defenders with end-to-end visibility – from suspicious authentication through sensitive access and data movement – with binary feature enrichments that add high-value behavioral context during investigations.

Figure 6: Microsoft Sentinel UEBA’s AWS anomaly coverage across the attack chain.

Practical implementation: Getting started

Before you begin

Before you run the queries, ensure the following are in place:

Baseline establishment period completed
Allow sufficient time for UEBA to establish user, activity, and device baselines. In most environments, this typically requires 7–14 days of steady telemetry.

AWS environment onboarded to Microsoft Sentinel UEBA
Ensure that AWS CloudTrail (management events and, where applicable, object-level data) is connected, and UEBA is enabled for the AWS data source.

CloudTrail data is flowing consistently
Confirm that AWS CloudTrail events are being ingested into Microsoft Sentinel and visible in Advanced Hunting.

Starter query

Ready to start hunting? Open Advanced Hunting in Microsoft Sentinel and run the following query to explore the BehaviorAnalytics table and inspect enriched AWS behavioral signals. This query intentionally keeps the logic lightweight. The goal is not to “detect” anomalous activity immediately, but to understand how binary behavioral features surface in your environment.

// Starter query – explore UEBA-enriched AWS behavioral signals
BehaviorAnalytics
| where EventSource == "AWSCloudTrail" or ActivityType endswith "amazonaws.com"
| where isnotempty(ActivityInsights)
| where ActivityInsights.FirstTimeUserConnectedFromCountry == true
   or ActivityInsights.ActionUncommonlyPerformedByUser == true
   or ActivityInsights.UncommonHighVolumeOfOperations == true
| project
    TimeGenerated,
    UserName,
    ActionType,
    EventSource,
    ActivityType,
    ActivityInsights
| order by TimeGenerated desc

What to look for

When reviewing the results, focus on:

  • Binary feature combinations
    Individual binary indicators may be benign. Risk emerges when multiple features align (for example: first-time geography and uncommon action).
  • User-centric deviations
    Pay attention to activity that is unusual for that specific identity, even if the action itself is common across the tenant.
  • Low-volume but persistent activity
    UEBA often highlights slow, methodical behavior (for example, repeated S3 reads or Secrets/KMS access) that stays below static thresholds but persists over time.
  • Candidates for anomaly pivoting
    Events that exhibit multiple binary features warrant further investigation by pivoting to the Anomalies table, where UEBA may have already produced a correlated anomaly record with supporting context and reasoning.

Common false positives (and how to filter them)

  • Legitimate automation or CI/CD pipelines
    Why it happens: Service roles or automation accounts may perform actions infrequently or from new infrastructure locations.
    How to filter: Exclude known accounts or IAM roles used exclusively for automation once validated. Be sure to filter only specific types of activities, rather than applying blanket exclusions.
  • New administrators or role changes
    Why it happens: First‑time admin activity naturally triggers “first‑time” and “uncommon” indicators depending on the baseline.
    How to filter: Correlate with recent user creation or role assignment changes before suppressing.
  • Planned operational changes
    Why it happens: Migrations, incident response, or large‑scale maintenance can temporarily skew baselines.
    How to filter: Use time‑bounded filters or change‑window context rather than permanently suppressing signals.

Next steps

Once you are comfortable interpreting enriched behavior:

  1. Stack binary features intentionally (especially User and Tenant level) in detection logic rather than alerting on single indicators.
  2. Pivot to UEBA anomalies to leverage Microsoft’s pre-trained models across MITRE ATT&CK tactics.
  3. Promote successful hunts into detections with minimal additional KQL, relying on UEBA to maintain baselines over time.

This approach lets detection authors focus on behavioral intent, not baseline math – turning raw AWS telemetry into actionable security signals.

Limitations and constraints

Microsoft Sentinel UEBA can substantially reduce detection complexity, but it’s important to account for coverage boundaries and the conditions under which enrichments and scores are most reliable:

Coverage is selective (not “every API”).

  • UEBA does not ingest or model every API call for every AWS service. CloudTrail can be extremely high-volume, so the Microsoft Sentinel UEBA pipeline focuses on the event sources and API actions that are most useful for behavior modeling and that are most commonly correlated with anomalous activity (for example, authentication, identity and permission changes, sensitive data access, and high-impact operations). You can always check the up-to-date list of in-scope event sources, APIs, and data sources in the UEBA data sources reference document (GCPAuditLogs, Okta log sources are also supported). We’re continually adding APIs and event sources.

Enrichments vary by event type.

  • Not all enrichments are populated for all actions. For example, UncommonHighVolumeOfOperations is unlikely to apply to specific types of rare APIs, and location/ISP-related enrichments typically require the original source event to include a valid IP address.

Cross-cloud identity baselines are calculated independently.

  • UEBA profiles identities per data source, which means behavior across cloud platforms is baselined separately. Correlation across environments can be performed manually using the BehaviorAnalytics table when required.

Use scores for prioritization, not direct alerting without retroactive lookup.

  • Treat the AnomalyScore (0-1) and InvestigationPriority (0-10) values as investigation signals to help rank what to look at first – not as sole triggers for alerts. The highest score may not always be the highest priority investigation for your organization. Validate patterns in your environment and use a combination of enrichments, scores, and repeat behavior over time before finalizing alert logic.

Anomaly support in the UI is currently for UPN-based entities.

  • AWS UEBA anomalies are currently surfaced in the UI only on the Account entity, which assumes an identity mapped to a UPN. This works well for environments that use Microsoft Entra ID (or another IdP) with UPN identifiers, but it might not apply to AWS IAM users or AWS resource entities that do not map cleanly to a UPN. To be clear – anomalies are triggered and available for all identity types (with UPN and without UPN), but are only shown in the UI for entities with a UPN.

Some insights depend on identity and user agent fidelity.

  • DeviceInsights rely on parsing UserAgent strings and may be unreliable if user agents are spoofed or manipulated in the original log. Some UserInsights enrichments also depend on identity inventory and metadata snapshots being available. Microsoft identity data from Microsoft Entra is synchronized automatically to the IdentityInfo table – other identity providers are not currently supported, so they might have more limited enrichment coverage.

From raw logs to behavioral context

CloudTrail provides detailed activity data. Sentinel UEBA enhances this telemetry with behavioral context, such as first‑time geography or uncommon ISP usage, to support investigation and detection workflows. A single failed console login is often low signal on its own. That same event becomes far more meaningful when it’s paired with behavioral context, such as a first-time country, an unusual ISP, or activity on a rarely used admin account.

By shifting our focus from writing complex queries to leveraging Microsoft Sentinel UEBA’s binary feature stacking, we gain three practical advantages:

  1. Efficiency: We replace baseline-heavy, maintenance-prone queries with simpler, more readable logic.
  2. Accuracy: We reduce false positives and better tune severity by requiring multiple binary features to align before alerting.
  3. Visibility: We uncover the low-and-slow attacks that static thresholds often miss.

For the modern SOC, the goal is not only to collect logs—it’s to understand behavior. Use the BehaviorAnalytics table as your starting point to understand what “normal” looks like in your environment, then pivot to related Anomalies when you need model-driven prioritization. In practice, this shifts investigations from “What happened?” to “Is this consistent with expected behavior?

Ready to start hunting? Onboard your AWS environment to Microsoft Sentinel UEBA, open Advanced Hunting, and run the starter query in the Practical implementation section to explore the BehaviorAnalytics and Anomalies tables in your environment.

References

Learn more

Learn about the UEBA Behaviors Layer for AWS CloudTrail and other data sources.

The Microsoft Sentinel UEBA Essentials solution provides additional built-in queries.

The post Simplifying AWS defense with Microsoft Sentinel UEBA appeared first on Microsoft Security Blog.

New York’s 3D Printing Crackdown: Security or Surveillance?

New York’s latest budget proposal could fundamentally change how 3D printers work—requiring built-in software that scans and blocks certain designs. Supporters say it’s about stopping ghost guns. Critics say it opens the door to surveillance and limits innovation. In this episode, we break down what’s actually in the proposal, why it’s raising alarms across the […]

The post New York’s 3D Printing Crackdown: Security or Surveillance? appeared first on Shared Security Podcast.

The post New York’s 3D Printing Crackdown: Security or Surveillance? appeared first on Security Boulevard.

💾

AWS and Anthropic Advancing AI-powered Cybersecurity With Claude Mythos

As cyber threats evolve at an unprecedented pace, Amazon Web Services (AWS) and Anthropic have teamed up to introduce the next generation of artificial intelligence for cybersecurity.

Announced as part of Anthropic’s new Project Glasswing, a specialized AI model named Claude Mythos Preview is entering a gated release to help secure the world’s most critical software.

AWS already leverages machine learning heavily to protect its massive infrastructure. The company’s internal AI-powered log analysis recently reduced security review time from an average of 6 hours to just 7 minutes.

By analyzing over 400 trillion network flows daily, AWS successfully blocked more than 300 million malicious S3 encryption attempts in 2025 alone. Now, they are extending these advanced capabilities to enterprise customers.

Claude Mythos Enters Preview

Claude Mythos Preview represents a fundamental leap in AI reasoning tailored specifically for cybersecurity and complex software coding.

Designed to find and patch vulnerabilities at scale, this model efficiently surfaces critical security findings with minimal manual guidance from engineers.

Because AI models capable of building working exploits carry inherent risks, AWS and Anthropic are taking a deliberately cautious approach to distribution.

Claude Mythos Preview is currently available via a gated research preview through Amazon Bedrock.

Access is restricted to an allow-list of organizations, focusing heavily on internet-critical companies and major open-source maintainers whose software impacts hundreds of millions of users.

Organizations testing the model on Amazon Bedrock benefit from strict, enterprise-grade security controls.

Teams can safely explore the AI’s capabilities without exposing production assets to unnecessary risk using several core protections.

  • Customer-managed data encryption ensures privacy.
  • Strict Virtual Private Cloud (VPC) isolation keeps workloads secure.
  • Automated Reasoning safeguards provide 99% accuracy against factual hallucinations.

Autonomous Penetration Testing

Alongside the announcement of the Claude Mythos, AWS has officially made its new AWS Security Agent generally available.

This tool transforms enterprise penetration testing from a periodic, manual bottleneck into a persistent, on-demand capability.

Operating 24/7, the Security Agent uses specialized AI to independently discover and validate vulnerabilities across AWS, Azure, GCP, and on-premises environments.

Unlike traditional security scanners that generate unverified alerts, the AWS Security Agent actively attempts to exploit vulnerabilities using targeted attack chains.

When it successfully confirms a legitimate risk, it delivers comprehensive, actionable documentation directly to security teams.

  • Standardized CVSS risk scores define the threat level.
  • Application-specific severity ratings contextualize the danger.
  • Detailed reproduction steps show exactly how the exploit works.
  • Practical remediation suggestions offer immediate fixes.

As nation-state actors and ransomware syndicates increasingly adopt AI to scale their operations, defensive strategies must evolve.

By combining Anthropic’s advanced threat detection capabilities with autonomous cloud security agents, AWS aims to equip organizations to build robust defenses before new threats even emerge.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

The post AWS and Anthropic Advancing AI-powered Cybersecurity With Claude Mythos appeared first on Cyber Security News.

30% of Retailers Fail to Show Accurate Discounts, EU Probe Reveals

Black Friday discounts

A new investigation into Black Friday discounts across Europe has revealed a troubling pattern, many online deals may not be as genuine as they appear. According to findings released by the European Commission and consumer protection authorities, nearly one in three traders failed to display discounts correctly during major sale events like Black Friday and Cyber Monday. The coordinated “sweep” examined 314 online traders across 23 EU Member States, along with Iceland and Norway. The goal was simple: check whether Black Friday discounts and pricing practices actually comply with EU consumer protection laws. The results suggest that a significant portion of online retailers are still falling short.

Black Friday Discounts Often Misleading

At the core of the issue is how Black Friday discounts are calculated and presented. Under EU rules, any advertised discount must be based on the lowest price a product had in the previous 30 days. However, 30% of the traders checked failed to follow this requirement. This means that many “discounts” shoppers see may not reflect real savings, but rather inflated comparisons designed to create the illusion of a better deal. It’s a reminder that misleading discounts remain a widespread issue, even in regulated markets.

Online Sales Tactics Raise More Concerns

Beyond incorrect Black Friday discounts, the sweep uncovered several other questionable online pricing practices.
  • 36% of traders added optional items to shopping carts, often without clear consent from users
  • 34% used price comparisons, but 60% of those failed to explain what those comparisons were based on
  • 18% used pressure-selling tactics like fake scarcity or countdown timers, with more than half found to be misleading
  • 10% used “drip pricing,” adding extra costs such as shipping fees late in the checkout process
These tactics are not just aggressive, they are illegal under EU consumer protection laws when used deceptively. The findings show that the issue goes beyond Black Friday discounts alone. It reflects a broader pattern of how online platforms influence consumer decisions.

EU Consumer Protection Rules Put to the Test

The investigation highlights the growing importance of EU consumer protection frameworks in the digital shopping era. While regulations like the Price Indication Directive and Unfair Commercial Practices Directive are in place, enforcement remains key. Consumer authorities across Europe can now take action against businesses found violating these rules. The scale of the problem suggests that compliance is still inconsistent. Despite clear guidelines, many traders continue to rely on tactics that blur the line between marketing and manipulation.

Trust at the Center of the Issue

The conversation around Black Friday discounts is ultimately about trust. When consumers see a discount, they expect it to be real—not a marketing trick. As Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, stated, “Black Friday and Cyber Monday offer great opportunities for both businesses and consumers. However, a great bargain is no excuse to cheat the rules. Consumers expect a fair treatment, whether they are shopping online or offline. Our sweep should act as a reminder: Businesses that treat their customers fairly always benefit.” Echoing this, Michael McGrath, Commissioner for Democracy, Justice, the Rule of Law and Consumer Protection, said, “Trust is essential for both consumers and businesses. Misleading discounts and false ‘promotions’ undermine that trust. EU consumer protection rules strike a careful balance, ensuring a fair market that serves the interests of both businesses and consumers. This sweep gives us a comprehensive view of the market, helping us identify where further action is needed to keep it fair, transparent, and competitive. “ The findings serve as a reality check for both regulators and consumers. While Black Friday discounts continue to attract millions of shoppers, not all deals are as transparent as they seem. For regulators, the message is clear, stronger enforcement may be needed. For consumers, it’s a reminder to look beyond flashy discounts and question how prices are presented.

European Commission Confirms Cyberattack After AWS Account Breach

The European Commission has confirmed a cybersecurity incident affecting its cloud-based infrastructure after attackers gained access to an Amazon Web Services (AWS) account hosting parts of the Europa.eu platform. According to an official statement, the compromised infrastructure supported the Commission’s public-facing web services. Despite the intrusion, authorities reported no disruption to the availability of Europa.eu […]

The post European Commission Confirms Cyberattack After AWS Account Breach appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

How AI Assistants are Moving the Security Goalposts

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”

O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their device without consent.

According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.

“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.

AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.

As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

AWS Data Centers Hit: Drone Strikes Cripple Cloud

AWS says drone strikes damaged data center facilities in the UAE and Bahrain, disrupting and degrading dozens of cloud services across the Middle East.

The post AWS Data Centers Hit: Drone Strikes Cripple Cloud appeared first on TechRepublic.

Why the US–EU Privacy Divide Still Matters in the Age of AI 

Data Privacy Automation User Privacy Threatens Security Defenses

Explore the evolving landscape of privacy and security in the age of AI. This article examines the cultural and regulatory differences between the U.S. and Europe, the limitations of current policies, and the imperative for architectural solutions to ensure that autonomous systems can uphold privacy standards.

The post Why the US–EU Privacy Divide Still Matters in the Age of AI  appeared first on Security Boulevard.

❌