GPT-5.5 Bio Bug Bounty Program Aims to Improve AI Safety and Performance

OpenAI has officially launched the GPT-5.5 Bio Bug Bounty program to strengthen safeguards against emerging biological risks. As artificial intelligence models become more advanced, the potential for malicious actors to generate dangerous biological information increases. Advanced persistent threats (APTs) and lone attackers could potentially misuse large language models to accelerate harmful biological research. To address […]

The post GPT-5.5 Bio Bug Bounty Program Aims to Improve AI Safety and Performance appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

OpenAI Extends GPT-5.4-Cyber Access to Trusted Organizations Worldwide

OpenAI has announced the expansion of its “Trusted Access for Cyber” program, granting worldwide security organizations access to its advanced GPT-5.4-Cyber model. The initiative operates on a foundational premise: cutting-edge cyber capabilities must reach network defenders on a broad scale while maintaining strict trust, validation, and safety safeguards. By sharing these tools with a diverse […]

OpenAI Launches GPT-5.4-Cyber to Boost Defensive Cybersecurity

OpenAI unveils GPT-5.4-Cyber, a cybersecurity-focused model built to help defenders analyze malware and fix software bugs. The company is also expanding its Trusted Access for Cyber (TAC) program to thousands of verified experts.

Hacker Uses Claude and ChatGPT to Breach Multiple Government Agencies

A single threat actor compromised nine Mexican government agencies and stole hundreds of millions of citizen records in a highly sophisticated cyberattack.

The campaign, which ran from late December 2025 through mid-February 2026, highlights a dangerous shift in the modern threat landscape.

Researchers at Gambit Security recently released a full technical report detailing how the attacker relied on two major commercial artificial intelligence platforms. The publication was initially delayed to allow the affected agencies time to complete their incident response efforts.

AI Models Power the Breach

The attacker used Anthropic’s Claude Code and OpenAI’s GPT-4.1 not just for planning, but as core operational tools that drastically accelerated the attack.

According to recovered forensic evidence, Claude Code generated and executed approximately 75% of all remote commands during the intrusion.

Across 34 active sessions on live victim infrastructure, the hacker logged 1,088 individual prompts. These prompts translated into 5,317 AI-executed commands, demonstrating how deeply the AI was integrated into the exploitation phase.

Claude Breach (Source: cdn)

Simultaneously, the attacker leveraged OpenAI’s GPT-4.1 for rapid reconnaissance and data processing. The hacker developed a custom 17,550-line Python script designed to pipe raw data harvested from compromised servers directly through the OpenAI API.

This automated system analyzed information across 305 internal servers, rapidly producing 2,597 structured intelligence reports. By automating the data analysis phase, a single operator successfully processed an intelligence volume that would traditionally require an entire team.
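The pipe-through pattern the report describes (raw server data batched through an LLM API into structured reports) can be sketched in outline. Everything here is hypothetical: the function name and report fields are illustrative, and the `summarize` callable stands in for an API client so the sketch stays self-contained.

```python
from typing import Callable

def analyze_servers(
    harvested: dict[str, str],        # server name -> raw harvested text
    summarize: Callable[[str], str],  # stand-in for an LLM API call
) -> list[dict[str, str]]:
    """Pipe raw data from each server through a summarizer,
    returning one structured report per server."""
    reports = []
    for server, raw in harvested.items():
        reports.append({
            "server": server,
            "summary": summarize(raw),
        })
    return reports

# Usage with a stub summarizer; a real pipeline would call an API here.
stub = lambda text: text[:40]
reports = analyze_servers({"srv-01": "users: alice, bob\nports: 22, 443"}, stub)
print(len(reports))  # one report per server
```

The point of the sketch is the scale mechanic: once the loop exists, one operator processes as many servers as the API will accept.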

The integration of artificial intelligence allowed the attacker to turn unfamiliar networks into mapped targets in hours rather than days. Recovered materials showed the attacker possessed over 400 custom attack scripts.

Furthermore, the hacker used AI to quickly develop tailored exploits for 20 specific Common Vulnerabilities and Exposures (CVEs). This high-speed capability compressed the attack timeline, allowing the threat actor to operate well below standard detection and response windows.

Despite the advanced methods used in the campaign, the actual vulnerabilities exploited were highly conventional. The targeted government agencies had basic security gaps that enabled the attacker to gain initial access and move laterally.

The underlying issues were addressable through standard security controls, highlighting a severe accumulation of technical debt within mission-critical infrastructure.

While artificial intelligence has significantly lowered the cost and complexity of executing widespread cyberattacks, the defense strategy remains rooted in foundational security practices.

Organizations must urgently address unpatched software and implement strict credential rotation policies. Enforcing network segmentation is also critical to restrict lateral movement once a perimeter is breached.

Finally, deploying robust endpoint detection and response tools is necessary to identify these rapidly compressed attack timelines before data exfiltration occurs.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

The post Hacker Uses Claude and ChatGPT to Breach Multiple Government Agencies appeared first on Cyber Security News.

Claude and ChatGPT Exploited in Sweeping Cyber Campaign Against Government Agencies

In a groundbreaking technical report released by Gambit Security researcher Eyal Sela, new details have emerged about a massive cyberattack targeting government infrastructure. A single threat actor successfully leveraged artificial intelligence platforms to breach nine Mexican government agencies. The campaign, which operated from late December 2025 through mid-February 2026, resulted in the exfiltration of hundreds […]

Malicious Chrome Extension “ChatGPT Ad Blocker” Steals ChatGPT Conversations

As OpenAI introduces advertisements to its free tier, cybercriminals are seizing the opportunity to trick users with fake utility tools. Security researchers have discovered a malicious Google Chrome extension named “ChatGPT Ad Blocker.”

While it claims to hide unwanted ads, its true purpose is to steal private user conversations and send them to a hidden Discord channel.

Once a user installs the extension from the Chrome Web Store, it immediately sets up a silent monitoring system. It creates an alarm that fetches a remote configuration file from a GitHub repository every 60 minutes.

Because it continuously bypasses the browser’s cache, the attacker can remotely change the extension’s behavior at any time without the user knowing.
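The cache bypass described is, generically, a cache-busting poll: each fetch carries no-cache headers plus a throwaway query parameter, so no browser or intermediary can ever serve a stale copy of the remote configuration. A minimal, hypothetical illustration of constructing such a request with Python's standard library (the URL is a placeholder):

```python
import time
import urllib.parse
import urllib.request

def cache_busting_request(base_url: str) -> urllib.request.Request:
    """Build a request that defeats HTTP caching: a unique query
    parameter per poll plus explicit no-cache headers."""
    # A fresh parameter value each time makes every URL "new" to a cache.
    sep = "&" if urllib.parse.urlparse(base_url).query else "?"
    url = f"{base_url}{sep}_={int(time.time() * 1000)}"
    return urllib.request.Request(
        url,
        headers={"Cache-Control": "no-cache", "Pragma": "no-cache"},
    )

req = cache_busting_request("https://example.invalid/config.json")
print(req.full_url)
```

Seen from the defender's side, this pattern is a useful detection signal: periodic fetches of the same resource with ever-changing query strings.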

Interestingly, DomainTools researchers found that the extension’s actual ad-blocking features are completely disabled.

Malicious Chrome Extension (Source: DomainTools)

When a user visits the ChatGPT site, the extension injects a malicious script that clones the page, strips styling, and secretly captures all text.

After packaging the chat data, it creates a file named page_dump.html and posts it to a private Discord webhook managed by a bot named “Captain Hook.”

The attacker instantly receives the victim’s prompts, conversation history, and account metadata.

Email Domain (Source: DomainTools)

The malicious extension is tied to the developer alias “krittinkalra,” a GitHub account created around 2014. The account history shows a highly suspicious timeline, suggesting it may have been compromised or sold.

After focusing on Android kernel development until 2020, the profile went dormant for over five years before resurfacing recently with a sudden pivot to creating JavaScript-based malware.

This developer persona is also publicly linked to two active AI services: AI4ChatCo and Writecream.

A Discord bot announces “New Ad Report Received” (Source: DomainTools)

These platforms claim to have millions of users and offer chatbot integration alongside automated marketing content.

The discovery of this data-harvesting Chrome extension, reported by DomainTools, raises concerns that similar data theft could occur in related applications.

To protect their privacy and secure their AI conversations, users should follow these essential security practices:

  • Treat extensions that promise to block ads on high-value sites with extreme suspicion and scrutinize requested permissions closely.
  • View affiliated platforms like AI4ChatCo and Writecream as potentially compromised until thorough security audits prove otherwise.
  • Avoid out-of-band AI intermediaries, resellers, or browser add-ons, as they are uniquely positioned to read or modify private conversations without your knowledge.
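As a practical first step along these lines, an extension's `manifest.json` reveals most of its reach before it ever runs. Below is a small, hypothetical sketch that flags the broad permissions a data-stealing extension of this kind typically needs; the risky-permission list is illustrative, not exhaustive.

```python
import json

# Illustrative, not exhaustive: permissions that let an extension read
# pages, run timers, and store captured data.
RISKY = {"<all_urls>", "alarms", "scripting", "tabs", "storage"}

def flag_risky(manifest_json: str) -> set[str]:
    """Return the risky permissions declared in a Chrome manifest."""
    manifest = json.loads(manifest_json)
    declared = set(manifest.get("permissions", []))
    declared |= set(manifest.get("host_permissions", []))
    return declared & RISKY

sample = (
    '{"name": "Some Ad Blocker",'
    ' "permissions": ["alarms", "scripting"],'
    ' "host_permissions": ["<all_urls>"]}'
)
print(sorted(flag_risky(sample)))  # ['<all_urls>', 'alarms', 'scripting']
```

A legitimate ad blocker may also request broad host permissions, so a hit here is a prompt for scrutiny, not proof of malice.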
