
OpenAI Unveils Cyber Defense Roadmap Focused on AI-Powered Security

OpenAI has released a comprehensive cyber defense roadmap titled “Cybersecurity in the Intelligence Age” to responsibly equip defenders with AI-powered security tools faster than malicious actors can adapt. Spearheaded by Sasha Baker in April 2026, the action plan outlines five core pillars to democratize advanced defensive capabilities and build lasting national resilience. Five Pillars for […]

The post OpenAI Unveils Cyber Defense Roadmap Focused on AI-Powered Security appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.


GPT‑5.5 Bio Bug Bounty to Strengthen Advanced AI Capabilities

April 25, 2026, 08:28

OpenAI has announced a new Bio Bug Bounty program for GPT-5.5 as part of its efforts to improve safety controls for advanced AI systems and to address misuse in biology.

The initiative invites qualified researchers to test whether GPT-5.5 can be universally jailbroken to bypass biosecurity protections.

The program is focused on one specific challenge: participants must find a single “universal jailbreak” prompt that can make GPT-5.5 answer all five questions in OpenAI’s bio safety challenge from a clean chat session, without triggering moderation.

Strengthen Safeguards for Advanced AI

In simple terms, researchers are being asked to determine whether a carefully designed prompt can consistently override the model’s biological safety guardrails.

According to OpenAI, the model in scope is GPT-5.5 running only in Codex Desktop.

The company is offering a top reward to the first participant who successfully discovers a true universal jailbreak that clears all five challenge questions.

OpenAI also said it may issue smaller rewards for partial successes, depending on the results. Applications for the program opened on April 23, 2026, and will close on June 22, 2026.

Testing begins on April 28 and will run through July 27, 2026. Access is not open to the public.

Instead, OpenAI will invite a vetted group of trusted bio red-teamers and also review applications from new researchers with relevant experience in AI red teaming, security, or biosecurity.

To take part, applicants must submit a short form including their name, affiliation, and experience.

Accepted participants and collaborators must already have ChatGPT accounts and must sign a non-disclosure agreement.

OpenAI said all prompts, model outputs, findings, and related communications will remain under NDA.

From a cybersecurity perspective, the program reflects a growing trend in adversarial testing of frontier AI systems.

Bug bounty programs have long been used to find vulnerabilities in software, cloud platforms, and enterprise products.

OpenAI is applying a similar model to AI safety by asking experts to actively probe its defenses and identify prompt-based weaknesses before threat actors do.

The focus on biology is especially important because powerful AI models could be misused to support harmful scientific tasks if safeguards fail.

By testing GPT-5.5 against universal jailbreaks, OpenAI appears to be measuring the resilience of its protections under realistic attack conditions.

The company said researchers interested in broader security work can also look at its existing Safety Bug Bounty and Security Bug Bounty programs.

The new GPT-5.5 Bio Bug Bounty adds another layer to that effort, showing how AI security increasingly overlaps with biosecurity, red teaming, and advanced prompt-injection research.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

The post GPT‑5.5 Bio Bug Bounty to Strengthen Advanced AI Capabilities appeared first on Cyber Security News.


OpenAI Expands Cyber Defense Program With GPT-5.4-Cyber Access for Trusted Organizations

April 19, 2026, 03:35

OpenAI has officially launched the expanded phase of its Trusted Access for Cyber program, granting select organizations access to its specialized GPT-5.4-Cyber model to strengthen digital defenses across critical infrastructure, financial services, and open-source security communities.

The program operates on a tiered trust model: advanced AI cyber capabilities are made broadly available to defenders, but access scales based on validation, accountability, and demonstrated safeguards.

OpenAI positions this as a direct response to the growing asymmetry between attackers leveraging AI tools and defenders who often lack equivalent resources.

Who Has Joined the Program

OpenAI confirmed that several major enterprises and cybersecurity firms have already signed on, including Bank of America, BlackRock, BNY, Citi, Cisco, Cloudflare, CrowdStrike, Goldman Sachs, iVerify, JPMorgan Chase, Morgan Stanley, NVIDIA, Oracle, Palo Alto Networks, SpecterOps, and Zscaler.

These organizations will use GPT-5.4-Cyber to enhance real-world defensive operations, generate threat intelligence, and help OpenAI refine safety systems through practical deployment feedback.

OpenAI also granted access to the U.S. Center for AI Standards and Innovation (CAISI) and the UK AI Security Institute for independent testing, reinforcing its commitment to third-party oversight.

Recognizing that most software teams lack 24/7 security operations coverage, OpenAI committed $10 million in API credits through its Cybersecurity Grant Program to extend access to its frontier models to under-resourced defenders.

Initial grant recipients include:

  • Socket and Semgrep: focused on software supply chain security.
  • Calif and Trail of Bits: pairing AI with expert vulnerability researchers.

OpenAI emphasized the real-world problem this addresses: not every team can respond to a critical vulnerability disclosed on a Friday night.

The grant program aims to change that by giving smaller open-source maintainers and researchers the same AI capabilities available to large enterprises.

Additional teams with proven track records in open-source and critical infrastructure security can apply directly through OpenAI’s grant portal.

The Defense-First Philosophy

OpenAI’s framing is clear: cyber defense is a collective challenge. The program is designed to generate shared learnings across participants, improve model safety through real-world use, and push the frontier of defensive research.

BNY’s Chief Information Officer, Leigh-Ann Russell, noted that the firm’s participation reflects its commitment to protecting financial system resilience as AI capabilities accelerate, building on an existing collaboration with OpenAI.

The company confirmed that Trusted Access for Cyber will continue to expand, with safeguards that increase in step with model capability, ensuring that greater power comes with proportionally stronger accountability measures.


The post OpenAI Expands Cyber Defense Program With GPT-5.4-Cyber Access for Trusted Organizations appeared first on Cyber Security News.
