Visualização de leitura

Capsule Security Emerges From Stealth to Secure AI Agents at Runtime

Capsule, capsule security,

Capsule Security emerges from stealth with a $7M seed round to launch a runtime security platform for AI agents. Featuring the open-source ClawGuard, the platform enforces governance and mitigates prompt injection risks like ShareLeak and PipeLeak without requiring SDKs or proxies.

The post Capsule Security Emerges From Stealth to Secure AI Agents at Runtime appeared first on Security Boulevard.

Claude Code Security: The AI Shockwave Hitting Cybersecurity

Anthropic’s Claude Code Security research preview promises AI-powered code analysis and vulnerability detection at scale. The announcement triggered strong reactions across the cybersecurity community and sent several vendor stocks lower. In this episode, we break down what the tool actually does, where it fits in modern AppSec, and whether AI automation threatens traditional security products […]

The post Claude Code Security: The AI Shockwave Hitting Cybersecurity appeared first on Shared Security Podcast.

The post Claude Code Security: The AI Shockwave Hitting Cybersecurity appeared first on Security Boulevard.

💾

Anthropic Didn’t Kill Cybersecurity. It Just Reminded Us There Are Two Doors.

Anthropic’s Claude Code Security sparked a sharp SaaS market selloff, but investors missed a critical reality: AI code scanning addresses only half of modern cyberattacks. Identity, credentials, and human factors remain the dominant breach vectors.

The post Anthropic Didn’t Kill Cybersecurity. It Just Reminded Us There Are Two Doors. appeared first on Security Boulevard.

Attacker Breached 600 FortiGate Appliances in AI-Assisted Campaign: Amazon

AI technology, security, AI security, visibility, insights, security platform, Arctic Wolf, zero-trust encrypted AI Trend Micro cybersecurity poverty line, data-centric, SUSE cloud Wiz Torq AirTag Skyhawk SASE security cloud security visibility PwC Survey Finds C-Level Execs Now View Cybersecurity as Biggest Risk

An single threat actor used AI tools to create and run a campaign that compromised more then 600 Fortinet FortiGate appliances around the world over five weeks, according to Amazon threat researchers, the latest example of how cybercriminals are using the technology in their attacks.

The post Attacker Breached 600 FortiGate Appliances in AI-Assisted Campaign: Amazon appeared first on Security Boulevard.

Will AI threaten the role of human creativity in cyber threat detection?

Cybersecurity requires creativity and thinking outside the box. It’s why more organizations are looking at people with soft skills and coming from outside the tech industry to address the cyber skills gap. As the threat landscape becomes more complex and nation-state actors launch innovative cyberattacks against critical infrastructure, there is a need for cybersecurity professionals who can anticipate these attacks and develop creative preventive solutions.

Of course, a lot of cybersecurity work is mundane and repetitive — monitoring logs, sniffing out false positive alerts, etc. Artificial intelligence (AI) has been a boon in filling the talent gaps when it comes to these types of tasks. But AI has also proven useful for many of the same things that creative thought brings to the threat table, such as addressing more sophisticated threat actors, the rapid increase of data and the hybrid infrastructure.

However, many companies are seeing the value of AI, especially generative AI (gen AI), in handling a greater share of creative work — not just in cybersecurity but also in areas like marketing and public relations, writing and research. But are these organizations using AI in a way that could threaten the importance of human creativity in threat detection?

Why creativity is important to cybersecurity

The very simple reason why cybersecurity requires innovative people is that threat actors are already coming up with novel approaches to how to get into your system. Are they using gen AI to launch their attacks? You bet they are; phishing emails have never been more grammatically constructed or realistic. But before AI was available, threat actors were designing social engineering attacks that attracted clicks. Now, they have advanced beyond “how can we lure in victims” to “how can we get more out of a single attack after we lure in the victims.”

Creativity isn’t just coming up with new ideas. It is also the ability to see things through a big-picture lens and discern historical data or where to find information you might not know you need to look for. For example, creative thought is required for the following security tasks:

  • Threat hunting or predicting a threat actor’s move or finding their tracks in a system
  • Finding buried evidence in a forensic search
  • Understanding historical data in anomaly detection
  • Ability to tell a real email or document versus a well-designed phishing attack
  • Verifying new zero day attacks and other malware variants found in otherwise unknown vulnerabilities

AI can augment human creativity, but gen AI gets a lot of things wrong. Users have found themselves in situations where AI claimed plagiarism on original work or AI hallucinations offered false information that nullified the research of human analysts. AI algorithms are also susceptible to bias that could lead to false positives.

Explore AI cybersecurity solutions

AI’s role in creative cybersecurity and beyond

While many creative people, cybersecurity professionals and beyond, see gen AI as a mixed blessing, many embrace the technology because it is a huge timesaver.

“Gen AI can help prototype much faster because the large language models can take over the refactoring and documentation of code,” wrote Aili McConnon in an IBM blog post. Also, the article pointed out, AI tools can help users create prototypes or visualize their ideas in minutes versus hours or days.

Creativity married to AI can help identify future leaders. According to research from IBM, two-thirds of company leaders found that AI is driving their growth, with four specific use cases — IT operations, user experience, virtual assistants and cybersecurity — most commonly favored by leaders.

“A Learner will typically copy predefined scenarios using out-of-the-box technologies,” Dr. Stephan Bloehdorn, Executive Partner and Practice Leader, AI, Analytics and Automation-IBM Consulting DACH, was quoted in the study. “But a Leader develops custom innovations.”

Over-reliance on AI?

As gen AI becomes more ubiquitous in the workplace and as more creative folks and leaders rely on it as a way to put their ideas in motion, are we also relying on the technology to the point that it could lead to a degradation of other important necessary skills, like the ability to analyze data and create viable solutions?

It is unclear if organizations are over-relying on gen AI, according to Stephen Kowski, Field CTO at SlashNext Email Security+, but it is becoming more of a designed feature due to unintended consequences related to resource allocation in organizations.

“While AI excels at processing massive volumes of threat data, real-world attacks constantly evolve beyond historical patterns, requiring human expertise to identify and respond to zero-day threats,” said Kowski in an email interview. “The key is achieving the right balance where AI handles high-volume routine detection while skilled analysts investigate novel attack patterns and determine strategic responses.”

Yet, Kris Bondi, CEO and Co-Founder of Mimoto, isn’t worried about AI leading to a degradation of skills — at least not for the foreseeable future.

“One of the biggest challenges for cybersecurity professionals is having too many alerts and too many false positives. AI is only able to automate a small percentage of responses. It’s more likely that AI will eventually automate additional requirements for someone deemed to be suspicious or the elevation of alert so that a human can analyze the situation,” Bondi said via email.

However, organizations should watch out for AI’s role in defining threat-hunting parameters. “If AI is the sole driver defining threat hunting parameters without spot-checks or audits, the threat intelligence approach could eventually be focused in the wrong area. The answer is more reliance on critical thinking and analytical skills,” said Bondi.

Embracing creativity in an AI-driven world

AI overall, and gen AI in particular, are going to be part of the business world going forward. It is going to play a vital role in how organizations and analysts approach cybersecurity defenses and mitigations. But the soft skills that creative thought depends on will still play an important and necessary role in cybersecurity.

“Rather than diminishing soft skills, AI integration has the opportunity to elevate the importance of communication, collaboration and strategic thinking, as security teams must effectively convey complex findings to stakeholders,” said Kowski. “The human elements of cybersecurity — leadership, adaptability and cross-functional partnership — become even more critical as AI handles the technical heavy lifting.”

The post Will AI threaten the role of human creativity in cyber threat detection? appeared first on Security Intelligence.

How to calculate your AI-powered cybersecurity’s ROI

Imagine this scenario: A sophisticated, malicious phishing campaign targets a large financial institution. The attackers use emails generated by artificial intelligence (AI) that closely mimic the company’s internal communications. The emails contain malicious links designed to steal employee credentials, which the attackers could use to gain access to company assets and data for unknown purposes.

The organization’s AI-powered cybersecurity solution, which continuously monitors network traffic and user behavior, detects several anomalies associated with the attack, blocks access to the suspicious domains across the network, quarantines the phishing emails, resets passwords for all potentially compromised accounts and sends real-time alerts to the security operations center, providing detailed information about the attack vector and affected systems.

Using predictive analytics, the AI suggests potential next steps the attackers might take, allowing the security team to strengthen defenses in those areas proactively.

The good guys won. But was the AI solution worth the price? What’s the value in dollars of that victory? It’s easy to measure the investment in AI. But how do you measure the return on that investment? Specifically, how do you measure the value of data never stolen, unknown reputational damage that never happened, customer trust never lost or reduced operational risks never incurred?

The rise of AI cybersecurity

To be sure, cybersecurity AI spending is set to increase dramatically. Organizations spent $24 billion in 2023, with an expected rise to $133 billion by 2030. Cybersecurity professionals and the companies they work for will increasingly rely on advanced AI solutions as threats grow and the cost of data breaches also rises.

The challenging nature of cybersecurity ROI is compounded by many other factors — dozens, hundreds or thousands of attempted cyberattacks per year per organization; the lack of universally accepted metrics or calculations for cybersecurity ROI; the long payback period for investments in cybersecurity AI; the fast-changing nature of the threat landscape; the fact that cybersecurity investments also touch areas like operational efficiency, regulatory compliance and others.

Historically, organizations calculated ROI in cybersecurity investments by estimating money saved in the absence of security incidents. But that fails to account for proactive security measures, efficiency gains in operations and the overall security posture. With the integration of AI, cybersecurity has fundamentally changed, offering enhanced threat detection and prevention capabilities beyond simply measuring the absence of incidents.

A proactive approach and improved operational efficiency through task automation provide tangible benefits not captured in traditional ROI calculations.

Explore AI cybersecurity solutions

New metrics for ROI calculation

The use of AI tools has transformed the typical cybersecurity ROI calculation, introducing several quantifiable metrics:

These metrics offer a more comprehensive view of the value derived from AI-powered cybersecurity investments, enabling organizations to make more informed decisions about resource allocation and strategic planning.

Cost savings can also be measured in the aggregate. According to the IBM 2024 Cost of a Data Breach report, organizations extensively using security AI and automation in prevention workflows saved an average of $2.2 million in breach costs compared to those without such technologies.

Still, measuring AI cybersecurity ROI comes with challenges, including difficulty attributing prevented incidents directly to AI, the constantly evolving threat landscape and balancing initial investment costs with long-term benefits.

Taking a holistic approach to cybersecurity AI ROI

Organizations can leverage established frameworks, such as the NIST Cybersecurity Framework, to effectively measure and communicate AI’s ROI in cybersecurity. By aligning AI initiatives with these functions, organizations can more accurately measure their impact on overall cybersecurity performance.

To effectively measure the impact of AI on cybersecurity ROI, organizations should focus on specific Key Performance Indicators (KPIs):

  • Mean time to detect
  • Mean time to respond
  • Security operational efficiency
  • Threat intelligence accuracy
  • Compliance adherence rate

The best approach is to adopt a more comprehensive approach that uses risk assessment frameworks, measures risk reduction, considers and estimates intangible benefits and regularly reviews and updates calculations.

Organizations must adopt a holistic approach that considers the proactive capabilities, efficiency gains and quantifiable metrics provided by AI-powered solutions. This comprehensive evaluation allows a more accurate assessment of cybersecurity investments’ true value and impact in today’s complex threat landscape.

Of course, cyberattacks don’t happen randomly or in a vacuum. Take the follow-on consequences of the ongoing cybersecurity skills gap, which can be self-enlarging, according to Sam Hector, senior strategy leader of IBM Security.

“When you don’t have enough skilled experts in monitoring and defending your infrastructure, a few things happen,” Hector said. “The time to triage alerts grows as the queue of incidents to review becomes longer, meaning you’re more likely to be breached, and attackers dwell times increase (when they are in your environment undetected) as you’re less likely to find the needle in the haystack. The time to detect increasing directly leads to higher breach costs on average.”

And the problem keeps growing: “Teams that are stretched too thin don’t have the time to devote to improving cybersecurity processes, integration and efficiency,” Hector said. “They’re unable to drill exercises and embark on further training as they’re too focused on keeping the lights on. This means over time, they’re less effective comparable to the threat landscape, and misconfigurations and gaps develop that attackers can exploit.”

Hector said persistent attackers are unlikely to go unnoticed by these weakening defenses: “If there’s a specific industry, region or even organization that is known to be struggling to acquire cybersecurity skills, this puts them at increased risk of being targeted by attackers who will be anticipating weaker defenses.”

An ongoing shift in cybersecurity investment

The integration of AI in cybersecurity has fundamentally changed how organizations approach and measure their security investments. By providing more tangible and comprehensive ROI metrics, AI enables organizations to make data-driven decisions about their cybersecurity strategies. As cyber threats continue to evolve, the role of AI in cybersecurity will only grow more critical, making it essential for organizations to invest in — and effectively measure — the impact of these technologies.

The post How to calculate your AI-powered cybersecurity’s ROI appeared first on Security Intelligence.

Cybersecurity trends: IBM’s predictions for 2025

Cybersecurity concerns in 2024 can be summed up in two letters: AI (or five letters if you narrow it down to gen AI). Organizations are still in the early stages of understanding the risks and rewards of this technology. For all the good it can do to improve data protection, keep up with compliance regulations and enable faster threat detection, threat actors are also using AI to accelerate their social engineering attacks and sabotage AI models with malware.

AI might have gotten the lion’s share of attention in 2024, but it wasn’t the only cyber threat organizations had to deal with. Credential theft continues to be problematic, with a 71% year-over-year increase in attacks using compromised credentials. The skills shortage continues, costing companies an additional $1.76 million in a data breach aftermath. And as more companies rely on the cloud, it shouldn’t be surprising that there has been a spike in cloud intrusions.

But there have been positive steps in cybersecurity over the past year. CISA’s Secure by Design program signed on more than 250 software manufacturers to improve their cybersecurity hygiene. CISA also introduced its Cyber Incident Reporting Portal to improve the way organizations share cyber information.

Last year’s cybersecurity predictions focused heavily on AI and its impact on how security teams will operate in the future. This year’s predictions also emphasize AI, showing that cybersecurity may have reached a point where security and AI are interdependent on each other, for both good and bad.

Here are this year’s predictions.

Shadow AI is everywhere (Akiba Saeedi, Vice President, IBM Security Product Management)

Shadow AI will prove to be more common — and risky — than we thought. Businesses have more and more generative AI models deployed across their systems each day, sometimes without their knowledge. In 2025, enterprises will truly see the scope of “shadow AI” – unsanctioned AI models used by staff that aren’t properly governed. Shadow AI presents a major risk to data security, and businesses that successfully confront this issue in 2025 will use a mix of clear governance policies, comprehensive workforce training and diligent detection and response.

Identity’s transformation (Wes Gyure, Executive Director, IBM Security Product Management)

How enterprises think about identity will continue to transform in the wake of hybrid cloud and app modernization initiatives. Recognizing that identity has become the new security perimeter, enterprises will continue their shift to an Identity-First strategy, managing and securing access to applications and critical data, including gen AI models. In 2025, a fundamental component of this strategy is to build an effective identity fabric, a product-agnostic integrated set of identity tools and services. When done right, this will be a welcome relief to security professionals, taming the chaos and risk caused by a proliferation of multicloud environments and scattered identity solutions.

Explore cybersecurity services

Everyone must work together to manage threats (Sam Hector, Global Strategy Leader, IBM Security)

Cybersecurity teams will no longer be able to effectively manage threats in isolation. Threats from generative AI and hybrid cloud adoption are rapidly evolving. Meanwhile, the risk quantum computing poses to modern standards of public-key encryption will become unavoidable. Given the maturation of new quantum-safe cryptography standards, there will be a drive to discover encrypted assets and accelerate the modernization of cryptography management. Next year, successful organizations will be those where executives and diverse teams jointly develop and enforce cybersecurity strategies, embedding security into the organizational culture.

Prepare for post-quantum cryptography standards (Ray Harishankar, IBM Fellow, IBM Quantum Safe)

As organizations begin the transition to post-quantum cryptography over the next year, agility will be crucial to ensure systems are prepared for continued transformation, particularly as the U.S. National Institute of Standards and Technology (NIST) continues to expand its toolbox of post-quantum cryptography standards. NIST’s initial post-quantum cryptography standards were a signal to the world that the time is now to start the journey to becoming quantum-safe. But equally important is the need for crypto agility, ensuring that systems can rapidly adapt to new cryptographic mechanisms and algorithms in response to changing threats, technological advances and vulnerabilities. Ideally, automation will streamline and accelerate the process.

Data will become a vital part of AI security (Suja Viswesan, vice president of Security Software Development, IBM)

Data and AI security will become an essential ingredient of trustworthy AI. “Trustworthy AI” is often interpreted as AI that is transparent, fair and privacy-protecting. These are critical characteristics. But if AI and the data powering it aren’t also secure, then all other characteristics are compromised. In 2025, as businesses, governments and individuals interact with AI more often and with higher stakes, data and AI security will be viewed as an even more important part of the trustworthy AI recipe.

Organizations will continue learning the juxtaposition of AI’s benefits and threats (Mark Hughes, Global Managing Partner, Cybersecurity Services, IBM)

As AI matures from proof-of-concept to wide-scale deployment, enterprises reap the benefits of productivity and efficiency gains, including automating security and compliance tasks to protect their data and assets. But organizations need to be aware of AI being used as a new tool or conduit for threat actors to breach long-standing security processes and protocols. Businesses need to adopt security frameworks, best practice recommendations and guardrails for AI and adapt quickly — to address both the benefits and risks associated with rapid AI advancements.

Greater understanding of AI-assisted versus AI-powered threats (Troy Bettencourt, Global Partner and Head of IBM X-Force)

Protect against AI-assisted threats; plan for AI-powered threats. There is a distinction between AI-powered and AI-assisted threats, including how organizations should think about their proactive security posture. AI-powered attacks, like deepfake video scams, have been limited to date; today’s threats remain primarily AI-assisted — meaning AI can help threat actors create variants of existing malware or a better phishing email lure. To address current AI-assisted threats, organizations should prioritize implementing end-to-end security for their own AI solutions, including protecting user interfaces, APIs, language models and machine learning operations, while remaining mindful of strategies to defend against future AI-powered attacks.

There’s a very clear message from these predictions that understanding how AI can help and hurt an organization is vital to ensuring your company and its assets are protected in 2025 and beyond.

The post Cybersecurity trends: IBM’s predictions for 2025 appeared first on Security Intelligence.

Preparing for the future of data privacy

The focus on data privacy started to quickly shift beyond compliance in recent years and is expected to move even faster in the near future. Not surprisingly, the Thomson Reuters Risk & Compliance Survey Report found that 82% of respondents cited data and cybersecurity concerns as their organization’s greatest risk. However, the majority of organizations noticed a recent shift: that their organization has been moving from compliance as a “check the box” task to a strategic function.

With this evolution in data privacy, many organizations find that they need to proactively make changes to their approach to set themselves up for the future. Here are five key considerations to get ready for the future of data privacy.

1. Create a process for staying up to date on new and evolving regulations

While data privacy is more than simply compliance, your organization must comply with all regulations first and foremost — or else risk fines and reputational damage. However, regulations are constantly being passed and changed, making it exceptionally challenging to stay up to date. As of September 2024, 20 states had consumer data privacy laws, with legislation pending in numerous other states. While the U.S. does not currently have a federal data privacy law, the American Privacy Rights Act is in the first stage of legislation.

As the data privacy regulation landscape continues to change, organizations must create a process to manage all pertinent regulations, which can be challenging for global companies. Because organizations must comply with the regulations of their customer locations, not the company’s locations, global businesses often find themselves bound by many different regulations. Organizations are increasingly turning to artificial intelligence (AI) with tools that monitor all relevant regulations and ensure compliance, which saves time and reduces fines.

2. Focus on balancing data privacy with analytics and AI goals

AI at the University of Pennsylvania’s Wharton School found that the percentage of employees who used AI weekly increased from 37% in 2023 to 73% in 2024. However, this significant and rapid increase in AI adoption has created significant data privacy issues. Top concerns include a lack of data transparency, new endpoints for vulnerabilities, third-party vendors and potential regulatory gaps. At the same time, businesses not using AI will likely quickly fall behind competitors in productivity and personalization.

Because not using AI is rarely the right business decision, organizations must take a strategic approach to creating a balance between business value and data security. While technology is part of the solution, platforms and systems cannot solve the challenges without a balanced approach. By creating processes and a framework that helps organizations evaluate risks and benefits, businesses can make smart business decisions with regard to data privacy. For example, a company may adopt automation throughout their organization using AI except in use cases that involve sensitive customer and employee data.

Explore data privacy solutions

3. Consider privacy-preserving machine learning (PPML)

By using specific techniques in AI and analytics, organizations can reduce data privacy risks. Many organizations are turning to PPML, which is an initiative started by Microsoft to protect data privacy when training large-capacity language models. Here are the three components of PPML defined by Microsoft:

  1. Understand: Organizations should conduct threat modeling and attack research while also identifying properties and guarantees. Additionally, leaders need to understand regulatory requirements.
  2. Measure: To determine the current status of data privacy, leaders should capture vulnerabilities quantitatively. Next, teams should develop and apply frameworks to monitor risks and mitigation success.
  3. Mitigate: After gaining a full picture of data privacy, teams must develop and apply techniques to reduce privacy risks. Lastly, leaders must meet all legal and compliance regulations.

4. Focus on data minimization

In the past, many businesses defaulted to keeping all — or at least most of — their data for a lengthy period of time. However, all data stored and saved must follow compliance regulations, causing many organizations to use a strategy referred to as data minimization.

Deloitte defines data minimization as taking steps to determine what information is needed, how it’s protected and used and how long to keep it. By taking this measured approach and determining which data to keep, organizations can reduce costs, make it easier to find the right data and improve compliance. Additionally, it’s easier and takes fewer resources to secure a smaller volume of data.

5. Create a culture of data privacy

Just like cybersecurity, data privacy is not simply the job of specific employees. Instead, organizations need to instill the mindset that every employee is responsible for data privacy. Creating a data privacy culture doesn’t happen overnight or with a single meeting. Instead, leaders must work to instill the values and focus over time. The first step is for leaders to become champions, express the shift in responsibility and “walk the walk” in terms of data privacy.

Because data privacy depends on team members following the processes and requirements specified, organizations must not simply dictate the rules but instead must explain the importance of data privacy. When employees understand the risks of not following the processes as well as the consequences to the organization and its consumers, they are more likely to comply.

Additionally, leaders should measure compliance with the processes to determine the current state and then the goal. By then offering incentives, organizations can help encourage compliance as well as stress its overall importance.

Start crafting your data privacy approach now

As your team focuses on planning for 2025 and beyond, now is the time to pause to make sure that your approach and goals align with where the industry is moving. Organizations that understand where data privacy is likely headed and take the steps needed to align their goals with the future of data privacy can be better prepared to more effectively gain business value from their data while still ensuring compliance.

The post Preparing for the future of data privacy appeared first on Security Intelligence.

CISO vs. CEO: Making a case for cybersecurity investments

Ask CISOs why they think there is a cyber skills shortage in their organization, what keeps them up at night or what the most important issue facing the industry is — at some point, even if not the first response, they will bring up budgets.

For example, at RSA Conference 2024, a roundtable discussion about issues facing the cybersecurity industry, one CISO stated bluntly that budgets — or lack thereof — are the biggest problem. At a time when everything is getting more expensive, the CISO said, security budgets are being slashed.

As for the cybersecurity talent shortage, the 2024 ISC2 Cybersecurity Workforce Study noted that “39% said a lack of budget was the top reason for cyber shortages, replacing a shortage of talent as the previous top reason for staff shortages.” According to Forrester’s 2024 Cybersecurity Benchmarks Global Report, the cybersecurity budget is just 5.7% of the entire IT budget, making it very difficult for CISOs to bring in the right personnel or upgrade tools and solutions.

However, it might not be the dollar amount that is the problem as much as where the budget is coming from. CEOs think about cybersecurity differently when it is tied to IT and when the CISO reports directly to the CIO versus when the CISO can present cybersecurity as a vital cog in overall business operations and tie it directly to business risk, the Forrester report found.

“CISOs who can articulate the business value of cybersecurity, demonstrating how it can drive revenue and support strategic goals, are more likely to secure the necessary funding. This shift also reflects a growing recognition of cybersecurity’s strategic importance beyond mere IT operations,” Louis Columbus wrote.

Key issues in cybersecurity funding

Once cybersecurity is approached as a key factor in business operations rather than as a function of IT, CEOs and CISOs are more likely to be on the same page when it comes to budget.

“Security funding and oversight is a top priority for both the management team and the Board of Directors,” said Dave Gerry, CEO of Bugcrowd.

“Cybersecurity investment uplift is prioritized against the cyber threats we face as a business; the IT risks that we have identified and need to remediate or the customer and compliance obligations that we need to ensure,” Gerry added. “Thematically, however, it all points back to ensuring that the confidentiality, integrity and availability of our data we reside over is protected — whether it’s that of customers, employees or critical business partners, whilst enabling our business in-turn.”

Risk prioritization and business continuity are two key areas that George Jones, CISO at Critical Start, focuses on. Along with emerging threats and vulnerability management, Jones says these four items are the pillars of security for the enterprise as they are aligned with overall business goals and objectives.

One of the drivers behind realigning cybersecurity investments is the Security and Exchange Commission’s (SEC) new rules around the disclosure of cybersecurity incidents. Organizations are now also required to share details about their cybersecurity risk management programs, particularly around any financial information.

“After recent SEC guidelines were announced, Boards are more focused than ever on cyber risk reduction and ensuring adequate funding is critical, especially as organization’s attack surfaces continue to rapidly expand,” said Gerry.

Explore AI cybersecurity solutions

Collaboration between CISOs and CEOs

While CISOs and CEOs (and, in many cases, in conjunction with the CFO) have to build an ongoing dialogue about cybersecurity investments, they are coming to the table with two different interests.

“The CEO lens will be focused on obtaining satisfaction that the security initiatives deliver value with tolerable impacts on productivity, but more importantly looking for the potential of competitive advantage,” said Gareth Lindahl-Wise, CISO at Ontinue. The CISO’s approach, on the other hand, focuses on risk prevention, mitigation and solutions to meet all of the organization’s legal, regulatory and contractual obligations.

The overall goal should be to create a security posture advantageous in gaining or retaining customers or attracting investment. Ultimately, said Lindahl-Wise, these decisions lie with the CEO and board.

“When it comes to funding and risk acceptance, CISO is, largely, an expert advisor — if an informed and conscious decision has been made by a CEO, then one should argue the CISO has discharged their responsibilities,” Lindahl-Wise added.

CEO Gerry, however, said the final decision on funding allocation is made by the Board of Directors, and it is up to both the CEO and the CISO to get their buy-in on where and what security investments should be made.

“This is a key reason that the CISO should report to the CEO and have direct access to the Board of Directors,” said Gerry. “While oftentimes security can be viewed as a cost center, the new reality is that a robust security program should be a competitive differentiator and a revenue enabler, in addition to simply being the cost of doing business in an ever-expanding threat environment.”

The Future is AI

CISOs have long understood the role AI plays in cybersecurity, particularly handling some of the most mundane tasks that free up time for overworked security teams to handle issues that require hands-on management. As generative AI becomes ubiquitous in the workplace, CEOs have become increasingly aware of AI’s impact on business and security risks. Some companies are turning to adding Chief AI Officers to their IT and security teams, but even when they aren’t CEOs still recognize the need to include AI in future security budgets.

“As threats become more sophisticated, leveraging AI tools enables us to enhance our threat detection, automate responses and improve incident management,” said Darren Guccione, CEO at Keeper Security. “Skilled professionals are needed to navigate the rapidly evolving threat landscape and ensure that our AI-driven strategies remain effective and secure and must be a budget consideration.”

How it is defined within the cybersecurity budget will depend on how it is used. Will it be a fringe use of AI in commercial tools for productivity gains or an embedded use of AI in the organization’s core offerings?

“If it is the latter, the CEO must satisfy themselves that the organization has the right experience to manage the opportunities and risks,” Lindahl-Wise said. As for the security side of things, “My hunch is we will see AI responsibilities feature heavily in CIO/CTO roles before standalone CAIOs become the norm.”

AI might be the most current technology and security disrupter, but it won’t be the last. Where it is similar is that it creates risk, both to the business and to cybersecurity, and risk is where CEOs and CISOs will focus on investments as a team.

The post CISO vs. CEO: Making a case for cybersecurity investments appeared first on Security Intelligence.

Apple Intelligence raises stakes in privacy and security

Apple’s latest innovation, Apple Intelligence, is redefining what’s possible in consumer technology. Integrated into iOS 18.1, iPadOS 18.1 and macOS Sequoia 15.1, this milestone puts advanced artificial intelligence (AI) tools directly in the hands of millions. Beyond being a breakthrough for personal convenience, it represents an enormous economic opportunity. But the bold step into accessible AI comes with critical questions about security, privacy and the risks of real-time decision-making in users’ most private digital spaces.

AI in every pocket

Having sophisticated AI at your fingertips isn’t just a leap in personal technology; it’s a seismic shift in how industries will evolve. By enabling real-time decision-making, mobile artificial intelligence can streamline everything from personalized notifications to productivity tools, making AI a ubiquitous companion in daily life. But what happens when AI that draws from “personal context” is compromised? Could this create a bonanza of social engineering and malicious exploits?

The risks of real-time AI processing

Apple Intelligence thrives on real-time personalization — analyzing user interactions to refine notifications, messaging and decision-making. While this enhances the user experience, it’s a double-edged sword. If attackers compromise these systems, the AI’s ability to customize notifications or prioritize messages could become a weapon. Malicious actors could manipulate AI to inject fraudulent messages or notifications, potentially duping users into disclosing sensitive information.

These risks aren’t hypothetical. For example, security researchers have exposed how hidden data in images can deceive AI into taking unintended actions — a stark reminder of how intelligent systems remain susceptible to creative exploitation.

In the new, real-time AI age, AI cybersecurity must address several risks, such as:

  1. Privacy concerns: Continuous data collection and analysis can lead to unauthorized access or misuse of personal information. For instance, AI-powered virtual assistants that capture frequent screenshots to personalize user experiences have raised significant privacy issues.
  2. Security vulnerabilities: Real-time AI systems can be susceptible to cyberattacks, especially if they process sensitive data without robust security measures. The rapid evolution of AI introduces new vulnerabilities, necessitating strong data protection mechanisms.
  3. Bias and discrimination: AI models trained on biased data can perpetuate or even amplify existing prejudices, leading to unfair outcomes in real-time applications. Addressing these biases is crucial to ensure equitable AI deployment.
  4. Lack of transparency: Real-time decision-making by AI systems can be opaque, making it challenging to understand or challenge outcomes, especially in critical areas like healthcare or criminal justice. This opacity can undermine trust and accountability.
  5. Operational risks: Dependence on real-time AI can lead to overreliance on automated systems, potentially resulting in operational failures if the AI system malfunctions or provides incorrect outputs. Ensuring human oversight is essential to mitigate such risks.
Explore AI cybersecurity solutions

Privacy: Apple’s ace in the hole

Unlike many competitors, Apple processes much of its AI functionality on-device, leveraging its latest A18 and A18 Pro chips, specifically designed for high-performance, energy-efficient machine learning. For tasks requiring greater computational power, Apple employs Private Cloud Compute, a system that processes data securely without storing or exposing it to third parties.

Apple’s long-standing reputation for prioritizing privacy gives it a competitive edge. Yet, even with robust safeguards, no system is infallible. Compromised AI features — especially those tied to messaging and notifications — could become a goldmine for social engineering schemes, threatening the very trust that Apple has built its brand upon.

Economic upside vs. security downside

The economic scale of this innovation is staggering, as it pushes companies to adopt AI-driven solutions to stay competitive. However, this proliferation amplifies security challenges. The widespread adoption of real-time AI raises the stakes for all users, from everyday consumers to enterprise-level stakeholders.

To stay ahead of potential threats, Apple has expanded its Security Bounty Program, offering rewards of up to $1 million for identifying vulnerabilities in its AI systems. This proactive approach underscores the company’s commitment to evolving alongside emerging threats.

The AI double-edged sword

The arrival of Apple Intelligence is a watershed moment in consumer technology. It promises unparalleled convenience and personalization while also highlighting the inherent risks of entrusting critical processes to AI. Apple’s dedication to privacy offers a significant buffer against these risks, but the rapid evolution of AI demands constant vigilance.

The question isn’t whether AI will become an integral part of our lives — it already has. The real challenge lies in ensuring that this technology remains a force for good, safeguarding the trust and security of those who rely on it. As Apple paves the way for AI in the consumer market, the balance between innovation and protection has never been more critical.

The post Apple Intelligence raises stakes in privacy and security appeared first on Security Intelligence.

Cloud Threat Landscape Report: AI-generated attacks low for the cloud

For the last couple of years, a lot of attention has been placed on the evolutionary state of artificial intelligence (AI) technology and its impact on cybersecurity. In many industries, the risks associated with AI-generated attacks are still present and concerning, especially with the global average of data breach costs increasing by 10% from last year.

However, according to the most recent Cloud Threat Landscape Report released by IBM’s X-Force team, the near-term threat of an AI-generated attack targeting cloud computing environments is actually moderately low. Still, projections from X-Force reveal that an increase in these sophisticated attack methods could be on the horizon.

Current status of the cloud computing market

The cloud computing market continues to grow exponentially, with experts expecting its value to reach more than $675 billion by the end of 2024. As more organizations expand their operational capabilities beyond on-premise restrictions and leverage public and private cloud infrastructure and services, adoption of AI technology is steadily increasing across multiple industry sectors.

Generative AI’s rapid integration into cloud computing platforms has created many opportunities for businesses, especially when enabling better automation and efficiency in the deployment, provisioning and scalability of IT services and SaaS applications.

However, as more businesses rely on new disruptive technologies to help them maximize the value of their cloud investments, the potential security danger that generative AI poses is something closely monitored by various cybersecurity organizations.

Read the Cloud Threat Landscape Report

Why are AI-generated attacks in the cloud currently considered lower risk?

Although AI-generated attacks are still among the top emerging risks for senior risk and assurance executives, according to a recent Gartner report, the current threat of AI technologies being exploited and leveraged in cloud infrastructure attacks is still moderately low, according to X-Force’s research.

This isn’t to say that AI technology isn’t still being regularly used in the development and distribution of highly sophisticated phishing schemes at scale. This behavior has already been observed with active malware distributors like Hive0137, who make use of large language models (LLMs) when scripting new dark web tools. Rather, the current lower risk projections are relevant to the likelihood of AI platforms being directly targeted in both cloud and on-premise environments.

One of the primary reasons for this lower risk has to do with the complex undertaking it will take for cyber criminals to breach and manipulate the underlying infrastructure of AI deployments successfully. Even if attackers put considerable resources into this effort, the still relatively low market saturation of cloud-based AI tools and solutions would likely lead to a low return on investment in time, resources and risks associated with carrying out these attacks.

Preparing for an inevitable increase in AI-driven cloud threats

While the immediate risks of AI-driven cloud threats may be lower today, this isn’t to say that organizations shouldn’t prepare for this to change in the near future.

IBM’s X-Force team has recognized correlations between the percentage of market share new technologies have across various markets and the trigger points related to their associated cybersecurity risks. According to the recent X-Force analysis, once generative AI matures and approaches 50% market saturation, it’s likely that its attack surface will become a larger target for cyber criminals.

For organizations currently utilizing AI technologies and proceeding with cloud adoption, designing more secure AI strategies is essential. This includes developing stronger identity security postures, integrating security throughout their cloud development processes and safeguarding the integrity of their data and quantum computation models.

The post Cloud Threat Landscape Report: AI-generated attacks low for the cloud appeared first on Security Intelligence.

Testing the limits of generative AI: How red teaming exposes vulnerabilities in AI models

With generative artificial intelligence (gen AI) on the frontlines of information security, red teams play an essential role in identifying vulnerabilities that others can overlook.

With the average cost of a data breach reaching an all-time high of $4.88 million in 2024, businesses need to know exactly where their vulnerabilities lie. Given the remarkable pace at which they’re adopting gen AI, there’s a good chance that some of those vulnerabilities lie in AI models themselves — or the data used to train them.

That’s where AI-specific red teaming comes in. It’s a way to test the resilience of AI systems against dynamic threat scenarios. This involves simulating real-world attack scenarios to stress-test AI systems before and after they’re deployed in a production environment. Red teaming has become vitally important in ensuring that organizations can enjoy the benefits of gen AI without adding risk.

IBM’s X-Force Red Offensive Security service follows an iterative process with continuous testing to address vulnerabilities across four key areas:

  1. Model safety and security testing
  2. Gen AI application testing
  3. AI platform security testing
  4. MLSecOps pipeline security testing

In this article, we’ll focus on three types of adversarial attacks that target AI models and training data.

Prompt injection

Most mainstream gen AI models have safeguards built in to mitigate the risk of them producing harmful content. For example, under normal circumstances, you can’t ask ChatGPT or Copilot to write malicious code. However, methods such as prompt injection attacks and jailbreaking can make it possible to work around these safeguards.

One of the goals of AI red teaming is to deliberately make AI “misbehave” — just as attackers do. Jailbreaking is one such method that involves creative prompting to get a model to subvert its safety filters. However, while jailbreaking can theoretically help a user carry out an actual crime, most malicious actors use other attack vectors — simply because they’re far more effective.

Prompt injection attacks are much more severe. Rather than targeting the models themselves, they target the entire software supply chain by obfuscating malicious instructions in prompts that otherwise appear harmless. For instance, an attacker might use prompt injection to get an AI model to reveal sensitive information like an API key, potentially giving them back-door access to any other systems that are connected to it.

Red teams can also simulate evasion attacks, a type of adversarial attack whereby an attacker subtly modifies inputs to trick a model into classifying or misinterpreting an instruction. These modifications are usually imperceptible to humans. However, they can still manipulate an AI model into taking an undesired action. For example, this might include changing a single pixel in an input image to fool the classifier of a computer vision model, such as one intended for use in a self-driving vehicle.

Explore X-Force Red Offensive Security Services

Data poisoning

Attackers also target AI models during training and development, hence it’s essential that red teams simulate the same attacks to identify risks that could compromise the whole project. A data poisoning attack happens when an adversary introduces malicious data into the training set, thereby corrupting the learning process and embedding vulnerabilities into the model itself. The result is that the entire model becomes a potential entry point for further attacks. If training data is compromised, it’s usually necessary to retrain the model from scratch. That’s a highly resource-intensive and time-consuming operation.

Red team involvement is vital from the very beginning of the AI model development process to mitigate the risk of data poisoning. Red teams simulate real-world data poisoning attacks in a secure sandbox environment air-gapped from existing production systems. Doing so provides insights into how vulnerable the model is to data poisoning and how real threat actors might infiltrate or compromise the training process.

AI red teams can proactively identify weaknesses in data collection pipelines, too. Large language models (LLMs) often draw data from a huge number of different sources. ChatGPT, for example, was trained on a vast corpus of text data from millions of websites, books and other sources. When building a proprietary LLM, it’s crucial that organizations know exactly where they’re getting their training data from and how it’s vetted for quality. While that’s more of a job for security auditors and process reviewers, red teams can use penetration testing to assess a model’s ability to resist flaws in its data collection pipeline.

Model inversion

Proprietary AI models are usually trained, at least partially, on the organization’s own data. For instance, an LLM deployed in customer service might use the company’s customer data for training so that it can provide the most relevant outputs. Ideally, models should only be trained based on anonymized data that everyone is allowed to see. Even then, however, privacy breaches may still be a risk due to model inversion attacks and membership inference attacks.

Even after deployment, gen AI models can retain traces of the data that they were trained on. For instance, the team at Google’s DeepMind AI research laboratory successfully managed to trick ChatGPT into leaking training data using a simple prompt. Model inversion attacks can, therefore, allow malicious actors to reconstruct training data, potentially revealing confidential information in the process.

Membership inference attacks work in a similar way. In this case, an adversary tries to predict whether a particular data point was used to train the model through inference with the help of another model. This is a more sophisticated method in which an attacker first trains a separate model – known as a membership inference model — based on the output of the model they’re attacking.

For example, let’s say a model has been trained on customer purchase histories to provide personalized product recommendations. An attacker may then create a membership inference model and compare its outputs with those of the target model to infer potentially sensitive information that they might use in a targeted attack.

In either case, red teams can evaluate AI models for their ability to inadvertently leak sensitive information directly or indirectly through inference. This can help identify vulnerabilities in training data workflows themselves, such as data that hasn’t been sufficiently anonymized in accordance with the organization’s privacy policies.

Building trust in AI

Building trust in AI requires a proactive strategy, and AI red teaming plays a fundamental role. By using methods like adversarial training and simulated model inversion attacks, red teams can identify vulnerabilities that other security analysts are likely to miss.

These findings can then help AI developers prioritize and implement proactive safeguards to prevent real threat actors from exploiting the very same vulnerabilities. For businesses, the result is reduced security risk and increased trust in AI models, which are fast becoming deeply ingrained across many business-critical systems.

The post Testing the limits of generative AI: How red teaming exposes vulnerabilities in AI models appeared first on Security Intelligence.

❌