Will AI threaten the role of human creativity in cyber threat detection?

Cybersecurity requires creativity and thinking outside the box. That’s why more organizations are looking to people with soft skills who come from outside the tech industry to address the cyber skills gap. As the threat landscape becomes more complex and nation-state actors launch innovative cyberattacks against critical infrastructure, there is a need for cybersecurity professionals who can anticipate these attacks and develop creative preventive solutions.

Of course, a lot of cybersecurity work is mundane and repetitive — monitoring logs, sniffing out false positive alerts, etc. Artificial intelligence (AI) has been a boon in filling the talent gaps when it comes to these types of tasks. But AI has also proven useful for many of the same things that creative thought brings to the threat table, such as addressing more sophisticated threat actors, the rapid increase of data and the hybrid infrastructure.

However, many companies are seeing the value of AI, especially generative AI (gen AI), in handling a greater share of creative work — not just in cybersecurity but also in areas like marketing and public relations, writing and research. But are these organizations using AI in a way that could threaten the importance of human creativity in threat detection?

Why creativity is important to cybersecurity

The very simple reason why cybersecurity requires innovative people is that threat actors are already coming up with novel ways to get into your systems. Are they using gen AI to launch their attacks? You bet they are; phishing emails have never been more grammatically correct or realistic. But before AI was available, threat actors were designing social engineering attacks that attracted clicks. Now, they have advanced beyond “how can we lure in victims” to “how can we get more out of a single attack after we lure in the victims.”

Creativity isn’t just coming up with new ideas. It is also the ability to see things through a big-picture lens, interpret historical data and know where to look for information you might not realize you need. For example, creative thought is required for the following security tasks:

  • Threat hunting or predicting a threat actor’s move or finding their tracks in a system
  • Finding buried evidence in a forensic search
  • Understanding historical data in anomaly detection
  • Distinguishing a real email or document from a well-designed phishing attack
  • Verifying new zero-day attacks and other malware variants found in otherwise unknown vulnerabilities
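
Several of these tasks hinge on the same underlying judgment: does current activity deviate meaningfully from a historical baseline? As a rough illustration (the data, metric and threshold here are invented for the example, not taken from any specific product), a naive statistical baseline check might look like this:

```python
from statistics import mean, stdev

def flag_anomaly(history, current, z_threshold=3.0):
    """Flag a current metric value that deviates strongly from a
    historical baseline, using a simple z-score test."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    z = (current - mu) / sigma
    return abs(z) > z_threshold

# Daily failed-login counts for the past two weeks (illustrative data)
baseline = [12, 9, 15, 11, 10, 13, 8, 14, 12, 10, 9, 11, 13, 12]
print(flag_anomaly(baseline, 11))   # within normal range -> False
print(flag_anomaly(baseline, 120))  # sudden spike -> True
```

A z-score over raw counts is far cruder than what production anomaly-detection systems do; the point is that the statistical part is automatable, while deciding which metrics and baselines matter is where the human judgment described above comes in.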

AI can augment human creativity, but gen AI gets a lot of things wrong. Users have found themselves in situations where AI claimed plagiarism on original work or AI hallucinations offered false information that nullified the research of human analysts. AI algorithms are also susceptible to bias that could lead to false positives.

AI’s role in creative cybersecurity and beyond

While many creative people, in cybersecurity and beyond, see gen AI as a mixed blessing, they often embrace the technology because it is a huge timesaver.

“Gen AI can help prototype much faster because the large language models can take over the refactoring and documentation of code,” wrote Aili McConnon in an IBM blog post. Also, the article pointed out, AI tools can help users create prototypes or visualize their ideas in minutes versus hours or days.

Creativity married to AI can help identify future leaders. According to research from IBM, two-thirds of company leaders found that AI is driving their growth, with four specific use cases — IT operations, user experience, virtual assistants and cybersecurity — most commonly favored by leaders.

“A Learner will typically copy predefined scenarios using out-of-the-box technologies,” Dr. Stephan Bloehdorn, Executive Partner and Practice Leader, AI, Analytics and Automation-IBM Consulting DACH, was quoted in the study. “But a Leader develops custom innovations.”

Over-reliance on AI?

As gen AI becomes more ubiquitous in the workplace, and as more creative people and leaders rely on it to put their ideas in motion, are we also relying on the technology to the point that it degrades other necessary skills, like the ability to analyze data and create viable solutions?

It is unclear whether organizations are over-relying on gen AI, according to Stephen Kowski, Field CTO at SlashNext Email Security+, but over-reliance can emerge as an unintended consequence of how organizations allocate their resources.

“While AI excels at processing massive volumes of threat data, real-world attacks constantly evolve beyond historical patterns, requiring human expertise to identify and respond to zero-day threats,” said Kowski in an email interview. “The key is achieving the right balance where AI handles high-volume routine detection while skilled analysts investigate novel attack patterns and determine strategic responses.”

Yet, Kris Bondi, CEO and Co-Founder of Mimoto, isn’t worried about AI leading to a degradation of skills — at least not for the foreseeable future.

“One of the biggest challenges for cybersecurity professionals is having too many alerts and too many false positives. AI is only able to automate a small percentage of responses. It’s more likely that AI will eventually automate additional requirements for someone deemed to be suspicious or the elevation of alert so that a human can analyze the situation,” Bondi said via email.

However, organizations should watch out for AI’s role in defining threat-hunting parameters. “If AI is the sole driver defining threat hunting parameters without spot-checks or audits, the threat intelligence approach could eventually be focused in the wrong area. The answer is more reliance on critical thinking and analytical skills,” said Bondi.

Embracing creativity in an AI-driven world

AI overall, and gen AI in particular, is going to be part of the business world going forward and will play a vital role in how organizations and analysts approach cybersecurity defenses and mitigations. But the soft skills that creative thought depends on will still play an important and necessary role in cybersecurity.

“Rather than diminishing soft skills, AI integration has the opportunity to elevate the importance of communication, collaboration and strategic thinking, as security teams must effectively convey complex findings to stakeholders,” said Kowski. “The human elements of cybersecurity — leadership, adaptability and cross-functional partnership — become even more critical as AI handles the technical heavy lifting.”

The post Will AI threaten the role of human creativity in cyber threat detection? appeared first on Security Intelligence.

How to calculate your AI-powered cybersecurity’s ROI

Imagine this scenario: A sophisticated, malicious phishing campaign targets a large financial institution. The attackers use emails generated by artificial intelligence (AI) that closely mimic the company’s internal communications. The emails contain malicious links designed to steal employee credentials, which the attackers could use to gain access to company assets and data for unknown purposes.

The organization’s AI-powered cybersecurity solution, which continuously monitors network traffic and user behavior, detects several anomalies associated with the attack, blocks access to the suspicious domains across the network, quarantines the phishing emails, resets passwords for all potentially compromised accounts and sends real-time alerts to the security operations center, providing detailed information about the attack vector and affected systems.

Using predictive analytics, the AI suggests potential next steps the attackers might take, allowing the security team to strengthen defenses in those areas proactively.

The good guys won. But was the AI solution worth the price? What’s the value in dollars of that victory? It’s easy to measure the investment in AI. But how do you measure the return on that investment? Specifically, how do you measure the value of data never stolen, unknown reputational damage that never happened, customer trust never lost or reduced operational risks never incurred?

The rise of AI cybersecurity

To be sure, cybersecurity AI spending is set to increase dramatically. Organizations spent $24 billion in 2023, with an expected rise to $133 billion by 2030. Cybersecurity professionals and the companies they work for will increasingly rely on advanced AI solutions as threats grow and the cost of data breaches also rises.

The challenge of calculating cybersecurity ROI is compounded by many other factors:

  • Dozens, hundreds or even thousands of attempted cyberattacks per year per organization
  • The lack of universally accepted metrics or calculations for cybersecurity ROI
  • The long payback period for investments in cybersecurity AI
  • The fast-changing nature of the threat landscape
  • The fact that cybersecurity investments also touch areas like operational efficiency and regulatory compliance

Historically, organizations calculated ROI in cybersecurity investments by estimating money saved in the absence of security incidents. But that fails to account for proactive security measures, efficiency gains in operations and the overall security posture. With the integration of AI, cybersecurity has fundamentally changed, offering enhanced threat detection and prevention capabilities beyond simply measuring the absence of incidents.

A proactive approach and improved operational efficiency through task automation provide tangible benefits not captured in traditional ROI calculations.

New metrics for ROI calculation

The use of AI tools has transformed the typical cybersecurity ROI calculation by introducing several quantifiable metrics, from detection and response times to operational efficiency gains.

These metrics offer a more comprehensive view of the value derived from AI-powered cybersecurity investments, enabling organizations to make more informed decisions about resource allocation and strategic planning.

Cost savings can also be measured in the aggregate. According to the IBM 2024 Cost of a Data Breach report, organizations extensively using security AI and automation in prevention workflows saved an average of $2.2 million in breach costs compared to those without such technologies.

Still, measuring AI cybersecurity ROI comes with challenges, including difficulty attributing prevented incidents directly to AI, the constantly evolving threat landscape and balancing initial investment costs with long-term benefits.
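
One common starting point for putting a number on these savings is the return on security investment (ROSI) calculation, which compares the risk reduction a control delivers against its cost. The sketch below is illustrative only; the loss, mitigation and cost figures are assumptions for the example, not numbers from the IBM report:

```python
def rosi(annual_loss_expectancy, mitigation_ratio, solution_cost):
    """Return on security investment:
    (risk reduction delivered - cost of solution) / cost of solution."""
    risk_reduction = annual_loss_expectancy * mitigation_ratio
    return (risk_reduction - solution_cost) / solution_cost

# Illustrative assumptions: $2.2M in expected annual breach losses,
# AI tooling that mitigates 70% of them and costs $500K per year.
print(f"ROSI: {rosi(2_200_000, 0.70, 500_000):.0%}")  # ROSI: 208%
```

The hard part, as the surrounding discussion notes, is not the arithmetic but estimating the inputs: the expected loss and the mitigation ratio are exactly the quantities that prevented incidents make difficult to attribute.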

Taking a holistic approach to cybersecurity AI ROI

Organizations can leverage established frameworks, such as the NIST Cybersecurity Framework, to effectively measure and communicate AI’s ROI in cybersecurity. By aligning AI initiatives with the framework’s core functions, organizations can more accurately measure their impact on overall cybersecurity performance.

To effectively measure the impact of AI on cybersecurity ROI, organizations should focus on specific Key Performance Indicators (KPIs):

  • Mean time to detect
  • Mean time to respond
  • Security operational efficiency
  • Threat intelligence accuracy
  • Compliance adherence rate
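
The first two KPIs above can be computed directly from incident timestamps. A minimal sketch with invented incident data:

```python
from datetime import datetime, timedelta

# Illustrative incident records: (occurred, detected, resolved)
incidents = [
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 9, 30),  datetime(2024, 5, 1, 11, 0)),
    (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 3, 14, 10), datetime(2024, 5, 3, 15, 0)),
]

def mean_delta(pairs):
    """Average the time between each (start, end) timestamp pair."""
    pairs = list(pairs)
    total = sum(((end - start) for start, end in pairs), timedelta())
    return total / len(pairs)

mttd = mean_delta((occ, det) for occ, det, _ in incidents)  # mean time to detect
mttr = mean_delta((det, res) for _, det, res in incidents)  # mean time to respond
print(f"MTTD: {mttd}, MTTR: {mttr}")  # MTTD: 0:20:00, MTTR: 1:10:00
```

Tracking these averages before and after an AI deployment is one concrete way to attach numbers to the ROI discussion above, though it captures only the detection-speed slice of the value.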

The best path forward is a comprehensive approach that uses risk assessment frameworks, measures risk reduction, estimates intangible benefits and regularly reviews and updates calculations.

Organizations must adopt a holistic approach that considers the proactive capabilities, efficiency gains and quantifiable metrics provided by AI-powered solutions. This comprehensive evaluation allows a more accurate assessment of cybersecurity investments’ true value and impact in today’s complex threat landscape.

Of course, cyberattacks don’t happen randomly or in a vacuum. Take the follow-on consequences of the ongoing cybersecurity skills gap, which can be self-enlarging, according to Sam Hector, senior strategy leader of IBM Security.

“When you don’t have enough skilled experts monitoring and defending your infrastructure, a few things happen,” Hector said. “The time to triage alerts grows as the queue of incidents to review becomes longer, meaning you’re more likely to be breached, and attacker dwell times (the period they spend in your environment undetected) increase, as you’re less likely to find the needle in the haystack. A growing time to detect directly leads to higher breach costs on average.”

And the problem keeps growing: “Teams that are stretched too thin don’t have the time to devote to improving cybersecurity processes, integration and efficiency,” Hector said. “They’re unable to run drill exercises and embark on further training as they’re too focused on keeping the lights on. This means that, over time, they’re less effective relative to the threat landscape, and misconfigurations and gaps develop that attackers can exploit.”

Hector said persistent attackers are unlikely to go unnoticed by these weakening defenses: “If there’s a specific industry, region or even organization that is known to be struggling to acquire cybersecurity skills, this puts them at increased risk of being targeted by attackers who will be anticipating weaker defenses.”

An ongoing shift in cybersecurity investment

The integration of AI in cybersecurity has fundamentally changed how organizations approach and measure their security investments. By providing more tangible and comprehensive ROI metrics, AI enables organizations to make data-driven decisions about their cybersecurity strategies. As cyber threats continue to evolve, the role of AI in cybersecurity will only grow more critical, making it essential for organizations to invest in — and effectively measure — the impact of these technologies.

Cybersecurity trends: IBM’s predictions for 2025

Cybersecurity concerns in 2024 can be summed up in two letters: AI (or five letters if you narrow it down to gen AI). Organizations are still in the early stages of understanding the risks and rewards of this technology. For all the good it can do to improve data protection, keep up with compliance regulations and enable faster threat detection, threat actors are also using AI to accelerate their social engineering attacks and sabotage AI models with malware.

AI might have gotten the lion’s share of attention in 2024, but it wasn’t the only cyber threat organizations had to deal with. Credential theft continues to be problematic, with a 71% year-over-year increase in attacks using compromised credentials. The skills shortage continues, costing companies an additional $1.76 million in a data breach aftermath. And as more companies rely on the cloud, it shouldn’t be surprising that there has been a spike in cloud intrusions.

But there have been positive steps in cybersecurity over the past year. CISA’s Secure by Design program signed on more than 250 software manufacturers to improve their cybersecurity hygiene. CISA also introduced its Cyber Incident Reporting Portal to improve the way organizations share cyber information.

Last year’s cybersecurity predictions focused heavily on AI and its impact on how security teams will operate in the future. This year’s predictions also emphasize AI, showing that cybersecurity may have reached a point where security and AI are interdependent, for both good and bad.

Here are this year’s predictions.

Shadow AI is everywhere (Akiba Saeedi, Vice President, IBM Security Product Management)

Shadow AI will prove to be more common — and riskier — than we thought. Businesses deploy more and more generative AI models across their systems each day, sometimes without their knowledge. In 2025, enterprises will truly see the scope of “shadow AI” – unsanctioned AI models that staff use without proper governance. Shadow AI presents a major risk to data security, and businesses that successfully confront this issue in 2025 will use a mix of clear governance policies, comprehensive workforce training and diligent detection and response.

Identity’s transformation (Wes Gyure, Executive Director, IBM Security Product Management)

How enterprises think about identity will continue to transform in the wake of hybrid cloud and app modernization initiatives. Recognizing that identity has become the new security perimeter, enterprises will continue their shift to an Identity-First strategy, managing and securing access to applications and critical data, including gen AI models. In 2025, a fundamental component of this strategy is to build an effective identity fabric, a product-agnostic integrated set of identity tools and services. When done right, this will be a welcome relief to security professionals, taming the chaos and risk caused by a proliferation of multicloud environments and scattered identity solutions.

Everyone must work together to manage threats (Sam Hector, Global Strategy Leader, IBM Security)

Cybersecurity teams will no longer be able to effectively manage threats in isolation. Threats from generative AI and hybrid cloud adoption are rapidly evolving. Meanwhile, the risk quantum computing poses to modern standards of public-key encryption will become unavoidable. Given the maturation of new quantum-safe cryptography standards, there will be a drive to discover encrypted assets and accelerate the modernization of cryptography management. Next year, successful organizations will be those where executives and diverse teams jointly develop and enforce cybersecurity strategies, embedding security into the organizational culture.

Prepare for post-quantum cryptography standards (Ray Harishankar, IBM Fellow, IBM Quantum Safe)

As organizations begin the transition to post-quantum cryptography over the next year, agility will be crucial to ensure systems are prepared for continued transformation, particularly as the U.S. National Institute of Standards and Technology (NIST) continues to expand its toolbox of post-quantum cryptography standards. NIST’s initial post-quantum cryptography standards were a signal to the world that the time is now to start the journey to becoming quantum-safe. But equally important is the need for crypto agility, ensuring that systems can rapidly adapt to new cryptographic mechanisms and algorithms in response to changing threats, technological advances and vulnerabilities. Ideally, automation will streamline and accelerate the process.

Data will become a vital part of AI security (Suja Viswesan, vice president of Security Software Development, IBM)

Data and AI security will become an essential ingredient of trustworthy AI. “Trustworthy AI” is often interpreted as AI that is transparent, fair and privacy-protecting. These are critical characteristics. But if AI and the data powering it aren’t also secure, then all other characteristics are compromised. In 2025, as businesses, governments and individuals interact with AI more often and with higher stakes, data and AI security will be viewed as an even more important part of the trustworthy AI recipe.

Organizations will continue learning the juxtaposition of AI’s benefits and threats (Mark Hughes, Global Managing Partner, Cybersecurity Services, IBM)

As AI matures from proof-of-concept to wide-scale deployment, enterprises reap the benefits of productivity and efficiency gains, including automating security and compliance tasks to protect their data and assets. But organizations need to be aware of AI being used as a new tool or conduit for threat actors to breach long-standing security processes and protocols. Businesses need to adopt security frameworks, best practice recommendations and guardrails for AI and adapt quickly — to address both the benefits and risks associated with rapid AI advancements.

Greater understanding of AI-assisted versus AI-powered threats (Troy Bettencourt, Global Partner and Head of IBM X-Force)

Protect against AI-assisted threats; plan for AI-powered threats. There is a distinction between AI-powered and AI-assisted threats, including how organizations should think about their proactive security posture. AI-powered attacks, like deepfake video scams, have been limited to date; today’s threats remain primarily AI-assisted — meaning AI can help threat actors create variants of existing malware or a better phishing email lure. To address current AI-assisted threats, organizations should prioritize implementing end-to-end security for their own AI solutions, including protecting user interfaces, APIs, language models and machine learning operations, while remaining mindful of strategies to defend against future AI-powered attacks.

There’s a very clear message in these predictions: understanding how AI can both help and hurt an organization is vital to ensuring your company and its assets are protected in 2025 and beyond.

Preparing for the future of data privacy

The focus on data privacy started to shift beyond compliance in recent years and is expected to move even faster in the near future. Not surprisingly, the Thomson Reuters Risk & Compliance Survey Report found that 82% of respondents cited data and cybersecurity concerns as their organization’s greatest risk. Most organizations have also noticed a recent shift: compliance is moving from a “check the box” task to a strategic function.

With this evolution in data privacy, many organizations find that they need to proactively make changes to their approach to set themselves up for the future. Here are five key considerations to get ready for the future of data privacy.

1. Create a process for staying up to date on new and evolving regulations

While data privacy is more than simply compliance, your organization must comply with all regulations first and foremost — or else risk fines and reputational damage. However, regulations are constantly being passed and changed, making it exceptionally challenging to stay up to date. As of September 2024, 20 states had consumer data privacy laws, with legislation pending in numerous other states. While the U.S. does not currently have a federal data privacy law, the American Privacy Rights Act is in the first stage of legislation.

As the data privacy regulation landscape continues to change, organizations must create a process to manage all pertinent regulations, which can be challenging for global companies. Because organizations must comply with the regulations of their customer locations, not the company’s locations, global businesses often find themselves bound by many different regulations. Organizations are increasingly turning to artificial intelligence (AI) with tools that monitor all relevant regulations and ensure compliance, which saves time and reduces fines.

2. Focus on balancing data privacy with analytics and AI goals

Research from the University of Pennsylvania’s Wharton School found that the percentage of employees who used AI weekly increased from 37% in 2023 to 73% in 2024. However, this rapid increase in AI adoption has created significant data privacy issues. Top concerns include a lack of data transparency, new endpoints for vulnerabilities, third-party vendors and potential regulatory gaps. At the same time, businesses not using AI will likely fall behind competitors in productivity and personalization.

Because not using AI is rarely the right business decision, organizations must take a strategic approach to creating a balance between business value and data security. While technology is part of the solution, platforms and systems cannot solve the challenges without a balanced approach. By creating processes and a framework that helps organizations evaluate risks and benefits, businesses can make smart business decisions with regard to data privacy. For example, a company may adopt automation throughout their organization using AI except in use cases that involve sensitive customer and employee data.

3. Consider privacy-preserving machine learning (PPML)

By using specific techniques in AI and analytics, organizations can reduce data privacy risks. Many organizations are turning to PPML, which is an initiative started by Microsoft to protect data privacy when training large-capacity language models. Here are the three components of PPML defined by Microsoft:

  1. Understand: Organizations should conduct threat modeling and attack research while also identifying properties and guarantees. Additionally, leaders need to understand regulatory requirements.
  2. Measure: To determine the current status of data privacy, leaders should capture vulnerabilities quantitatively. Next, teams should develop and apply frameworks to monitor risks and mitigation success.
  3. Mitigate: After gaining a full picture of data privacy, teams must develop and apply techniques to reduce privacy risks. Lastly, leaders must meet all legal and compliance regulations.

4. Focus on data minimization

In the past, many businesses defaulted to keeping all — or at least most of — their data for a lengthy period. However, all stored data must follow compliance regulations, leading many organizations to adopt a strategy referred to as data minimization.

Deloitte defines data minimization as taking steps to determine what information is needed, how it’s protected and used and how long to keep it. By taking this measured approach and determining which data to keep, organizations can reduce costs, make it easier to find the right data and improve compliance. Additionally, it’s easier and takes fewer resources to secure a smaller volume of data.
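
In practice, data minimization often begins with a simple retention rule: purge records that have outlived the retention window for their category. A hypothetical sketch (the categories and retention periods are illustrative examples, not part of Deloitte’s definition):

```python
from datetime import date, timedelta

# Illustrative retention periods per data category
RETENTION = {
    "security_logs": timedelta(days=365),
    "marketing": timedelta(days=90),
}

def records_to_purge(records, today):
    """Return the records older than the retention window for their category."""
    return [
        r for r in records
        if today - r["created"] > RETENTION[r["category"]]
    ]

records = [
    {"id": 1, "category": "marketing", "created": date(2024, 1, 1)},
    {"id": 2, "category": "security_logs", "created": date(2024, 1, 1)},
]
print(records_to_purge(records, date(2024, 6, 1)))
# only the marketing record (id 1) is past its 90-day window
```

The hard work is upstream of any such rule: deciding which categories exist, which retention periods regulations require and which data was worth collecting in the first place.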

5. Create a culture of data privacy

Just like cybersecurity, data privacy is not simply the job of specific employees. Instead, organizations need to instill the mindset that every employee is responsible for data privacy. Creating a data privacy culture doesn’t happen overnight or with a single meeting. Instead, leaders must work to instill the values and focus over time. The first step is for leaders to become champions, express the shift in responsibility and “walk the walk” in terms of data privacy.

Because data privacy depends on team members following the processes and requirements specified, organizations must not simply dictate the rules but instead must explain the importance of data privacy. When employees understand the risks of not following the processes as well as the consequences to the organization and its consumers, they are more likely to comply.

Additionally, leaders should measure compliance with the processes to determine the current state and then the goal. By then offering incentives, organizations can help encourage compliance as well as stress its overall importance.

Start crafting your data privacy approach now

As your team focuses on planning for 2025 and beyond, now is the time to pause and make sure your approach and goals align with where the industry is moving. Organizations that understand where data privacy is likely headed, and take the steps needed to align their goals with that future, will be better prepared to gain business value from their data while still ensuring compliance.

Apple Intelligence raises stakes in privacy and security

Apple’s latest innovation, Apple Intelligence, is redefining what’s possible in consumer technology. Integrated into iOS 18.1, iPadOS 18.1 and macOS Sequoia 15.1, this milestone puts advanced artificial intelligence (AI) tools directly in the hands of millions. Beyond being a breakthrough for personal convenience, it represents an enormous economic opportunity. But the bold step into accessible AI comes with critical questions about security, privacy and the risks of real-time decision-making in users’ most private digital spaces.

AI in every pocket

Having sophisticated AI at your fingertips isn’t just a leap in personal technology; it’s a seismic shift in how industries will evolve. By enabling real-time decision-making, mobile artificial intelligence can streamline everything from personalized notifications to productivity tools, making AI a ubiquitous companion in daily life. But what happens when AI that draws from “personal context” is compromised? Could this create a bonanza of social engineering and malicious exploits?

The risks of real-time AI processing

Apple Intelligence thrives on real-time personalization — analyzing user interactions to refine notifications, messaging and decision-making. While this enhances the user experience, it’s a double-edged sword. If attackers compromise these systems, the AI’s ability to customize notifications or prioritize messages could become a weapon. Malicious actors could manipulate AI to inject fraudulent messages or notifications, potentially duping users into disclosing sensitive information.

These risks aren’t hypothetical. For example, security researchers have exposed how hidden data in images can deceive AI into taking unintended actions — a stark reminder of how intelligent systems remain susceptible to creative exploitation.

In the new, real-time AI age, AI cybersecurity must address several risks, such as:

  1. Privacy concerns: Continuous data collection and analysis can lead to unauthorized access or misuse of personal information. For instance, AI-powered virtual assistants that capture frequent screenshots to personalize user experiences have raised significant privacy issues.
  2. Security vulnerabilities: Real-time AI systems can be susceptible to cyberattacks, especially if they process sensitive data without robust security measures. The rapid evolution of AI introduces new vulnerabilities, necessitating strong data protection mechanisms.
  3. Bias and discrimination: AI models trained on biased data can perpetuate or even amplify existing prejudices, leading to unfair outcomes in real-time applications. Addressing these biases is crucial to ensure equitable AI deployment.
  4. Lack of transparency: Real-time decision-making by AI systems can be opaque, making it challenging to understand or challenge outcomes, especially in critical areas like healthcare or criminal justice. This opacity can undermine trust and accountability.
  5. Operational risks: Dependence on real-time AI can lead to overreliance on automated systems, potentially resulting in operational failures if the AI system malfunctions or provides incorrect outputs. Ensuring human oversight is essential to mitigate such risks.

Privacy: Apple’s ace in the hole

Unlike many competitors, Apple processes much of its AI functionality on-device, leveraging its latest A18 and A18 Pro chips, specifically designed for high-performance, energy-efficient machine learning. For tasks requiring greater computational power, Apple employs Private Cloud Compute, a system that processes data securely without storing or exposing it to third parties.

Apple’s long-standing reputation for prioritizing privacy gives it a competitive edge. Yet, even with robust safeguards, no system is infallible. Compromised AI features — especially those tied to messaging and notifications — could become a goldmine for social engineering schemes, threatening the very trust that Apple has built its brand upon.

Economic upside vs. security downside

The economic scale of this innovation is staggering, as it pushes companies to adopt AI-driven solutions to stay competitive. However, this proliferation amplifies security challenges. The widespread adoption of real-time AI raises the stakes for all users, from everyday consumers to enterprise-level stakeholders.

To stay ahead of potential threats, Apple has expanded its Security Bounty Program, offering rewards of up to $1 million for identifying vulnerabilities in its AI systems. This proactive approach underscores the company’s commitment to evolving alongside emerging threats.

The AI double-edged sword

The arrival of Apple Intelligence is a watershed moment in consumer technology. It promises unparalleled convenience and personalization while also highlighting the inherent risks of entrusting critical processes to AI. Apple’s dedication to privacy offers a significant buffer against these risks, but the rapid evolution of AI demands constant vigilance.

The question isn’t whether AI will become an integral part of our lives — it already has. The real challenge lies in ensuring that this technology remains a force for good, safeguarding the trust and security of those who rely on it. As Apple paves the way for AI in the consumer market, the balance between innovation and protection has never been more critical.
