The straight and narrow — How to keep ML and AI training on track

Artificial intelligence (AI) and machine learning (ML) have entered the enterprise environment.

According to the IBM AI in Action 2024 Report, two broad groups are onboarding AI: Leaders and learners. Leaders are seeing quantifiable results, with two-thirds reporting 25% (or greater) boosts to revenue growth. Learners, meanwhile, say they’re following an AI roadmap (72%), but just 40% say their C-suite fully understands the value of AI investment.

One thing they have in common? Challenges with data security. Despite their success with AI and ML, security remains the top concern. Here’s why.

Full steam ahead: How AI and ML get smarter

Historically, computers did what they were told. Thinking outside the box wasn’t an option — lines of code dictated what was possible and permissible.

AI and ML models take a different approach. Instead of rigid structures, AI and ML models are given general guidelines. Companies supply vast amounts of training data that help these models “learn,” in turn improving their output.

A simple example is an AI tool designed to identify images of dogs. The underlying ML structures provide basic guidance: dogs have four legs, two ears, a tail and fur. Thousands of images of both dogs and not-dogs are then fed to the model. The more pictures it “sees,” the better it becomes at telling dogs apart from everything else.
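To make the idea concrete, here is a minimal sketch in scikit-learn. The features (legs, ears, tail, fur) and the handful of labeled examples are invented for illustration; a real image model learns from pixels, not hand-picked traits.

```python
# A toy "dog vs. not-dog" classifier that learns from labeled examples
# instead of hard-coded rules. Features: [legs, ears, has_tail, has_fur].
from sklearn.linear_model import LogisticRegression

X = [
    [4, 2, 1, 1],  # labeled "dog"
    [4, 2, 1, 1],  # labeled "dog"
    [2, 0, 1, 0],  # labeled "not-dog" (bird)
    [0, 0, 1, 0],  # labeled "not-dog" (fish)
    [8, 0, 0, 0],  # labeled "not-dog" (spider)
]
y = [1, 1, 0, 0, 0]  # 1 = dog, 0 = not-dog

model = LogisticRegression().fit(X, y)

# More labeled examples -> a better decision boundary. With only five,
# a cat ([4, 2, 1, 1]) would fool it -- which is exactly the point.
print(model.predict([[4, 2, 1, 1]]))  # -> [1]: "dog"
```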

Learn more about today’s AI leaders

Off the rails: The risks of unauthorized model modification

If attackers can gain access to AI models, they can modify model outputs. Consider the example above: malicious actors compromise a business network and flood the training pipeline with unlabeled cat images and with cat images incorrectly labeled as dogs. Over time, model accuracy suffers and outputs are no longer reliable.
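A toy simulation makes the danger concrete. The data here is synthetic and the 30% poisoning rate is an arbitrary assumption for illustration, but the pattern is the point: flip enough training labels and accuracy measurably degrades.

```python
# A minimal sketch of label-flipping data poisoning on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)

# The attacker flips labels on 30% of the training set ("cats labeled as dogs").
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression().fit(X_tr, y_poisoned)

print("accuracy, clean training data:   ", clean.score(X_te, y_te))
print("accuracy, poisoned training data:", poisoned.score(X_te, y_te))
```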

Forbes highlights a recent competition that saw hackers trying to “jailbreak” popular AI models and trick them into producing inaccurate or harmful content. The rise of generative tools makes protecting against this kind of manipulation a priority: in 2023, researchers discovered that simply appending strings of random symbols to the end of queries could convince generative AI (gen AI) tools to provide answers that bypassed model safety filters.

And this concern isn’t just conceptual. As noted by The Hacker News, an attack technique known as “Sleepy Pickle” poses significant risks for ML models. By inserting a malicious payload into pickle files — used to serialize Python object structures — attackers can change how models weigh and compare data and alter model outputs. This could allow them to generate misinformation that causes harm to users, steal user data or generate content that contains malicious links.
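To see why pickle is such a tempting target, consider that unpickling a file can execute arbitrary code by design. The benign sketch below stands in for a real payload; the mechanism is what Sleepy Pickle-style attacks abuse.

```python
# Why pickle-based model files are risky: loading one can run arbitrary code.
import pickle

class Payload:
    def __reduce__(self):
        # Whatever __reduce__ returns gets *called* at load time.
        # A real attack would invoke something far worse than print().
        return (print, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints the message: code executed just by loading the file
```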

Staying the course: Three components for better security

To reduce the risk of compromised AI and ML, three components are critical:

1) Securing the data

Accurate, timely and reliable data underpins usable model outputs. The process of centralizing and correlating this data, however, creates a tempting target for attackers. If they can infiltrate large-scale AI data storage, they can manipulate model outputs.

As a result, enterprises need solutions that automatically and continuously monitor AI infrastructure for signs of compromise.
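What that monitoring looks like varies by product, but one simple building block is integrity checking. The sketch below is an illustrative assumption, not any vendor’s method: hash each training-data file against a trusted manifest so silent tampering becomes detectable.

```python
# A minimal integrity check for training data (illustrative, not a product).
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_tampered_files(data_dir: str, manifest_file: str) -> list[str]:
    """Return files whose current hash no longer matches the trusted manifest."""
    manifest = json.loads(Path(manifest_file).read_text())  # {filename: hash}
    return [
        name for name, expected in manifest.items()
        if sha256(Path(data_dir) / name) != expected
    ]

# Usage (hypothetical paths): find_tampered_files("training_data/", "manifest.json")
```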

2) Securing the model

Changes to AI and ML models can lead to outputs that look legitimate but have been modified by attackers. At best, these outputs inconvenience customers and slow down business processes. At worst, they could negatively impact both reputation and revenue.

To reduce the risk of model manipulation, organizations need tools capable of identifying security vulnerabilities and detecting misconfigurations.
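One concrete check in that toolbox, sketched below as a heuristic rather than a full scanner (open-source tools such as picklescan do this properly): inspect a pickle-based model file for imports of risky modules before ever loading it.

```python
# Heuristic scan of a pickle file for suspicious imports, without loading it.
import pickletools

SUSPICIOUS = {"os", "subprocess", "builtins", "posix", "nt", "socket"}

def scan_pickle(path: str) -> list[str]:
    """Flag risky module references in a pickle's opcode stream."""
    findings, strings = [], []  # STACK_GLOBAL reads module/name pushed as strings
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f.read()):
            if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                strings.append(str(arg))
            elif opcode.name == "GLOBAL" and str(arg).split()[0] in SUSPICIOUS:
                findings.append(str(arg))
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                module, name = strings[-2], strings[-1]
                if module.split(".")[0] in SUSPICIOUS:
                    findings.append(f"{module}.{name}")
    return findings
```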

3) Securing the usage

Who’s using models? With what data? And for what purpose? Even if data and models are secured, use by malicious actors may put companies at risk. Continuous compliance monitoring is critical to ensure legitimate use.
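A minimal sketch of that idea, with invented names rather than any specific compliance product: wrap every model call so the log records who asked, when and for what stated purpose.

```python
# A tiny usage-audit wrapper (invented names; illustration only).
import json
import time

def audited_predict(model, user_id: str, purpose: str, inputs):
    """Run a prediction and emit an audit record for compliance review."""
    record = {
        "timestamp": time.time(),
        "user": user_id,
        "purpose": purpose,
        "num_inputs": len(inputs),
    }
    print(json.dumps(record))  # in practice: a tamper-evident audit log
    return model.predict(inputs)
```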

Making the most of models

AI and ML tools can help enterprises discover data insights and drive increased revenue. If compromised, however, models can be used to deliver inaccurate outputs or deploy malicious code.

With IBM Guardium AI Security, businesses are better equipped to manage the security risks of sensitive models. See how.

AI decision-making: Where do businesses draw the line?

“A computer can never be held accountable, therefore a computer must never make a management decision.”

– IBM Training Manual, 1979

Artificial intelligence (AI) adoption is on the rise. According to the IBM Global AI Adoption Index 2023, 42% of enterprises have actively deployed AI, and 40% are experimenting with the technology. Of those using or exploring AI, 59% have accelerated their investments and rollouts over the past two years. The result is an uptick in AI decision-making that leverages intelligent tools to arrive at (supposedly) accurate answers.

Rapid adoption, however, raises a question: Who’s responsible if AI makes a poor choice? Does the fault lie with IT teams? Executives? AI model builders? Device manufacturers?

In this piece, we’ll explore the evolving world of AI and reexamine the quote above in the context of current use cases: Do companies still need a human in the loop, or can AI make the call?

Getting it right: Where AI is improving business outcomes

Guy Pearce, principal consultant at DEGI and a member of ISACA’s Emerging Trends Working Group, has been involved with AI for more than three decades. “First, it was symbolic,” he says, “and now it’s statistical. It’s algorithms and models that allow data processing and improve business performance over time.”

Data from IBM’s recent AI in Action report shows the impact of this shift. Two-thirds of leaders say that AI has driven more than a 25% improvement in revenue growth rates, and 72% say that the C-suite is fully aligned with IT leadership about what comes next on the path to AI maturity.

With confidence in AI growing, enterprises are implementing intelligent tools to improve business outcomes. For example, wealth management firm Consult Venture Partners deployed AIda AI, a conversational digital AI concierge that uses IBM watsonx Assistant technology to answer potential clients’ questions without the need for human agents.

The results speak for themselves: AIda AI answered 92% of queries correctly, 47% of queries led to webinar registrations and 39% of inquiries turned into leads.

Missing the mark: What happens if AI makes mistakes?

A 92% success rate is an impressive achievement for AIda AI. The caveat? It was still wrong 8% of the time. So, what happens when AI makes mistakes?

For Pearce, it depends on the stakes.

He uses the example of a financial firm leveraging AI to evaluate credit scores and issue loans. The outcomes of these decisions are relatively low stakes. In the best-case scenario, AI approves loans that are paid back on time and in full. In the worst case, borrowers default, and companies need to pursue legal action. While inconvenient, the negative outcomes are far outweighed by the potential positives.

“When it comes to high stakes,” says Pearce, “look at the medical industry. Let’s say we use AI to address the problem of wait times. Do we have sufficient data to ensure patients are seen in the right order? What if we get it wrong? The outcome could be death.”

As a result, how AI is used in decision-making depends largely on what it’s deciding and how those decisions affect both the company making them and the people on the receiving end.

In some cases, even the worst-case scenario is a minor inconvenience. In others, the results could cause significant harm. 

Explore AI cybersecurity

Taking the blame: Who’s accountable if AI gets it wrong?

In April 2024, a Tesla operating in “full self-driving” mode struck and killed a motorcyclist. The driver of the vehicle admitted to looking at their phone prior to the crash despite active driver supervision being required.

So who takes the blame? The driver is the obvious choice and was arrested on charges of vehicular homicide.

But this isn’t the only path to accountability. There’s also a case to be made that Tesla bears some responsibility, since the company’s AI algorithm failed to spot the victim. Blame could also be placed on governing bodies such as the National Highway Traffic Safety Administration (NHTSA). Perhaps their testing wasn’t rigorous or complete enough.

One could even argue that the creator(s) of Tesla’s AI could be held liable for letting code that could kill someone go live.

This is the paradox of AI decision-making: Is someone at fault, or is everyone at fault? “If you bring all the stakeholders together who should be accountable, where does that accountability lie?” asks Pearce. “With the C-suite? With the whole team? If you have accountability that’s spread over the entire organization, everyone can’t end up in jail. Ultimately, shared accountability often leads to no accountability.”

Drawing the line: Where does AI end?

So, where do organizations draw the line? Where does AI insight give way to human decision-making?

Three considerations are key: Ethics, risk and trust.

“When it comes to ethical dilemmas,” says Pearce, “AI can’t do it.” This is because intelligent tools naturally seek the most efficient path, not the most ethical. As a result, any decision involving ethical questions or concerns should include human oversight.

Risk, meanwhile, is an AI specialty. “AI is good in risk,” Pearce says. “What statistical models do is give you something called a standard error, which lets you know if what AI is recommending has a high or low potential variability.” This makes AI great for risk-based decisions like those in finance or insurance.
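For readers who want the mechanics behind that quote, here is the textbook calculation with invented numbers: the standard error of an estimate is the sample standard deviation divided by the square root of the sample size, and a smaller value means lower potential variability.

```python
# Standard error of an estimate, on made-up numbers for illustration.
import numpy as np

predicted_losses = np.array([4.1, 3.8, 4.4, 4.0, 3.9, 4.2])  # e.g., % loss rates

mean = predicted_losses.mean()
standard_error = predicted_losses.std(ddof=1) / np.sqrt(len(predicted_losses))

print(f"estimate: {mean:.2f}% +/- {standard_error:.2f}% (standard error)")
# Small standard error -> low variability -> a recommendation the business
# can act on with confidence; large standard error -> treat with caution.
```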

Finally, enterprises need to prioritize trust. “There are declining levels of trust in institutions,” says Pearce. “Many citizens don’t feel confident that the data they share is being used in a trustworthy manner.”

For example, under GDPR, companies need to be transparent about data collection and handling and give citizens a chance to opt out. To bolster trust in AI use, organizations should clearly communicate how and why they’re using AI and (where possible) allow customers and clients to opt out of AI-driven processes.

Decisions, decisions

Should AI be used for management decisions? Maybe. Will it be used to make some of these decisions? Almost certainly. The draw of AI — its ability to capture, correlate and analyze multiple data sets and deliver new insights — makes it a powerful tool for enterprises to streamline operations and reduce costs.

What’s less clear is how the shift to management-level decision-making will impact accountability. According to Pearce, current conditions create “blurry lines” in this area; legislation hasn’t kept pace with increasing AI usage.

To ensure alignment with ethical principles, reduce the risk of wrong choices and engender stakeholder and customer trust, businesses are best served by keeping humans in the loop. Maybe this means direct approval from staff is required before AI can act. Maybe it means the occasional review and evaluation of AI decision-making outcomes.

Whatever approach enterprises choose, however, the core message remains the same: When it comes to AI-driven decisions, there’s no hard-and-fast line. It’s a moving target, one defined by possible risk, potential reward and probable outcomes.

Router reality check: 86% of default passwords have never been changed

Misconfigurations remain a popular compromise point — and routers are leading the way.

According to recent survey data, 86% of respondents have never changed their router admin password, and 52% have never adjusted any factory settings. This puts attackers in the perfect position to compromise enterprise networks. Why put the time and effort into creating phishing emails and stealing staff data when supposedly secure devices can be accessed using “admin” and “password” as credentials?

It’s time for a router reality check.

Rising router risks

Routers allow multiple devices to use the same internet connection. They accomplish this by directing traffic: outbound requests from internal devices are routed along the most efficient path to outside-facing services, and incoming data is sent to the appropriate endpoint.

If attackers manage to compromise routers, they can control both what comes out of and what goes into your network, introducing risks such as traffic interception, redirection to malicious sites and data exfiltration.

The nature of router attacks also makes them hard to detect. This is because cyber criminals aren’t forcing their way into routers or taking circuitous routes to evade security defenses. Instead, they’re taking advantage of overlooked weak spots to access routers directly, which means they aren’t raising red flags.

Consider a router with “admin” as the login and no password. A few simple guesses get attackers into router settings without triggering a security response since they haven’t breached a network service or compromised an application. Instead, they’ve accessed routers the same way as staff and IT teams.
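Defenders can turn that same logic around. The sketch below is an assumption-laden illustration (it presumes an admin page protected by HTTP Basic auth at a made-up address, and it should only ever be pointed at devices you own or manage): try the known defaults against your own routers before attackers do.

```python
# Audit your own router for default credentials (illustration; HTTP Basic auth
# is an assumption -- many admin UIs use form logins instead).
import requests  # third-party: pip install requests

DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("admin", "")]

def audit_router(base_url: str) -> list[tuple[str, str]]:
    """Return any default credential pairs the router still accepts."""
    accepted = []
    for user, pwd in DEFAULT_CREDS:
        resp = requests.get(base_url, auth=(user, pwd), timeout=5)
        if resp.status_code == 200:  # auth succeeded (heuristic check)
            accepted.append((user, pwd))
    return accepted

# Usage (hypothetical address): audit_router("http://192.168.1.1/")
```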

Explore IBM Instana

Exploring the defensive disconnect

Companies recognize the need for robust cybersecurity. According to Gartner, spending on information security will grow 15% in 2025 to reach $212 billion. Common investment areas include endpoint protection platforms (EPPs), endpoint detection and response (EDR) and the integration of generative AI (gen AI). Routers, however, are often overlooked.

For example, 89% of respondents have never updated their router firmware. The same number have never changed their default network name, and 72% have never changed their Wi-Fi password.

This is problematic. A recent report found that popular OT/IoT router firmware images were outdated and contained exploitable N-day vulnerabilities. The report found that, on average, open-source components were more than five years old and were four years behind the latest release.

As noted by GovTech, meanwhile, an attack on a Pittsburgh-area water authority succeeded in part because the default password on its network was “1111”. Other common passwords include “password” and “123456”; in some cases, routers have no password at all. All attackers need is the login name, often simply “admin”, and they have full access to router functions.

Even more telling is the fact that router security is getting worse, not better. Consider that in 2022, 48% of respondents said they had not adjusted their router settings, and 16% had never changed the admin password. In 2024, over 50% of routers were still running on factory settings, and just 14% had changed their password.

By spending more on security tools but not changing default configurations or updating router firmware, businesses are closing the doors but leaving the windows wide open.

Minimizing misconfiguration mistakes

So, how do companies minimize the risk of misconfiguration mistakes?

It starts with the basics: Change passwords regularly, update firmware and ensure that routers aren’t left on factory settings. Simple? Absolutely. Common? As survey data indicates, not so much.

In part, the disconnect between router risks and security realities stems from the sheer volume of cyberattacks. For example, 2023 saw 94% of companies hit by phishing attacks, and as noted by the IBM Cost of a Data Breach Report 2024, the average cost of a data breach is now $4.88 million, up 10% from 2023 and the highest ever reported. This puts cybersecurity teams on the defensive and on high alert for common attack vectors such as phishing, smishing and the use of “shadow IT” applications that haven’t been vetted or approved.

As a result, routers can slip through the cracks. The first step in solving this problem is creating a regular update schedule. Every four to six months, schedule a router review — put it in a shared calendar, and make sure all security staff know it’s going to happen. When the designated day comes, update firmware where possible and change login and password details. It’s also worth establishing a weekly schedule to review router traffic for any odd behaviors or unexpected login requests.

Shoring up security

While basic cyber hygiene helps lower the risk of router attacks, shoring up security requires a more in-depth approach.

The first step is finding and securing every router on your network. Given the increasingly complex nature of enterprise networks, the easiest way to accomplish this goal is by using automation. Solutions such as IBM SevOne Automated Network Observability provide pre-built workflow templates for IT teams to identify connected devices, collect performance data and make data-driven decisions.

Companies also need to consider what happens when a router compromise occurs. Despite best efforts by security teams, the growing number of end points means it’s only a matter of time until attackers manage to find unprotected routers or circumvent existing defenses.

Effective response requires effective incident management. Solutions such as IBM Instana offer full-stack visibility, one-second granularity and three seconds to notify, giving teams the information they need when they need it to reduce security risks.

Bottom line? Failure to monitor and update router settings can open the door to compromise. To solve the problem, teams need a router reality check. By combining security hygiene best practices with intelligent automation solutions, enterprises can keep unauthorized users where they belong: outside protected networks.

The rising risk of router attacks, paired with a growing list of unreasonable expectations, creates complex challenges for security teams. The solution? Unreasonable observability. Learn more about IBM Instana and how it can help.
