
Global Instructure Breach Hits Queensland Schools Through QLearn Platform

QLearn Cybersecurity Incident

A major QLearn cybersecurity incident has affected thousands of educational institutions globally, including Queensland state schools and universities, after a cyber breach involving third-party education technology provider Instructure exposed personal information linked to students and staff. Queensland Education Minister John-Paul Langbroek confirmed the incident in an official statement, saying the Queensland Department of Education was briefed about the international cybersecurity breach involving Instructure, the provider behind the Department’s online learning platform, QLearn. According to early assessments, the breach may affect more than 200 million people and over 9,000 institutions worldwide, making it one of the largest education-sector cybersecurity incidents disclosed this year.

QLearn Cybersecurity Incident Impacts Queensland Schools

The Department of Education said students and staff who have worked or studied at Education Queensland schools since 2020 may have been affected by the QLearn cybersecurity incident. Authorities stated that compromised information currently appears limited to names, email addresses, and school locations. Officials added there is currently no evidence that passwords, dates of birth, or financial information were accessed during the breach. The online learning platform QLearn was introduced in Queensland schools in 2020 under the previous government and has since become a widely used digital education system across the state. Minister Langbroek said school principals have already begun contacting affected families and teachers to notify them about the breach and provide further guidance. “This morning I have been briefed by the Department of Education about an international cybersecurity breach involving a third-party provider, Instructure, which delivers the Department’s online learning platform, QLearn,” Langbroek said in the statement.

Instructure Data Breach Raises Concerns Across Education Sector

The QLearn cybersecurity incident has once again highlighted the growing cybersecurity risks facing the global education sector, particularly as schools and universities continue relying heavily on third-party digital learning platforms. Because the breach involves Instructure, a provider serving institutions across multiple countries, the incident extends far beyond Queensland. Authorities indicated that educational institutions across Australia and overseas are also impacted. While officials stressed that no sensitive financial or authentication data has been identified as compromised so far, cybersecurity experts often warn that exposed personal information such as names and email addresses can still be valuable to cybercriminals. Threat actors frequently use this type of information in phishing campaigns, identity-based scams, and social engineering attacks targeting students, parents, and school employees. The Department of Education has not publicly disclosed how the cybersecurity breach occurred or whether any ransomware or unauthorized network access was involved. Investigations into the incident are ongoing.

Queensland Department Prioritizes Support for Vulnerable Families

In response to the QLearn cybersecurity incident, the Queensland Department of Education said it is prioritizing support for vulnerable individuals and families potentially affected by the breach. According to the Minister’s statement, the Department is providing priority assistance to families and teachers with known family and domestic violence concerns, as well as individuals connected to Child Safety services. The additional support measures appear aimed at reducing potential risks associated with the exposure of school-related location information and contact details. Government agencies increasingly recognize that cybersecurity incidents affecting education systems can carry broader safety implications, especially for vulnerable groups whose personal or location-related information may require additional protection.

Global Education Sector Continues Facing Cybersecurity Threats

The QLearn cybersecurity incident adds to a growing list of cyberattacks and data breaches targeting educational institutions worldwide. Schools, universities, and online learning providers have become frequent targets due to the large amount of personal information they manage and the widespread use of interconnected digital platforms. Education systems often rely on multiple third-party vendors for online learning, communications, and student management services, increasing the potential attack surface for cybercriminals. The Queensland Department of Education said it will continue updating the public as more information becomes available from the ongoing investigation into the breach. At this stage, authorities have not advised affected individuals to reset passwords or take additional security measures, though officials are continuing to assess the full scope and impact of the incident. The investigation into the Instructure-related breach remains active as educational institutions worldwide work to determine the extent of the exposure and any potential long-term cybersecurity implications.

OpenAI, Anthropic expand services push, signaling new phase in enterprise AI race

OpenAI and Anthropic are expanding their reach into professional services through joint ventures and acquisition talks, moving model providers closer to implementation roles traditionally held by systems integrators.

Joint ventures tied to the two AI companies have held talks to acquire services companies that help businesses deploy AI, with OpenAI’s venture in advanced stages on three deals, Reuters reported.

The companies are reportedly looking to add engineers and consultants as enterprise customers try to move generative AI from experiments into production.

Separately, Anthropic announced plans for a new enterprise AI services company backed by Blackstone, Hellman & Friedman, and Goldman Sachs, aimed at helping mid-sized businesses bring Claude into core operations.

Anthropic said its applied AI engineers will work with the new company’s engineering team to identify use cases, build custom systems, and support customers over time.

Market drivers for services expansion

For CIOs, the issue is whether AI vendors are starting to take over more of the work traditionally handled by consulting firms, systems integrators, and managed service providers. The developments suggest model providers want a stronger hand in enterprise AI implementation, even as systems integrators remain central to large-scale rollouts.

The push also reflects a problem many CIOs are already facing: AI pilots can be launched quickly, but turning them into secure production systems usually requires months of integration and process work.

“IT deployments in the enterprise domain have been consultations or advisory-driven,” said Faisal Kawoosa, founder and chief analyst at Techarc. “So if they have to expedite adoption, because that is where the real money is, they will have to align with the existing framework and go-to-market model of enterprises.”

Kawoosa said AI companies are currently at the top of the value chain and want to remain “in the driver’s seat” rather than become just another IT vendor.

Deepika Giri, head of research for AI, analytics, and data for Asia Pacific at IDC, said the shift could point to a broader restructuring of enterprise AI.

“AI model providers are moving beyond being platform vendors to actively shaping the entire AI value chain,” Giri said. “By expanding into implementation, consulting, and managed services, they are positioning themselves closer to enterprise outcomes rather than just supplying underlying technology.”

Kawoosa added that some IT services companies may be cautious about AI because the technology is still unreliable, and because wider adoption could weaken their role in enterprise IT projects. “With this change in go-to-market strategy, AI players are taking charge,” he said.

Lower deployment risk, but deeper lock-in

Buying AI services directly from model providers could make early deployments easier.

The process could reduce deployment risk in the short term because enterprises get tighter integration and access to specialized expertise, said Tulika Sheel, senior vice president at Kadence International.

But that convenience could come with a longer-term trade-off.

“It also creates deeper dependency across the stack, from models to data pipelines and workflows,” Sheel said. “Over time, this could increase lock-in, making it harder to switch vendors without significant disruption.”

AI model providers are trying to become a “one-stop shop” for enterprises by tying AI applications and services more closely to their usage-driven business models, according to Neil Shah, VP for research and partner at Counterpoint.

“Controlling the application and services layer allows them to lock in enterprises and also benefit from optimizing the model better by understanding the enterprise needs, pain points, and way of working firsthand,” Shah said.

Lock-in is not inevitable, according to Giri, but avoiding it requires CIOs to make deliberate architecture choices early.

“While the model layer can increasingly be abstracted through modular architectures, avoiding lock-in requires deliberate design choices,” Giri said. “Without that, organizations risk becoming dependent not just on a model, but on the entire stack: data pipelines, workflows, and governance frameworks tied to a specific provider.”
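
As an illustration of the modular-architecture point Giri raises, one common pattern is a thin provider-agnostic interface that keeps the model layer swappable. This is a minimal sketch under stated assumptions; the provider classes and `complete` method below are placeholders, not real vendor SDK calls:

```python
# Sketch of abstracting the model layer behind one interface so business
# logic never touches a vendor SDK directly. Providers here are stubs.
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderA(ModelProvider):
    def complete(self, prompt: str) -> str:
        # In a real system this would call one vendor's SDK.
        return f"[provider-a] answer to: {prompt}"

class ProviderB(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[provider-b] answer to: {prompt}"

def summarize(provider: ModelProvider, document: str) -> str:
    # Pipelines depend only on the interface, so switching vendors is a
    # configuration change rather than a rewrite.
    return provider.complete(f"Summarize: {document}")

answer = summarize(ProviderA(), "quarterly report")
```

The same discipline would need to extend to data pipelines, workflows, and governance hooks, which is Giri's caution: abstracting the model alone does not prevent stack-wide lock-in.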

The move also shows why enterprise AI requires so much hands-on implementation work, according to Sheel. Generative AI platforms may be powerful, but they still require significant enterprise adaptation before they can support real business processes.

“Enterprise AI isn’t plug-and-play because it needs deep integration with internal data, workflows, and governance systems,” Sheel said. “This highlights a gap between model capability and real-world deployment.”

That might prompt CIOs to consider not only which AI model performs best, but also who will control the implementation path once those models are embedded in enterprise systems.

How ignoring digital friction erodes your competitive advantage

A VPN that takes too long to connect, an application that crashes mid-workflow, or software so heavy it slows a machine to a crawl. Most employees simply adjust and move on. These issues rarely show up in a report, but over time, they change how people work while weakening the security of your environment.

Prioritizing direct visibility into the employee experience is the only way to keep productivity and morale high.

The hidden cost of employees who adapt instead of escalate

When a tool doesn’t work, many employees don’t file a ticket. They either try to fix it themselves or find a workaround, often reaching for a personal device or an application that IT hasn’t sanctioned. According to TeamViewer’s research, 40% of employees do just that on a regular basis, making environments harder to secure and issues harder to predict.

This isn’t defiance. It’s an adaptation to a system that isn’t surfacing its own problems. Fast resolution times can look like a success for IT, but those metrics don’t show what happened before the ticket was logged, or how many employees were hit by the same issue and never said a word.

Digital friction doesn’t just affect individual productivity. It cascades through your entire organization and touches every aspect of business performance: 48% of organizations say IT dysfunction has directly delayed critical operations or projects. And with 27% of employees saying they’d trade workplace perks for technology that simply works, the cost extends beyond operations into retention and engagement.

The instinctive response is to resolve issues faster. But by the time a ticket is logged, the employee has already lost time and found a workaround. The question is: how do you break that cycle before it starts?

The shift from faster resolution to fewer incidents

If tickets don’t reflect reality, the problem stops being response time and starts being visibility. However, no team can manually observe thousands of endpoints and connect weak signals across systems. Addressing that requires tools that provide continuous visibility into your digital workplace performance.

That’s where proactive IT management changes the equation. Continuous monitoring gives IT teams real-time insight into performance issues, application errors, and system instability across the entire device ecosystem. Pair that with AI-powered workflows that automate routine remediations, and issues can be resolved before they surface. Root causes get fixed rather than symptoms patched over, and the same problems stop recurring.
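
The monitor-then-remediate loop described here can be sketched in a few lines. This is an illustrative pattern only, with invented metrics, thresholds, and remediation names; it is not TeamViewer's implementation:

```python
# Sketch of proactive endpoint management: telemetry is checked against
# thresholds, and each exceeded threshold maps to an automated remediation
# that runs before a user ever files a ticket. All names are illustrative.

REMEDIATIONS = {
    "disk_pct": ("disk usage high", "clear temp files"),
    "app_crashes": ("application unstable", "reinstall app"),
}

THRESHOLDS = {"disk_pct": 90, "app_crashes": 3}

def check_endpoint(telemetry: dict) -> list:
    """Return (issue, action) pairs for every threshold the endpoint exceeds."""
    actions = []
    for metric, limit in THRESHOLDS.items():
        if telemetry.get(metric, 0) > limit:
            actions.append(REMEDIATIONS[metric])
    return actions

# An endpoint with a nearly full disk gets a remediation queued
# automatically; a healthy endpoint produces no actions at all.
noisy = check_endpoint({"disk_pct": 97, "app_crashes": 1})
quiet = check_endpoint({"disk_pct": 40, "app_crashes": 0})
```

Real platforms add the hard parts this sketch omits: fleet-scale telemetry collection, anomaly detection instead of fixed thresholds, and audit trails for every automated action.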

When prevention becomes the operating rhythm, something else shifts, too. IT capacity that was absorbed by firefighting becomes available for more strategic work, such as compliance. The technology experience stabilizes, and with it, so does employee confidence in the tools they need.

When prevention becomes the standard, performance follows

Productivity isn’t about responding to IT issues faster. It’s about preventing problems before employees ever notice them, freeing them to focus on work that actually moves the business forward. That’s only possible when the technology underneath is stable, visible, and built to stay ahead of problems.

That’s the shift TeamViewer ONE is built to support. Newly launched, it brings together what once required multiple separate tools, combining endpoint management, remote support, and digital employee experience in a single platform. It enables you to measure success not by how fast teams react, but by how rarely they need to.

The competitive edge won’t belong to the organizations that respond to problems quickly. It’ll belong to the ones where those problems don’t reach employees at all.

Fix it before they feel it

Before issues escalate, turn to the platform you can trust.


A Cybersecurity Lifeline for Lean IT Teams: Introducing C.R.E.W.

“Too small to target” is a dangerous cybersecurity myth, while “Where do I start?” is a legitimate cyber defense question.

Imagine leaving your office unlocked overnight—not because you don’t have anything valuable, but because you assume no one would bother breaking in.

The post A Cybersecurity Lifeline for Lean IT Teams: Introducing C.R.E.W. appeared first on Security Boulevard.

AWS Korea lays out four conditions for AI success: goals, data, guardrails, execution

Rahul Pathak, vice president of data and AI go-to-market (GTM) at AWS, who presented at the briefing, said “2026 will be the year of agents,” stressing that AI is taking hold as core infrastructure driving real change across enterprise operations, not just another tool.

Drawing on analysis of actual customer cases, Pathak named four key factors for AI projects to reach production successfully: clear business objectives, data integrity, guardrails (security and governance), and rapid execution.

“Most failures occur when organizations stray from these four principles,” he said, pointing to sweeping AI rollouts without clear goals, and attempts to run modernization and innovation sequentially, as typical causes of failure.

“Rather than trying to solve every use case at once, it is important to focus on a few critical problems, deliver results quickly, and build momentum inside the organization,” he advised.

AWS presented agent-based AI architecture as its core strategy. “The era of a single model handling everything is over,” Pathak said, explaining that multi-agent workflows, which combine multiple AI agents and capabilities, are becoming the new standard.

For example, an agent that analyzes the enterprise IT environment generates a report, a code-generation tool then automatically builds an application based on it, and a DevOps agent checks operational stability, linking the entire development and operations process.
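
The chained flow described in that example, analysis agent into code generation into a DevOps check, can be sketched as a simple pipeline. This is an illustrative pattern with placeholder agents, not AWS's actual agent stack; real orchestration (e.g., on Bedrock) would wire LLM-backed agents with retries and guardrails:

```python
# Illustrative multi-agent workflow: each "agent" consumes the previous
# step's output. Agents here are plain functions to show the chaining idea.

def analysis_agent(environment: dict) -> dict:
    """Analyze the IT environment and produce a findings report."""
    return {"report": f"{len(environment['services'])} services, "
                      f"legacy={environment['legacy_count']}"}

def codegen_agent(report: dict) -> dict:
    """Generate an application plan from the analysis report."""
    return {"app": f"modernization app based on: {report['report']}"}

def devops_agent(app: dict) -> dict:
    """Check operational stability of the generated application."""
    return {"app": app["app"], "status": "stable"}

def run_workflow(environment: dict) -> dict:
    # The orchestrator wires agents into a pipeline; in production this is
    # where retries, guardrails, and human review would sit.
    result = environment
    for agent in (analysis_agent, codegen_agent, devops_agent):
        result = agent(result)
    return result

out = run_workflow({"services": ["crm", "billing", "hr"], "legacy_count": 1})
```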

To support this flow, AWS provides an agentic AI stack composed of Amazon Bedrock, Agent Core, and Amazon SageMaker, and said it supplies more than 1,000 agents through AWS Marketplace.

The value enterprises expect from AI is also shifting quickly. “Customers no longer wait months; they want results from day one,” Pathak said, noting that AI’s core value is created at the inference stage.

Accordingly, AWS offers a structure for pursuing innovation and modernization simultaneously on the cloud, a strategy intended to help enterprises secure business results quickly.

The briefing also covered AWS Korea’s partner strategy. Hee-ran Bang, head of the partner organization at AWS Korea, said the company has consolidated its partner organization, unifying sales and management, and built a tighter collaboration structure.

AWS posted 24% growth and an annual run rate of roughly $142 billion as of the fourth quarter of 2025, and its partner ecosystem is growing alongside it, the company emphasized.

In particular, the partner model is evolving so that profitability rises as partners expand beyond simple resale into higher-value areas such as operations, implementation, and design, and AWS is strengthening its marketplace and data-driven collaboration programs to support this.

AWS also introduced industry-specific AI use cases in manufacturing, finance, and healthcare. In manufacturing, for example, AI-based automation cut drawing creation time by up to 90%; in finance, report-writing time fell by 80%.

“What matters more than AI itself is combining it with industry domain knowledge,” Bang said. “Partners building deep expertise in specific industries is the core competitive advantage.”
jihyun.lee@foundryco.com

What Makes Credential Stuffing Difficult to Detect?

Credential stuffing is a cyberattack where attackers use stolen usernames and passwords, often obtained from data breaches or bought on the dark web, to gain unauthorized access to accounts on other platforms. These attacks are highly prevalent and a major contributor to data breaches, largely because 64% of users reuse passwords across multiple accounts. On […]
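
The attack pattern described above has a recognizable signature: one source cycling failed logins through many distinct usernames, unlike brute force, which hammers one account. A minimal detection sketch, with invented thresholds that are not from the post:

```python
from collections import defaultdict, deque

# Naive credential-stuffing heuristic (illustrative only): flag a source IP
# once it fails logins for more than `max_users` DISTINCT accounts inside a
# sliding `window` of seconds.
class StuffingDetector:
    def __init__(self, window=60, max_users=5):
        self.window = window
        self.max_users = max_users
        self.failures = defaultdict(deque)  # ip -> deque[(timestamp, username)]

    def record_failure(self, ip, username, ts):
        q = self.failures[ip]
        q.append((ts, username))
        # Drop events that have aged out of the sliding window.
        while q and ts - q[0][0] > self.window:
            q.popleft()
        distinct = {u for _, u in q}
        return len(distinct) > self.max_users  # True => looks like stuffing

detector = StuffingDetector(window=60, max_users=5)
# One IP probing ten different accounts within a minute trips the detector
# partway through the run.
alerts = [detector.record_failure("203.0.113.7", f"user{i}", ts=i)
          for i in range(10)]
```

Part of why the attack is hard to detect in practice is that real campaigns defeat exactly this heuristic by rotating source IPs through botnets and proxies, so per-IP counting alone is not enough.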

The post What Makes Credential Stuffing Difficult to Detect? appeared first on Kratikal Blogs.

The post What Makes Credential Stuffing Difficult to Detect? appeared first on Security Boulevard.

CAIS

What is Cyber AI Suite (CAIS)? As AI security concerns shift from theoretical to tangible, the threat landscape evolves rapidly. Corporate data is increasingly at risk of being ingested by third-party models unnoticed. AI-powered applications with internal access introduce new attack vectors, creating a blind spot […]

The post CAIS appeared first on HolistiCyber.

The post CAIS appeared first on Security Boulevard.

“Give us everything for 2026”: customers flock to AWS as its custom chip strategy accelerates

Amazon Web Services’ (AWS) chip business is “on fire”: Trainium offers better price-performance than Nvidia, and customers are so eager for AI compute capacity that they are looking to buy up everything currently available.

These are the takeaways shared by Amazon CEO Andy Jassy in the eight-page letter to shareholders included in the tech giant’s 2025 annual report.

Jassy’s comments underscore how all-in enterprises are on AI, and he made clear Amazon’s ambition to lead a technology he described as being as transformative as electricity.

“Pulling it all together, AWS is diving deeper to control the AI stack comprehensively through every layer: power, data center, custom silicon in the middle, and training and inference at the top,” said Scott Bickley, advisory fellow at consulting firm Info-Tech Research Group.

Big inference asks from large customers

In his shareholder letter, Jassy wrote that AWS added 3.9 gigawatts (GW) of new power capacity in 2025 and plans to double its total power capacity by the end of 2027. “Yet we still have capacity constraints that yield unserved demand,” he said.

Notably, Jassy revealed that two large customers need so much AI compute that they asked to buy all available 2026 instance capacity for Graviton, AWS’s custom CPU chip. He made clear that AWS cannot accept such requests, given other customers’ needs.

“Two large customers asking to buy all of AWS’s Graviton capacity for 2026 says everything we need to know about where the market is,” said Matt Kimball, VP and principal analyst at consulting firm Moor Insights & Strategy.

Kimball said this is hard to read as just a supply chain story; it is more a story of “strategic dependency,” with enterprises going beyond simply buying compute to locking up capacity before competitors do. “The risk for AWS isn’t failing to build fast enough,” he noted. “It’s more that constrained customers may hedge some demand toward Azure or Google Cloud Platform (GCP).”

The moves show how sharply Graviton’s popularity has risen, while suggesting AWS may be struggling to absorb all the demand. Graviton is no longer a lightweight chip for light workloads, Kimball explained; it now serves a broad range of workloads with diverse compute characteristics.

He also expects Azure’s Cobalt and Google Cloud’s Axion processors to see similar demand as they mature, adding that this will create interesting market dynamics in the contest between Arm and x86 technologies.

Info-Tech Research Group’s Bickley likewise said supply chain constraints are having a broad and deep impact on AI infrastructure expansion. Even as forecasts suggest 50% of the AI data center capacity planned for 2026 will never materialize, he said, “virtually all capacity is sold out across the board.”

Trainium’s competitive edge

Jassy said Amazon’s chip business is “on fire” heading into 2026. AWS maintains a close partnership with chipmaker Nvidia and continues to use its silicon, he explained, but customers’ demand for better price-performance is driving new shifts in the processor market.

Amazon launched Trainium2, the second generation of its custom AI silicon, in late 2024, and Bedrock now runs most of its inference on the next-generation accelerator. Jassy said Trainium2 delivers roughly 30% better price-performance than comparable GPUs, and that much of its volume has already been sold.

Trainium3, which recently began shipping, offers 30% to 40% better price-performance than Trainium2 and is already mostly booked, he said. Trainium4, still about 18 months away from volume availability, is also substantially pre-reserved.

“Demand for our chips is high enough that selling them to third parties at the rack level could become possible down the road,” Jassy noted.

Info-Tech Research Group’s Bickley said Amazon’s strategy is less about shutting out Nvidia than about reducing dependence on Nvidia technology in areas where AWS can compete on economics.

AWS remains a key Nvidia partner, Bickley said, but it can offer differentiated value on price-performance, combining tight Bedrock integration, AWS-designed interconnects, efficient token economics, and a software stack built on standard PyTorch, JAX, and vLLM workflows into a comprehensive package.

Trainium’s main uses are training and inference for large language models (LLMs) with hundreds of billions to more than a trillion parameters, multimodal models, and diffusion transformers.

Bickley said major players such as Anthropic and Uber are validating AWS’s efficiency claims in real-world environments. Cohere and Stability AI, by contrast, prefer Nvidia for its mature tooling frameworks and superior chip design, and have pointed to AWS service and availability issues.

Moor Insights & Strategy’s Kimball also flagged AWS’s partnership with Cerebras, the US AI chip designer, as one to watch. Trainium is optimized for prefill and the Cerebras CS-3 for decode, so combining the two can deliver high inference performance without user intervention. “That point-and-click simplicity is what enterprise users want,” Kimball said.

Kimball drew a direct line from the change Graviton brought to the x86 ecosystem to Trainium’s impact on Nvidia. Inference is the fastest-growing and most cost-sensitive workload in enterprise AI, he explained, and that is exactly where Trainium is rapidly gaining ground.

Lessons from Mantle, the inference engine

Jassy also stressed the importance of “the ability to go back to the starting line and reset direction.” Bedrock was built and scaled faster than expected, he said, but the team came to realize it needed a fundamentally different kind of inference engine, not just tuning.

The Bedrock team assembled a small group of six experienced engineers working with Kiro, AWS’s agentic coding service, and built a new engine, Mantle, in 76 days. Mantle has since become Bedrock’s core foundation, and according to Jassy it processed more tokens in the first quarter of 2026 than in all previous years combined.

Bickley called it impressive that a small team pulled off a large-scale rebuild in such a short time while adding features including stateful conversation management, asynchronous inference, and higher default quotas. Mantle is significant enough to count as a standalone inference product, he said, adding that AWS’s separate post aimed at shoring up confidence in its security and governance is also worth noting.

Kimball read Mantle’s creation in two ways. One is operational necessity: Bedrock needed a new architecture. The other is productivity compression.

“If six engineers using agentic tools did work that 40 engineers could not have done faster, that fundamentally changes how you think about team sizes, project timelines, and build-versus-buy decisions,” Kimball said. “The token throughput numbers make the result plain.”

Mantle is seen as more than a rebuild story: it shows what AI-assisted development is actually changing in production. “This is happening in real production environments, not in theory or as a marketing slogan,” Kimball said.

“Progress is not linear,” Jassy said. “There are moments of acceleration and moments when direction has to be adjusted. We will invest boldly in the areas that matter and cut back decisively where things aren’t working.”
dl-ciokorea@foundryco.com

KPMG report finds enterprise disconnect between AI and its ROI

Enterprise CIOs need no convincing that return on investment (ROI) for genAI and agentic AI is elusive, but consulting giant KPMG is reporting that some companies are plowing ahead with the technology anyway.

In fact, beyond the lack of quantifiable ROI, executives are not even letting a weak economy slow their AI investment plans. “Three out of four global leaders will prioritize AI investment despite economic uncertainty,” KPMG found.

“A clear gap is present between organizations still in the experimentation phase and those that have moved beyond pilots to fully scaling AI agents and capturing real business value outcomes,” the company said in its Global AI Pulse Survey report. “Although AI adoption is accelerating worldwide, only a small group of AI leaders are seeing clear returns. These leaders consistently outperform others, including 82% saying that AI is already delivering meaningful business value, compared to 62% of their peers. This is not simply an AI maturity gap; it is a widening performance gap between organizations that treat AI as an enterprise-wide transformation and those that are trying to bolt AI onto existing models and seeing incremental gains.”

In the subset of its report focusing on the UK, KPMG reported: “AI no longer needs traditional return on investment to be justified. 65% of UK respondents say their organization would continue to invest in AI regardless of tangible ROI. Despite a lot of money being spent by businesses on artificial intelligence, traditional return on investment isn’t necessarily needed for them to see value in the technology.”

Mindset shift

Leanne Allen, a KPMG head of AI, said the extreme focus on enterprise AI has forced a new approach to the financial aspects of the technology. 

“This shift in mindset by business leaders from viewing AI as something that must deliver an immediate return to one that sees AI as a long-term investment, recognizing it as a strategic enabler for enterprise‑wide transformation, is an important milestone,” Allen said. “But that shouldn’t translate into investing in AI blindly, without a clear strategy. AI is reshaping how organizations operate, how decisions are made, and how human and AI agents work together day‑to‑day.”

This shift in thinking is partly pragmatic, with many CIOs being told by their boards that AI investments are not optional. But the ROI challenge with AI has many forms.

The many problems with AI ROI

Given the urgent pace of AI experimentation and deployment, many AI proofs of concept (PoCs) are launched by executives setting unrealistic ROI goals. If a project is measured against a standard it technologically cannot achieve, failing to deliver those inappropriate metrics is not an indictment of the LLM.

Some enterprises are also discovering unexpected costs from AI rollouts, such as deploying AI in customer chatbots only to find that people abuse them as “free” genAI tools, leaving the enterprise to pay for the additional tokens.

What to measure

What some analysts and investors argue is that the kinds of intellectual effort that AI is replacing have never been measured well, if at all. This means that financial departments will need to figure out different ways of measuring AI ROI.

Ben Grant, managing partner at Lambton Capital Partners, said, “I believe the problem is how we measure it. Traditional ROI wants clean input-output. AI doesn’t do that yet in most businesses. The value shows up in time reclaimed, decisions made faster and gaps being plugged before they become problems. Try putting that in a spreadsheet.”

But, he added, “I definitely don’t think companies investing in AI without traditional ROI are being reckless. They’re being practical. They’ve seen enough to know it works. They just can’t quantify it in the language finance teams want.”
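
Grant's "try putting that in a spreadsheet" problem can be made concrete with a back-of-the-envelope calculation. Every input below is an invented assumption, which is precisely why such figures stay soft: the hours-saved estimate and the hourly value are guesses, not measurements.

```python
# Back-of-the-envelope ROI proxy for "time reclaimed". All inputs are
# illustrative assumptions, not figures from the article.
employees = 200              # staff using the AI assistant (assumed)
hours_saved_per_week = 1.5   # estimated, and hard to verify: the soft spot
loaded_hourly_cost = 60.0    # fully loaded cost per employee-hour, USD (assumed)
weeks_per_year = 48

# Value of reclaimed time, if the estimate holds.
annual_value = employees * hours_saved_per_week * loaded_hourly_cost * weeks_per_year

annual_ai_cost = 250_000.0   # licenses + tokens + integration (assumed)

# Classic ROI ratio: (gain - cost) / cost.
roi = (annual_value - annual_ai_cost) / annual_ai_cost
```

The arithmetic is trivial; the dispute is entirely over whether `hours_saved_per_week` is real, which is Grant's point about value that resists clean input-output measurement.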

Manish Jain, a principal research director at Info-Tech Research Group, said that he believes this disconnect exists “because enterprises are simultaneously operating in two modes: exploratory, where learning velocity matters more than ROI, and industrialized, where value realization is expected, but maturity is still evolving.”

Companies have adjusted their expectations, he noted. “It is not that companies don’t care for returns,” he said. “It’s that they’ve learned that before focusing on ROI, they need to focus on maturing AI capabilities. When a new engine comes along, wise operators don’t ask first what it earns. They ask what happens if they’re the only ones without it.”

Is AI becoming mundane?

Gartner VP Analyst Nader Henein isn’t going so far as to call AI deliverables trivial, but the technology has started to integrate into mundane everyday functions, which can challenge a traditional ROI spreadsheet.

“Some AI investments like AI assistants are becoming standard office tools, like the office suite. No one calculates ROI by counting the number of Word documents or presentations produced,” Henein said. “But ROI calculations on AI projects are not going anywhere. If it burns cash and fails to produce any tangible ROI, it will be retired. P&L reports and the expectations of investors from publicly traded companies are not changing.”

Spending and hoping

Michael Leone, VP/principal analyst at Moor Insights & Strategy, said the differentiated nature of AI deployments can also frustrate typical ROI mechanisms. 

“The old ROI playbook from ERP or cloud migrations doesn’t fit AI, and every CIO I talk to knows it. They can likely tell you exactly what productivity benefits they’re getting on a specific workflow, but ask them what the three-year enterprise payoff looks like and you get a shrug. That’s where the ‘regardless of ROI’ line is really coming from and, frankly, I think leaders are right to keep funding through it,” Leone said. “Budget fell off the list of things killing AI programs a while ago. The money’s there and the mandate’s there. The real blockers now are security, privacy, and the fact that almost nobody has the people to run this at scale. I look at it as all of the orgs making an informed bet. They’ve done the math on what falling behind costs, and they don’t like the answer.”

He noted that perhaps one in ten enterprises he’s spoken to has the talent, governance, and operating discipline to actually get compounding returns from its AI spend. “Everyone else is spending and hoping. That’s the real story,” he said.

Carmi Levy, an independent technology analyst, said he sees it as “sheer fiscal suicide to spend on any bleeding edge technology without at least a modicum of ROI to justify it. Yet the speed and scope of AI advancement means traditional means of calculating ROI have become woefully obsolete. AI now compels organizations to dive in more out of fear of being left behind.”

This means, Levy argued, that finance may simply need to back off rigid ROI calculations for the moment. 

“The need to remain competitive in AI, or at least stay within sight of the competition while everyone struggles to figure AI out, means decisions may not be based on the same depth of fiscal rigor that might have been used in years past,” Levy said. “Increasingly turbulent economic conditions often compel organizations to hit the brakes on technology investments, but that logic is being tested as AI deepens its hold on the technological roadmap. Organizations will seek savings elsewhere to avoid the risk of falling behind competitors who refuse to back off their own AI-centric spending amid economic uncertainty. Indeed, many leaders use AI as a catch-all driver of unspecified future cost savings, which in the frenzied rush to remain AI-relevant is often enough to secure sign-off from the C-suite.”

AI demand is so high, AWS customers are trying to buy out its entire capacity

The Amazon Web Services (AWS) chip business is “on fire,” Trainium offers better price-performance than Nvidia, and customers are so eager for AI compute capacity that they’re looking to buy up all that’s currently available.

These are the takeaways shared by Amazon CEO Andy Jassy in his eight-page letter to shareholders in the tech giant’s 2025 annual report.

Jassy’s comments underscore both how all-in enterprises are on AI and Amazon’s ambition to dominate a technology that, as he described it, will be as transformative as electricity.

“Pulling it all together, AWS is diving deeper to control the AI stack comprehensively through every layer: power, data center, custom silicon in the middle, and training and inference at the top,” noted Scott Bickley, advisory fellow at Info-Tech Research Group.

Big inference asks from customers

AWS added 3.9GW of new power capacity in 2025 and expects to double its total power capacity by the end of 2027, Jassy wrote to shareholders. “Yet we still have capacity constraints that yield unserved demand,” he said.

Notably, he revealed that two large customers are in such need of AI compute that they asked to buy all available 2026 instance capacity for AWS’ custom CPU chip, Graviton. He emphasized that AWS can’t agree to those kinds of requests, given other customer needs.

Matt Kimball, VP and principal analyst at Moor Insights & Strategy, noted, “Two large customers asking to buy all of AWS’s Graviton capacity for 2026 says everything we need to know about where the market is.”

It’s not necessarily just a supply chain story, though, he said; it’s more of a “strategic dependency” story. Enterprises aren’t just shopping for compute, they’re trying to lock up capacity before a competitor does. “The risk for AWS isn’t failing to build fast enough. It’s more along the lines of constrained customers maybe hedging toward Azure or Google Cloud Platform (GCP),” he pointed out.

This also indicates how popular Graviton has become, and suggests that AWS might be struggling to meet demand. Rather than “lightweight chips supporting lightweight workloads,” Graviton is being used across workloads “with a variety of computational profiles,” said Kimball.

As they mature, Azure Cobalt and Google Cloud Axion processors will likely see the same kind of demand, which will make for an “interesting market dynamic” between Arm and x86 technologies, he said.

Info-Tech’s Bickley agreed that the impact of supply chain constraints is “broad and deep” in its effect on AI buildout. Even in the midst of reports that 50% of planned AI data center capacity will not materialize in 2026, “everything is sold out across the board.”

Trainium’s competitive edge

Going into 2026, Jassy described Amazon’s chip business as “on fire.” While AWS has a strong partnership with Nvidia and uses its semiconductors, there is what he called a “new shift” in the processor landscape as customers seek out better price-performance.

Notably, Amazon released the second generation of its custom AI silicon, Trainium2, in late 2024, and Bedrock now runs most of its inference on these next-generation accelerators. Jassy claimed Trainium2 offers roughly 30% better price-performance than comparable GPUs, and is “largely sold out.”

Meanwhile, Trainium3, which just began shipping, offers 30% to 40% better price-performance than Trainium2 and is already “nearly fully-subscribed,” he said. Further, a significant chunk of capacity for Trainium4, which is still about 18 months from broad availability, has been reserved.
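Taken at face value, those figures compound. Assuming the percentages are measured on a comparable basis and multiply straightforwardly (the letter does not specify), a rough back-of-envelope calculation puts Trainium3 at somewhere between 1.7x and 1.8x the price-performance of the GPUs Trainium2 was compared against:

```python
# Back-of-envelope compounding of the stated price-performance gains.
# Assumption: the percentages multiply on a comparable basis (not stated in the letter).
trainium2_vs_gpu = 1.30  # "roughly 30% better price-performance than comparable GPUs"
trainium3_vs_t2_low = 1.30   # "30% to 40% more price-performant than Trainium2"
trainium3_vs_t2_high = 1.40

low = trainium2_vs_gpu * trainium3_vs_t2_low
high = trainium2_vs_gpu * trainium3_vs_t2_high
print(f"Trainium3 vs. comparable GPUs: {low:.2f}x to {high:.2f}x")  # 1.69x to 1.82x
```

This is an illustrative sketch only; actual relative performance depends heavily on workload, model size, and which GPU generation the comparison baselines.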

“There’s so much demand for our chips that it’s quite possible we’ll sell racks of them to third parties in the future,” Jassy said.

Info-Tech’s Bickley pointed out that Amazon is not necessarily trying to eliminate Nvidia so much as reduce its dependence on the chip leader’s technology in areas “where AWS can win on economics.”

While AWS remains a strong Nvidia partner, it can provide a differentiated value proposition based on price-performance, he said. AWS brings a “holistic package” via tight integration with Bedrock, AWS-designed interconnects, more efficient token economics, and a software stack built on standard PyTorch/JAX/vLLM workflows.

Trainium’s prime use cases are training and inference for large language models (LLMs), multimodal models, and diffusion transformers in the hundreds of billions to trillion-plus parameter range, Bickley explained.

Marquee names like Anthropic and Uber are “putting AWS’s efficiency claims to the test,” he noted; on the other hand, customers like Cohere and Stability AI prefer Nvidia’s mature tooling framework and “superior chip designs,” citing AWS service and availability issues.

Moor’s Kimball pointed out that another factor to consider is AWS’ partnership with Cerebras. Trainium is optimized for prefill and Cerebras CS-3 is optimized for decode, allowing the two to deliver what they claim is the best inference performance with no user intervention required. “This is the kind of ‘point-and-click’ simplicity enterprise users are looking for,” he said.

Ultimately, Jassy is drawing a direct line from what Graviton did to x86 to what Trainium is doing to Nvidia, he said. Inference is the “fastest-growing and most cost-sensitive workload in enterprise AI, and that’s exactly where Trainium is gaining the most ground.”

Learning from the Mantle scale-up

Jassy also emphasized the importance of being able to go back to the starting line to “redirect the trajectory.” For instance, Amazon Bedrock was built rapidly and scaled “faster than expected,” and the team realized it required a whole different type of inference engine, not just a tweak.

The Bedrock team quickly spun up a group of six “very skilled engineers” using AWS’ agentic coding service, Kiro, to deliver a new engine, Mantle, in 76 days. Mantle has since become the backbone of Bedrock, which processed more tokens in Q1 2026, Jassy claimed, than had been processed in all prior years combined.

The ability for a small team to accomplish such a large rebuild in such a short time frame, alongside adding features such as stateful conversation management, asynchronous inference, and higher default quotas, among others, is “impressive at first blush,” noted Info-Tech’s Bickley.

“The takeaway is that Mantle should be considered a key product for inference in its own right,” he said. And a separate AWS engineering post seeks to add confidence in the model’s security and governance considerations, Bickley explained.

Moor’s Kimball called the genesis of Mantle “really two stories.” One is operational (Bedrock needed a new architecture); the other is productivity compression.

“If six engineers with agentic tools can do what 40 couldn’t have done faster, the calculus on team size, project timelines, and build-vs-buy decisions shifts fundamentally,” he said. “The token volume numbers make the outcome clear and compelling.”

But Mantle isn’t just a rebuild, it’s yet another proof point that AI-assisted development is changing what’s possible. “Not just in theory or some marketing slogan,” Kimball said, “but in production.”

Jassy noted, “progress will not be linear. There will be moments of acceleration and moments where we adjust course. We will experiment, invest disproportionately behind what matters, and pull back when something isn’t working.”

This article originally appeared on NetworkWorld.

Gov. Tim Walz Deploys National Guard After Winona Cyberattack Disrupts Services

Winona County cyberattack

A Winona County cyberattack has disrupted critical systems and forced Minnesota to step in with emergency support. The cyberattack on Winona County began on April 6 and continued overnight into April 7, affecting key digital infrastructure used to run emergency and municipal services. County officials said the disruption significantly impaired their ability to deliver essential services, including core administrative and public-facing operations. Governor Tim Walz signed an executive order authorizing the Minnesota National Guard to assist with the response. “Cyberattacks are an evolving threat that can strike anywhere, at any time,” said Governor Walz. “Swift coordination between state and local experts matters in these moments. That's why I am authorizing the National Guard to support Winona County as they work to protect critical systems and maintain essential services.”

Winona County Cyberattack Strains Local Response

The Winona County cyberattack quickly overwhelmed local response efforts. Officials said teams have been working around the clock since the incident was detected. The county is coordinating with Minnesota Information Technology Services, the Minnesota Bureau of Criminal Apprehension, the League of Minnesota Cities, the Federal Bureau of Investigation, and external cybersecurity specialists. Despite this multi-agency response, officials confirmed that the scale and complexity of the incident exceeded both internal and commercial response capabilities. This led to a formal request for cyber protection support from the Minnesota National Guard. The incident highlights how even smaller jurisdictions are now facing large-scale cyber disruptions that require state-level intervention.

National Guard Activated Under Emergency Order

Under the emergency order, the Adjutant General is authorized to deploy personnel, equipment, and other resources to support the response to the Winona County cyberattack. The order also allows the state to procure services needed to manage the incident and confirms that costs will be covered through the state’s general fund. It is already in effect and will remain active until the emergency conditions subside or the order is formally rescinded. Officials say the priority is to stabilize affected systems, prevent further damage, and restore full functionality as quickly as possible.

Essential Services Continue Amid Disruption

Even as systems remain impacted, officials stressed that emergency services are still operational. 911 services, fire response, and other emergency operations continue to function during the Winona County cyberattack, ensuring that urgent public safety needs are not affected. However, the disruption has slowed other county services, and officials have warned that some delays are expected as systems are brought back online. Residents have been asked for patience while recovery efforts continue.

Investigation Underway

Authorities have not disclosed the nature of the Winona County cyberattack or whether it involves ransomware or another type of cyber intrusion. The FBI is actively involved in the investigation, along with state agencies and external cybersecurity experts. Investigators are working to determine how the attack occurred, what systems were impacted, and whether any sensitive data was accessed. For now, the focus remains on containment, system recovery, and strengthening defenses to prevent further intrusion.

Earlier Ransomware Incident Raises Concerns

The latest Winona County cyberattack comes as an update to a ransomware incident the county first reported in January 2026. At the time, officials said, “We recently identified and responded to a ransomware incident affecting our computer network. Upon discovery, we immediately initiated an investigation to assess the scope and impact of the incident.” A local emergency was declared during that event by County Board Chair Commissioner Meyer, as officials worked to maintain continuity of services. Emergency operations, including 911 and fire response, remained active while systems were analyzed and restored. The recurrence of cyber incidents in such a short period has raised concerns about ongoing vulnerabilities and the growing threat landscape facing local governments.

Growing Cyber Pressure on Local Governments

The Winona County cyberattack highlights a broader trend: local governments are increasingly targeted but often lack the resources to respond to complex cyber incidents on their own. When systems go down, the impact is immediate. Public services are disrupted, and recovery can take time. State support is now helping Winona County stabilize operations. But the incident points to a larger issue: cyberattacks are becoming more frequent, more disruptive, and harder for local agencies to handle without outside assistance.

Data Masking Gaps That Could Expose Your Organization

Organizations collect and store huge amounts of sensitive data: customer details, financial records, login credentials, and more. Protecting this data is not just important; it’s critical for business survival. One of the most commonly used techniques to protect sensitive data is data masking. At first glance, it seems like a strong solution. It hides sensitive […]

The post Data Masking Gaps That Could Expose Your Organization appeared first on Kratikal Blogs.

The post Data Masking Gaps That Could Expose Your Organization appeared first on Security Boulevard.

What Makes Browser Hijacking a Silent Threat?

Web browsers act as a critical gateway to an organization’s digital ecosystem, enabling access to banking, email, cloud applications, and sensitive customer data. When attackers compromise this gateway, they can monitor user activity, redirect traffic, and capture confidential credentials without detection. This threat, known as browser hijacking, has become increasingly widespread, affecting organizations of all […]

The post What Makes Browser Hijacking a Silent Threat? appeared first on Kratikal Blogs.

The post What Makes Browser Hijacking a Silent Threat? appeared first on Security Boulevard.
