Visualização normal

Antes de ontemStream principal
  • ✇Security | CIO
  • Oracle will patch more often to counter AI cybersecurity threat
    Oracle plans to issue security patches for its ERP, database, and other software on a monthly cycle, rather than quarterly, to respond to the increased pace of AI-enabled software vulnerability discovery. Other software vendors, notably Microsoft, SAP, and Adobe, already release patches on a monthly beat, always on the second Tuesday of each month. Oracle, though, is taking an off-beat approach: It will release the first of its monthly Critical Security Patch Updates
     

Oracle will patch more often to counter AI cybersecurity threat

5 de Maio de 2026, 12:38

Oracle plans to issue security patches for its ERP, database, and other software on a monthly cycle, rather than quarterly, to respond to the increased pace of AI-enabled software vulnerability discovery.

Other software vendors, notably Microsoft, SAP, and Adobe, already release patches on a monthly beat, always on the second Tuesday of each month.

Oracle, though, is taking an off-beat approach: It will release the first of its monthly Critical Security Patch Updates (CSPUs) on May 28, the fourth Thursday, and after that, it will release its patches on the third Tuesday of each month — a week after the other vendors — with the next batches arriving on June 16, July 21, and August 18, it said earlier this week.

The new CSPUs “provide targeted fixes for critical vulnerabilities in a smaller, more focused format, allowing customers to address high-priority issues without waiting for the next quarterly release,” Oracle said.

It will issue a cumulative Critical Patch Update each quarter, so on the same schedule as before. The first one this year came in January.

Oracle initially announced the switch to a monthly patching schedule last week, but did not provide the dates.

The new patching rhythm will primarily interest customers running Oracle applications on premises or in their own or third-party hosting environments. For customers using the software in an Oracle-managed cloud, Oracle applies the patches automatically automatically.

Oracle is using artificial intelligence to identify and fix the vulnerabilities faster than before. It said it has access to OpenAI’s latest models through that company’s Trusted Access for Cyber program, and to Anthropic’s Claude Mythos Preview.

Mythos has contributed greatly to concerns that AI will uncover thousands of zero-day flaws in software, but as of mid-April, only one vulnerability report had been tied directly to it.

This article first appeared on CSO.

  • ✇Security | CIO
  • IBM unveils its blueprint to help enterprises run AI at the core of their business
    At its Think conference on Monday night, IBM announced what it calls a new operating model for the agentic enterprise. It encompasses coordinated AI agents that execute across the business, real-time connected data, end-to-end automated workflows, and hybrid, including IBM Sovereign Core. “Your AI is only as good as your data, which informs everything that we’ve been doing across both AI and hybrid cloud,” said Rob Thomas, IBM’s SVP of software. “We are talking this
     

IBM unveils its blueprint to help enterprises run AI at the core of their business

5 de Maio de 2026, 01:07

At its Think conference on Monday night, IBM announced what it calls a new operating model for the agentic enterprise.

It encompasses coordinated AI agents that execute across the business, real-time connected data, end-to-end automated workflows, and hybrid, including IBM Sovereign Core.

“Your AI is only as good as your data, which informs everything that we’ve been doing across both AI and hybrid cloud,” said Rob Thomas, IBM’s SVP of software. “We are talking this week about an AI operating model, which is how do companies leverage AI to become one of the winners in the AI era? It’s about how they do their intelligence, how they do automation, how they create AI for their operations, and ultimately trusted AI.”

Each of the four components of the model, although integrated, is a separate priority, IBM said in its announcement. “Together, they represent a fundamental shift from improving parts of the business to changing how the business operates.”

Product announcements to further the shift included the next generation of watsonx Orchestrate, which becomes an agentic control plane for the multi-agent era. It is now in private preview.

On the data side, the company is leveraging its recent acquisition of Confluent for real-time data streaming built on Kafka and Flink technologies, and the addition of a real-time context layer for AI to upcoming capabilities in watsonx.data.

IBM is also offering private previews of Context in watsonx.data that adds an open, federated context layer to help enterprise AI reason over business data, watsonx.data GPU-accelerated Presto that, the company said, “showed the potential to significantly reduce the cost of running certain workloads and processing time on large enterprise datasets in internal benchmark testing with Nvidia.” In addition, an IBM Z Database Assistant provides an AI-powered workspace to monitor performance, provide automation, and optimize configurations for Db2 and IMS databases across IBM Z environments.

In public preview, IBM announced HCP Terraform powered by Infragraph to offer unified infrastructure visibility.

The IBM Concert platform, also available in public preview, will serve the automation arm of the strategy. It provides a single view across applications, infrastructure, and networks without forcing companies to replace existing tools.

Finally, on the hybrid front, the company announced the general availability of its Sovereign Core.

Sanchit Vir Gogia, chief analyst at Greyhound Research, sees the new focus not so much as a group of products but as an “accountability architecture”.

“IBM’s AI Operating Model should be read less as another AI product bundle and more as IBM naming the problem now sitting at the centre of enterprise AI: accountability,” he said. “Large organizations are not short of AI tools. They are short of ways to govern what those tools do once they begin acting across data, workflows, applications, infrastructure and regulated environments. That is the real shift here. The market has spent two years proving that AI can be useful. The next test is whether AI can be made auditable, costed, secured, reversible and trusted inside the messy estates where enterprise technology actually lives.”

He noted, “IBM’s framing around Data, Agents, Automation and Hybrid is useful because those are not four neat product buckets. They are dependencies. Agents without trusted data become improvisation machines. Automation without governance becomes operational gambling. Hybrid without runtime policy becomes compliance theatre. Data without real-time context becomes yesterday’s truth wearing today’s clothes. The strength of IBM’s argument is that enterprise AI no longer scales as a collection of pilots. It has to operate as a connected system.”

Mark Tauschek, VP of Research Fellowships at Info-Tech Research Group, added, “Agentic orchestration and governance are quickly becoming table stakes as organizations start to see agent sprawl, inconsistent policy applied across agents, increased risk and exposure due to a lack of auditability, and ‘shadow’ AI. watsonx Orchestrate is IBM’s answer to the growing agent sprawl, and an answer to several similar solutions hitting the market in the last couple of weeks alone.”

Overall, “IBM has named the right problem. Now it has to prove the architecture holds under enterprise pressure,” Gogia said. “The future of enterprise AI belongs to those who can govern the action, not merely generate it.”

The article originally appeared on NetworkWorld.

  • ✇Security | CIO
  • 개발의 민주화를 이끄는 바이브 코딩 도구 19선
    AI가 인간이 한 달 동안 작성할 코드를 몇 분 만에 만들어준다면 마다할 사람이 있을까. 마법 같은 기술을 싫어할 이유가 있을까. 바이브 코딩을 둘러싼 기대는 등장 초기부터 개발자와 비즈니스 사용자 모두에게 이런 가능성을 제시해왔다. 그리고 이제 그 기대가 현실이 될 만큼 기술이 성숙했다는 평가가 나온다. 물론 신중한 리더들이 “숨겨진 함정은 없는가? 혹시 위험한 기술은 아닌가?”라고 의문을 제기하는 것도 타당하다. AI는 인간이 작성한 코드를 학습해 발전해왔고, 인간은 실수를 하기 때문이다. 실제로 일부 바이브 코딩 사용자와 업계 전문가들은 문서화되지 않은 엔드포인트나 민감 데이터 유출과 같은 취약점이 존재할 수 있다고 지적한다. 그럼에도 불구하고 많은 사용자들은 바이브 코딩을 적극적으로 도입하고 있으며, 긍정적인 결과를 보고하고 있다. 이들은 새로운 도구들이 “놀라울 정도로 강력하다”고 평가한다. 간단한 설명만으로 몇 분
     

개발의 민주화를 이끄는 바이브 코딩 도구 19선

4 de Maio de 2026, 05:52

AI가 인간이 한 달 동안 작성할 코드를 몇 분 만에 만들어준다면 마다할 사람이 있을까. 마법 같은 기술을 싫어할 이유가 있을까. 바이브 코딩을 둘러싼 기대는 등장 초기부터 개발자와 비즈니스 사용자 모두에게 이런 가능성을 제시해왔다.

그리고 이제 그 기대가 현실이 될 만큼 기술이 성숙했다는 평가가 나온다.

물론 신중한 리더들이 “숨겨진 함정은 없는가? 혹시 위험한 기술은 아닌가?”라고 의문을 제기하는 것도 타당하다. AI는 인간이 작성한 코드를 학습해 발전해왔고, 인간은 실수를 하기 때문이다. 실제로 일부 바이브 코딩 사용자와 업계 전문가들은 문서화되지 않은 엔드포인트나 민감 데이터 유출과 같은 취약점이 존재할 수 있다고 지적한다.

그럼에도 불구하고 많은 사용자들은 바이브 코딩을 적극적으로 도입하고 있으며, 긍정적인 결과를 보고하고 있다. 이들은 새로운 도구들이 “놀라울 정도로 강력하다”고 평가한다. 간단한 설명만으로 몇 분 안에 프로토타입을 만들고, 몇 차례 반복을 거쳐 최소 기능 제품(MVP)까지 구현할 수 있다. 과거에는 몇 주가 걸리던 작업이 단시간에 가능해진 셈이다. 특히 비즈니스 사용자의 경우 개발 리소스를 확보하는 과정에서 발생하던 각종 절차와 제약도 크게 줄일 수 있다. 물론 오류나 누락이 발생할 수 있지만, 이는 인간 개발팀에서도 충분히 발생할 수 있는 수준이라는 시각도 있다.

결국 현실은 분명하다. 바이브 코딩은 기업이 실험을 시작할 만큼 충분히 실용적인 기술로 자리 잡았다. 다만 플랫폼마다 특성은 크게 다르다. 일부는 대규모 코드베이스를 다루는 전문 개발자를 지원하는 데 적합하고, 다른 일부는 개발 경험이 많지 않지만 원하는 결과를 알고 있는 사용자들을 돕는 데 초점을 맞춘다. 특정 프로그래밍 언어에 익숙하지 않거나 프로그래밍 자체에 대한 이해가 부족한 사용자도 활용할 수 있다. 더 나아가 컴퓨터 사용이 익숙하지 않은 초보자까지 겨냥한 도구도 등장하고 있다.

차별화 요소는 사용자 경험 수준에만 국한되지 않는다. 일부 도구는 전문 개발자의 생산성을 극대화하는 ‘보조 도구’ 역할에 집중하는 반면, 다른 도구는 데이터베이스부터 프런트엔드까지 전체 애플리케이션을 자동으로 생성한다. 특히 후자의 경우 대규모 사용자 환경에 바로 적용하기에는 한계가 있을 수 있지만, 경영진이나 투자자에게 제시할 프로토타입 제작에는 매우 효과적이다. 또한 소규모 사용자 환경에서는 충분히 실용적으로 활용될 수 있다.

그렇다면 이 도구들은 실제로 충분히 완성도 높은 수준일까. 단점을 보완해 활용할 수 있을까. 이를 확인하는 방법은 단 하나, 직접 실행해보고 결과를 확인하는 것이다.

다음은 주목할 만한 바이브 코딩 도구 19종을 알파벳 순으로 정리한 목록이다. 이들 모두 애플리케이션 개발 과정에 ‘마법 같은 경험’을 제공하겠다는 공통된 목표를 내세우고 있다.

베이스44 (Base44)

베이스44(Base44, 현재는 윅스(Wix)가 인수)는 ‘빌더 챗(Builder Chat)’에서 데이터 아키텍처를 중심으로 논의를 시작하는 방식으로 동작한다. 이후 사용자가 입력한 설명을 바탕으로 리액트(React)와 테일윈드(Tailwind) 기반 프런트엔드와 디노(Deno) 백엔드를 결합해 애플리케이션을 생성한다. 전자상거래, 콘텐츠 관리, 생산성 등 다양한 활용 사례에 맞는 템플릿을 제공해 개발 속도를 높일 수 있다. 또한 드래그앤드롭 방식의 시각적 UI 편집기를 통해 세부 요소를 빠르게 수정할 수 있어, 텍스트로 변경 사항을 일일이 설명하는 방식보다 효율적이다.

베티 블록스(Betty Blocks)

베티 블록스(Betty Blocks)는 ‘시민 개발자’를 주요 타깃으로 하는 노코드 플랫폼이다. 이는 프로그래밍 경험은 없지만 조직 내 요구사항을 잘 이해하고 있는 사용자들을 의미한다. 해당 플랫폼은 사용자의 설명을 기반으로 리액트 코드를 생성하며, 이를 코드 저장소로 내보내 추가 개발에 활용하거나 WASM 형태로 컴파일해 배포할 수 있다. 또한 시각적 인터페이스를 통해 세부 조정이 가능한 ‘로우코드’ 기능도 함께 제공한다.

블링크(Blink)

블링크(Blink)는 코드 생성 에이전트로, 두 가지 모드에서 타입스크립트(TypeScript) 기반 리액트 애플리케이션을 생성한다. 하나는 실제 개발을 수행하는 ‘에이전트 모드’, 다른 하나는 논의와 설계를 위한 ‘채팅 모드’다. 개발자는 채팅을 통해 방향을 설정한 뒤 두 모드를 오가며 작업을 진행한다. 완성된 애플리케이션은 블링크의 내부 CDN에서 호스팅할 수 있으며, 자체 서버나 클라우드 환경으로 내보내는 것도 가능하다.

볼트(Bolt)

볼트(Bolt)는 다양한 백엔드 AI 코딩 모델을 하나의 시각적 인터페이스에서 활용할 수 있도록 설계된 노코드 채팅 서비스다. 앤트로픽의 클로드, 구글 제미나이 등 주요 모델을 포함해 여러 코딩 에이전트를 선택적으로 사용할 수 있다. 특히 디자인 레이어를 분리해 공통 디자인을 정의하고, 이를 모든 생성 애플리케이션에 일관되게 적용할 수 있는 것이 특징이다. MIT 라이선스로 공개된 오픈소스 버전도 제공된다.

버블(Bubble)

버블(Bubble)은 단순한 채팅 기반 인터페이스를 넘어 다양한 시각적 기능을 제공하는 노코드 도구다. 전체 UI를 직접 조정할 수 있는 시각적 편집기와 함께, 내부 동작 구조를 확인할 수 있는 워크플로우 뷰를 제공한다. 이를 통해 사용자는 결과를 추측하는 수준을 넘어 실제 동작을 이해하고 제어할 수 있다. 궁극적으로 인간과 AI가 협력하는 개발 환경을 구현하는 것을 목표로 한다.

클로드 코드(Claude Code)

AI 기업 앤트로픽의 클로드 코드는 새로운 애플리케이션 생성은 물론 기존 코드 수정까지 다양한 개발 요구를 처리하는 데 강점을 보인다. 백엔드는 VS코드(VSCode) 등 주요 IDE와 연동되며, 터미널 창이나 슬랙(Slack) 채널을 통해 접근하는 사용자도 많다. 특히 대규모 코드베이스에서 문제 해결의 출발점을 찾는 용도로 자주 활용된다. 사용자들은 클로드가 각 조직의 코딩 표준에 유연하게 적응하는 점을 높이 평가한다.

컨티뉴(Continue)

컨티뉴(Continue)는 오픈소스 기반 에이전트로, 추가적인 지원이 필요한 전문 개발자에게 적합한 도구다. 코드 저장소를 지속적으로 모니터링하면서 새로운 풀 리퀘스트 등 특정 이벤트가 발생하면 AI 에이전트를 호출해 반복 작업을 처리한다. 모든 작업을 자동화하기보다는 단순하고 반복적인 업무를 대신 수행해 개발자가 창의적인 작업에 집중할 수 있도록 돕는 것이 목표다. 다양한 IDE와 AI API와의 연동도 지원한다.

크리에이트(Create.xyz)

크리에이트(Create.xyz)는 ‘애니띵(Anything)’이라는 이름의 도구를 통해 간단한 텍스트 입력만으로 다양한 리액트/테일윈드 애플리케이션을 생성할 수 있도록 한다. 데이터베이스 접근 등 주요 기능은 미리 설계된 컴포넌트로 구성되며, 브라우저와 모바일 환경 모두에서 원활하게 작동한다. 이후 개발자는 생성된 코드에 직접 접근해 필요한 부분을 수정하거나 추가 기능을 구현할 수 있다.

커서(Cursor)

커서(Cursor)는 기존 개발 환경에서 활용할 수 있는 AI 기반 개발 도우미로, 많은 숙련 개발자들 사이에서 높은 평가를 받고 있다. 새로운 코드 작성, 기존 코드 검토, 슬랙 등 협업 채널에서 발생하는 이슈 추적까지 다양한 역할을 수행한다. 여러 파일을 동시에 처리하고 전체 코드베이스를 분석해 실행 계획을 제시한 뒤 실제 작업까지 수행하는 것이 특징이다. 전통적인 개발 환경을 기반으로 하기 때문에 ‘노코드’ 도구로 보기는 어렵지만, 실제로는 개발자가 직접 작성하는 코드의 양을 크게 줄여준다.

이머전트(Emergent)

이머전트(Emergent)는 여러 AI 에이전트를 결합한 웹 애플리케이션으로, 사용자의 텍스트 설명을 기반으로 프런트엔드(React), 백엔드(Node.js), 데이터베이스(MongoDB), 그리고 다양한 API(Stripe 등 통합 포함)까지 한 번에 생성한다. 단순한 보조 기능을 넘어 개발 과정의 복잡성을 사용자에게서 완전히 숨기는 것이 목표다. 비개발자는 원하는 애플리케이션을 직접 구축할 수 있고, 개발자는 빠르게 프로토타입을 제작해 실제 서비스에 가까운 형태로 배포할 수 있다.

킬로 코드(Kilo Code)

킬로 코드(Kilo Code)는 대규모 코드베이스를 유지·확장하는 개발자를 위한 오픈소스 코딩 에이전트다. ‘오케스트레이터 모드(Orchestrator Mode)’를 통해 작업 계획을 수립할 수 있으며, 코드 리뷰 기능은 오류를 이중으로 점검한다. 또한 ‘메모리 뱅크(Memory Bank)’를 통해 프로젝트 아키텍처에 대한 핵심 정보를 저장·관리할 수 있다. 500개 이상의 AI 모델과 연동을 지원해 특정 플랫폼에 종속되지 않고, 상황에 맞는 최적의 모델을 선택할 수 있는 유연성도 제공한다.

린디(Lindy)

린디(Lindy)는 슬랙 메시지나 코드 저장소의 새로운 커밋과 같은 이벤트를 트리거로 동작하는 ‘에이전트’ 생성에 초점을 맞춘 도구다. 이러한 이벤트는 주요 클라우드 서비스와 지라(Jira), 조호(Zoho) 등 업무 관리 플랫폼을 포함한 수백 개 웹 애플리케이션에서 발생한다. 사전 정의된 템플릿을 활용하면 일반적인 애플리케이션을 보다 빠르게 구축할 수 있으며, 대표적인 활용 사례로는 고객 지원 에이전트가 꼽힌다.

러버블(Lovable)

러버블(Lovable)은 대화를 통해 요구사항을 파악한 뒤 자체 클라우드 환경에서 애플리케이션 생성과 배포까지 자동으로 수행하는 노코드 플랫폼이다. UI(리액트와 테일윈드), 비즈니스 로직, 데이터베이스(주로 Supabase)를 모두 처리한다. 비교적 완성도 높은 인터페이스와 함께 보안 및 접근 제어 기능을 갖춘 엔터프라이즈급 프로토타입을 빠르게 구축할 수 있는 것이 특징이다.

리플릿(Replit)

리플릿(Replit)은 최대 30개 프로그래밍 언어를 지원하는 노코드 솔루션으로, 주요 언어는 물론 다양한 비주류 언어까지 폭넓게 지원한다. 기본 인터페이스는 노코드 챗봇 형태지만, 생성된 코드는 이후 저장소에 저장돼 전통적인 방식으로 추가 개발이 가능하다. 데이터베이스 계층을 분리해 운영 환경과 테스트 환경을 구분하는 등 보다 전통적인 개발 접근도 지원한다. 또한 팀 단위 협업 기능을 통해 구성원들이 함께 대화하며 애플리케이션을 개선할 수 있다.

소프트젠(Softgen)

소프트젠(Softgen)은 간단한 텍스트 설명을 기반으로 Next.js 웹 애플리케이션을 생성하는 도구다. 클로드 4.5, 제미나이 등 주요 AI 모델과 연동해 최소 기능 제품(MVP)을 빠르게 구축할 수 있다. 사용량 기반 과금 방식을 적용해 애플리케이션에 필요한 토큰만큼만 비용을 지불하면 된다.

솔리드(Solid)

기본적인 애플리케이션 생성이 점차 쉬워진 가운데, 솔리드(Solid)는 ‘엔터프라이즈급’ 배포에 초점을 맞추고 있다. 고도화된 보안 모델과 분산 배포 환경을 지원하는 것이 특징이다. 문서에서는 AI와 반복적으로 협업하며 강점을 활용하는 방식을 강조하는데, 주로 리액트/테일윈드 기반 프런트엔드와 노드JS 상에서 타입스크립트로 구현된 백엔드, 그리고 포스트그레SQL 데이터베이스 조합을 중심으로 구성된다.

템포 랩스(Tempo Labs)

템포 랩스(Tempo Labs)는 시각적 편집기를 통해 개발 속도를 크게 높이는 데 초점을 맞춘 도구다. 간단한 UI 작업은 AI에 의존하지 않고 직접 수행할 수 있으며, 디자인 중심의 개발 환경을 제공한다. 프로젝트별로 표준 UI 요소 라이브러리를 유지하고, 기존 리액트 코드베이스를 불러와 사전 개발된 컴포넌트와 템플릿으로 확장할 수 있다. 직관적인 인터페이스를 통해 ‘바이브 코딩’ 경험을 극대화하는 것이 특징이다.

버셀(Vercel)

버셀(Vercel)의 v0 시스템은 다양한 템플릿을 기반으로 애플리케이션 설계를 시작할 수 있도록 지원한다. 기본 인터페이스는 여전히 채팅 형태지만, 템플릿은 설계 아이디어를 제공하는 동시에 명세 작성의 공통 언어 역할을 한다. 또한 디자인 템플릿을 활용해 여러 애플리케이션 간 UI 일관성을 유지할 수 있으며, 모바일 브라우저에 최적화된 구조를 통해 모바일 대응 웹사이트를 쉽게 구축할 수 있다.

윈드서프(Windsurf)

윈드서프(Windsurf)는 대규모 코드베이스를 다루는 팀을 위한 AI 내장형 IDE다. 버그 수정이나 기능 추가와 같은 복잡한 작업을 여러 단계에 걸쳐 처리할 수 있도록 설계됐다. 전통적인 개발 방식을 보완하는 형태의 바이브 코딩 도구라고 할 수 있다. 특히 탭(Tab) 키를 활용한 인터페이스가 특징으로, 제안된 수정안을 단계별로 확인하며 승인할 수 있다. 이 IDE는 이러한 다단계 작업 흐름을 ‘캐스케이드(cascade)’라고 부른다.
dl-ciokorea@foundryco.com

  • ✇Security | CIO
  • 19 vibe coding tools for democratizing app development
    Who doesn’t want an AI to pump out more code in minutes than a human might write in a month? Who doesn’t like magic? That’s what the hype around vibe coding has asked of developers and business users alike since its inception. But now the tools might have matured enough to deliver. Yes, cautious leaders are right in wondering, “What’s the catch? Is this a trap?” After all, the AIs learned to code from examining code created by humans, and humans fail. So it should be
     

19 vibe coding tools for democratizing app development

1 de Maio de 2026, 07:01

Who doesn’t want an AI to pump out more code in minutes than a human might write in a month? Who doesn’t like magic? That’s what the hype around vibe coding has asked of developers and business users alike since its inception.

But now the tools might have matured enough to deliver.

Yes, cautious leaders are right in wondering, “What’s the catch? Is this a trap?” After all, the AIs learned to code from examining code created by humans, and humans fail. So it should be no surprise that some vibe coders and industry experts are reporting vulnerabilities such as undocumented endpoints and sensitive data leakage.

Still, many are diving right into the vibe-code deep end and are reporting positive results. The new tools, they say, are amazing. Vibe coders can build a prototype in minutes and a minimum viable product in a few iterations. In goes a few handwavy sentences, and out comes something that used to take weeks to produce — not to mention all the red tape in tapping development time, if you’re a business user. Sure there can be errors and omissions, but are they any worse than what a team of humans might inadvertently include or overlook?

The reality is that vibe coding is legit enough that enterprises need to start experimenting. The platforms offer numerous differences. Some are better suited to helping professional developers who often need to work with large code bases. Others want to help not-so-professional programmers who know what they want but aren’t ready to write it all by themselves. Maybe they don’t know a particular programming language or maybe they don’t know much about programming at all. Still others are aimed at complete novices who can barely turn on a computer.

And it’s not just the level of experience that distinguishes them. Some tackle smaller wishes —the kind of tools that professional developers can use as a “force multiplier.” Others create entire applications, from database to front-end, and they’re best for people who want to create a prototype. The architecture may not be ready to scale for a large user base, but they’re ideal for presenting to the boss or some investors. They can also do a pretty good job with a smaller collection of users.

Are any good enough? Can we work around their flaws? The only way to find out is to fire them up and look at the results.

Here is a list of 19 vibe coding tools worth checking out, in alphabetical order. They all promise to provide some amount of app-dev magic.

Base44/Wix

The process at Base44 (now owned by Wix) begins with a Builder Chat in which the discussion focuses on the data architecture. From there, your words guide a tool that merges React and Tailwind code for the front-end with a Deno backend. To speed things up, templates can jumpstart common use cases — ecommerce, content management, productivity, etc. Many parts of the UI can be adjusted with a drag-and-drop visual editor that can be much faster than trying to describe all your changes with words.

Betty Blocks

The developers of the Betty Blocks no-code system say they’re targeting “citizen developers” —non-programmers who know what their corner of the enterprise needs. The platform takes a description and then produces React code that can be exported to a code repository for future development or be compiled down to WASM level deployment. They also offer a “low-code” approach that gives a visual interface for further tweaking and refinement.

Blink

The code generation agent from Blink produces TypeScript React applications starting with two modes: agent mode, for building; and chat mode, for discussing and planning. Developers start with chatting and then toggle back and forth. Blink will host any application with its internal CDN or allow you to export it to your own servers or cloud.

Bolt

The no-code chat service from Bolt is designed to provide a single visual interface to various backend coding AIs. It’s possible to work with different coding agents, including some of the best-known ones, such as Anthropic’s Claude and Google’s Gemini. The design layer is broken out making it possible to create a standard design that is then adopted by any app that’s produced by the AI. An open-source version released under the MIT license is also available.

Bubble

The no-code tool from Bubble includes several features that let human users do more than just chat. A full visual editor lets developers adjust the interface directly. A workflow view outlines much of what’s going on underneath, again making it possible for the user to do more than guess. The goal is to make humans more of a partner.

Claude Code

Anthropic’s main LLM, Claude, is skilled in answering many programming needs, including either creating new applications or fixing old ones. The backend connects to many traditional IDEs, such as VSCode, but many users connect with Claude through a terminal window or even a Slack channel. One common use is to search through a large code base for the right place to begin fixing an issue. Many praise how it adapts to your local coding standards.

Continue

The open-source agent from Continue is best for professional developers who need a bit of extra help vibe coding. The tool will watch a code base for triggers, such as new pull releases, and then invoke AI agents to handle many of the chores. The goal isn’t to do everything, just the most boring things, so humans can be creative. The tool integrates with many IDEs and AI APIs.

Create

The tool from Create.xyz is called Anything because they want users to be able to create any possible React/Tailwind app from a simple text prompt. The results are crafted from many stylized components for tasks such as database access that run well in either the browser or a mobile platform. Developers can then dive into the code and add any human touches.

Cursor

Many old-school developers love Cursor because it’s designed to be the assistant they’ve never had. It writes new code, audits old code, and tracks issues evolving over channels like Slack. The tool can juggle multiple files and analyze entire codebases before proposing a plan of action and then executing it. It’s not exactly fair to call it “no-code” because it’s designed to work in a traditional development environment, but many users aren’t writing much code anymore because Cursor does so much.

Emergent

The web application from Emergent is a front-end for a team of AI agents that will turn your text description into a frontend (React), backend (Node.js), databases (MongoDB), and collection of APIs with full integrations (Stripe, etc.).The goal is not just to perform hand-holding, but to hide all the complexity of development behind a big facade. Non-developers can build everything they want while developers can knock off prototypes — all deployed in close to production-ready form.

Kilo Code

The open-source coding agent from Kilo has a number of features that will appeal to coders maintaining and extending larger code bases. Orchestrator Mode, for instance, helps create work plans while Code Review double checks for errors. A Memory Bank stores high-level details about the project’s architecture. Connections to more than 500 models avoids lock in and lets you choose the right model that’s delivering just what you want.

Lindy

The tool from Lindy focuses on creating “agents” that tend to be bits of code that sit in the background and respond to triggers like a Slack message or a new commit to a repository. There are hundreds of different web applications that can generate these events, including all major clouds and office organization sites such as Jira or Zoho. Many of the standard applications can be built more quickly by leveraging pre-defined templates. Some of the most common use cases include support agents.

Lovable

The no-code interface from Lovable chats with you for a bit and then builds the app and deployment in Lovable’s cloud. It handles the UI (React plus Tailwind), business logic, and database (mainly Supabase). The result is one of the fastest ways to spin up an enterprise-ready prototype with a fairly polished interface and many of the security and access control features required for bigger environments.

Replit

The no-code solution from Replit delivers code in up to 30 programming languages, supporting all the major languages and many of the minor ones. The main interface is a no-code chatbot, but after that it dumps the code in a repository where it can be further refined using traditional methods. The database layer is broken out and treated separately, which allows for a more traditional approach by enabling options such as a separate database for production and testing. There are enterprise features that allow a team to collaborate as they chat together to improve the app.

Softgen

The tool for creating full Next.js web apps from Softgen works with several of the major models, such as Claude 4.5 or Gemini, to turn a basic text description into a full minimum viable product. A pay-as-you-go option allows you to pay for only the tokens your application requires.

Solid

Creating basic applications isn’t too hard anymore. Solid emphasizes creating apps that offer “enterprise-grade” deployments with top-notch security models and distributed deployment. The documentation emphasizes working iteratively with the AI and playing to its strengths, which is crafting a React/Tailwind front end with a wide variety of backends that generally mean TypeScript code running on Node.js with PostgreSQL.

Tempo Labs

The visual editor from Tempo Labs aims to allow human users to create a React app ten times faster. Simple visual tasks can be performed without relying on the AI. The tool emphasizes design and maintains a library of standard elements for each project. Any React code base can be imported and extended with predeveloped components and templates. It’s an editor that lets you vibe.

Vercel

The v0 system from Vercel offers a large collection of templates as a foundation for any app designer. The main interface is still a chat box that accepts any designs, but the templates act as both inspiration and a shared language for writing the specifications. There are also design templates that make it simpler to harmonize several different applications by allowing you to define a look once and then reuse it easily. The templates are also focused on mobile browsers to make it simpler to create mobile-ready sites.

Windsurf

Teams working with big code bases can use Windsurf, an IDE with embedded AI that’s designed to handle longer, multi-step plans for fixing bugs or adding features to a code base. It’s vibe coding, but focused on assisting traditional techniques. The tab key in the Windsurf IDE is quite powerful. As you hit the tab key, it moves from suggested fix to suggested fix waiting for you to signal your approval by hitting it again. The IDE aims to produce multistep plans that it calls “cascades.”

  • ✇Security | CIO
  • Enterprise search has a relevance problem. Here’s what to do about it.
    Traditional keyword-based enterprise search fails to keep up with modern, unstructured data in emails, wikis, and chat, leading to massive productivity losses. Organizations must treat search as a strategic capability and adopt hybrid or AI-powered retrieval to unlock institutional knowledge and gain a competitive advantage.Enterprise search was never really broken. It just stopped keeping up. For years, keyword-based systems delivered what they were designed to deliver
     

Enterprise search has a relevance problem. Here’s what to do about it.

30 de Abril de 2026, 15:57

Traditional keyword-based enterprise search fails to keep up with modern, unstructured data in emails, wikis, and chat, leading to massive productivity losses. Organizations must treat search as a strategic capability and adopt hybrid or AI-powered retrieval to unlock institutional knowledge and gain a competitive advantage.

Enterprise search was never really broken. It just stopped keeping up.

For years, keyword-based systems delivered what they were designed to deliver: lists. Enter a term, get documents ranked by how many times that term appeared. For structured databases and predictable queries, this worked well enough. But the way enterprises produce and store information has changed dramatically, and keyword search has not kept pace.

Today, most enterprise knowledge lives in formats that keyword systems were never built to handle: email threads, support tickets, internal wikis, code repositories, policy documents, Slack channels, and meeting notes. These sources hold real institutional knowledge, but they are largely invisible to traditional search. When employees cannot find what they need, they spend time they do not have retracing steps, asking colleagues the same questions, and making decisions with incomplete information.

The cost is real. Research consistently shows that knowledge workers spend a significant portion of their week searching for information rather than acting on it. Every hour lost to ineffective search is an hour not spent on the work that actually drives business outcomes.

Why this is a strategy problem, not just a technical one

Most organizations treat search as infrastructure: something that runs in the background and gets upgraded when it breaks. This framing underestimates what is at stake.

When enterprise knowledge is hard to access, the organization pays in three ways. First, individual productivity drops as workers spend time searching rather than doing. Second, decision quality degrades because the information needed to make good decisions arrives late, arrives incomplete, or does not arrive at all. Third, institutional knowledge becomes locked inside siloed systems that cannot talk to each other.

The reverse is also true. Organizations that invest in high-quality enterprise search create compounding advantages. Teams work faster, decisions are better informed, and institutional knowledge becomes genuinely useful rather than theoretically accessible.

How search has evolved and where it is now

Understanding the path forward requires understanding how we got here. Enterprise search has moved through distinct phases, each representing a meaningful step forward but also introducing new limitations.

Lexical search: fast, predictable, but limited

Traditional lexical search matches exact keywords using inverted indexes. If a user types the right term, results appear quickly. If they do not, or if the content was written using different phrasing, the results are poor. Lexical systems are transparent and fast, but they require users to know exactly what they are looking for before they search.

Semantic search: understanding meaning, not just words

Semantic search introduced a fundamentally different approach. Rather than matching terms, semantic systems use vector embeddings to understand the intent and meaning behind a query. A user can ask a question in plain language, and the system will find conceptually related content, even if none of the exact words appear in the results. This was a significant improvement for knowledge discovery and natural language queries, but it introduced new challenges around transparency and relevance tuning.

Hybrid search: the dominant enterprise approach today

The most effective enterprise implementations today use hybrid search, which combines lexical and semantic techniques. Hybrid systems apply keyword precision where it matters, such as searching for a specific policy number or product code, and semantic understanding where it helps, such as answering a conceptual question about a process. Modern platforms like OpenSearch handle this automatically, dynamically routing queries to the right retrieval method.

AI-powered and agentic search: the next evolution

The leading edge of enterprise search today is AI-powered retrieval, including retrieval-augmented generation and agentic systems that can plan, reason, and synthesize across multiple sources. These systems do not return lists. They return answers. 

The business case for acting now

Enterprise search modernization is not an optional upgrade. Organizations that continue to rely on legacy keyword systems are accepting a growing competitive disadvantage as the volume of unstructured enterprise data increases and the gap between what employees need and what search delivers continues to widen.

The companies making progress are those that have started treating search as a strategic capability rather than a utility feature. That shift in framing changes what gets funded, who owns it, and how it gets built.

For engineering leaders, the practical question is not whether to modernize but how to do it responsibly: choosing the right architecture, deploying incrementally, and building for the long term. OpenSearch offers the open source platform designed exactly for this transition. With built-in hybrid search, native AI capabilities, and enterprise security included at no additional cost, OpenSearch enables organizations to modernize search infrastructure on their own terms.

Ready to build your modernization roadmap? The enterprise search implementation guide, Modernizing Search: An Enterprise Guide to AI-Powered Information Retrieval, provides the technical depth and strategic framework you need. Get your copy now.

  • ✇Security | CIO
  • 일문일답 | “S/4HANA 전환 2027년 기한 달성 어려워…ERP는 여전히 핵심”
    SAP S/4HANA로의 전환은 여전히 대기업 IT 환경에서 핵심 의제로 자리 잡고 있다. 그렇다면 기업들은 실제로 어느 단계에 와 있으며, 어떤 전략이 효과를 거두고 있을까. SAP 전환의 현재 진행 상황과 S/4HANA 도입 과정에서 CIO가 직면한 주요 과제, 그리고 최신 ERP 트렌드를 살펴보기 위해 기업용 컨설팅 업체 CBS(Corporate Business Solutions)의 대표 홀거 쉘과 인터뷰를 진행했다. Q: S/4HANA 전환이 현재 프로젝트 사업에 미치는 영향은 어느 정도인가?A(홀거 쉘): 핵심 동력이다. 현재 우리가 구축하는 솔루션의 약 98%가 SAP 기반이며, 상당수 프로젝트가 ERP 시스템 현대화와 S/4HANA 전환 필요성에서 비롯됐다. Q: 현재 기업들은 전환 과정에서 어느 단계에 와 있는가?A: 주요 산업군인 대기업과 중견 제조기업을 기준으로 보면, 긍정적으로 평가해 약 절반 정도가 전환을
     

일문일답 | “S/4HANA 전환 2027년 기한 달성 어려워…ERP는 여전히 핵심”

30 de Abril de 2026, 04:21

SAP S/4HANA로의 전환은 여전히 대기업 IT 환경에서 핵심 의제로 자리 잡고 있다. 그렇다면 기업들은 실제로 어느 단계에 와 있으며, 어떤 전략이 효과를 거두고 있을까.

SAP 전환의 현재 진행 상황과 S/4HANA 도입 과정에서 CIO가 직면한 주요 과제, 그리고 최신 ERP 트렌드를 살펴보기 위해 기업용 컨설팅 업체 CBS(Corporate Business Solutions)의 대표 홀거 쉘과 인터뷰를 진행했다.

Q: S/4HANA 전환이 현재 프로젝트 사업에 미치는 영향은 어느 정도인가?
A(홀거 쉘): 핵심 동력이다. 현재 우리가 구축하는 솔루션의 약 98%가 SAP 기반이며, 상당수 프로젝트가 ERP 시스템 현대화와 S/4HANA 전환 필요성에서 비롯됐다.

Q: 현재 기업들은 전환 과정에서 어느 단계에 와 있는가?
A: 주요 산업군인 대기업과 중견 제조기업을 기준으로 보면, 긍정적으로 평가해 약 절반 정도가 전환을 시작한 상태다. 다만 완전히 완료한 기업은 매우 적다.

Q: 많은 기업이 2027년 SAP 마감 기한에 직면해 있다. 현실적인가?
A: 많은 기업에는 해당되지 않는다. 특히 ERP 시스템이 다수인 대기업은 2027년까지 확장 유지보수 단계에 도달하기 어려울 가능성이 크다. 2030년이 보다 현실적인 목표 시점이다.

Q: 현재 주로 사용되는 전환 전략은 브라운필드인가, 그린필드인가?
A: 어느 하나의 순수한 방식이 지배적이지 않다. 완전히 새로 구축하는 전통적인 그린필드 방식은 실제로 성공 사례가 매우 제한적이다. 그렇다고 단순 기술 전환인 브라운필드가 제조기업에서 표준 방식으로 자리 잡은 것도 아니다.

Q: 그렇다면 실제로 선호되는 접근 방식은 무엇인가?
A: 대부분 기업은 혁신과 전환을 선택적으로 결합한 하이브리드 전략을 채택하고 있다. 흔히 ‘스마트 브라운필드’ 또는 ‘믹스 앤 매치’라고 부르는 방식이다.

그 이유는 명확하다. 순수 브라운필드 접근은 비용과 시간이 많이 들지만 추가적인 비즈니스 가치를 만들어내지 못한다. 기업은 단순히 지원 기간을 맞추는 데 그치지 않고, 실제 성과를 만들어내기를 원한다. 즉, 프로세스를 개선하고 혁신을 도입하며, 전환이 왜 필요한지 경영진에게 설명할 수 있는 명확한 가치를 확보하려는 것이다.

Q: SAP가 ‘클린 코어(Clean Core)’ 접근 방식을 강하게 강조하고 있다. 실제로 얼마나 현실적인가?
A: 단순하게 볼 수 있는 개념은 아니다. 클린 코어는 표준 소프트웨어만 사용하고 모든 커스터마이징이 사라진다는 의미가 아니다. 오히려 새로운 설계 패러다임에 가깝다. 핵심은 ERP 코어, 즉 S/4를 가능한 한 표준에 가깝게 유지하는 것이다. 그 외 기업별 맞춤 기능은 SAP BTP 같은 플랫폼이나 새로운 개발 방식으로 외부에서 구현하는 구조다.

Q: 과도한 표준화가 경쟁력을 약화시킬 수 있다는 지적도 있다. 어떻게 보는가?
A: 차별화는 여전히 매우 중요하다. 기업은 고유한 역량을 계속 보여줘야 한다. 클린 코어는 이를 구현하는 기술적 방식이 바뀌는 것일 뿐, 차별화 필요성 자체가 사라지는 것은 아니다.

그래서 우리는 ‘더 깨끗한 코어(cleaner core)’라는 표현을 사용한다. ERP 코어가 단순하고 정돈될수록 업그레이드와 신규 기능 도입이 쉬워지고, 기업의 민첩성도 높아진다. 다만 이는 급격한 변화가 아니라 점진적인 진화 과정이다.

Q: 기업들이 S/4HANA 프로젝트에서 복잡성을 가장 과소평가하는 부분은 무엇인가?
A: 가장 중요한 요소는 출발점이다. 특히 제조업에서는 20년 이상에 걸쳐 축적된 ERP 시스템이 많다. 이로 인해 데이터 구조는 복잡하고, 프로세스 역시 역사적으로 누적된 형태를 띤다. 이를 해체하고 표준화·통합된 목표 구조로 재정비하는 작업은 매우 큰 과제이며, 동시에 혁신 도입과 병행해야 하는 경우가 많다.

Q: 이러한 요소가 실제 프로젝트에는 어떤 의미를 가지는가?
A: 레거시 시스템이 많고, 프로세스·시스템·데이터 환경의 준비도가 낮을수록 전환 기간은 길어진다. 결국 미래 지향적인 ERP 플랫폼으로 가는 과정이 그만큼 복잡해진다.

Q: 현재 많은 기업이 SAP가 아닌 외부 AI 솔루션을 활용하고 있다. SAP가 압박을 받고 있다고 보는가?
A: 일정 부분 그렇다. 소프트웨어 기업이라면 반드시 성과를 보여줘야 한다. 다만 SAP는 역사적으로 신기술에서 선도 기업 역할을 해온 경우는 많지 않았다. 대신 비즈니스 맥락을 시스템에 통합하는 데 강점을 가져왔다.

현재 시장을 보면 AI는 고객에게 매우 중요한 요소다. 기업은 AI를 통해 경쟁력을 확보하고 효율성을 높이기를 기대한다. 그 결과, 과거에는 많은 AI 솔루션이 SAP 외부에서 구현됐다.

앞으로 중요한 지점은 SAP가 강조하는 ‘비즈니스 AI’다. 기업의 비즈니스 프로세스에서 생성되는 방대한 데이터를 어떻게 효과적으로 활용할 것인가의 문제다. 이러한 데이터는 ERP 시스템, 즉 기업의 핵심에 존재한다. SAP는 비즈니스 스위트와 ERP 중심 아키텍처를 기반으로 이 영역에서 매우 유리한 위치에 있다.

Q: 최근 AI와 전통적인 ERP 시스템의 미래를 둘러싼 논의가 활발하다. 비즈니스 모델이 위협받고 있다고 보는가?
A: 오해에 가깝다. 많은 사람들이 이른바 ‘챗GPT 모먼트’를 과대평가하면서 ERP 기반이 더 이상 필요 없다고 생각한다. 하지만 AI 시스템은 신뢰할 수 있고 의미적으로 정제된 데이터 기반을 필요로 한다. 이 지점에서 SAP는 구축된 시스템 환경과 고객 기반, 그리고 제조업 고객과의 경험을 바탕으로 분명한 강점을 가지고 있다.

Q: 그렇다면 ERP 공급업체의 비즈니스 모델이 구조적으로 위협받고 있다고 보지는 않는가?
A: 현재의 과열된 분위기는 점차 진정될 것이며, 견고한 ERP 백본 아키텍처의 중요성이 다시 인식될 것으로 본다. ERP는 ‘에이전트 기반 기업’과 같은 새로운 개념을 구현하는 기반이 된다. 이는 ERP와 같은 기존 트랜잭션 중심 시스템 위에 행동 시스템(system of action)을 구축하는 구조다.

Q: 이는 서비스나우(ServiceNow)와 같은 플랫폼 모델과 유사한 개념인가?
A: 그렇다. 그런 방향으로 발전할 가능성이 크다. 기존 시스템 위에 에이전트 레이어가 올라가는 구조다. SAP 역시 이러한 흐름에 맞춰 관련 기능과 솔루션을 아키텍처에 통합하고 있다.

Q: 전체 구조를 어떻게 이해해야 하는가?
A: 크게 두 가지 계층이 있다. 하나는 데이터 기록 시스템(system of record), 다른 하나는 실행 시스템(system of action)이다. 그리고 그 사이에 데이터 레이어가 존재한다. SAP는 ‘비즈니스 데이터 클라우드(Business Data Cloud)’를 통해 데이터에 맥락을 부여하고, 분석과 트랜잭션 영역을 통합하는 방식을 제시하고 있다. 이는 시스템과 벤더를 넘어서 적용된다.

Q: 이러한 구조가 의미하는 바는 무엇인가?
A: 핵심 가치는 여전히 기업이 보유한 데이터에 있다. 단순한 데이터가 아니라, 명확히 이해되고 맥락이 부여된 비즈니스 데이터다. 이러한 데이터가 있어야 ‘에이전틱 기업’이 안정적으로 작동할 수 있고, 궁극적으로 신뢰를 확보할 수 있다.

Q: AI 공급업체들이 오류율을 크게 낮췄다. 그럼에도 기업이 요구하는 수준의 신뢰성에는 아직 못 미친다는 평가가 있다. 어떻게 보는가?
A: 현재 수준은 아직 충분하지 않다. 기업의 핵심 의사결정에 필요한 100% 신뢰성에는 도달하지 못한 상태다.

지금의 AI는 기술적으로 잘 구현됐다고 하더라도 상당 부분 추정에 기반해 작동한다. 물론 이를 계속 개선할 수는 있지만, 본질적으로는 추정의 성격을 완전히 벗어나기 어렵다.

결국 중요한 것은 의미적으로 정제된 데이터 기반의 신뢰성이다. 이를 위해서는 감사 기준에도 부합할 수 있는 안정적이고 신뢰할 수 있는 데이터 코어가 필요하다. 그리고 바로 그 기반을 ERP 시스템이 제공한다.

Q: 그렇다면 ERP 시스템은 여전히 핵심 인프라인가?
A: 그렇다. ERP는 여전히 기업의 핵심 백본이다. 특히 제조기업은 ERP에 막대한 투자를 해왔다. 이를 단순히 에이전트 기반 시스템으로 대체하는 것은 현실적으로 불가능하다고 본다.

Q: 현재 SAP 환경에서 AI 프로젝트는 어느 정도 진행되고 있는가?
A: 이미 1년 전부터 명확한 기준을 세웠다. SAP 환경에서 고객의 디지털 전환 목표를 설계할 때 AI를 초기 단계부터 필수 요소로 포함하고 있다. 가능한 모든 영역에 AI를 적용해 최대한의 가치를 창출하려는 방향이다.

AI는 컨설팅 사업 자체를 변화시키고 있으며, SAP 기반 미래 솔루션의 핵심 요소로 자리 잡고 있다. 현재도 이 분야에 지속적이고 적극적으로 투자하고 있다.

Q: SAP 내 AI는 어떤 방식으로 구분되는가?
A: SAP는 AI를 임베디드 AI와 커스텀 AI로 구분한다. 임베디드 AI는 표준 소프트웨어에 포함된 기능이며, 커스텀 AI는 SAP 기술 기반, 예를 들어 비즈니스 테크놀로지 플랫폼을 활용해 특정 업무에 맞춰 개발하는 솔루션이다.

그동안은 코어 영역에서 제공되는 기능이 제한적이었기 때문에 커스텀 AI 중심으로 프로젝트를 진행해왔다. 하지만 최근 들어 상황이 크게 달라지고 있다.

Q: 현재 프로젝트에서 AI의 역할은 무엇인가?
A: 기대 수준이 크게 높아졌다. 이제는 현업 부서와의 워크숍에서 AI 활용 가능성이 기본 논의 항목으로 자리 잡았다.

프로세스를 분석하면서 반복 작업이 어디에 있는지, 자동화가 가능한 영역은 어디인지, 분석이나 제어 과정에서 복잡성을 AI로 줄일 수 있는 부분은 어디인지 등을 함께 검토한다. 실제 적용 가능한 영역이 점점 늘어나고 있으며, 고객의 관심도도 빠르게 증가하고 있다.

Q: AI가 향후 새로운 성장 영역이 될 수 있는가? 현재는 S/4HANA 전환이 주요 사업 동력인데, 이후에는 어떻게 되는가?
A: 결국 기업들은 S/4 전환을 완료하게 된다. 기술 중심 마이그레이션에만 집중한 사업자에게는 위험 요소가 될 수 있다. 하지만 시장은 그렇게 단순하지 않다.

S/4 수요가 갑자기 사라지지는 않는다. 단기적으로는 마이그레이션 중심 프로젝트가 줄어들 수 있지만, 그 자리는 후속 프로젝트가 대체하게 될 것이다.

Q: 후속 프로젝트란 무엇을 의미하는가?
A: 이미 ‘포스트 S/4 전환’이라는 개념이 논의되고 있다. 많은 기업이 브라운필드 방식으로 S/4HANA로 전환했지만, 프로세스를 근본적으로 바꾸지는 않았다. 즉, 실제 핵심 변화라고 할 수 있는 혁신과 비즈니스 프로세스 고도화는 이제부터 시작되는 단계다.

대외적으로는 기업의 전환 여정이 실제보다 단순하게 인식되는 경우가 많다. 하지만 이는 단순히 새로운 시스템을 도입하는 문제가 아니라, 데이터 기반 조직으로 점진적으로 진화하는 과정이며, 궁극적으로는 ‘에이전틱 엔터프라이즈’로 나아가는 흐름이다.

Q: 그렇다면 전환은 장기적인 과정이라고 볼 수 있는가?
A: 그렇다. 기업은 시간을 두고 역량을 체계적으로 확장해 나간다. S/4HANA는 목적지가 아니라 출발점에 가깝다. 현재 기업들은 비즈니스 전환, 기술 전환, 프로세스 혁신, 디지털 전환 등 보다 폭넓은 과제를 동시에 추진하고 있다.

Q: SAP가 자체 기능을 확대하면 컨설팅 파트너의 역할은 줄어들지 않는가?
A: 핵심은 컨설턴트가 기업을 어떻게 변화로 이끄는가에 있다. 비즈니스 프로세스를 어떻게 설계하고, 미래로 가는 경로를 어떻게 제시하느냐가 중요하다. 이러한 역할은 앞으로도 컨설팅의 중심이 될 것이다.

실제로 컨설팅 영역은 축소가 아니라 확대되고 있다. 시스템과 프로세스의 복잡성이 크게 증가하고 있으며, ‘에이전틱 엔터프라이즈’와 같은 개념은 새로운 의미적 계층을 추가하고 있다. 그 결과 IT와 프로세스 환경은 더욱 정교하고 어려워지고 있다.

기업은 이러한 복잡성을 이해하고 구조화하며, 이를 기반으로 실질적인 비즈니스 해법을 도출하는 데 점점 더 많은 지원을 필요로 한다. 이것이 바로 컨설팅의 역할이다.

반면, 전통적인 개발·테스트·설정과 같은 단순 구현 중심 서비스는 압박을 받고 있다. 이러한 흐름은 이전부터 존재했지만, AI와 SAP의 전략 변화로 인해 앞으로 더 가속화될 가능성이 크다. 해당 영역은 대체 가능성이 높아지며 수요도 점차 줄어들 것으로 보인다.
dl-ciokorea@foundryco.com


  • ✇Security | CIO
  • Oracle NetSuite announces AI coding skills for SuiteCloud developers
    Oracle NetSuite is adding AI capabilities to SuiteCloud to help developers customize its ERP platform faster using natural language prompts. In a statement, the company said its NetSuite SuiteCloud Agent Skills “will make it easier for developers to create customized vertical and industry-specific applications by giving AI coding assistants a better understanding of the conventions, patterns, and best practices in SuiteCloud – NetSuite’s standards-based AI extensibility
     

Oracle NetSuite announces AI coding skills for SuiteCloud developers

29 de Abril de 2026, 07:19

Oracle NetSuite is adding AI capabilities to SuiteCloud to help developers customize its ERP platform faster using natural language prompts.

In a statement, the company said its NetSuite SuiteCloud Agent Skills “will make it easier for developers to create customized vertical and industry-specific applications by giving AI coding assistants a better understanding of the conventions, patterns, and best practices in SuiteCloud – NetSuite’s standards-based AI extensibility and customization platform.”

The new skills give AI coding assistants NetSuite-specific development guidance, including UI framework references, permission codes, SuiteScript fields, documentation practices, OWASP security guidance, and tools to help migrate older SuiteScript 1.0 code to SuiteScript 2.1.

This comes as developers increasingly use AI coding assistants in their daily work. Stack Overflow’s 2025 Developer Survey found that 84% of respondents were either using or planning to use AI tools in their development process, up from 76% a year earlier.

The tougher challenge for enterprise software vendors is making those tools understand how business applications actually work. For platforms like NetSuite, useful AI assistance requires knowledge of the platform’s own APIs, permission models, UI conventions, and business workflows. In ERP systems, even a small customization error can ripple into core business operations.

Impact and adoption challenges

NetSuite said it is “introducing SuiteCloud development guidance across more than 25 AI coding platforms.” Analysts said this could reduce friction for developers by making NetSuite-specific knowledge available across widely used AI coding tools, rather than limiting it to a single vendor-controlled environment.

“If you can package platform-specific knowledge in a format that drops into any of the major AI coding tools through an open framework, removing a lot of friction, that is great for enterprise developers,” said Neil Shah, VP for research at Counterpoint Research.

However, broader adoption across enterprise software platforms may depend on how ready vendors and customers are to switch from their long-established development practices.

“Enterprises have already invested in systems and personnel to build their applications using their own proprietary approaches,” Shah said. “We will have to see how soon vendors adopt this new approach and whether they are ready to let go of sunk costs and perhaps some personnel.”

In this sense, the technology may be more immediately useful for new applications or for modernization work around legacy systems, rather than for wholesale redevelopment of existing enterprise applications. Cost and governance are other important considerations.

“What the token economics will be as enterprises get up the learning curve remains to be seen, as the initial token burn rate is likely to be significantly higher,” Shah said. “Also, security and risk are big challenges here, as ERP apps are tightly coupled, and one small change in approach that does not work well with the proprietary stack could break downstream workflows and become a disaster.”

That means companies are likely to test such tools cautiously, especially for customizations that touch sensitive data. Shah said that enterprises will have to use this in a sandboxed environment to check for code hallucinations and to see what breaks in terms of business logic, security, or privacy.

  • ✇Security | CIO
  • SAP 2027 deadline for S/4HANA out of reach for most customers
    Migrating to SAP S/4HANA remains a dominant topic in the IT landscape of large corporations. But where do these companies really stand — and which strategies are proving successful? Computerwoche spoke with Holger Scheel, managing director of cbs (Corporate Business Solutions), to get the SAP consultant’s insights into the current state of SAP migrations, the challenges CIOs face in shifting to S/4HANA, and the latest ERP-related trends. Here is that interview, edite
     

SAP 2027 deadline for S/4HANA out of reach for most customers

29 de Abril de 2026, 06:00

Migrating to SAP S/4HANA remains a dominant topic in the IT landscape of large corporations. But where do these companies really stand — and which strategies are proving successful?

Computerwoche spoke with Holger Scheel, managing director of cbs (Corporate Business Solutions), to get the SAP consultant’s insights into the current state of SAP migrations, the challenges CIOs face in shifting to S/4HANA, and the latest ERP-related trends.

Here is that interview, edited for clarity and length.

Computerwoche: How strongly is the topic of S/4HANA transformation currently shaping your project business?

Holger Scheel: It’s a key driver — around 98% of the solutions we implement are based on SAP, with a large number of projects triggered by the need to modernize ERP systems and implement the S/4HANA transformation.

Where do companies currently stand in this transformation?

If we look at our core sectors — large and midsize industrial companies — then I would say: To put it positively, about half of the companies have started the transformation. But very few have completely finished it.

Many companies are facing the SAP deadline of 2027. Is that realistic?

Not for many. Larger companies with numerous ERP systems, in particular, are unlikely to make it to extended maintenance by 2027. The 2030 timeline is a more realistic target.

Which migration strategy currently dominates — brownfield or greenfield?

Neither in its purest form. The classic greenfield approach — developing everything from scratch — has proved successful and feasible for very few companies. But even the purely technical conversion, i.e., brownfield, is not the dominant standard among manufacturing customers.

What is the preferred approach instead?

The majority of companies are pursuing a hybrid approach — a selective combination of innovation and transformation. This is often called “smart brownfield” or “mix & match.”

The reason: A purely brownfield approach costs money and time but delivers no added value. Companies want more than just to stay within the release window — they want to create real added value. They want to improve processes, introduce innovations, and be able to explain to their management why the transformation is worthwhile.

SAP strongly promotes the “clean core” approach. How realistic is that in practice?

You have to look at it in a nuanced way. Clean Core doesn’t mean I only use standard software and all custom developments disappear. It’s more about a new design paradigm. The idea is that the ERP core — i.e., S/4 — remains as close to the standard as possible. Everything a company needs beyond that in terms of customization for its business is then organized externally — for example, via platforms like SAP BTP — or through new development approaches.

Critics argue that excessive standardization jeopardizes competitive advantages. What is your view?

Differentiation remains absolutely crucial. Companies must continue to showcase their specific capabilities. Clean core only changes how this is implemented technically — not the need for differentiation.

We therefore speak more of a “cleaner core”: The cleaner the ERP core, the easier upgrades and the use of new functionality become, and the more agile the company becomes. But it remains an evolutionary process — not a radical break.

Where do companies most underestimate the complexity of S/4HANA projects?

A key point is the initial state. Especially in industry, many companies have ERP systems that have grown organically over 20 years — with correspondingly complex data structures and historically evolved processes. Breaking all of this down and achieving a more standardized, harmonized, and consolidated target state is an enormous task that often has to be mastered in conjunction with the establishment of innovations.

What does this mean specifically for the projects?

The more legacy systems a company has and the less prepared its process, system, and data landscape is, the longer the transformation will take. The path to a future-proof ERP platform then becomes correspondingly more complex.

Many users are currently relying on external AI solutions rather than SAP offerings. Do you see the company under pressure in this regard?

Of course, SAP is under pressure — a software manufacturer has to deliver. But historically, SAP has rarely been a first mover with new technologies. Its strength has always lay in the business context that SAP integrates into its systems.

Looking around, AI is clearly of enormous importance to customers. Companies expect it to give them a competitive edge and increase efficiency. Consequently, many solutions have been implemented outside of SAP in the past.

Looking ahead, the crucial point is that SAP is talking about “Business AI”: How can I effectively utilize the wealth of data from my business processes? And this data resides in the ERP system — the “heart and soul” of the company. SAP is, of course, exceptionally well-positioned here thanks to its Business Suite and ERP-driven architecture.

There is currently a lot of discussion about AI and the future of traditional ERP systems. Is the business model threatened?

That’s a misconception. Many people are overestimating this “ChatGPT moment” and think an ERP foundation is no longer needed. AI systems require a reliable, semantically clean data foundation. And that’s where SAP has a significant advantage with its environments, installed base, and experience with industrial customers.

So you do not believe the business model of ERP providers is under sustained threat.

We believe that the current hysteria will subside and that the crucial importance of a solid ERP backbone architecture will be recognized. It forms the basis for implementing new concepts such as an agent-driven company — that is, a system of action built on a classic transactional system of record, such as ERP.

Is this similar to platform approaches — such as ServiceNow’s — that function as a higher-level platform on which agents work and retrieve the necessary data from various systems?

Exactly. That’s how it will be. This is essentially the agent layer that’s placed on top of the existing systems. SAP is also adding this and integrating corresponding solutions and functions into its architecture.

Ultimately, we’re talking about different levels: the system of record on the one hand and the system of action on the other. The data layer lies in between. SAP addresses this, for example, with its Business Data Cloud, to contextualize data and merge analytical and transactional levels — even across systems and vendors.

What does that mean?

My conviction is that the real value still lies in the data treasure trove of companies — that is, in clearly understood and contextualized business data. This remains the crucial foundation for an “agentic company” to function in the future and ultimately be trustworthy.

AI providers have significantly reduced error rates. Nevertheless, we are probably still far from the 100% reliability that companies need for business-critical decisions.

What AI systems can do today, in many cases, is guesswork, provided it’s technically well implemented — and while you can perfect that further, it ultimately remains guesswork.

The reliability of the semantic foundation is crucial. This requires a stable, trustworthy data core — a foundation that even auditors will accept. And that is precisely the foundation that ERP systems provide.

The ERP system therefore remains the backbone.

I am convinced this foundation remains indispensable. Manufacturing companies in particular have invested heavily in their ERP systems. They will not simply abandon them and replace them with purely agent-based systems. I consider that impossible.

Are you already managing many AI projects in the SAP environment for your customers?

We issued a clear guideline over a year ago: When we develop target scenarios for our clients’ digital transformation in the SAP environment, AI is an integral part of the process from the very beginning. We strive to consistently incorporate it and use it to create as much added value as possible. AI shapes our consulting business; AI shapes the SAP-based solutions of the future. AI is a driving force for us — we are investing heavily and sustainably in this area.

It’s important to understand that the possibilities within SAP itself have only developed gradually. SAP distinguishes between embedded AI and custom AI. Embedded AI is what the standard software already provides. Custom AI, on the other hand, refers to specific solutions developed for individual use cases based on SAP technologies — such as the Business Technology Platform.

In recent years, we’ve primarily worked in the custom AI environment because there simply weren’t that many ready-made features available in the core area. However, that has changed significantly since then.

What role does AI play in your projects today?

The expectations have risen significantly. In our workshops with the specialist departments, the question of AI potential is now a standard part of the discussion. We examine processes and consider: Where are there repetitive tasks, where can automation help, where can complexity — for example in analysis or control — be reduced through AI?

And we are finding there are more and more meaningful areas of application and a growing interest on the part of customers to actively address such topics.

Will AI become a new growth area for you? Currently, your business is primarily driven by S/4HANA migrations. What will happen when this wave subsides?

Eventually, companies will have completed their S/4 migration. For providers who have focused exclusively on technical migrations, this may be a risk. But that’s not how we see the market.

Demand for S/4 will not disappear abruptly. While purely migration-driven projects will decrease in the long term, they will be replaced by follow-up projects.

What do you mean by follow-up projects?

We’re already talking about “post-S/4 transformations.” Many companies, for example, migrated to S/4HANA using a brownfield approach, without fundamentally changing their processes. This means that the real substantive transformation — innovation and further development of business processes — is still to come.

The public perception often underestimates the true extent of a company’s transformation journey. It’s not just about a new system, but about a gradual evolution towards a data-driven organization — ultimately, an “agentic enterprise.”

So it’s a longer process?

Companies systematically expand their capabilities over time — and S/4HANA is more of a starting point than the destination. Today, companies are increasingly pursuing a broader agenda: business transformation, technology transformation, process innovation, and digital transformation.

Will there be enough left for consulting partners if SAP increasingly provides its own functionality?

The essential thing is the question: How do I, as a consultant, bring a company along? How do I design business processes? And how do I convey the path to the future? That will continue to be the core task of consulting — and that’s how we are positioned.

We see very clearly that the field of consulting is not shrinking, but expanding. Complexity is increasing massively. Concepts like the “agentic enterprise” add a new semantic layer. This makes IT and process landscapes even more demanding.

Companies increasingly need support to understand and structure this complexity and derive meaningful business solutions from it. That is precisely where our role lies. Services that are very close to pure implementation — classic development, testing, or configuration tasks — are under greater pressure. Although this has always been the case, AI and SAP’s strategic direction will likely intensify this pressure. Such tasks will become easier to replace and will tend to be in less demand.

  • ✇Security | CIO
  • MET Energía impulsa su transformación digital en un sector cada vez más complejo
    La transformación digital se ha convertido en uno de los grandes motores de cambio en el sector energético. La volatilidad de los mercados, la transición hacia modelos más sostenibles y la creciente complejidad en la gestión del consumo están obligando a las compañías a apoyarse cada vez más en la tecnología para optimizar sus procesos y mejorar su capacidad de adaptación. En este nuevo escenario, el dato se ha convertido en un activo estratégico. Plataformas de analíti
     

MET Energía impulsa su transformación digital en un sector cada vez más complejo

27 de Abril de 2026, 06:53

La transformación digital se ha convertido en uno de los grandes motores de cambio en el sector energético. La volatilidad de los mercados, la transición hacia modelos más sostenibles y la creciente complejidad en la gestión del consumo están obligando a las compañías a apoyarse cada vez más en la tecnología para optimizar sus procesos y mejorar su capacidad de adaptación.

En este nuevo escenario, el dato se ha convertido en un activo estratégico. Plataformas de analítica avanzada, inteligencia artificial o sistemas de monitorización del consumo en tiempo real permiten anticipar la demanda, mejorar la gestión del riesgo y ofrecer a los clientes una visión más precisa y transparente de su consumo energético. Al mismo tiempo, la automatización de procesos internos está ayudando a reducir cargas administrativas, mejorar la eficiencia operativa y acelerar la toma de decisiones.

La digitalización ha dejado de ser una cuestión tecnológica para convertirse en una palanca clave para la competitividad. En este contexto, MET Energía, compañía internacional del sector energético con más de 1.100 empleados y presencia en 17 países, lleva varios años impulsando una estrategia tecnológica orientada a digitalizar procesos, mejorar la gestión del dato y reforzar su capacidad de adaptación a un entorno energético en constante evolución.

La transformación digital en MET Energía se ha abordado desde el principio como un proceso transversal que afecta a distintas áreas de la organización. Según explica Ramón Vicioso, director de TI de MET Energía, este proceso se intensificó especialmente en 2024, cuando la compañía decidió acelerar su hoja de ruta tecnológica con el objetivo de abordar la digitalización de procesos que todavía se realizaban de forma manual y tener así “una visual del roadmap de nuestra infraestructura de IT”.  

Uno de los principales desafíos en esta fase inicial ha sido identificar correctamente los procesos susceptibles de digitalización y establecer una gobernanza adecuada del dato dentro de la organización.

El dato y la automatización como pilares del nuevo modelo

En la actualidad, la estrategia tecnológica de MET Energía se basa en la incorporación progresiva de plataformas digitales integradas, automatización de procesos y herramientas avanzadas de analítica de datos. Según explica el responsable de TI, esta evolución ha permitido mejorar tanto la eficiencia interna de la compañía como la relación con los clientes. “En los últimos años, MET Energía ha experimentado una evolución muy sólida a nivel tecnológico, alineada con la digitalización del sector energético y con un enfoque claro en la eficiencia y el dato”, señala.

Para ello, la compañía ha reforzado especialmente sus capacidades relacionadas con la gestión inteligente de la energía, la previsión de la demanda y la gestión del riesgo en un mercado energético cada vez más volátil.

Esta evolución tecnológica ha tenido también un impacto directo en los clientes, que ahora disponen “mayor transparencia, información más accesible y en tiempo real, procesos más ágiles y soluciones más personalizadas según su perfil de consumo”, explica Vicioso.

Ramón Vicioso, director de TI de MET Energía

MET Energía. En la imagen, Ramón Vicioso.

Inteligencia artificial y analítica para anticipar el consumo

El uso de tecnologías emergentes es otro de los elementos clave dentro del proceso de transformación digital de la compañía. En este sentido, MET Energía está desarrollando aplicaciones basadas en inteligencia artificial y analítica avanzada orientadas a mejorar la previsión de consumo energético y la conciliación de facturas con los datos recibidos de las distribuidoras. “Las aplicaciones que más estamos desarrollando son para la generación de previsiones de consumo y la conciliación de facturas de energía contra los consumos recibidos desde las distintas distribuidoras”, explica Vicioso.

En este proceso, el departamento de TI desempeña un papel estratégico y transversal dentro de la organización. “No se limita únicamente a dar soporte tecnológico, sino que actúa como un facilitador clave del negocio, alineando la tecnología con los objetivos corporativos y acompañando a las distintas áreas en su transformación digital”, señala el responsable tecnológico.

Proyecto Yooz: automatización contable

Uno de los proyectos recientes dentro del proceso de digitalización de MET Energía ha sido la implantación de Yooz, una solución de automatización del proceso de cuentas a pagar integrada con el ERP Microsoft Dynamics NAV.

Antes de su implantación, el departamento contable debía introducir manualmente las facturas de proveedores, un proceso que implicaba una elevada carga administrativa y retrasos en la contabilización. “El objetivo principal es el ahorro del tiempo en nuestro departamento contable ya que reciben muchas facturas en formato PDF y tenían que entrar en cada fichero y teclear manualmente dicha factura en nuestro sistema contable”, explica Vicioso.

La nueva solución ha permitido automatizar la lectura y el procesamiento de facturas, eliminando tareas repetitivas y agilizando el flujo de aprobación.

Uno de los factores decisivos para su implantación fue su compatibilidad nativa con Microsoft Dynamics NAV, el sistema contable de la empresa, lo que permitió una integración directa y sin desarrollos adicionales. Además, su interfaz sencilla e intuitiva facilitó una rápida adopción por parte de los usuarios, mientras que su tecnología de Smart Data Extraction basada en inteligencia artificial eliminó la necesidad de crear plantillas para la lectura de facturas, aportando flexibilidad y precisión.

Los resultados han sido significativos: MET Energía ha logrado acelerar notablemente el procesamiento y validación de facturas, eliminando tareas manuales repetitivas y reduciendo de forma significativa los plazos desde la recepción hasta el pago. Esta automatización ha permitido a los equipos administrativos centrarse en tareas de mayor valor añadido, mejorando tanto la productividad como la satisfacción interna. “Ahora podemos procesar más de 200 facturas en una hora cuando antes nos podía llevar varios días subirlas a nuestro sistema de aprobación de factura y creación en nuestro sistema contable”, señala el responsable tecnológico.

Además de acelerar los procesos, la automatización ha mejorado la trazabilidad y el control del proceso contable, liberando recursos internos para tareas de mayor valor añadido.

Próximos retos: modernizar infraestructuras y reforzar la gobernanza del dato

De cara a los próximos años, la compañía continuará avanzando en la modernización de su infraestructura tecnológica y en el desarrollo de nuevas capacidades digitales.

Entre los retos más inmediatos se encuentra la actualización de algunos sistemas tecnológicos y el refuerzo de la gobernanza del dato, un elemento clave para poder incorporar soluciones basadas en inteligencia artificial. “Modernizar toda nuestra infraestructura que se ha quedado obsoleta y seguir trabajando con la gobernanza del dato para incluir agentes digitales con tecnología IA”, resume Vicioso.

En un sector marcado por la volatilidad de los mercados energéticos, la presión regulatoria y la aceleración tecnológica, la transformación digital se ha convertido en una condición imprescindible para mantener la competitividad. En el caso de MET Energía, este proceso no se plantea como una iniciativa puntual, sino como una evolución permanente que seguirá desarrollándose en los próximos años.

  • ✇Security | CIO
  • Why bizware is becoming the dominant form of software
    Since the early 1950s, software has slowly moved from an obscure technical discipline to something that touches almost every person’s life every day. The transition was gradual at first. Most people didn’t have direct access to computers, but the businesses they interacted with did. Computers sat in back rooms quietly changing how companies handled inventory, accounting and customer relationships. Computing accelerated in the 1980s and 1990s. The computer went from an o
     

Why bizware is becoming the dominant form of software

20 de Abril de 2026, 06:00

Since the early 1950s, software has slowly moved from an obscure technical discipline to something that touches almost every person’s life every day. The transition was gradual at first. Most people didn’t have direct access to computers, but the businesses they interacted with did. Computers sat in back rooms quietly changing how companies handled inventory, accounting and customer relationships.

Computing accelerated in the 1980s and 1990s. The computer went from an obscure machine to something sitting on everyone’s desk and, eventually, their homes. At a minimum, people needed basic computer skills to complete everyday tasks.

Over the last 20 years, computing has evolved even further. It is no longer just a utilitarian tool; it is a fundamental part of daily life. Whether that’s good or bad is debatable, but it’s the reality we live in. And that reality requires massive technological infrastructure. Where businesses once needed buildings, now they also need websites.

To explain what this has done to software, it helps to look at another trade.

A skilled carpenter can build a beautiful mahogany table, cabinet or chair. Some spend decades mastering joinery, shaping, finishing and countless other techniques. With enough experience, they can build almost anything.

But homes are also built out of wood, and homes must be built in enormous quantities. There is massive economic pressure to build them quickly, efficiently and at scale. It would not be practical to build houses the same way master furniture makers build cabinets. The objectives are different. Home construction must happen quickly, with minimal waste, while still meeting building codes and safety standards. It is still carpentry, but it is a different discipline with different constraints.

The same thing has happened with software. The massive economic demand for digital infrastructure has created a new category of software work that operates very differently from traditional software engineering. Standing up the technology required to keep modern society running does not require deep knowledge of computer science or the inner workings of computers. Instead, it requires understanding a large ecosystem of specialized tools that assemble the components businesses need. It is still software, but software shaped by business infrastructure. This isn’t traditional software, but it is still a kind of software.

I call it bizware.

Software has split into two disciplines

This distinction becomes clearer when you look at how teams have transitioned in organizations. Traditional software teams are often organized around deep technical problems: building a compiler, optimizing a database engine or designing a new algorithm. Progress is measured by correctness, performance and innovation.

Bizware teams focus on something different. Most businesses now are not trying to develop software; instead, they need to deploy software to run their business. They are typically organized around business functions: payments, authentication, internal tools, customer dashboards or analytics pipelines. The goal is not to push the boundaries of computing, but to assemble reliable, secure systems quickly using existing components.

This difference in orientation changes how success is measured. In traditional software, elegance and efficiency matter. In bizware, speed, reliability and integration matter more. The system does not need to be perfect; it needs to work consistently and support the business.

Bizware is driven by business infrastructure, not computer science

Many traditional concepts of computer science are not central to bizware. Concepts like Von Neumann architecture, NP-completeness or decidability are rarely relevant. Instead, it is far more important to understand authentication systems, infrastructure tooling, security frameworks and deployment pipelines.

This has created an entire ecosystem of tools that primarily exist to solve business infrastructure problems.

Docker is a good example. Docker solves a deployment problem that businesses face. It does not solve a universal computing problem. Building Docker required deep software expertise, but the people using Docker are leveraging it to solve the business problems that arise from large-scale deployment. The rise of platforms like Docker and Kubernetes reflects this shift toward operational software. These tools exist because companies need consistent environments across development and production.

In the beginning, these tools were hard to use. The computers were slow and the software infrastructure was comparatively primitive. A person had to understand the tools and have a significant traditional software background to effectively and efficiently use the tools. As the tools have matured, the knowledge of traditional software development has become less relevant.

To deploy your website globally, you no longer need to understand what NP-Complete means or the nuances of von Neumann architecture. However, outside of business environments, deployment is rarely a major concern. Students, researchers and hobbyists rarely struggle with deployment the way companies do. In contrast, tools like compilers or interpreters are universal; everyone writing software needs them.

Software has effectively undergone a kind of speciation, and a new, distinct discipline has emerged. Bizware and traditional software engineering require different skill sets. Both are difficult and require significant expertise, but they emphasize different types of knowledge. Being excellent at one does not automatically make one excellent at the other.

That distinction also explains where AI is currently being applied. AI struggles with traditional software development. It is not even close to replacing engineers doing deeply technical traditional software work. For example, if I wanted to design a domain-specific language to describe Kalman filters, AI would be almost useless. That task requires deep understanding across multiple technical fields and the ability to combine them creatively in ways that have never existed before. At the same time, the market for that kind of work is relatively small compared with the need businesses have for bizware. 

Bizware also operates under very different economic pressures than traditional software. Businesses need digital infrastructure at enormous scale. These systems must be built quickly, reliably and repeatedly across thousands of organizations. Because the problems are highly repetitive, automation becomes practical and extremely valuable. AI can often produce a reasonable starting point because the patterns are well-known and widely reused.

This also explains why discussions about AI often become confusing. AI is not impacting all software equally. It is far more effective in domains where problems are repetitive and patterns are well understood.

That aligns closely with bizware.

In contrast, traditional software development often involves creating something fundamentally new. That kind of work still requires deep expertise and cannot be easily automated. I explored a related dynamic in my analysis of why hardware and software development fail, where mismatched assumptions between disciplines create systemic problems. Understanding where AI applies and where it does not becomes much easier once the distinction between bizware and traditional software is clear.

Economic pressure is reshaping how software is built

Further, this scale has created strong incentives to standardize and automate as much of the process as possible. Cloud platforms, infrastructure frameworks, containerization and orchestration systems exist primarily to solve these operational problems.

Traditional software development is different. It focuses on building new computational capabilities: compilers, algorithms, operating systems, simulation tools and domain-specific systems that push the boundaries of what computers can do.

Traditional software development solves software problems. Bizware solves business problems.  As a result, we’ve experienced a speciation of expertise and a separation of disciplines.

Why this distinction matters for companies

This divide helps explain many of the tensions inside modern technology companies. Engineers who excel at one discipline are often assumed to be interchangeable with those in the other, even though the skills and objectives are quite different.

The market for bizware is enormous. Capitalism constantly pushes toward optimization. That force becomes stronger as the market grows larger. We are seeing the same thing in construction. Companies like Reframe Systems are now building robots designed to automate large parts of home construction. The economic pressure to optimize never disappears. While skilled carpentry is still critical, homebuilding has become commoditized.

Bizware isn’t a lesser form of software, just as framing a house isn’t a lesser form of carpentry than building fine furniture. They simply exist to serve different economic needs.

Understanding that distinction clarifies what modern software development has become.

Software hasn’t disappeared. But the industry that once revolved around computer science now also revolves around operating digital infrastructure at enormous scale. For companies, this distinction has practical implications. This is not really a technical distinction. It is an operational one.

Hiring and team organization are focused on keeping the infrastructure running while also keeping it up to date. Before the internet, this used to be the purview of the store managers who needed to keep the store clean and accessible. What used to be physical infrastructure is now digital infrastructure.

Traditional software is not extinct, and it is not dying. If anything, it is more important than ever. However, it can feel that way because the scale of traditional development has been completely eclipsed by the scale of bizware.

This speciation has already happened; I’m just trying to give it a name. That way, people, businesses and organizations can all agree on what they are doing and what they want to do, because confusion around concepts like software and bizware costs money.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

  • ✇Security | CIO
  • 6 ways agentic AI will reshape the enterprise software market
    Microsoft CEO Satya Nadella raised some eyebrows recently when he predicted that traditional business applications will “collapse” in the agentic AI era. Investor concerns that agentic AI could disrupt the enterprise software market came to a head in early February when Anthropic’s release of Cowork — a clear shot across the bow at Microsoft Copilot — triggered a massive selloff in U. S. software company stocks. But is the “SaaS-pocalypse” simply a Wall Street phenome
     

6 ways agentic AI will reshape the enterprise software market

14 de Abril de 2026, 07:01

Microsoft CEO Satya Nadella raised some eyebrows recently when he predicted that traditional business applications will “collapse” in the agentic AI era.

Investor concerns that agentic AI could disrupt the enterprise software market came to a head in early February when Anthropic’s release of Cowork — a clear shot across the bow at Microsoft Copilot — triggered a massive selloff in U. S. software company stocks.

But is the “SaaS-pocalypse” simply a Wall Street phenomenon or does it impact enterprise CIOs who have invested millions into enterprise software systems that run mission-critical business processes? Is the “SaaS is dead” rhetoric real or overblown?

There’s no question that agentic AI is a game changer, but maybe in unexpected ways. Here’s what expert industry observers are saying.

Enterprise incumbents will gain early advantages

Industry outlook: Incumbent market leaders will likely retain much of their dominance for the foreseeable future by embedding agents into their platforms.

In terms of the fate of the software market, Forrester analyst Kate Leggett says, “There’s investor and company valuations and then there’s the reality of what happens in large enterprises, as well as the timeline that these changes will happen.”

“Core applications are not going to go away anytime soon,” she tells CIO, although there will be erosion around the edges. In terms of timeline, Leggett says, “It could take decades for these workloads to be completely taken over by AI agents.”

Industry consultant William Flaiz adds, “Executive-level decisions are not being made to rip out CRM systems.” However, enterprise CIOs are highly incentivized to add agentic AI to existing platforms to squeeze more value from their current multimillion-dollar investments. “They’re looking to see what they can do better using the tools they have available,” he says.

“There’s a lot of black-and-white thinking about the level of disruption that these incumbents are facing,” says Alex Demeule, senior analyst at Technology Business Review Inc. (TBRI). “Obviously AI is going to have a big impact on software vendors. But when we look at what the future holds, framing it in a 5- to 10-year span, they’re positioned a lot better to make the pivot into the AI era compared to what we’re seeing in the stock price.”

Demeule adds, “When talking about large enterprises, I still think the risk of handing off autonomy to an agentic system is not really there yet.” His prediction is that agentic AI will be rolled out slowly and that humans will remain in the loop for many years to come.

Agentic AI will transform pricing models

Industry outlook: Agentic AI will trigger a seismic shift from subscription-based pricing to consumption- or outcome-based pricing.

Dana Gardner, president and principal analyst at Interarbor Solutions, says, “The short to medium-term concern is less about ripping and replacing systems of record, and more about the end of the current level of pricing power from these vendors.”

He adds, “Savvy CIOs will be leveraging AI to reduce the total cost of IT.” AI agents have the capacity to understand consumption and usage patterns of business applications and CIOs will be able to turn those insights into more favorable contracts, says Gardner.

Bain & Co., in a report on the impact of AI on the SaaS market, says, “If an agent replaces a human task, customers will expect to pay based on outcomes, not log-ons. Leaders, such as Intercom and Salesforce, are already shifting in this direction. The fundamental shift is to stop charging for access and start charging for work done.”

And IDC, in its FutureScape: Worldwide Agentic AI 2026 Predictions report, says that by 2028, pure seat-based pricing will be obsolete, with 70% of software vendors refactoring their pricing strategies around new value metrics, such as consumption, outcomes, or organizational capability.

Forrester’s Leggett says this shift away from subscription pricing could play out in a number of ways. For example, a CIO with a subscription for 100 seats could swap 10 or 20 of those seats for consumption-based or outcome-based pricing. Vendors will likely offer different licensing tiers or flex options that include some type of agentic-based pricing. Or consultants could come in and offer to manage agentic AI implementations and charge a percentage of the outcomes.

Software platforms will merge, creating new rivalries

Industry outlook: Because AI agents don’t care where data comes from, the lines between traditional enterprise software categories like CRM and ERP will blur.

To be effective, AI agents need access to data no matter where it resides. SaaS vendors recognize this imperative and are breaking down the lines of demarcation between CRM, ERP, IT service management, and other categories.

Leggett points out that vendors such as Oracle and Microsoft are building unified data platforms that integrate with the open-source Model Context Protocol (MCP) to support complex AI-based workflows.

Oracle offers an integrated suite of cloud-based ERP and CRM applications along with a fully managed agentic platform. Microsoft offers both ERP and CRM functionality under its Dynamics365 umbrella, as well as industry-specific agentic offerings based on small language models (SLMs), which are more lightweight and cost-effective than LLMs.

SAP is integrating its Signavio business process management suite, its LeanIX SaaS tool for enterprise architecture management, and its Joule AI agent into one cohesive system.

Salesforce is merging its Mulesoft integration and automation platform-as-a-service offering with its Data360 customer data platform and its Agentforce AI platform.

IT service management powerhouse (ITSM) ServiceNow recently completed its acquisition of agentic AI platform vendor Moveworks and is challenging Salesforce in CRM.

Winners and losers

Industry outlook: Agentic AI will have a major impact on point product vendors; those with a generic app will struggle; those with a domain-specific tool are better positioned to survive.

Forrester’s Leggett divides the enterprise software market into three segments. She says that simple point products such as workflow, spreadsheet, or lightweight project management apps “are going to go away in fairly short order,” because they are easy to replicate and don’t offer differentiation.

Highly verticalized apps are more insulated from disruption because they deliver deep domain expertise and integrations with adjacent systems such as CAD or medical imaging. Examples include Epic and Cerner for electronic healthcare record (EHR) management, IQVIA for pharmaceuticals and life sciences, or Procore in construction.

The major CRM platform players have built-in advantages, including a moat around their data, industry-specific knowledge and workflows, deep partner networks, industry best practices as well as expertise in areas such as regulatory compliance, according to Leggett.

TBRI’s Demeule also points out that these incumbent vendors have survived precisely because they have been able to successfully pivot each time a potential disruption occurred, whether that’s moving from on-prem to the cloud, or shifting from perpetual licenses to subscriptions.

Vibe coding to disrupt certain segments

Industry outlook: Vibe coding could disrupt SaaS vendor dominance, empowering end users to spin up their own agents.

Vibe coding, the use of AI agents to write software based on simple natural language prompts, takes the low-code, no-code movement to another level.

With vibe coding, end users can ask OpenAI ChatGPT, Google Gemini, Anthropic Claude, Cursor Chat, GitHub Copilot, or others to build a productivity app outside the boundaries of a traditional CRM or ERP platform, for example.

Leggett says vibe coding is a legitimate threat because it can potentially enable workers to be more productive, while sidestepping traditional enterprise software platforms, which for many end users are seen as bloated and overly complicated.

On the other hand, organizations that are not technologically sophisticated might not have the skills or the confidence to build and deploy their own agents that impact mission-critical workflows.

“Vibe coding in terms of a point solution is something we see as being disruptive,” says Demeule. “But it’s the ability to build point solutions that’s more at risk of being disrupted versus something managing an entire customer database or supply chain database.”

The agentic orchestration layer emerges

Industry outlook: Traditional SaaS applications will still exist, but they will likely be hidden behind an agentic orchestration layer.

Analysts agree that the user interface of the future will not be the traditional SaaS platform; it will be agentic. That doesn’t mean the underlying CRM or ERP will go away, but it will be hidden.

IDC analyst Bo Lykkegaard says, “Complexity is the Achilles’ heel of the SaaS model. Each SaaS application demands its own learning curve and user interface, often used sporadically and inefficiently. AI offers a compelling remedy. Instead of navigating multiple dashboards, users could interact with agent-driven, conversational interfaces that perform tasks across systems. The result? AI as the new interface layer, which is one that abstracts away complexity, automates repetitive processes, and redefines how enterprises consume software.”

TBRI’s Demeule envisions an orchestration agent that assigns tasks to an LLM, an SLM, or a robotic process automation (RPA) tool, based on which approach is more appropriate in terms of efficiency, as well as other considerations such as cost and energy usage.

The question that will play out over the next several years is whether enterprise CIOs obtain that functionality from their current providers or from disruptors like OpenAI, Anthropic, Palantir, UiPath, or others.

  • ✇Security | CIO
  • ServiceNow embeds AI across the platform with Context Engine
    ServiceNow is rolling out a broad set of platform updates designed to bake AI, data, security, and governance into every part of its stack. At the center of the move is a new Context Engine, which pulls together enterprise data, policies, and decision history to give AI-driven workflows a shared understanding of how the business operates and how decisions are made. “Context Engine is ServiceNow’s new enterprise context solution that captures the why behind decisions, not
     

ServiceNow embeds AI across the platform with Context Engine

9 de Abril de 2026, 10:00

ServiceNow is rolling out a broad set of platform updates designed to bake AI, data, security, and governance into every part of its stack. At the center of the move is a new Context Engine, which pulls together enterprise data, policies, and decision history to give AI-driven workflows a shared understanding of how the business operates and how decisions are made.

“Context Engine is ServiceNow’s new enterprise context solution that captures the why behind decisions, not just the what. Every AI interaction, resolution, approval, and escalation builds a richer understanding of how the business actually operates,” John Aisien, SVP of product management, security, and risk at ServiceNow, told CIO.com.

“It’s grounded in ServiceNow’s incumbent Service Graph and Knowledge Graph, with incremental AI decision tracing that captures decision context persistently, powered by our recent acquisition of Traceloop. It will also be enriched by IP that we acquired from Veza & IP that we announced plans to acquire from Armis,” Aisien added.

Expanding on how it works in practice, Aisien further pointed out that the Context Engine is not a standalone interface, but an embedded capability woven into the broader ServiceNow platform and “surfaces” through the experiences and workflows customers already use, and via published, public interfaces that the ITSM software provider will expose to customers and partners over time.

From ITSM platform to enterprise AI control plane

That embedded design, analysts say, also signals a broader ambition for ServiceNow.

“Service Now is seeking to move beyond the modular capability provider label and to move up the tech stack to be adopted as the enterprise AI operating layer, in turn owning the orchestration and control layer for enterprise AI, while remaining model agnostic beneath the surface,” said Scott Bickley, advisory fellow at Info-Tech Research Group.

However, Bickley pointed out that many vendors are pursuing a similar approach, not just ServiceNow: “Most enterprise scale SaaS providers are feverishly incorporating AI functionality horizontally across their solutions, all seeking to lock their customers in leveraging their own flavors of data fabric and context layers.”

“This can be seen with Salesforce’s AgentForce, Microsoft 365 Copilot, etc.  It is clear to these vendors that customers will want a single execution layer where possible, and their future depends on being the solution of choice,” Bickley added.

Vendor convergence raises stakes for CIOs

That convergence in strategy from multiple vendors, though, comes with trade-offs for CIOs.

According to Bickley, these updates from ServiceNow, akin to newer offerings from other vendors, will force enterprises to decide whether they want ServiceNow to be the control plane for enterprise automation.

“If not, ServiceNow offerings can become one expensive set of modules with built-in AI shelfware and the price to go with it,” Bickley said.

For enterprises embracing ServiceNow and its recent approach, the trade-off will be an “implied dependence upon ServiceNow’s data model, governance model, and platform architecture”, Bickley added.

The Context Engine is currently in preview with select customers, with broader availability details yet to be disclosed.

Build Agent Skills aims to widen developer appeal

Alongside Context Engine, ServiceNow also introduced Build Agent Skills and an SDK that Aisien said will allow developers to use external tools, such as Claude Code, Cursor, or OpenAI Codex, and integrated development environments (IDEs) to build applications and deploy them directly to the ServiceNow platform, while keeping governance and execution within its environment.

For developers, Stephanie Walter, practice leader of AI stack at HyperFRAME Research, said the skills and the SDK will add flexibility and speed.

“Instead of forcing teams into a single development environment, ServiceNow is letting them build with familiar tools and still deploy into a governed enterprise platform. That makes development faster, but it also means ServiceNow becomes the place where those applications ultimately run and are managed,” Walter said.

For Bickley, the skills and SDK are ServiceNow’s strategy to appeal more broadly to platform engineering teams compared to its older approach, where it tried to appeal to the legacy ServiceNow admin ecosystem and low-code user base.

However, the analyst cautioned against the Build Agent skills offering, saying that ServiceNow was not being very forthcoming about its details, especially its pricing.

When asked about more details on skills, Aisen told CIO.com that enterprise customers will receive 100 free Build Agent calls to start with, and developers on personal instances will get 25.

“Beyond that on-ramp, full pricing details are not being disclosed,” Aisen said.

These scant details, according to Bickley, are precisely what introduce cost unpredictability as ServiceNow doesn’t seem to explain how much or what can be accomplished with 100 Build Agent calls.

In fact, Bickley further warned that CIOs should adopt a FinOps mentality if they seek to avoid drowning in the emerging consumption model ocean as Build Agent calls isn’t the only area that will draw charges.

For CIOs, the bigger concern is how multiple consumption-based charges could stack up across Build Agent usage, Workflow Data Fabric, and core platform licenses, creating a layered cost structure rather than a single subscription, Bickley said.

Without clear visibility into how these meters interact, enterprises risk unexpected spending spikes as AI adoption scales, making it harder for CIOs to track what’s driving costs and tie that spend back to business value, Bickley added.

ESM Foundation to ease adoption

Against this backdrop of potential cost complexity, ServiceNow is also trying to simplify adoption for midsized enterprises, at the packaging level, with the introduction of its ESM Foundation offering, which bundles core enterprise capabilities into a more streamlined, faster-to-deploy package.

For Bickley, the packaging reduces confusion for midsized companies trying to choose between its varied offerings and shortens their time to deployment.

Separately, the analyst pointed out that ESM Foundation is also ServiceNow’s strategy to compete with smaller vendors, such as Atlassian.

“If ServiceNow can credibly offer faster deployment for midsize companies while including AI, governance, and workflow executing into the platform price, they can attack the complexity and time to value obstacles their competitors have wielded against them,” Bickley said.

The ESM Foundation package and the company’s new tiered packaging model are now generally available to customers. Meanwhile, the Build Agent Skills are set to become available to developers starting April 15.

  • ✇Security | CIO
  • The vibe coding crisis: Why you need a dual-track engineering strategy
    If you scroll through your professional feeds or check your inbox this week, you are guaranteed to see the phrase “vibe coding.” Instead of writing code, your product managers can just chat with a coding agent and prompt a fully deployed app into existence. I just read the market-tanking prediction from Citrini Research arguing that AI is on the verge of coding entire SaaS products completely on its own. LLM vendors and YC startups are aggressively selling this exact id
     

The vibe coding crisis: Why you need a dual-track engineering strategy

9 de Abril de 2026, 06:00

If you scroll through your professional feeds or check your inbox this week, you are guaranteed to see the phrase “vibe coding.”

Instead of writing code, your product managers can just chat with a coding agent and prompt a fully deployed app into existence. I just read the market-tanking prediction from Citrini Research arguing that AI is on the verge of coding entire SaaS products completely on its own. LLM vendors and YC startups are aggressively selling this exact idea that anyone can build complex software in an afternoon simply by describing their desired features.

But from where I sit, this unchecked acceleration is an absolute disaster. AI today might be able to generate the superficial shell of a SaaS app, but it is still far away from having the engineering rigor required to build something reliable enough to become part of our digital infrastructure.

While this conversational approach makes it incredibly easy to scaffold apps, it is quietly creating a massive crisis in enterprise security and technical debt. We are abandoning disciplined software engineering and replacing it with a culture of probabilistic guesswork. If we don’t course-correct, we are going to expose ourselves to catastrophic risk.

The rise of unsanitized agents

The risks multiply when we transition from AI that just generates new content to AI that takes action. Over the past few months, we’ve seen an explosion of unsanitized agentic systems. The most popular is an open-source project called OpenClaw (formerly Moltbot/Clawdbot). Unlike a regular chatbot, this thing has the ability to independently execute actions on a machine—sending files, running programs, making outside connections.

I recently deployed OpenClaw to a sandboxed environment just to see what the fuss was about. I found a bloated mess of features, but the most basic functions, such as Telegram streaming, didn’t even work. I tried consulting their documentation, but it was clearly just a wall of AI-generated, high-entropy and low-variance text that told me absolutely nothing useful. To make matters worse, the project changed its name twice in a row without providing a single migration guide for how to move to the new binaries. If a traditional piece of software shipped like this, we would deem it completely unacceptable. But because it’s an AI that theoretically can do a lot of things on paper, people tolerate it.

They do look incredible in YouTube demos. But deploying unsanitized, non-deterministic agents with root access to local environments is a massive security regression. You are effectively taking decades of strict Identity and Access Management (IAM) protocols and tossing them in the trash.

Consider the “lethal trifecta” these agents represent. First, they hold persistent privileged access. Second, they continuously read untrusted external data like incoming emails or Slack messages. Third, they have unrestricted communication with the outside world. If an attacker sends an email with a hidden prompt injection, the agent doesn’t verify it and might just silently leak your local SSH keys!

The “works on my machine” problem at scale

The crisis goes beyond deviant agents. It infects how we build our entire software supply chain. When developers prioritize speed over deep understanding, they start building infrastructure based on luck.

Right now, my team is fighting a novel threat vector known as “slopsquatting.” It is also known as AI package hallucination. AI models do not query a deterministic database of facts. They predict the next most likely word. Because of this, they frequently invent software package names that sound perfectly plausible but do not actually exist.

Here’s how the attack works: malicious actors register these hallucinated packages on public repositories and inject them with malware. The coding agent suggests the fake package and blindly installs it. From the vibe coder’s perspective, the AI’s code works without throwing any warnings and the installed package seems legit. But under the hood, they just handed root access to a cybercriminal.

This blind trust also destroys our internal quality assurance. A big part of the vibe coding promise is that the AI will write the feature and then the unit tests to validate it.

I recently reviewed a pull request for a new internal routing microservice. 100% test coverage. The CI pipeline showed a beautiful sea of green checkmarks. But then I actually read the code. I found what my co-founder and I now call “cardboard muffins.”

The AI hadn’t written tests to verify the underlying business logic. It completely ignored the edge cases. It simply hardcoded the exact return values needed to satisfy the assertions. Its only goal was to make the deployment pipeline pass.

When 80% of a codebase is generated by an AI that hallucinates dependencies and fakes unit tests just to get a green checkmark, you haven’t built software. You’ve built a house of cards. Scaling this kind of code takes the old “works on my machine” problem and turns it into an enterprise-wide disaster.

I firmly believe that the new luxury in software development won’t be the sheer speed of feature rollouts. The new luxury will be old-fashioned, boring determinism.

The dual-track strategy

We cannot afford to ban generative AI. The capability for rapid innovation and market testing is simply too valuable. But we absolutely cannot let probabilistic vibe coding dictate the architecture of our production systems.

To fix this, CIOs can promote a “dual-track” development lifecycle. This strategy separates rapid exploration from rigorous production engineering.

Track 1 (the fast track)

This is the domain of unconstrained discovery. In Track 1, vibe coding is explicitly permitted and heavily encouraged. If a product manager wants to use an autonomous agent to scaffold a prototype in an afternoon, let them do it. The core metric here is speed to feedback. We want to validate business ideas and test user interfaces as cheaply and quickly as possible.

But there is a massive catch. Track 1 development must occur in heavily sandboxed environments. These vibe-coded applications are disposable blueprints. They are never permitted to touch production data, customer PII or mission-critical corporate networks.

Track 2 (the slow track)

Once a prototype in Track 1 proves its business value, the project moves to Track 2. This is the domain of real software engineering.

The mandate here is simple but painful: Start over. Do not attempt to refactor, salvage or clean up the vibe code. Rewrite it from the ground up.

In Track 2, human engineers take the lead. They use the Track 1 prototype merely as a visual reference. They build secure and scalable architectures. This track prioritizes deterministic security guarantees, strict type safety and rigorous human peer review. AI tools are still used, but they are demoted from being autonomous creators to highly restricted assistants. Every dependency is verified against established security frameworks and every unit test is manually reviewed to ensure we aren’t baking cardboard muffins into our core product.

A big cultural shift

Implementing a dual-track strategy requires a big cultural shift. This is especially true when managing executive expectations. It hinges on one non-negotiable directive: never base the timeline of the slow track on the velocity of the fast track.

It’s going to be a tough conversation with your business stakeholders. When they see a seemingly functional, vibe-coded prototype spun up over a single weekend, it’s natural for them to assume the final product can be finished if given one more week. But enforcing this boundary is exactly how we ensure the business becomes a benefactor of AI coding, rather than its next victim.

AI is an incredible force multiplier for innovation. But it is not a substitute for architectural foresight. By embracing a dual-track strategy, we can give our teams the freedom to experiment at the speed of thought while protecting the deterministic rigor that keeps our digital infrastructure running.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

❌
❌