For CIOs managing enterprise software estates, this narrative doesn’t fully capture the complexity of their reality.
I’ve watched clients become captivated by the vibe coding promise. They see demos where AI generates a working prototype in minutes. They imagine their legacy modernization problems solved. Then they try applying these tools to a 25-year-old mainframe application processing millions of transactions daily and discover why speed alone doesn’t solve enterprise problems.
The gap between prototyping a new app and modernizing critical infrastructure isn’t about coding velocity. It’s about preserving decades of undocumented business logic while simultaneously transforming the technical foundation beneath it. That requires a fundamentally different approach than telling AI to “build me a customer portal.”
Dotun Opasina
What vibe coding solves (and what it doesn’t)
Vibe coding — using natural language to prompt AI into generating code — has legitimate enterprise applications. A product manager can validate an idea without engineering resources. A business analyst can prototype a workflow automation without waiting for sprint capacity. A marketing team can build internal tools without IT tickets.
These are real productivity gains. When Sundar Pichai says vibe coding has “made coding so much more enjoyable,” he’s describing how AI removes friction from exploration and experimentation. The barrier between “I wish we had this” and “here’s a working version” has essentially collapsed.
But enterprise modernization isn’t exploration. It’s surgery on mission-critical systems where the patient can’t be sedated.
Consider the typical enterprise modernization scenario I encounter: A leading health care organization needed to modernize 10,000+ COBOL mainframe screens to improve claims processing and customer service. These systems were built before most current developers were born. The original architects retired years ago. Documentation is incomplete or contradictory. Business rules are embedded in code that nobody fully understands anymore.
Vibe coding tools can generate modern code quickly. What they can’t do is tell you whether that code implements the same business logic as the legacy system — logic that represents decades of regulatory compliance decisions, edge case handling and institutional knowledge that was never written down.
This is where the “vibe coding hangover” hits enterprise IT. Fast code generation creates new problems when applied to complex, tightly coupled systems.
The specification problem nobody talks about
Here’s the uncomfortable truth about AI-assisted development: AI generates perfect code for poorly defined problems.
I’ve seen this pattern repeatedly in client work. Teams use AI to accelerate development. Code gets written faster than ever. Then they discover the code solves the wrong problem because the requirements weren’t clear enough to begin with.
For greenfield projects building something new, you can iterate quickly. Wrong assumption? Rewrite it. Missed a requirement? Add it next sprint. The cost of mistakes is measured in developer time and missed deadlines.
For legacy modernization, mistakes compound differently. You’re not just building new functionality. You’re replacing systems that process payroll, manage inventory, handle financial transactions, route customer service calls — critical operations where “oops, we missed a business rule” isn’t acceptable.
Traditional modernization approaches tried to solve this through massive requirements-gathering efforts. Armies of business analysts documenting every screen, every workflow, every edge case. These projects took years and often failed because by the time you finished documenting, the business had evolved.
The enterprise-grade AI approach inserts a different layer: specification extraction.
Rather than jumping from legacy code to modern code, systems that work at enterprise scale first extract what the legacy system does — the business rules, the dependencies, the logic flow — into a clear specification. That specification becomes the source of truth for generating modern code. It’s verifiable. It’s traceable. It preserves institutional knowledge that exists nowhere else.
At Publicis Sapient, our proprietary AI platform Sapient Slingshot embodies this specification-first approach. When RWE needed to modernize a 24-year-old application with no source code or documentation, the platform analyzed the running system to extract business logic before generating replacement code. What would have taken two weeks of manual reverse-engineering happened in two days, with human oversight ensuring accuracy.
This isn’t about speed. It’s about preserving what works while transforming how it runs.
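To make the specification-first idea concrete, here is a minimal, purely illustrative sketch in Python. The rule ID, claim types and `verify` helper are all invented for illustration — real platforms are far more sophisticated — but the principle is the same: the extracted specification, not the legacy code, is what generated code gets verified against.

```python
from dataclasses import dataclass

# Purely illustrative: rule IDs, claim types and helper names are invented.
@dataclass
class BusinessRule:
    rule_id: str       # traceable identifier, so every generated change can be audited
    description: str   # plain-language statement of the extracted legacy logic
    cases: list        # (input, expected_output) pairs observed in the legacy system

def verify(rule: BusinessRule, modern_impl) -> bool:
    # The extracted spec, not the legacy code, is the source of truth.
    return all(modern_impl(inp) == expected for inp, expected in rule.cases)

# A rule recovered (hypothetically) from legacy claims-processing code:
copay_rule = BusinessRule(
    rule_id="CLAIMS-017",
    description="Waive the copay for preventive-care claim types",
    cases=[("PREVENTIVE", 0.0), ("STANDARD", 25.0)],
)

def modern_copay(claim_type: str) -> float:
    # Candidate modern implementation, checked against the spec before it ships.
    return 0.0 if claim_type == "PREVENTIVE" else 25.0

assert verify(copay_rule, modern_copay)
```

Because the specification is explicit and executable, a replacement that silently drops an edge case fails verification instead of failing in production.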
Why enterprise context changes everything
The difference between prototyping and production isn’t just scale. It’s context.
Vibe coding tools work well for isolated problems. Build a dashboard. Generate a data transformation script. Create an internal tool. These tasks have clear boundaries and limited dependencies.
Enterprise systems don’t have clear boundaries. A seemingly simple change to how customer addresses are validated might cascade through order processing, shipping logistics, tax calculation, fraud detection and customer service routing. Understanding those dependencies requires context that exists across thousands of files, dozens of databases and years of incremental changes.
This is where general-purpose AI coding assistants hit their limits. They can read individual files. They can suggest code completions. They can even generate multi-file changes. What they can’t do is understand how your 15-year-old inventory management system integrates with your 10-year-old order fulfillment platform, which talks to your 5-year-old customer service tool — and why changing one piece breaks another.
Enterprise-grade AI modernization requires building an Enterprise Context Graph — a living map of how code, architecture, data and business rules connect. This context allows AI to make informed decisions about modernization, not just fast guesses.
When a health care organization used this approach to modernize critical legacy systems, the platform identified hidden dependencies that would have caused production failures if missed. The AI didn’t just generate modern code faster. It generated modern code that worked in the complex environment where it needed to run.
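As an illustration of the idea — the component names and dependency map below are invented, not any real platform’s API — a context graph can answer the cascade question from the address-validation example above:

```python
# Invented component names and dependency map, for illustration only.
DEPENDS_ON = {
    "order_processing": ["address_validation"],
    "shipping":         ["address_validation", "order_processing"],
    "tax_calculation":  ["address_validation"],
    "fraud_detection":  ["order_processing"],
}

def impacted_by(component: str) -> set:
    """Everything that transitively depends on `component`."""
    hit, frontier = set(), {component}
    while frontier:
        # Walk the graph outward one layer at a time.
        frontier = {c for c, deps in DEPENDS_ON.items()
                    if any(d in frontier for d in deps)} - hit
        hit |= frontier
    return hit

# A change to address validation cascades far beyond its own module:
print(sorted(impacted_by("address_validation")))
# → ['fraud_detection', 'order_processing', 'shipping', 'tax_calculation']
```

A real enterprise context graph spans thousands of components, data flows and business rules, but the query is the same: before changing anything, enumerate everything the change can reach.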
What this means for CIO technology strategy
The vibe coding phenomenon signals something important: AI is changing how software gets built. But for enterprise leaders, the strategic question isn’t “Can AI write code faster?” It’s “Can AI help us escape decades of technical debt while keeping critical systems running?”
The answer is yes — but only with the right approach.
Stop optimizing for coding speed. Your constraint isn’t how fast developers can write code. It’s how accurately you can understand and preserve business logic while modernizing the technical foundation. Tools that prioritize speed over comprehension will create more problems than they solve.
Start measuring specification accuracy. The new productivity metric isn’t lines of code generated. It’s code-to-spec accuracy — how reliably the generated code implements verified business requirements. Platforms achieving 99% code-to-spec accuracy enable modernization projects that were previously too risky to attempt.
Treat institutional knowledge as a strategic asset. Your legacy systems contain decades of business logic that represents real competitive advantage — edge cases handled, regulatory requirements met, customer workflows optimized. Modernization approaches that discard this knowledge to move faster are destroying value in the name of speed.
Invest in context preservation, not just code generation. The winners in enterprise AI adoption won’t be organizations that generate code fastest. They’ll be organizations that can systematically extract, verify and modernize business logic at scale.
The modernization opportunity hiding in plain sight
Here’s what makes March 2026 different from March 2024: We now have AI systems capable of reading legacy code, extracting business rules and generating verified modern replacements at enterprise scale. The technology matured.
According to the Stanford AI Index 2025, 78% of organizations used AI in 2024, up from 55% in 2023. But adoption and effectiveness are different metrics. Most organizations are still experimenting with AI tools for individual developer productivity.
Consider the typical enterprise IT budget: 60-80% goes to maintaining legacy systems. That maintenance cost compounds annually as skills become scarcer and systems become more brittle. Every dollar spent keeping COBOL running is a dollar not spent on innovation.
Vibe coding tools won’t solve this. They’re built for creation, not preservation. Enterprise modernization requires AI that understands what you have before transforming it into what you need.
Organizations applying this approach are seeing 75% faster delivery timelines, 40% higher productivity and up to 50% savings in modernization costs. More importantly, they’re tackling modernization projects that were previously shelved as too risky or expensive to attempt.
The specification-first future
The vibe coding phenomenon will continue to accelerate. More business users will build tools. More prototypes will become products. More organizations will democratize software creation beyond traditional engineering teams.
For CIOs, this creates both opportunity and risk.
The opportunity: Free your engineering teams from routine development by enabling business users to build their own solutions. The risk: Create a fragmented estate of AI-generated tools that nobody can maintain.
The solution requires treating AI-assisted development as a spectrum. Prototypes and internal tools can embrace the speed and accessibility of vibe coding. Mission-critical systems and legacy modernization need specification-first approaches that prioritize accuracy and traceability over velocity.
Your competitors are experimenting with AI coding tools. The question is whether they’re building sustainable transformation capabilities or accumulating a new generation of technical debt at AI speed.
The CIOs who understand this distinction will spend 2026 systematically eliminating legacy constraints, while others remain focused on incrementally improving existing systems. By 2027, that gap will be difficult to close. Vibe coding democratized software creation. Enterprise-grade AI makes transformation predictable. Choose your tools accordingly.
This article is published as part of the Foundry Expert Contributor Network. Want to join?
The next wave of AI in software development goes beyond better code generation: agents are starting to take accountability throughout planning, design, build, test, release and operations. In the teams I work with, this is already changing team dynamics, leadership priorities and what CIOs must do to maintain quality, security and control.
The biggest shift I see is genuine delegation: AI can now draft backlog items, inspect codebases, propose implementation paths, create tests, summarize reviews and prep releases before teams fully agree on ‘done.’ This marks a shift from AI as an assistant to AI as an active participant. That is why this topic matters for CIOs right now. With Google I/O on May 19–20 and Microsoft Build on June 2–3, attention will continue to rise around AI coding models, agentic development workflows and the platforms that now span planning through operations. Microsoft and GitHub are embedding agents more deeply into the engineering workflow.
Gemini Code Assist, GitHub Copilot’s coding agent, OpenAI Codex and Claude Code all reflect the same direction: AI is beginning to participate across planning, building, testing, reviewing and operations, not just within the editor. Google is extending coding assistance into broader lifecycle support. Amazon is leaning into operationalization. OpenAI and Anthropic are pushing agentic coding and repository reasoning. Newer prompt-to-app platforms such as Lovable and Replit are compressing the path from idea to working application. The market signal is clear: AI is moving beyond code suggestion and into software delivery itself.
For business and technology executives, the strategic question is no longer whether AI can generate output. It is whether the organization can use AI to improve delivery without creating faster paths to weak requirements, inconsistent standards, poor testing and vague governance. That is why I frame this conversation around software delivery rather than relying too heavily on the older SDLC label. SDLC still makes sense, but it sounds procedural for what is actually happening. Agentic AI is not just accelerating tasks inside a fixed lifecycle. It is rewiring the operating model of delivery. Recent DORA research reinforces what I see in practice: AI tends to amplify an organization’s existing strengths and weaknesses and the biggest returns come not from the tool alone, but from improving the delivery system around it.
Where agentic AI is creating the most value
The first place CIOs should focus is where agentic AI is creating measurable value across the lifecycle. In planning and requirements, AI can already do meaningful first-pass work. Teams can ask it to inspect an existing codebase, summarize dependencies, suggest implementation paths, draft user stories, refine acceptance criteria and surface tradeoffs before engineers begin building. Used well, that reduces administrative drag and improves consistency. It also changes where the bottleneck appears. What I see most often is that teams adopt agentic tools expecting a boost, but the first real bottleneck appears upstream, where acceptance criteria are too loose for the agent to interpret safely. The teams that struggle most are not the ones with weak prompts. They are the ones with vague intent. AI amplifies ambiguity as efficiently as it amplifies insight. OpenAI’s guidance for AI-native engineering teams describes agents contributing to scoping, ticket creation and other lifecycle work well before code is merged.
A practical model of agentic AI across the software delivery lifecycle
Vipin Jain
In architecture and design, the real gain is not that AI can produce more diagrams. It can help teams compare options faster, trace dependencies, expose inconsistencies and document decisions with less manual effort. But architecture is not just pattern matching. It is a judgment about resilience, security, compliance, integration, cost and long-term business fit. The strongest teams use AI to explore options while architects define the guardrails, review points and non-functional requirements that the system must adhere to. In an agentic environment, architecture becomes more important, not less, because someone still has to define what the system is allowed to do. What I see in the strongest teams also matches Anthropic’s experience: simpler, well-bounded agent patterns usually outperform elaborate multi-agent complexity when the goal is reliable software delivery.
Build, test and review are changing even faster. GitHub Copilot’s coding agent, Claude Code, Amazon Q Developer, OpenAI Codex and Google’s broader agentic tooling all point in the same direction: the market is moving from AI-assisted coding to AI-assisted flow. In practice, that means agents can decompose work, generate code, create tests, run checks, summarize failures and prepare work for human review. The important metric is no longer lines of code per developer. It is the amount of safe, reviewable work the team can move through the pipeline without increasing rework. That is a more executive-relevant measure because it ties AI to throughput and quality rather than just speed. Benchmarks such as SWE-bench matter here because they test models against real repository-level software tasks, rather than isolated code snippets, which is much closer to the work CIOs are actually trying to improve.
Deployment, operations and maintenance are where the stakes become highest for the enterprise. This is the point that many organizations underestimate. Writing code is visible. Governing agent behavior in production is harder, less glamorous and much more important. In the teams I see gaining the most value, leaders are using AI to support release readiness, detect anomalies, summarize incidents, draft remediation steps and improve documentation around recurring issues. I have also seen teams pilot agents successfully in build, then stall at release because no one had clearly defined what the agent could change on its own, what required approval or who owned rollback when something went wrong. The organizations that make progress are the ones that answer those questions early. That is where trust is built. That is also why the market is shifting toward governed runtime and operations support, not just coding help; Amazon Bedrock AgentCore is one example of that broader move toward secure deployment, monitoring and controlled agent operation at scale.
How roles and teams are evolving
Agentic AI changes agile teams by shifting what roles contribute. Developers spend less time on first drafts and more time steering AI, validating diffs, hardening edge cases and managing exceptions. Their leverage shifts from typing speed to judgment: knowing what to trust, challenge or escalate. Leaders should recognize this as a meaningful change in role identity.
Architects also move up the value chain. In traditional environments, they often spend too much time creating static documentation that teams interpret unevenly. In agentic environments, the more valuable work is defining executable guardrails: approved patterns, tool boundaries, policy controls, integration rules and quality gates that both humans and agents can follow. That makes architecture more operational and more consequential.
QA, platform and SRE teams also gain influence. Testing becomes less about writing every case manually and more about building evaluation strategies, validating behavior, instrumenting pipelines and preserving rollback discipline. The closer AI moves to release and operations, the more essential traceability, observability and control become.

Product owners and business analysts also need to raise their game. When requirements are fuzzy, human teams usually compensate through conversation. Agents often execute fuzziness literally. In practice, that means the teams that benefit most from agentic AI are the ones that improve intent, edge-case thinking and acceptance discipline.

One more shift deserves attention: pro-code and low-code are converging. Microsoft’s Copilot Studio, IBM watsonx Orchestrate, Lovable and Replit are lowering the barrier between idea and execution for a broader set of contributors. That is good news for experimentation and business alignment, but it also raises the risk of software sprawl outside shared architecture and security controls. CIOs should not dismiss these tools as toys, nor let them float free of governance. The most effective organizations will connect pro-code and low-code through common guardrails rather than force a false choice between them.
How agentic AI is shifting the center of gravity for core delivery roles.
Vipin Jain
What should CIOs do now?
As roles and delivery processes evolve, what concrete actions should CIOs consider now? The organizations I see getting the most from agentic AI are not treating it as a coding-assistant bakeoff. They are redesigning the delivery system around it. That starts with intent. Leaders should raise the quality of requirements before work enters agentic pipelines. If the business outcome, constraints and acceptance criteria are unclear, the AI will often produce technically plausible but strategically wrong work.
Next comes guardrails and autonomy. Leaders should define what agents can do on their own, what requires approval, what systems and data they can touch and what evidence the pipeline must capture. This is not bureaucracy for its own sake. It is the difference between acceleration and avoidable damage. Teams need clear security rules, architecture patterns, approval boundaries and rollback paths before they scale autonomy. Google Research offers a useful counterweight to the hype here: more agents do not automatically produce better outcomes, especially when the task design, coordination model and workflow are weak.
The management system leaders need for agentic software delivery.
Vipin Jain
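Executable guardrails like the ones described above can be sketched as a simple policy table plus a gate that every agent action must pass. This is a hedged illustration only — the action names, evidence types and `authorize` helper are hypothetical, not a real governance product:

```python
# Hypothetical policy table: which actions an agent may take autonomously,
# and what evidence the pipeline must capture for each.
POLICY = {
    "draft_code":        {"autonomous": True,  "evidence": ["diff"]},
    "generate_tests":    {"autonomous": True,  "evidence": ["test_report"]},
    "merge_to_main":     {"autonomous": False, "evidence": ["diff", "review_signoff"]},
    "deploy_production": {"autonomous": False, "evidence": ["release_checklist", "rollback_plan"]},
}

def authorize(action: str, approvals: set, evidence: set) -> bool:
    """Gate every agent action: unknown actions are denied, non-autonomous
    actions need a human approval, and required evidence must be present."""
    rule = POLICY.get(action)
    if rule is None:
        raise PermissionError(f"{action}: not an allowed agent action")
    if not rule["autonomous"] and "human_approval" not in approvals:
        raise PermissionError(f"{action}: requires human approval")
    missing = set(rule["evidence"]) - evidence
    if missing:
        raise PermissionError(f"{action}: missing evidence {sorted(missing)}")
    return True

assert authorize("draft_code", approvals=set(), evidence={"diff"})
```

The point is not this particular table; it is that autonomy boundaries, approval requirements and rollback evidence become explicit, auditable configuration rather than tribal knowledge.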
Then comes observability. If an agent drafts code, generates tests, touches data, triggers a workflow or influences a release decision, leaders should be able to see that activity, evaluate it and audit it later. This is where many pilots remain weak. They prove that AI can do something. They do not prove that the organization can repeatedly trust it. That is why more formal evaluation matters. Microsoft’s guidance on agent evaluators is useful here because it focuses on operational signals leaders actually need: task completion, task adherence, intent resolution and tool-call accuracy.
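Those four signals can be recorded per agent run and rolled up into a repeatability check. The sketch below is an assumption-laden illustration, not Microsoft’s evaluator API; the field names merely mirror the signal categories, and the 0.95 bar is invented:

```python
from dataclasses import dataclass

@dataclass
class AgentRunEval:
    task_completed: bool     # did the run produce the requested artifact?
    task_adherence: float    # 0-1: did it stay within the assigned scope?
    intent_resolved: bool    # did the result match what the requester meant?
    tool_calls: int
    tool_calls_correct: int

    def tool_call_accuracy(self) -> float:
        return self.tool_calls_correct / self.tool_calls if self.tool_calls else 1.0

def repeatably_trustworthy(runs: list, bar: float = 0.95) -> bool:
    """Trust requires every signal to clear the bar across many runs,
    not one impressive pilot."""
    if not runs:
        return False
    n = len(runs)
    signals = (
        sum(r.task_completed for r in runs) / n,
        sum(r.task_adherence for r in runs) / n,
        sum(r.intent_resolved for r in runs) / n,
        sum(r.tool_call_accuracy() for r in runs) / n,
    )
    return min(signals) >= bar
```

Aggregating over many runs, and gating on the weakest signal rather than the average, is what separates “the demo worked” from “the organization can rely on this.”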
Finally, leaders should change how they measure success. Code volume and demo velocity are weak proxies. Better measures include defect escape, rework, release confidence, cycle time for work that reaches production safely and the percentage of work that moves through the pipeline with clear evidence and human accountability. Start with bounded use cases such as maintenance tasks, test generation, documentation, technical debt reduction and lower-risk feature work with strong review. Build supervision muscle before you try to scale autonomy.
The executive takeaway
The strategic mistake I see most often is treating this moment as a tool refresh or a beauty contest among AI coding platforms. Google, Microsoft, Amazon, OpenAI, Anthropic and the next wave of prompt-to-app players matter because they signal where the market is going. But the winning question for leaders is not which demo looks smartest. It is whether the organization is redesigning software delivery so AI can contribute without weakening quality, security or control.
More generated code is not the prize. Better software delivery is. The enterprises that win will connect business intent to engineering execution more tightly, instrument agent behavior more rigorously and redesign team roles around judgment, supervision and accountability. They will make AI part of the team, not just another tab in the IDE.
This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects.
Yet that is precisely how most organizations are deploying AI coding agents today. The prevailing narrative around “AI-powered development” frames these systems as productivity tools. Vibe coding and agentic coding are treated as something closer to a faster autocomplete or a more sophisticated IDE plugin. Flip the switch, the story goes, and suddenly your engineering organization becomes dramatically more efficient. Everyone is “all in” on the first hand of cyber-Texas Hold ’Em. That mental model is wrong.
AI coding agents are not tools. They behave far more like junior developers: capable, energetic, sometimes brilliant, but entirely able to cause catastrophic damage if given autonomy before they understand and respect the environment they’re operating in.
The organizations that treat AI coding agents like tools will create and accumulate technical debt at unprecedented speed. The organizations that treat them like junior engineers by onboarding them as talent, pairing with them and teaching them context will unlock the productivity gains everyone is chasing. The difference between those outcomes is not the technology. It is the management model.
The lesson every engineer learns early
Midway through the DevOps phase of my career, I worked at the CME Group, where the exchange operates one of the most critical financial infrastructures on the planet. The CME processes roughly a quadrillion dollars’ worth of contracts annually and, at the time, ran across five datacenters with more than 10,000 servers, including racks of Oracle Exadata systems costing hundreds of thousands of dollars each. The biggest SIFI of SIFIs.
You did not get root access to that environment on day one.
Instead, you were paired with a mentor. Your mentor was part of a buddy system for onboarding new hires and was effectively a docent for the infrastructure. My mentor was a deeply technical manager named Matt, one of the most capable engineers I have ever worked with. His job wasn’t simply to show me which commands to run or where to find documentation. His job was to teach me how to ask the platform, a system of systems, meaningful questions.
When you’re managing infrastructure at that scale, every question returns thousands of answers.
Are the matching engines pinned correctly to CPU cores?
Are cgroups configured properly for workload isolation?
Which RAID arrays are starting to show drive failures?
Are firmware and BIOS versions aligned across production and QA?
None of this can be learned through a quick tutorial or a training video. You learn by doing. You learn by working through the ticket queue, performing dry runs, preparing rollback plans and executing changes within narrow maintenance windows (a few minutes per week).
The lesson wasn’t simply technical. It was epistemological. Engineering expertise is not about knowing commands. It is about knowing which questions matter and how to understand the response. And that knowledge only develops through mentorship, iteration and experience.
Why the pair-programming model matters
The software industry already solved this problem decades ago through a practice called pair programming. In agile teams, a senior developer pairs with a junior one. They work together on the same problem in real time. The junior developer contributes energy and fresh thinking, while the senior developer contributes experience and judgment. The result is faster capability development without sacrificing quality. At first, it might seem an expensive allocation of resources, but when you think it through, it is really a strong knowledge management technique.
AI coding tools are like a super smart baby: a nascent intelligence as eager as any recent college graduate, but without much real-life experience solving real-world problems, because it cannot draw on a body of lived experience and hard-won lessons in software development, release engineering and debugging. That description should sound familiar. It is essentially the profile of a junior developer.
The implication is obvious once you see it: the most effective deployment model for AI coding agents is the same pairing model that works for human developers. Human plus agent.
Not a human supervising an agent after the fact. Not just a human reviewing pull requests from an automated pipeline. But genuine co-development, with contextual education on why the vulnerability should not be introduced in the first place. When that pairing works, the productivity gains are real. When it doesn’t, you ship vulnerabilities faster than your security team can ever hope to triage them.
What the agent gets wrong first
The first time I worked alongside a coding model on a real security problem, the mistake it made was subtle but revealing. I was experimenting with ways to harden an API without introducing latency or complexity on the client side. The goal was to produce a transparent security uplift that improved the API’s defensive posture without forcing developers to substantially change how they interacted with the service.
The model generated plausible suggestions quickly. Too quickly. Some of the techniques it proposed were technically correct but operationally obsolete. Others referenced security mechanisms that had been deprecated. Still others ignored non-functional requirements around compliance or performance. In other words, the model surfaced relevant information but lacked the judgment to distinguish wheat from chaff.
There is also a tendency to accept the legitimacy of the ask rather than questioning the assumptions and baseline parameters of the situation. The agent is not going to think outside the box (unless it is hallucinating a nonexistent function, package or library that solves the problem). It assumes that the question it has been asked to solve is a legitimate and valid problem in the first place.
Humans develop that discernment over time. It’s part of how we move from data to information to knowledge to wisdom, what information scientists call the DIKW pyramid.
Models don’t struggle their way up that pyramid. They jump directly to conclusions. The struggle, however, is a messy process of trial, failure and iteration, but it is where human experience and knowledge form. That knowledge is then further refined and distilled into wisdom. When that process is skipped, real expertise never develops. This is why treating AI coding agents as tools is dangerous. Tools don’t need to exercise judgment. Junior developers do.
How trust actually develops
Think about the best junior engineer you ever worked with. How long did it take before you trusted them to work independently? Rarely less than months. Oftentimes a year or more.
Trust emerges gradually. It grows from observing how someone works through problems: how they document changes, how they write tests, how they think about rollback procedures and how they anticipate edge cases and race conditions. In my own teams, I’ve always preferred a management philosophy of 100% freedom and 100% responsibility (the Netflix Manifesto, circa 2001).
Engineers on my teams are expected to behave like owners of the company. They are indoctrinated to commit infrastructure changes as code. They document their reasoning. They attach testing artifacts to their pull requests. We track progress not just by time spent but by contributions: commits, documentation, testing evidence and operational discipline.
That process shapes junior engineers into reliable junior engineers. The exact same logic applies to AI coding agents. Trust should expand progressively.
At first, the agent proposes little code snippets and stanzas.
Then it drafts functions and library packages.
Eventually, it might implement entire features, but only after proving it understands the environment and the risk appetite of the company.
Skip those steps, and you aren’t accelerating development. You’re accelerating chaos being driven by FOMO and FUD.
Learning from more than one chef
Over the course of my career, I’ve worked across a wide range of industries: dot-com era web development in San Francisco, trading infrastructure in European financial markets, cloud transformations for legacy enterprises and large-scale infrastructure engineering.
Each environment changed how I thought about software and security. The dot-com era taught speed and experimentation. European financial institutions taught rigorous project governance (PRINCE2 anyone?). Large-scale options and commodity exchanges taught what real operational resilience looks like.
Those experiences fundamentally reshaped how I approach engineering problems. AI agents will benefit from the same diversity. Pairing them with multiple engineers and rotating pairings over time will expose them to different coding styles, architectural philosophies and security techniques. Best practices, but not monolithic best practices aggregated and homogenized by token-prediction algorithms trained on millions or billions of lines of code. Just as aspiring chefs learn from multiple masters, agents improve faster when exposed to varied expertise.
A warning for CISOs
Many security leaders today are under pressure to reduce developer headcount because executives believe AI can absorb the workload. This assumption misunderstands both security and AI. If an organization already has strong security discipline, with well-documented architectures, clear coding standards and mature review processes, then AI agents will amplify that core mindset and culture.
But if the organization has weak security habits, AI will amplify those weaknesses even faster. Human knowledge is like sunlight. Large language models are more like moonlight. A mere reflection of that knowledge. You cannot build a thriving ecosystem entirely under moonlight. Sooner or later, you need the sun, despite what the vampires and werewolves howling at the moon might lead you to believe.
The real promise of AI development
None of this is an argument against AI coding tools. Used properly, they are extraordinary collaborators. They can surface patterns across massive codebases, accelerate documentation and help engineers explore alternative designs more quickly than ever before.
But unlocking that potential requires the right mental model. Not as a tool, but as a junior developer. Onboard them. Pair with them. Teach them your systems, regale them with your stories of isolating a bug or race condition that took weeks to pinpoint. Rotate them across your teams. Expand their responsibilities gradually as trust develops.
That investment phase is what transforms AI from a novelty into a genuine multiplier. And like every good mentorship relationship in engineering, the payoff compounds over time. Treat your AI coding agent like a disposable tool and you’ll get disposable code (aka slop).
Treat it like a junior developer and you might just raise up the best engineering partner you’ve ever had.