
AI won’t wait

Yagmur Sahin, Head & VP of Engineering, NUCAL

The colleagues you didn’t hire are already working

The commit nobody made

I reviewed a pull request that solved a problem our team had been wrestling with for weeks. The code was clean. The architecture was elegant. The documentation explained not just what the solution did, but why it chose that approach over three alternatives we had debated. When I reached out to congratulate the author, there was no one to congratulate. The commit had been generated autonomously by an AI agent that observed our failed attempts, synthesised the constraints from our architecture documents, and produced a solution while the team slept.

I did not feel replaced. I felt like I had just met a new colleague.

This moment crystallised something building across the industry: AI is no longer a tool we use. It is becoming a collaborator we work with. The distinction sounds subtle, but its implications are seismic. Tools wait for instructions. Colleagues anticipate needs, take initiative, and contribute ideas you never asked for. The future of engineering is not about humans directing AI. It is about humans and AI thinking together.

The coworker threshold

Something fundamental has shifted in how organizations experience AI systems. Unlike the chatbots of two years ago, which waited passively for prompts, today’s AI agents take initiative. They schedule meetings, generate reports, triage data, coordinate across systems, and increasingly, they contribute to the actual work of building software. Enterprise leaders across industries are beginning to describe these systems not as tools but as coworkers. This is not marketing language. It reflects a genuine change in the nature of the relationship.

The adoption curve tells the story. Companies that were experimenting with agentic AI last year are now deploying it at scale. Some analysts predict that within the next few years, AI agents will outnumber human employees in the most operationally mature enterprises. Whether or not that specific timeline holds, the direction is unmistakable.

What does this mean for engineering? The question was never whether AI would change engineering. It was whether we would change with it. The answer is amplification: picture a workplace where a three-person team can launch a global campaign in days, with AI handling data analysis, content generation, and personalization while humans steer strategy and creativity. Small teams will punch above their weight in ways that were previously impossible.

But amplification requires a new kind of relationship. You do not amplify a hammer. You amplify a colleague by creating the conditions for them to do their best work. That shift in mental model changes everything about how we build, lead, and organize engineering teams.

Context is the new code

If 2024 was the year of prompt engineering, 2026 is the year of context engineering. The distinction matters. Prompt engineering focuses on crafting the right question to get a useful answer. Context engineering focuses on building the environment where AI colleagues can contribute meaningfully without being asked.

Think about how you onboard a new human teammate. You do not just hand them tasks. You give them access to documentation, introduce them to the codebase, explain the history of key decisions, and connect them with people who have relevant expertise. You build context so they can eventually operate autonomously, asking good questions and making good judgment calls without constant supervision.

AI colleagues require the same investment, just through different mechanisms. They need access to your knowledge graphs, your decision records, your architectural principles, and your organizational memory. They need retrieval systems sophisticated enough to surface not just relevant documents but relevant relationships. They need interfaces to your tools that provide semantic context, not just raw API responses. The organizations succeeding with AI colleagues are the ones treating context as infrastructure, investing as heavily in knowledge architecture as they once invested in compute.

Standards like the Model Context Protocol are accelerating this shift by providing universal interfaces for AI systems to access enterprise data and tools. But the protocol only handles connectivity. The harder work is curating what your AI colleagues can see, ensuring that the context they access is accurate, current, and appropriate for the decisions they will make.
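
To make that concrete, here is a minimal sketch of an MCP server that exposes architecture decision records to AI colleagues, written against the MCP Python SDK's FastMCP interface. The adr:// scheme and the in-memory record store are illustrative assumptions, not a real integration.

    # Minimal MCP server exposing organizational memory. The adr://
    # scheme and DECISIONS store are stand-ins for a real knowledge base.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("org-knowledge")

    DECISIONS = {
        "adr-017": "Chose event sourcing for billing: audit trail outweighed complexity.",
        "adr-023": "Rejected a shared database across teams: coupling risk too high.",
    }

    @mcp.resource("adr://{record_id}")
    def decision_record(record_id: str) -> str:
        """Return the full text of one architecture decision record."""
        return DECISIONS.get(record_id, "No such record.")

    @mcp.tool()
    def search_decisions(query: str) -> list[str]:
        """Naive keyword search; a real system would use semantic retrieval."""
        return [rid for rid, text in DECISIONS.items()
                if query.lower() in text.lower()]

    if __name__ == "__main__":
        mcp.run()  # serve the context over MCP's standard transport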

Some experts predict the emergence of new organizational roles: AI librarians and knowledge navigators responsible for curating and evolving the knowledge assets that AI systems depend on. These roles will define the quality of every AI decision made downstream.

What engineers actually do now

What does an engineer actually do when AI handles implementation? This question keeps surfacing in leadership conversations, often with an undercurrent of anxiety. The answer is becoming clearer, and it is more exciting than threatening.

One industry observer puts it bluntly: engineers will spend the vast majority of their time reviewing, evaluating, conceptualising, and thinking. The core competency becomes connecting implementation details, code quality, and velocity to alignment with business goals. Knowing how to code remains valuable, perhaps increasingly so, because deep technical understanding is what allows you to effectively guide AI systems and catch their mistakes.

This is not a deskilling of engineering. It is an elevation. The mechanical aspects of coding, the syntax, the boilerplate, the repetitive patterns, those get automated. What remains is the judgment: deciding what to build, evaluating whether what was built is correct, understanding the implications of architectural choices, and maintaining the coherence of complex systems over time. These are the skills that were always most valuable. AI simply makes them more visible by handling everything else.

The research community is tracking this shift quantitatively. Evaluations show that the complexity of software engineering tasks AI can reliably complete has been doubling every few months. Current frontier models can handle tasks that would take skilled humans nearly five hours. The trajectory suggests AI colleagues will soon be capable of sustained, multi-day workstreams with minimal human intervention.

But capability is not the same as trustworthiness. AI colleagues can still hallucinate, misroute data, or misinterpret goals. The engineering skill of the future is knowing when to trust AI output, when to verify it, and when to reject it entirely. This is judgment work, and it requires deeper technical understanding than simply writing code yourself.

The lab model goes mainstream

Perhaps the clearest preview of engineering’s future comes from scientific research, where AI colleagues are already transforming how discoveries happen.

Researchers describe a near future in which AI does not just summarise papers and answer questions but actively joins the process of discovery. AI will generate hypotheses, use tools that control experiments, and collaborate with both human and AI research colleagues. Every scientist could soon have an AI lab assistant that suggests new experiments and runs parts of them autonomously.

This is the trajectory for software engineering as well. Your AI colleague will not just write code you specify. It will notice patterns in your bug reports, hypothesise about root causes, propose architectural changes, and prototype solutions for your review. It will observe your system’s behaviour in production and suggest optimizations you did not know were possible. It will read the research literature and surface techniques relevant to problems you are working on.

In science, accuracy alone isn't enough; you need to understand how the model arrived at its answer. Hence the increasing focus on the internal workings of neural networks: understanding which data drives which outputs. Engineering will follow the same path. We will need to open AI's black box, not to replicate what AI does, but to validate it and learn from it.

Hybrid architectures win

The organizations leading this transition share a common architectural principle: they are building for hybrid intelligence, not AI replacement. The most successful AI strategies combine foundation models with classical AI, simulations, engineering models, and structured knowledge systems. No single approach handles all requirements well.

In practice, this means orchestration becomes a core engineering capability. An AI colleague working on a complex problem might use a large language model for reasoning, a retrieval system for accessing relevant knowledge, a simulation engine for testing hypotheses, and deterministic tools for actions that require precision. The skill is knowing which component to invoke for which part of the problem and how to compose their outputs into coherent solutions.
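
As a sketch, the dispatch itself can be a small routing layer. The four stub components below stand in for real systems, and the task kinds are invented for illustration:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Task:
        kind: str      # one of: "reason", "retrieve", "simulate", "execute"
        payload: str

    # Stub components; each would wrap a real system in practice
    # (an LLM client, a retrieval index, a simulation engine, a typed API).
    def reason(payload: str) -> str:
        return f"[llm] plan for: {payload}"

    def retrieve(payload: str) -> str:
        return f"[kb] documents about: {payload}"

    def simulate(payload: str) -> str:
        return f"[sim] predicted outcome of: {payload}"

    def execute(payload: str) -> str:
        return f"[tool] performed: {payload}"

    ROUTES: dict[str, Callable[[str], str]] = {
        "reason": reason, "retrieve": retrieve,
        "simulate": simulate, "execute": execute,
    }

    def orchestrate(tasks: list[Task]) -> list[str]:
        """Dispatch each sub-task to the component suited for it, in order."""
        return [ROUTES[task.kind](task.payload) for task in tasks]

    steps = [Task("retrieve", "past outages in checkout"),
             Task("reason", "root-cause hypothesis"),
             Task("simulate", "proposed fix under peak load"),
             Task("execute", "open a draft PR with the fix")]
    print("\n".join(orchestrate(steps)))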

GraphRAG architectures exemplify this hybrid approach. Instead of treating documents as flat text to be searched, these systems build explicit knowledge graphs that enable reasoning over relationships. When an AI colleague needs to answer a complex question, it does not just find similar documents. It traverses actual connections between entities, understanding that Component X requires Certification Y, which expires in six months, and Supplier Z had quality issues last quarter. The answer emerges from structured reasoning, not keyword matching.
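
A toy version of that traversal, using networkx and the relationships from the example above (all entities invented for illustration), might look like this:

    import networkx as nx

    # Toy knowledge graph: entities as nodes, typed relationships
    # stored as edge attributes.
    G = nx.DiGraph()
    G.add_edge("Component X", "Certification Y", relation="requires")
    G.add_edge("Certification Y", "expiry 2026-08", relation="expires")
    G.add_edge("Supplier Z", "Component X", relation="supplies")
    G.add_edge("Supplier Z", "Q3 quality incident", relation="had")

    def context_for(entity: str, hops: int = 2) -> list[str]:
        """Gather facts reachable within a few relationships of an
        entity, ignoring edge direction, as readable triples."""
        nearby = nx.ego_graph(G.to_undirected(), entity, radius=hops)
        return [f"{u} --{d['relation']}--> {v}"
                for u, v, d in G.edges(data=True)
                if u in nearby and v in nearby]

    # An AI colleague asking about Component X surfaces the certification,
    # its expiry, the supplier, and the supplier's quality history.
    print("\n".join(context_for("Component X")))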

This architectural sophistication is what separates AI colleagues from AI tools. A tool gives you an answer. A colleague gives you an answer grounded in your specific organizational context, with the reasoning visible and auditable. The difference matters enormously when stakes are high.

Governance before scale

Every new colleague, human or AI, creates governance questions. What decisions can they make independently? What requires escalation? How do you verify their work? How do you hold them accountable for mistakes?

One industry observer notes that as fleets of autonomous agents proliferate across data systems, the biggest bottleneck will not be model performance. It will be governance. Organizations are waking up to this reality. Some predict the emergence of a new executive role: the Chief AI Agent Officer, responsible for defining, auditing, and governing the rules of engagement between humans and autonomous systems, so that every AI action is observable, explainable, and aligned with enterprise ethics.

Ethics itself is becoming an engineering topic rather than a philosophical afterthought. You cannot embed ethical principles into systems through policy documents alone. You need to encode them directly into the architecture, into the retrieval systems that determine what information AI colleagues can access, into the validation rules that check their outputs, into the escalation workflows that trigger human review. Organizations that treat governance as an enabler rather than a constraint will move faster and more confidently than those that see it as overhead.
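
A minimal sketch of what "encoded into the architecture" means in practice: every agent output passes through validation rules and an escalation check before it takes effect. The specific rules and thresholds here are illustrative assumptions:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class AgentOutput:
        action: str
        content: str
        confidence: float  # self-reported or scored externally

    # Illustrative validation rules; real ones would check data access
    # scopes, PII handling, regulatory constraints, and more.
    def no_pii_leak(out: AgentOutput) -> bool:
        return "ssn:" not in out.content.lower()

    def within_confidence(out: AgentOutput) -> bool:
        return out.confidence >= 0.8

    RULES: list[Callable[[AgentOutput], bool]] = [no_pii_leak, within_confidence]
    HUMAN_REVIEW_ACTIONS = {"delete_data", "send_external_email"}

    def gate(out: AgentOutput) -> str:
        """Return 'apply', 'escalate', or 'reject' for an agent output."""
        if not all(rule(out) for rule in RULES):
            return "reject"
        if out.action in HUMAN_REVIEW_ACTIONS:
            return "escalate"   # trigger the human review workflow
        return "apply"

    # A low-confidence output is rejected before it can take effect.
    print(gate(AgentOutput("update_doc", "draft summary", confidence=0.55)))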

The regulatory environment is reinforcing this imperative. Customer expectations are evolving even faster, with enterprise buyers increasingly demanding AI governance documentation in their procurement processes. Companies that built governance into their architecture from the beginning will adapt through configuration changes. Companies that treated it as an afterthought will face painful retrofitting.

Your four-point readiness plan

For engineering leaders navigating this transition, four principles are becoming clear.

First, invest in your people’s ability to work with AI, not just use it. The organizations capturing the most value have internal AI champions and enablement programs that help teams use agents safely and effectively. Deep technical intuition is what will distinguish engineers in this new landscape. Build and learn. Look closely at what others are building, deconstruct their work to understand the why and how, then build your own versions.

Second, treat your knowledge architecture as critical infrastructure. AI colleagues are only as good as the context they can access. Audit your data landscape. Identify silos. Begin structuring critical information as queryable knowledge. Build the retrieval and reasoning systems that let AI navigate your organizational memory. This investment pays dividends across every AI initiative you undertake.

Third, design for collaboration, not automation. The goal is not to remove humans from processes but to create partnerships where human judgment and AI capability reinforce each other. Identify where AI colleagues can handle routine work while humans focus on strategy, creativity, and exception handling. Build feedback loops where human corrections improve AI performance over time.

Fourth, build governance before you build agents. The organizations succeeding at scaled agentic AI have observable, auditable systems in place before they deploy broadly. Define what your AI colleagues can and cannot do. Establish validation rules and review workflows. Create incident response processes for when AI colleagues make mistakes, and they will.
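
As a starting point, that boundary can live in code as a deny-by-default capability policy with an append-only audit trail; the agents and actions below are hypothetical:

    import json, time

    # Hypothetical capability policy: deny by default, allow explicitly.
    POLICY = {
        "code-review-agent": {"comment_on_pr", "label_pr"},
        "triage-agent": {"assign_ticket", "set_priority"},
    }

    AUDIT_LOG = []  # append-only; in production, write to durable storage

    def authorize(agent: str, action: str, detail: str) -> bool:
        """Allow an action only if the policy grants it, and record every
        attempt either way so reviews and incident response have a
        complete trail."""
        allowed = action in POLICY.get(agent, set())
        AUDIT_LOG.append(json.dumps({
            "ts": time.time(), "agent": agent, "action": action,
            "detail": detail, "allowed": allowed,
        }))
        return allowed

    # Commenting is granted; merging is not, so it is denied and logged.
    assert authorize("code-review-agent", "comment_on_pr", "routine review")
    assert not authorize("code-review-agent", "merge_pr", "routine review")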

The teams of tomorrow

We are entering a phase where small teams can accomplish previously unthinkable things. The playing field is flattening as the competitive advantage shifts from raw technical capability to the infrastructure and culture that make human-AI collaboration effective.

This is not the end of engineering. It is the beginning of a new chapter where the craft evolves from writing code to orchestrating intelligence. The engineers who thrive will be those who embrace this partnership, who develop the judgment to guide AI colleagues effectively, and who build the systems that make collaboration seamless.

The future will not be built by AI alone or by humans alone. It will be built by humans and AI thinking together, each contributing what they do best. The organizations that understand this first will define the next decade of innovation. The ones that cling to old models, whether fearing AI replacement or expecting AI to handle everything, will struggle to keep pace.

Your new colleagues have arrived. They are not waiting for permission. The question is whether you are ready to work alongside them.

“AI won’t wait. Neither should you.”
