Meet Babak, one of agentic AI’s original inventors


If you’re an enterprise leader staring down ballooning tech debt, rising pressure for AI transformation, and daily new product drops that all scream “innovate now or be left behind,” stop. Take a breath—and read this wide-ranging Q&A between Cognizant CTO for AI, Babak Hodjat, and HFS Executive Research Leader David Cushman.

Babak has agentic AI cred in spades and has been operating in this space for three decades. He was the main inventor of core massively distributed evolutionary computation technology as co-founder and CEO of Sentient Technologies. When Cognizant acquired Sentient’s IP assets in 2019, Babak came along for the ride. Sentient’s platform combined evolutionary computation, which mimics biological evolution, and deep learning, which is based on the structure of nervous systems. His patented work on artificial intelligence led to the technology used by Apple for its digital assistant Siri. In short, Babak’s entire career has been rooted in the humanization of technology, which forms the basis of what we today call agentic AI.

It wasn’t until Ravi Kumar took the reins at Cognizant, however, that this gem was unearthed: Kumar encouraged Babak to energize the firm’s multi-agent strategy under its Neuro AI brand.

The conversation cuts through the agentic noise to tackle what enterprises need to do today. Chasing a “god model” of general-purpose, all-knowing AI is a distraction. The real opportunity lies in engineered agentic systems—practical, modular, governable, and already delivering enterprise value.

This is how AI becomes real.

From Siri to agentic systems: the long road to a practical AI future

HFS: It’s striking how far back your work in this space goes.

Babak: Yeah, I got into agents in the mid-90s. I wrote about agent-oriented software before it was a thing. The code that led to Siri was multi-agentic. Each component understood a piece of context. Plug in a new DVD player to your entertainment ‘stack’, for example, and our system understood what it could do without reengineering anything.

HFS: You were solving for adaptability and context – problems we’re still trying to tackle today.

Babak: Exactly. What made that system robust was not the natural language model alone—it was the agentic architecture behind it.

What agentic AI is—and what it is not

HFS: Fast-forward to now. Suddenly, everyone wants “agentic”—but what do they really mean?

Babak: That’s the problem. A GenAI model is passive: you give it a prompt, it gives a response. An agent is active. It has autonomy. It chooses which tools to use and when to act. It’s model + code. And that code is yours—you control it.

HFS: So it’s not just a chatbot that can hit APIs. It’s a unit of engineered intelligence that does work?

Babak: Exactly. It’s not just smart. It’s structured. You define what tools it has. You decide the boundary of autonomy. You design the interactions. That’s engineering, not magic. It brings control to the ‘black box’ of LLMs.
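To make “model + code” concrete, here is a minimal sketch of an agent loop in Python: the model proposes each action, but your code owns the tool registry and caps the number of steps. The `call_llm` stub, the tool names, and the step limit are illustrative assumptions, not Cognizant’s implementation.

```python
# Minimal sketch of an agent as "model + code": the model proposes,
# but your code owns the tool registry and the autonomy boundary.
# call_llm is a stand-in for any chat-completion API; the tools and
# limits here are illustrative, not a production design.

MAX_STEPS = 5  # the autonomy boundary: the agent cannot loop forever

def lookup_order(order_id: str) -> str:
    return f"Order {order_id}: shipped"  # stub for a real system call

TOOLS = {"lookup_order": lookup_order}  # you decide what the agent may touch

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns "tool:<name>:<arg>" or "final:<answer>"
    return "final:done"

def run_agent(task: str) -> str:
    context = task
    for _ in range(MAX_STEPS):
        decision = call_llm(context)
        if decision.startswith("final:"):
            return decision.removeprefix("final:")
        _, name, arg = decision.split(":", 2)
        if name not in TOOLS:           # control: reject tools you never granted
            return "escalate to human"
        context += f"\n{name} -> {TOOLS[name](arg)}"
    return "escalate to human"          # boundary hit: hand off, don't improvise

print(run_agent("Where is order 42?"))
```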

Why the god model is a dead end for enterprise

HFS: Let’s confront the elephant: the “god model” narrative. Everyone wants an AI that just does everything. Why’s that a trap?

Babak: Because it’s technically limited and socially unworkable. These models are pre-trained. Their world model is fixed. You can’t keep patching it with plugins and prompts. You hit a wall.

And society isn’t ready. If an AI makes a decision and it fails—who’s accountable? You’re effectively outsourcing decision-making to a black box. That won’t fly in high-risk enterprise contexts such as finance or healthcare.

HFS: And yet this is still what OpenAI, DeepMind and others are building toward.

Babak: Sure. And it’s useful for them to chase it. But for enterprises? You’ll wait 10 years for something that may never be stable—or usable.

Multi-agent systems: how the work really gets done

HFS: What is usable right now?

Babak: Multi-agent systems. At Cognizant, we use them for RFP processing. The system breaks down the ask, maps it to prior work, assembles the right human team, drafts responses. Throughput’s up. Accuracy’s up. And people complain if it’s out of action—because they’ve already come to rely on it.

HFS: That’s real adoption. And it feels like the model could extend everywhere: HR, finance, ops?

Babak: Yes. The power isn’t just individual agents. It’s agents that talk to each other—breaking technology silos. We had teams building separate agents for HR and finance. But when someone asks for time off that affects payroll, those agents need to coordinate.
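A toy illustration of that time-off-meets-payroll coordination, assuming a simple message-passing design; the agent names and the message shape are ours, not Cognizant’s.

```python
# Sketch of cross-silo coordination: an HR agent approves a time-off
# request, then notifies a finance agent so payroll stays consistent.
# Agent names and the message shape are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    topic: str
    body: dict

class FinanceAgent:
    def handle(self, msg: Message) -> str:
        if msg.topic == "unpaid_leave_approved":
            days = msg.body["days"]
            return f"payroll adjusted: -{days} day(s) for {msg.body['employee']}"
        return "ignored"

class HRAgent:
    def __init__(self, finance: FinanceAgent):
        self.finance = finance  # in practice: discovered via a registry, not hardwired

    def request_time_off(self, employee: str, days: int, paid: bool) -> str:
        decision = f"approved {days} day(s) for {employee}"
        if not paid:  # the payroll-affecting case: coordinate, don't silo
            decision += "; " + self.finance.handle(
                Message("hr", "unpaid_leave_approved",
                        {"employee": employee, "days": days}))
        return decision

print(HRAgent(FinanceAgent()).request_time_off("Ada", 3, paid=False))
```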

Engineering vs aspiration: control is a feature, not a flaw

HFS: This gets to something deeper—the difference between engineering a solution and wishing one into existence.

Babak: The big change is that we’ve moved from aspirational AI—trying to build one model that does everything—to engineered autonomy. You build narrow agents, give them job descriptions, define interaction rules.

You don’t need omniscience. You need discoverability, control, and extensibility.
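One way to read “discoverability, control, and extensibility” in code: a registry where narrow agents publish a job description, and a router matches intent to agents. The keyword matching below is deliberately naive, and every name is hypothetical.

```python
# Sketch of "discoverability over omniscience": narrow agents register a
# job description, and a router matches intent to agents by keyword.
# The matching here is naive on purpose; all names are hypothetical.

REGISTRY: dict[str, dict] = {}

def register(name: str, job_description: str, keywords: set[str]):
    REGISTRY[name] = {"job": job_description, "keywords": keywords}

def discover(intent: str) -> list[str]:
    words = set(intent.lower().split())
    return [n for n, a in REGISTRY.items() if a["keywords"] & words]

register("rfp_agent", "Breaks an RFP into work items and drafts responses",
         {"rfp", "proposal", "bid"})
register("hr_agent", "Handles leave requests and policy questions",
         {"leave", "vacation", "policy"})

# Extensibility: adding a new agent is one register() call, no rewiring.
print(discover("draft a response to this rfp"))   # -> ['rfp_agent']
```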

HFS: And enterprise leaders know how to build with those principles. They’ve done it before—at least in the digital era.

Babak: Exactly. The agentic opportunity is practical. It’s familiar. It’s here now.

Agents wrap round legacy to allow you to ‘speak your intent’ 

HFS: Let’s talk legacy. How do agents help enterprises escape decades of bad plumbing?

Babak: By speaking in natural language, agents wrap legacy systems like Siebel, mainframes, or rigid APIs. They act as the interpreter. So instead of rebuilding a UI or middleware layer, you just speak your intent.

HFS: That’s massive. Suddenly, you’re not trapped by brittle integrations.

Babak: And it future-proofs your stack. Because your interfaces aren’t wires—they’re words.
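As a sketch of that wrapping pattern: here the “legacy system” is a stub function with opaque positional codes, and a lookup table stands in for the LLM that would normally map an utterance to a call. None of this reflects a real Siebel or mainframe integration.

```python
# Sketch of an agent wrapping a rigid legacy API so callers can
# "speak their intent". The legacy signature and the intent parser
# are illustrative stand-ins for a real integration.

def legacy_get_account(acct_id: str, field_code: str) -> str:
    # Stands in for a brittle legacy call keyed on positional codes
    return {"01": "active", "02": "$1,250.00"}.get(field_code, "?")

INTENT_MAP = {  # in practice an LLM does this mapping; a table keeps the sketch honest
    "status":  "01",
    "balance": "02",
}

def ask(utterance: str, acct_id: str) -> str:
    for word, code in INTENT_MAP.items():
        if word in utterance.lower():
            return legacy_get_account(acct_id, code)
    return "Sorry, I can't map that request to the legacy system."

print(ask("What's the balance on this account?", "A-1001"))  # -> $1,250.00
```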

The risk of over-trusting and under-scrutinising 

HFS: How do you manage the risk of agents hallucinating or getting things subtly wrong?

Babak: That’s the critical point. Because they’re good—most of the time—they lull you into trusting them. But when they confabulate, it’s dangerous. So we build in trust thresholds: below a certain level of confidence, the agent re-routes the request to a human, and we can add another layer of safety with a rule-based fallback.
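A minimal sketch of that layered safety net, assuming the model call returns a confidence score; the 0.8 floor and the refund rule are invented for illustration.

```python
# Sketch of a trust threshold: answers below a confidence floor are
# re-routed, with a rule-based fallback as a second guard before a
# human. The 0.8 floor and the scoring stub are assumptions.

CONFIDENCE_FLOOR = 0.8

def agent_answer(question: str) -> tuple[str, float]:
    # Stand-in for a model call that also returns a confidence score
    return ("Your refund was issued on June 3.", 0.62)

def rule_based_fallback(question: str) -> str | None:
    if "refund" in question.lower():
        return "Refund requests are handled within 5 business days."
    return None

def answer(question: str) -> str:
    text, confidence = agent_answer(question)
    if confidence >= CONFIDENCE_FLOOR:
        return text
    fallback = rule_based_fallback(question)       # safety layer 1: rules
    if fallback:
        return fallback
    return "Routing to a human colleague."         # safety layer 2: people

print(answer("Where is my refund?"))
```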

HFS: So governance isn’t an afterthought. It’s a design principle.

Babak: Always. You engineer for oversight.

What enterprise leaders should prioritise next

HFS: Alright—if I’m a CEO or CTO reading this, what do I do now?

Babak: Three things.

  1. Be LLM-agnostic. Don’t build on one vendor’s stack. Models are commoditizing (see the sketch after this list).
  2. Prioritise interoperability. Use standards like A2A or MCP so agents can talk without bespoke glue code.
  3. Distribute empowerment. Let teams build their own agents—but plug them into a discoverable, governed multi-agent ecosystem.
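On point 1, LLM-agnosticism can be as simple as coding agents against a small interface rather than a vendor SDK. The provider classes below are stubs we made up; the point is that swapping models means adding one adapter, not rewriting agents.

```python
# Minimal sketch of LLM-agnosticism: code against a tiny interface,
# not a vendor SDK. Provider classes are stubs; a new vendor means
# one new adapter class, with agent logic untouched.

from typing import Protocol

class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt[:20]}..."  # stub for a real API call

class VendorB:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt[:20]}..."  # stub for a real API call

def summarize(model: LLM, text: str) -> str:
    # Agent logic depends only on the interface; the model is swappable
    return model.complete(f"Summarize: {text}")

for model in (VendorA(), VendorB()):   # commoditized models, same code
    print(summarize(model, "Quarterly results beat expectations."))
```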

HFS: You can’t centralise innovation anymore.

Babak: No. You need controlled chaos—engineered, but decentralised.

Bottom line: Stop dreaming of an AI ‘god’ and start engineering for what is within reach here and now.

The dream of an all-knowing, all-doing AI still makes headlines. But in the enterprise world, the smarter move is to stop dreaming and start engineering. Agentic systems aren’t some distant vision. They’re here. They’re proven. And they solve real problems without waiting for the stars to align.

Enterprise leaders must resist the lure of the god model. Focus on what’s within reach: modular agents that can break silos, wrap legacy, enable new workflows, and scale intelligence safely.

