Or: Why “Can we turn off the generation?” might be the smartest question in generative AI
Not long ago, I found myself in a meeting with technical leaders from a large enterprise. We were discussing Parlant as a solution for building fluent yet tightly controlled conversational agents. The conversation was going well, until someone asked a question that completely caught me off guard:
“Can we use Parlant while turning off the generation part?”
At first, I honestly thought it was a misunderstanding. A generative AI agent… without the generation? It sounded paradoxical.
But I paused. And the more I thought about it, the more the question started to make sense.
The High Stakes of Customer-Facing AI
These teams weren’t playing around with demos. Their AI agents were destined for production, interfacing directly with millions of users per month. In that kind of setting, even a 0.01% error rate isn’t acceptable. One in ten thousand bad interactions is one too many when the result could be compliance failures, legal risk, or brand damage.
At this scale, “pretty good” isn’t good enough. And while LLMs have come a long way, their free-form generation still introduces uncertainty: hallucinations, unintended tone, and factual drift.
So no, the question wasn’t absurd. It was actually pivotal.
A Shift in Perspective
Later that evening, I kept thinking about it. The question made more sense than I had initially realized, because these organizations weren’t lacking resources or expertise.
In fact, they had full-time Conversation Designers on staff. These are professionals trained in designing agentic behaviors, crafting interactions, and writing responses that align perfectly with brand voice and legal requirements, and that get customers to actually engage with the AI, which turns out to be no easy task in practice!
So they weren’t asking to turn off generation out of fear; they were asking to turn it off because they wanted, and were able, to take control into their own hands.
That’s when it hit me: we’ve been misframing what “generative AI agents” actually are.
They’re not necessarily about open-ended, token-by-token generation. They’re about being adaptive: responding to inputs in context, with intelligence. Whether those responses come directly, token by token, from an LLM, or from a curated response bank, doesn’t actually matter. What matters is whether they’re appropriate: compliant, contextual, clear, and helpful.
The Hidden Key to the Hallucination Problem
Everyone is looking for a fix for hallucinations. Here’s a radical thought: we think it’s already here.
Conversation Designers.
With conversation designers on your team, as many enterprises already have, you’re not just mitigating output hallucinations: you’re actually primed to eliminate them entirely.
They also bring clarity to the customer interaction. Intentionality. An engaging voice. And they create more effective interactions than foundation LLMs can, because LLMs (on their own) still don’t sound quite right in customer-facing scenarios.
So instead of trying to retrofit generative systems with band-aids, I realized: why not bake this into Parlant from the ground up? After all, Parlant is all about design authority and control. It’s about giving the right people the tools to shape how AI behaves in the world. This was a perfect match, especially for those enterprise use cases that had so much to gain from adaptive conversations, if only they could trust them with real customers.
From Insight to Product: Utterance Matching
That was the breakthrough moment that led us to build Utterance Templates into Parlant.
Utterance Templates let designers provide fluid, context-aware templates for agent responses: responses that feel natural but are fully vetted, versioned, and governed. It’s a structured way to maintain LLM-like adaptability while keeping a grip on what’s actually said.
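To make this concrete, here is a rough sketch of what entries in an utterance store might look like. The field names and structure here are purely illustrative, not Parlant’s actual schema; the only detail taken from the article is that template text uses Jinja2-style variable placeholders:

```python
# Hypothetical utterance store entries. Each template is a vetted,
# versioned artifact that a conversation designer owns and reviews,
# with tool-provided variables filled in at render time.
TEMPLATES = [
    {
        "id": "refund-status",  # illustrative identifier
        "text": "Your refund of {{ amount }} was issued on {{ date }} "
                "and should arrive within 5-7 business days.",
    },
    {
        "id": "escalate-to-human",
        "text": "I'd like to connect you with a specialist who can "
                "help with that. One moment, please.",
    },
]

for t in TEMPLATES:
    print(t["id"], "->", t["text"])
```

The point of the structure is that every customer-facing sentence is authored and approved by a person, while the variables keep the response tied to the live context.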
Under the hood, utterance templates work in a three-stage process:
- The agent drafts a fluid message based on its current situational awareness (interaction, guidelines, tool results, etc.)
- Based on the draft message, it matches the closest utterance template found in your utterance store
- The engine renders the matched utterance template (which is in Jinja2 format), using tool-provided variable substitutions where applicable
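As a rough illustration, the three stages above might be sketched like this. This is a self-contained toy, not Parlant’s implementation: the word-overlap similarity stands in for whatever semantic matching the engine actually performs, and the small regex renderer stands in for Jinja2, so the example runs without external dependencies:

```python
import re

# Hypothetical utterance store of vetted templates (Jinja2-style
# {{ variable }} placeholders, as in Parlant; the store contents
# are invented for this sketch).
UTTERANCE_STORE = [
    "Your current balance is {{ balance }}.",
    "I'm sorry, I can't help with that request.",
    "Your order {{ order_id }} has shipped.",
]

def similarity(a: str, b: str) -> float:
    """Toy lexical similarity (Jaccard word overlap). A real engine
    would use semantic matching, not raw word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def render(template: str, variables: dict) -> str:
    """Minimal {{ name }} substitution standing in for Jinja2."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

def respond(draft: str, tool_variables: dict) -> str:
    # Stage 1 (drafting) happens upstream: the LLM produces `draft`
    # from the interaction, guidelines, tool results, etc.
    # Stage 2: match the draft to the closest vetted template.
    best = max(UTTERANCE_STORE, key=lambda t: similarity(draft, t))
    # Stage 3: render the matched template with tool-provided variables.
    return render(best, tool_variables)

draft = "Your current account balance is $42.50."
print(respond(draft, {"balance": "$42.50"}))  # → Your current balance is $42.50.
```

The design consequence is visible even in the toy: whatever the LLM drafts, the only text that ever reaches the customer is one of the vetted templates.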
We immediately knew this would work perfectly with Parlant’s hybrid model: one that gives software developers the tools to build reliable agents, while letting business and interaction experts define how those agents behave. And the people at that particular enterprise immediately knew it would work, too.

Conclusion: Empower the Right People
The future of conversational AI isn’t about removing people from the loop. It’s about empowering the right people to shape and continuously improve what AI says and how it says it.
With Parlant, the answer can be: the people who know your brand, your customers, and your obligations best.
And so the only thing that turned out to be absurd was my initial reaction. Turning off, or at least heavily controlling, generation in customer-facing interactions wasn’t absurd at all. It’s most likely just how it should be. At least in our view!
Disclaimer: The views and opinions expressed in this guest article are those of the author and do not necessarily reflect the official policy or position of Marktechpost.

Yam Marcovitz is Parlant’s Tech Lead and CEO at Emcie. An experienced software builder with extensive experience in mission-critical software and system architecture, Yam’s background informs his unique approach to creating controllable, predictable, and aligned AI systems.