AI Failure Is a Human Problem

You wouldn’t expect any other system to work without ownership, so why do we expect AI to?

Welcome Back, Friends

Six months ago, I wrote about what it felt like to step out of certainty and back into learning. Since then, I’ve been deep in conversations with CX teams who feel like they did what they were supposed to do — launched AI, cleaned up docs, checked the boxes — and are still waiting for something to change. This piece is an attempt to name why that gap exists, and why it’s so easy to miss.

I don’t pretend to have all the answers, so if you’ve learned something different or seen this play out another way, please @ me!

We’re Primed for the AI Disaster Story, Even When Nothing Actually Breaks

Most AI launches don’t fail in some big dramatic way, headlines be damned. Nothing major breaks or catches fire, and support teams don’t revolt against their new AI teammate (not usually). Instead, things just… stay mostly the same. Inbox volume doesn’t really move, customers still find their way to email or ticket forms, and agents spend their days triaging instead of solving.

Somewhere in a dashboard there’s a chart showing a “good” containment rate, but only if you don’t look too closely at how little traffic actually touched the AI agent in the first place. And that’s the part that’s tough to admit: the technology didn’t let you down; your organization never actually asked it to matter. Treating AI as another channel feels safe because it doesn’t force you to change anything else, but redesigning how customers enter your support experience does.

Calling it “just another channel” is appealing for all the wrong reasons. It lets you say you’re modernizing without forcing yourself to make any real decisions. You don’t have to touch the existing intake process or upset customers with change. And you don’t have to ask your team to work differently or admit that the way your support system has operated for years might be part of the problem. You can drop a widget alongside your ticket form, your email address, your phone number, and tell yourself you’re being flexible. But flexibility in this case is just avoidance: as long as customers have an easier, more familiar way around the AI (or one they perceive that way), they’ll take it, and the AI will never get the volume, context, or trust it needs to operate at a high standard.

Nothing Broke, So Nothing Changed

What tends to get lost in these conversations is that customers aren’t resisting AI because they’re irrational or change-averse. They’re responding to patterns that we (the CX industry) trained them on. Ticket forms, direct email, and phone calls are familiar paths that promise one important thing: a human will eventually take responsibility for the problem. When you introduce AI without changing how people enter your support funnel, you’re asking customers to trust something new while still signaling that the old, safer option is right there if they need it. Of course they’ll bypass the bot; most people would. And every time they do, it reinforces the idea that the AI is optional, experimental, or not really meant to handle anything important.

The first time you actually remove a ticket form, it feels like you’re doing something reckless, even if you’ve talked yourself into it logically (I lit a candle for luck the first time I did this in 2019). There’s a moment where you realize you’re taking away the thing everyone falls back on when they don’t know what else to do. And I mean everyone, customers and internal teams included. Suddenly all your vague concerns become very specific: what if people get stuck, what if something important slips through, what if we just made the experience worse instead of better? But that moment also exposes something deeper than a tooling gap, because it forces you to confront the idea that “meeting customers where they are” has limits, and that sometimes the right move is asking them to meet you where they actually need to be. Sunsetting the form isn’t about being rigid or uncaring; it’s about deciding, intentionally, how your customers should experience your support strategy instead of defaulting to what’s always existed.

When AI Becomes the Front Door

Once AI becomes the front door instead of a side entrance, a lot of things get clearer very quickly. You’ll start seeing where your knowledge actually holds up and where it falls apart, and notice which questions are truly repetitive versus which ones only seem that way until you look closer. Patterns will quickly emerge that were previously buried under manual triage and inbox sprawl. This is also where the work gets harder, because volume flowing through AI doesn’t just surface customer behavior, it also reveals gaps inside the organization. Missing integrations, unclear ownership, outdated assumptions about what support should handle versus what the product should prevent… none of that is visible when AI is an optional channel. It only shows up when you make AI agents the primary entrance.

This is usually the point where teams realize that AI can’t be treated like a side project or a launch-and-leave tool. When it’s the front door, it needs care in a way most support systems never got. Someone has to own it, not as a checkbox, but as a living part of the experience. That ownership means constantly asking whether the paths customers are being guided down still make sense, whether the questions being asked are the right ones, and whether the answers still reflect how the product actually works today. Without that kind of stewardship, usage will decay and people will start routing around the AI again, not because it’s bad technology, but because no one was responsible for keeping it good.

A lot of the fear around making AI the front door gets framed as concern for the customer, but more often than not it’s really about accountability. When everything funnels through a default experience, it becomes much harder to hide behind edge cases or blame the tool when outcomes don’t improve. You can see, very clearly, whether people are getting what they need or getting stuck and why. That visibility is scary if you’re used to success being defined as “nothing blew up.” But it’s also the point. Risk doesn’t come from asking customers to change their behavior, it comes from refusing to look closely at what happens when they do. Once you accept that, the conversation shifts from “what if this goes wrong” to “what are we willing to own if it goes right.”

Caution Isn’t the Same as Rigor

This is also where teams tend to confuse caution with rigor. Stretching a rollout indefinitely can feel responsible, but more often it just delays the moment where real learning happens. When AI is optional, you can always explain away poor performance as a temporary state: not enough data yet, not fully trained yet, still early days. None of that pressure exists until you narrow the path and see what actually breaks. A phased approach can be smart, especially in regulated or genuinely complex environments, but “phased” still means moving forward. Otherwise you end up protecting legacy paths so thoroughly that nothing new ever has the chance to work under real conditions.

The metrics can also start to get misleading if you’re not careful. It’s easy to point at containment, deflection, or even CSAT and convince yourself things are trending in the right direction, but those numbers don’t mean much when they’re based on a small slice of traffic. An AI that looks “successful” while only handling a fraction of volume isn’t actually carrying any weight yet. The harder (and more honest) question is: what happens when more customers are routed through it by default? What breaks, where does friction show up, and how quickly does the experience improve once you start paying attention to those signals? Volume is both a load test and a feedback mechanism. Without it, you’re optimizing in a vacuum.
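To make that dilution concrete, here’s a tiny back-of-the-envelope sketch. The numbers are invented for illustration, not pulled from any real deployment: containment only describes the conversations the AI sees, so the figure that actually matters is containment weighted by how much of your total volume reaches the AI at all.

```python
# Illustrative only: invented numbers, not from any real deployment.

def effective_resolution_share(containment_rate: float, ai_traffic_share: float) -> float:
    """Fraction of *total* inbound volume the AI actually resolves.

    containment_rate: share of AI-handled conversations resolved without a human
    ai_traffic_share: share of all inbound volume that reaches the AI at all
    """
    return containment_rate * ai_traffic_share

# A "good" dashboard number: 80% containment...
# ...but only 10% of customers ever touch the AI, because the ticket
# form and the email address are still sitting right next to it.
print(effective_resolution_share(0.80, 0.10))  # ≈ 0.08 -> 8% of total volume

# Same AI as the front door, now seeing 70% of inbound volume. Even if
# containment dips while it absorbs messier questions, it carries far
# more real weight.
print(effective_resolution_share(0.65, 0.70))  # ≈ 0.455 -> ~45% of total volume
```

The exact figures will vary wildly by product and audience; the point is only that the second factor, traffic share, is the one a “just another channel” rollout quietly caps.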

Working closely with your vendor and your peers matters at this stage. They don’t have some secret answer, but they’ve seen the same avoidance patterns play out before. The most productive conversations are about what worked and what broke when volume increased, which assumptions didn’t hold up, and where teams underestimated the effort required to change behavior. Talking to other CX leaders who’ve been through this can be sobering in the best way! You realize the mistakes you’re nervous about making are the same ones everyone makes when they try to modernize without committing. The difference isn’t who had better tools; it’s who made clear decisions and followed through when the first version didn’t work perfectly.

The phrase “CX is everyone’s responsibility” sounds good until you realize how easily it becomes a way for no one to take ownership. When AI is just another channel, that ambiguity is pretty easy to live with. When it’s the front door, it isn’t. Someone has to own the experience design, someone has to own the content, someone has to own the system’s ongoing health, and someone has to own what happens when customers don’t get what they need. That doesn’t mean one person carries the entire burden, it means the work is shared deliberately, instead of just diffused. The teams that see real outcomes aren’t magically better aligned, they’re just clearer about who is responsible for which parts of the experience once customers are actually being routed through it.

Ultimately, This Is a Leadership Choice

At a certain point, the hesitation stops being about customers or technology and starts being about leadership. Redesigning how people enter your support experience forces you to make decisions that can’t be undone without someone noticing, and that’s uncomfortable in organizations that are used to optional change. It’s easier to add something than to remove something, or to say you’re experimenting rather than to say you’re committing. But AI only starts to have a strong, positive impact when you’re willing to narrow the path and stand behind it. If nothing about your support experience truly changes after you “launch AI,” it’s worth asking whether the goal was transformation in the first place, or just the comfort of saying you tried.
