The AI Isn't Broken. The Collaboration Is.
When conversations with AI go wrong, we blame the model. But after years of designing these systems, I've started to suspect the real failure isn't intelligence at all. It's that nobody designed for how humans actually communicate. Anthropic's latest AI Fluency research is starting to confirm what practitioners have been seeing all along.
AI FLUENCY · PROMPT ENGINEERING · AI FUNDAMENTALS
Christi Akinwumi
Why we keep blaming the model when the real failure is design.
A user opens a chatbot on a website to do something simple and starts typing a question...
But they don't type it all at once. They hit enter after a few words.
Then they type more. Hit enter again.
They're texting the way they text everyone else in their life. Short bursts.
Thoughts broken across multiple sends, with long trailing ellipses..........
The system doesn't wait. It treats every enter key as a complete turn.
So it starts responding to a half-finished thought. The user sees a response that doesn't match what they were trying to say. They try again. The AI responds to that fragment too. Now the conversation is out of sync.
The user leaves thinking the AI is stupid.
But the AI wasn't stupid. It responded to exactly what it received. And the user wasn't doing anything wrong. They were communicating the way humans naturally communicate in a chat interface.
So if the AI wasn't broken and the user wasn't broken, what was?
The collaboration was. Nobody designed for it.
Designing for an imaginary user
I've spent the past five years building conversational AI systems that serve global users across eight international markets. I've designed intent taxonomies, fallback strategies, multi-agent routing architectures, and the quiet, invisible logic that determines what a system says when it doesn't know what you mean.
The most consistent failure I've seen is not a model problem or a data problem. It's an assumption.
We tend to design AI conversations for a happy-path user who reads the instructions, types in complete sentences, provides all the context upfront, and never needs to be taught how to interact with the system.
That user does not exist.
Real users text in fragments. They change their mind mid-sentence. They ask three things at once. They assume the system remembers what they said yesterday. They don't read the onboarding tooltip. They don't specify what format they want.
They just start talking. Because that's what humans do.
And when the system can't handle that, we diagnose it as an AI model problem. We say the model needs to be smarter. The retrieval needs to be better. The prompts need to be tighter.
We almost never say: the collaboration wasn't designed.
What AI fluency actually means
Anthropic recently published something that gave language to a pattern I've been watching for years.
Their AI Fluency Index tracks 11 behaviors that represent effective human-AI collaboration. Things like clarifying goals, iterating on responses, questioning reasoning, and identifying missing context. The framework was built on the 4D AI Fluency Framework developed by Professors Rick Dakan and Joseph Feller in collaboration with Anthropic.
Two findings stood out to me.
People who iterate get dramatically better results
85.7% of productive conversations involved iteration and refinement. Users who treated the AI's first response as a starting point, not a final answer, were 5.6 times more likely to question the AI's reasoning and 4 times more likely to catch missing context.
The single strongest predictor of using AI well is not accepting the first thing it gives you.
The better AI looks, the less we question it
When the AI produced polished outputs like code, documents, or apps, users became less critical. They were less likely to check facts. Less likely to question reasoning. Less likely to notice gaps.
The more competent the AI appears, the less competent the human becomes at evaluating it.
That's not a user failure. That's a design failure. We built systems that look confident, and then we're surprised when users don't question them.
[Image: the inverse relationship between output polish and user scrutiny]
The gap no one is designing for
Here's what the fluency research also revealed: only 30% of users ever tell the AI how they want to work together. They don't say "push back on my assumptions" or "tell me what you're uncertain about" or "ask me clarifying questions before you answer."
70% of people walk into an AI conversation with no agreement about how the collaboration should work. No shared expectations. No structure. Just a blinking cursor and a hope that the system figures it out.
Think about any other professional collaboration.
A new employee starts a job? They get onboarding. A patient sees a doctor? There's an intake process. A student enters a classroom? There's a syllabus.
But a user opens a chatbot and gets... nothing. We drop them into a conversation with a system that has no context about their communication style, their expectations, their expertise level, or their goal.
And we expect the interaction to just work.
The gap between how humans naturally behave and how AI systems expect them to behave is not a training problem. It's a design problem. And it's the designer's job to close it.
What education taught me about AI design
Once upon a time, I wanted to become a classroom teacher myself. During my graduate program, I learned the foundational principle of that field, and it is simple and unforgiving:
If the learner isn't learning, the problem isn't the learner. It's the instruction.
You don't tell a student with a processing difference to "just pay attention." You redesign the environment. You meet them where they are. You build scaffolding that supports the way they actually think, not the way you wish they thought.
The same principle applies to AI collaboration.
When a user sends fragmented messages and the system can't handle it? That's not a user education problem. The system should be built to detect incomplete input and wait, or prompt, or clarify.
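As a sketch of what that could look like in practice: the snippet below buffers rapid-fire sends and only hands the model a turn once the user has paused. The two-second threshold, the TurnBuffer name, and the handle_turn callback are all illustrative assumptions, not a production recipe.

```python
import asyncio

PAUSE_SECONDS = 2.0  # assumed "the user is done typing" threshold

class TurnBuffer:
    """Buffers message fragments and flushes them as one turn after a pause."""

    def __init__(self, handle_turn):
        self.fragments: list[str] = []
        self.handle_turn = handle_turn    # async callback; receives the stitched turn
        self._flush_task: asyncio.Task | None = None

    async def on_message(self, text: str) -> None:
        """Called on every send; each new fragment resets the pause timer."""
        self.fragments.append(text)
        if self._flush_task:
            self._flush_task.cancel()     # user is still typing; keep waiting
        self._flush_task = asyncio.create_task(self._flush_after_pause())

    async def _flush_after_pause(self) -> None:
        await asyncio.sleep(PAUSE_SECONDS)
        turn = " ".join(self.fragments)   # stitch fragments into one turn
        self.fragments.clear()
        await self.handle_turn(turn)
```

A real system would layer heuristics on top, like letting a trailing ellipsis or a dangling conjunction extend the wait. But the principle stands: the turn boundary is a design decision, not the enter key.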
When a user gets a polished response and stops questioning it? That's not a critical thinking failure. The system should surface its own uncertainty and invite scrutiny.
When a user walks in with no context and no collaboration framework? That's not a literacy gap. The system should establish the terms of the interaction, not assume the user arrives fluent.
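To make that concrete, here's a minimal sketch of baking collaboration terms into a system prompt, so the user doesn't have to know to ask for them. The wording and the build_messages helper are hypothetical, not any particular vendor's API.

```python
# Illustrative defaults; the exact norms should come from your own research.
COLLABORATION_TERMS = """\
You are collaborating with a user who may not know how to work with you.
- Before answering an ambiguous request, ask one clarifying question.
- Flag anything you are uncertain about instead of sounding confident.
- Treat your first draft as a starting point and invite the user to iterate.
- If the user sends a fragment, ask whether they were finished typing.
"""

def build_messages(user_turn: str) -> list[dict]:
    """Assemble a chat request with the collaboration terms baked in."""
    return [
        {"role": "system", "content": COLLABORATION_TERMS},
        {"role": "user", "content": user_turn},
    ]
```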
We don't need users who are better at talking to AI. We need AI that's better at collaborating with humans as they actually are.
What this means for people who build AI
If you design conversational systems, AI agents, or any product where a human interacts with a model, here's what I'd challenge you to rethink.
Stop just designing for the ideal user.
Design for the person who texts in fragments. Who doesn't read instructions. Who assumes the system knows more than it does.
That's your actual user. If your system breaks when humans behave like humans, the system is broken.
Design for the collaboration, not just the output.
Most AI product work focuses on what the system says. Very little focuses on how the system and the user establish a working relationship. The first few turns of any conversation should set the terms. If you're not designing for that, you're leaving the most important part of the interaction to chance.
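One way to design those opening turns, sketched below with hypothetical copy and option keys: ask the user how they want to work, then fold their answer into the instructions for the rest of the session.

```python
# The opening message and style mappings are illustrative assumptions.
OPENING_TURN = (
    "Before we dive in, how should we work together?\n"
    "1. Answer fast; I'll refine as we go.\n"
    "2. Ask me clarifying questions before answering.\n"
    "3. Show your reasoning and flag what you're unsure about."
)

WORKING_STYLES = {
    "1": "Answer directly. The user prefers to iterate on drafts.",
    "2": "Always ask one clarifying question before answering.",
    "3": "Explain reasoning and surface uncertainty explicitly.",
}

def style_instruction(choice: str) -> str:
    """Map the user's pick to an instruction appended to the system prompt."""
    return WORKING_STYLES.get(choice, WORKING_STYLES["2"])
```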
Build scaffolding, not barriers.
Instead of expecting users to arrive fluent, design systems that teach fluency in the moment. Prompt them to iterate. Invite them to question. Surface uncertainty. Make the system a partner in developing the skills the user needs, not a tool that only works for users who already have them.
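As a small sketch of what that scaffolding might look like: each response carries one rotating nudge that models the fluency behaviors Anthropic's research highlights, iterating, questioning reasoning, and supplying missing context. The nudge copy is an assumption for illustration.

```python
import random

NUDGES = [
    "Not quite right? Tell me what to change and I'll revise it.",
    "Want me to explain how I got here, or what I'm unsure about?",
    "If I'm missing context (audience, format, constraints), add it and I'll redo this.",
]

def scaffold(response_text: str) -> str:
    """Append a rotating nudge that invites iteration and scrutiny."""
    return f"{response_text}\n\n{random.choice(NUDGES)}"
```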
Related reading: Anthropic's research on how AI assistance impacts the formation of coding skills explores a similar dynamic in software development.
The real question
The AI isn't broken. It's doing exactly what it was designed to do.
The question is whether we designed it to collaborate with real humans or with the idealized, instruction-reading, context-providing, perfectly articulate humans who exist only in our product specs.
Because those users aren't coming. The real ones are already here. They're texting in fragments, skipping the onboarding, and trusting polished outputs at face value.
The question isn't whether they'll learn to use AI better.
The question is whether we'll design AI that meets them where they are.
Christi Akinwumi is a Senior Conversation Designer and the Principal & Founder of Intelligent CX Consulting, LLC, with several years of experience building conversational AI systems for millions of users. Her background in hospitality, curriculum design, and special education informs her approach to building AI that works for people who just need to get something done.
Connect with her on LinkedIn or explore her work at christi.io.