Conversation design for AI chatbot
How I trained an LLM-enabled chatbot to automate customer support and reduce user tickets by 85% in 3 months.
Client: CARS24 Partners Pvt. Ltd.
My role: Conversation designer, UX writer
Team: Project Manager, Senior Product Designer, UI Designer, Gen AI Engineers, API developers, Illustrator
Tools used: Figma, FigJam, Google Docs, Excel, Jira, AIRO, ChatGPT
CARS24 Partners is a B2B platform that helps 15,000+ dealers across India buy, sell, finance, and manage used cars.
CARS24 receives approximately 7,500 support tickets every month from car dealers in the used car industry. Tickets often cover common topics such as process status, how-tos, or general information. To reduce this load, we created a self-serve UX model so users can access all relevant information themselves, in their native language.
For this case study, I’ve detailed my process for one of the critical journeys: account closure.
PROTOTYPE
A chatbot that is contextually aware, multilingual, and can code-switch effortlessly.
⬆️ 120 accounts closed (in 60 days) | ⬇️ 85% ticket reduction | ⬆️ 50% automation | ⬇️ 7s avg. latency
IDEATION & RESEARCH
Leveraging LLM and our databases to reduce the monthly user support tickets
As mentioned, CARS24 receives about 7,500 support tickets every month. 90% of these tickets fall into one of three problem buckets that we identified. After multiple brainstorming sessions, we solidified our problem statements.
1/3 THE PROBLEM
What are the friction points for which users are raising tickets?
Insight: As suspected, most tickets were raised for a few repeat friction points in the journey. I found this after reading 300+ user tickets and analysing their concerns. I was able to track and further categorise the nature of these queries. Later, we would train the AI chatbot accordingly.
2/3 THE PROBLEM
Are user queries getting resolved on time? Can we optimise it?
Insight: Most users raised tickets at different levels of their journey. These tickets were being segregated manually and redirected to a specific operator for issue resolution. Most of these conversation threads would last for up to 10 days.
With AI, we can automate the segregation process and cut the time to resolution.
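The automated segregation described above can be sketched as a simple intent router that sends each ticket to the right queue before a human ever touches it. The category names and keywords below are illustrative assumptions, not CARS24's actual taxonomy:

```python
# Hypothetical sketch of automated ticket segregation: route an incoming
# ticket to a queue based on matched intent keywords. In production this
# step would be handled by an LLM or trained classifier; keyword matching
# stands in here to show the routing logic.

INTENT_KEYWORDS = {
    "process_status": ["status", "update", "pending", "when will"],
    "how_to": ["how do i", "how to", "steps", "guide"],
    "account": ["close my account", "closure", "deactivate"],
}

def route_ticket(text: str) -> str:
    """Return the first matching queue, else fall back to a human agent."""
    lowered = text.lower()
    for queue, keywords in INTENT_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return queue
    return "human_agent"
```

Anything the router can't classify still reaches a human, so automation never blocks a dealer from getting help.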
3/3 THE PROBLEM
How do we break the language barriers to make information more accessible and better understood?
Insight: CARS24 uses British English for all communication. However, most of our user base is of a low-literacy segment that struggles with understanding instructions in standard English.
The answer was Easy English: a simplified form of English written at about a Grade 3–4 reading level, using limited vocabulary and short, clear sentences to ensure accessibility for people with low literacy or cognitive challenges.
TIME TO SET THE GOALS
Our plan, for the next 6 months, focused on process automation and LLM training
📊 Reduce monthly support tickets by 80%.
📊 Reduce manpower in the user support team by 20%.
📊 Improve confidence metric and dealer's trust.
✏️ Build conversation logics with conditional parameters and priority order.
✏️ Give the bot a relatable, hoo-man personality! ;)
✏️ Build an intuitive chat UI.
✏️ Test with real users; tweak, fine-tune, & try again.
TARGET USERS
Who are my users? And, what do they want?
CONVERSATION DESIGN
To flow smoothly, conversations need context and... logic.
Paul Grice introduced four maxims for a fruitful conversation between two parties:
➡️ Quality: All info must be true. We fetched all data via API calls for authentic & updated data.
➡️ Quantity: The amount of info should be just right. I used prompts to control the token lengths.
➡️ Relation: Info should stay relevant to the topic. Content design guardrails provided a layer of relevance.
➡️ Manner: All info must be clear, brief & orderly. I framed a voice & tone guideline to maintain this.
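The four maxims above can be encoded directly into the prompt layer. This is a minimal sketch with assumed wording, not the production prompt: quality comes from answering only with API-fetched data, quantity from a response cap, and relation and manner from explicit instructions:

```python
# Sketch of Grice's maxims as LLM guardrails (wording is illustrative).

MAX_RESPONSE_TOKENS = 150  # quantity: keep answers just long enough

SYSTEM_PROMPT = """You are a support assistant for used-car dealers.
- Quality: answer ONLY from the account data provided below; never guess.
- Quantity: keep replies under {max_tokens} tokens.
- Relation: stay on the user's current topic; decline unrelated queries.
- Manner: use short, clear sentences in Easy English (Grade 3-4 level).
"""

def build_prompt(account_data: dict) -> str:
    """Combine the guardrail instructions with fresh API-fetched facts."""
    facts = "\n".join(f"{k}: {v}" for k, v in account_data.items())
    return (SYSTEM_PROMPT.format(max_tokens=MAX_RESPONSE_TOKENS)
            + "\nAccount data:\n" + facts)
```

Because the facts are injected per request from API calls, the bot's answers stay current without retraining.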
1/2 CONVERSATION DESIGN
I built conversation flows covering all actions and dialogue units for a seamless, human-bot journey.
A conversation flow in live action!
2/2 CONVERSATION DESIGN
For logic, I built a priority-based framework to check closure criteria and guide the LLM in deciding whether an account could be closed. This taught the model to handle conditional scenarios with consistency.
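A priority-based framework like the one above can be sketched as an ordered list of rules, evaluated strictly in sequence, where the first failing rule yields the message the bot relays to the user. The rule names, fields, and messages below are illustrative assumptions:

```python
# Hedged sketch of priority-ordered closure checks. Rules run in strict
# priority order; the first failure explains why closure is blocked.

CLOSURE_RULES = [
    ("pending_dues", lambda acct: acct["dues"] == 0,
     "Please clear your pending dues first."),
    ("active_loan", lambda acct: not acct["active_loan"],
     "Your loan must be closed before the account."),
    ("open_transactions", lambda acct: acct["open_txns"] == 0,
     "Please complete your ongoing transactions first."),
]

def check_closure(acct: dict):
    """Return (True, None) if closable, else (False, reason) for the bot."""
    for name, passes, message in CLOSURE_RULES:
        if not passes(acct):
            return False, message
    return True, None
```

Keeping the rules deterministic and outside the LLM means the model never has to "decide" eligibility; it only phrases the outcome, which is what makes the conditional scenarios consistent.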
VOICE AND TONE
We wanted the bot to sound more...human. So, I wrote guardrails & guidelines to fine-tune the experience.
The guidelines stated the DOs & DON'Ts for the LLM, CARS24's standard terminology and word usage, as well as notes on how to handle error states (like API failures, fallbacks, or simply out-of-context queries).
These guidelines also reduced miscommunication across engineering pods and ensured tone alignment in 90% of LLM responses. This helped build trust in dealers, especially in low-literacy contexts.
1/3 VOICE AND TONE
Personality explorations helped me ideate who I wanted our chatbot to be. They also enabled me to lay out a structure for tonality.
2/3 VOICE AND TONE
To standardise jargon, abbreviations, and word usage, I built and documented a glossary that we fed to Mitra.
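A glossary like this can work as a simple term map applied to (or injected alongside) the bot's drafted replies. The entries below are invented examples of used-car jargon, not the actual CARS24 glossary:

```python
# Illustrative glossary sketch: map internal jargon and abbreviations to
# the standard, dealer-friendly terms the bot should use.

GLOSSARY = {
    "RC": "Registration Certificate (RC)",
    "NBFC": "Non-Banking Financial Company (NBFC)",
    "hypothecation": "loan hold on the car (hypothecation)",
}

def expand_terms(reply: str) -> str:
    """Replace raw jargon in a drafted reply with its glossary form.

    Naive substring replacement for illustration; a production version
    would match whole words and handle casing.
    """
    for term, standard in GLOSSARY.items():
        reply = reply.replace(term, standard)
    return reply
```

In practice the glossary can also be pasted into the system prompt so the LLM uses the expanded forms on its own.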
3/3 VOICE AND TONE
Prompt engineering helped me standardise tone, emojis, & code-switching.
WRITING IN ACTION
I used few-shotting and user data to 'teach' our tonality to Mitra, the bot.
Few-shot meant giving Mitra 2–3 writing samples to help it infer tone and intent more reliably. See all guidelines here.
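Few-shotting as described above amounts to prepending sample exchanges to every request so the model infers Mitra's tone and code-switching style. The two samples below are invented for illustration; the real ones live in the linked guidelines:

```python
# Minimal sketch of few-shot prompt assembly for tone transfer.
# Sample exchanges (including the Hinglish one) are illustrative only.

FEW_SHOT_EXAMPLES = [
    ("Mera account band karna hai",
     "Bilkul! 😊 Pehle check karte hain ki aapka account "
     "closure ke liye ready hai ya nahi."),
    ("How do I check my auction status?",
     "Sure! 🚗 Tap 'My Auctions' on the home screen to see "
     "live bids and results."),
]

def build_few_shot_prompt(user_message: str) -> str:
    """Assemble the sample exchanges plus the new user turn."""
    parts = [f"User: {user}\nMitra: {bot}" for user, bot in FEW_SHOT_EXAMPLES]
    parts.append(f"User: {user_message}\nMitra:")
    return "\n\n".join(parts)
```

Because the examples travel with every request, tone stays consistent without any model fine-tuning, and swapping a sample instantly updates the bot's voice.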
TESTING IT OUT
Real impact. Real stories
A report of user conversation threads from our servers.
TESTING IT OUT
User data analysis helped me to fine-tune our machinery constantly. And, check for hallucinations.
I did intensive user testing to understand user-bot conversations and tweak the copy or the prompts.
While we maintained the Roman script, the bot can hold conversations in 5 different languages, along with code-mixed variations like "Hinglish" (a mix of Hindi and English), Hindi-Punjabi, Hindi-Bihari, etc.
Prompt cards for maintaining tonality
We noted instances where users tried to bypass the LLM guardrails, or where the bot was unable to guide users towards a solution and tonality suffered.
I fixed this with journey-specific prompts.
TESTING WITH OUR USERS
As of October 2025, the process is 50% automated, with scope to become fully automated and reduce manpower further.
Additional metrics, like changes in latency and API restrictions, are beyond the scope of this case study.
LAST STOP
Building a chatbot is never truly finished. It's an iterative process that uncovers new user insights every day.
This was the most demanding (and exciting!) project of my UX writing career.
Over the next 3 months, I built logic flows, drafted conversation designs, and wrote tonality prompts for other user journeys. We tapped into onboarding, car documentation assistance, and the car auction processes to create and automate support for more complex user journeys.
As for now, we continue to iterate on our prompts to make Mitra sound more human. Next, I aim to refine Mitra’s contextual understanding by training it on partial Hindi intent triggers to reduce fallback loops. You can use the CARS24 Partners app to catch the bot in conversation.
Thanks for reading!
“There is no real ending. It’s just the place where you stop the story.”
― Frank Herbert