BLVD

Voice AI Receptionist

Small team, tight timeline, extreme ownership. No role boundaries, no egos. Just everyone doing whatever it takes to ship. That's exactly how I like to work.

2025
Goal Ship a live pilot with real customers
Role Product Design · Conversation Design · Customer Research
Team Product Manager, Product Designer, Engineer
The Problem

Stylists build careers.
Front desk is a job.

Everyone assumes this is about saving time. Fewer missed calls, faster bookings. True, but secondary. The real problem is turnover. It takes a month to train someone who's gone in six. AI doesn't quit. Salons are skeptical by default. We had to prove it could work before anyone would trust it.

Gameplan
01

Find one customer willing to try it

Build with them, launch, and watch how their clients react.

02

Use their testimonial to bring in the next

Expand niche by niche until we have enough signal to scale.

03

General availability

Once we know it works reliably across verticals and edge cases.

The nuts and bolts of the Voice AI Receptionist: prompts, tool calls, and the agent configuration behind the conversations.

How we got there

Five areas,
start to finish

I was involved in each of these. Some more than others. None where I was just watching from the sidelines.

01

Customer Recruitment and Research

We recruited pilot customers from day one so we could reach them anytime: questions, prototype tests, gut checks on direction. Talking to front desk workers directly shaped the conversation design in concrete ways.

Instead of asking a caller what time and date they want, we tell them the first available slot. In busy salons with no openings for weeks, that one change makes the whole call feel less like a dead end.

02

Building the Agent

Single prompt? Multi-agent? Conversational flow? We talked to vendors and consultants before picking an approach. Then we hit the first real constraint: our booking API was built for visual flows.

Select service, select date, select time. Clean on a screen, useless on a phone call. Someone says "do you have anything Monday or Friday after 3 with Matt?" and the whole structure breaks. The engineer had to rework the API and backend. That left Indra and me to handle prompt engineering and tool calls.
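The shape of that constraint is easy to sketch. This is a hedged illustration, not our actual schema: the reworked API has to answer one flexible query instead of three sequential picks. Every name below (`find_availability`, the slot fields) is hypothetical.

```python
from datetime import datetime, time

def find_availability(slots, service, provider=None, days=None, earliest=None):
    """Filter a flat list of open slots by every constraint the caller
    gave at once: "anything Monday or Friday after 3 with Matt"."""
    def ok(slot):
        return (
            slot["service"] == service
            and (provider is None or slot["provider"] == provider)
            and (days is None or slot["start"].strftime("%A").lower() in days)
            and (earliest is None
                 or slot["start"].time() >= time.fromisoformat(earliest))
        )
    return sorted((s for s in slots if ok(s)), key=lambda s: s["start"])

# One call answers the caller's whole question.
slots = [
    {"service": "cut", "provider": "Matt", "start": datetime(2025, 6, 2, 10)},  # Mon 10am
    {"service": "cut", "provider": "Matt", "start": datetime(2025, 6, 2, 16)},  # Mon 4pm
    {"service": "cut", "provider": "Ana",  "start": datetime(2025, 6, 6, 16)},  # Fri 4pm
]
matches = find_availability(slots, "cut", provider="Matt",
                            days=["monday", "friday"], earliest="15:00")
```

The stepwise screen flow would force three round trips and a dead end on the first empty day; the flat query returns the best matches in one pass.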

03

Scoping MVP

The plan was minimal: book the appointment, or transfer to a human. Then we showed the demo to our pilot group and scope started creeping. Most customers wanted to collect a card on file for late cancellations.

We added a flow that texts a secure link instead of taking card details over the phone. That unlocked significantly more eligible customers. We held the line elsewhere. No in-app experience yet. I built an n8n-to-Airtable automation to give customers a basic call log they could actually read.
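The actual automation lived in n8n, but the mapping step it performed is simple to show. Here is an equivalent sketch in Python against Airtable's standard records endpoint; the base ID, table name, and field names are placeholders, not ours.

```python
import json
import urllib.request

# Placeholder base/table. Airtable's create-records endpoint accepts
# {"records": [{"fields": {...}}, ...]} with a bearer token.
AIRTABLE_URL = "https://api.airtable.com/v0/BASE_ID/Call%20Log"

def call_record(call):
    """Map one agent call event to a row the front desk can actually read."""
    return {
        "fields": {
            "Date": call["started_at"],
            "Phone": call["caller"],
            "Summary": call["summary"],
            "Status": call["status"],  # completed | transferred | needs callback
        }
    }

def push_calls(calls, api_key):
    """POST a batch of call records to the Airtable table."""
    payload = json.dumps({"records": [call_record(c) for c in calls]}).encode()
    req = urllib.request.Request(
        AIRTABLE_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

The point of the mapping is the same as the dashboard's: four readable fields, no transcript dump.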

04

Testing

Demoing an AI agent is easy. Making it work reliably is not. We ran a remote usability panel. Turns out you can submit a phone number instead of a prototype link, and people will just call it.

Real inputs, real background noise, things we never would have scripted. Iterated on the agent, ran more sessions, and launched our first pilot customer.

05

Evals

Manual call analysis worked but didn't scale. I led the eval process. Two-sided approach: top-down, measuring whether user intent was achieved; bottom-up, tracking the specific failure patterns we'd already found (wrong service, wrong time, wrong provider).
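As a sketch of what that two-sided split looks like in practice (the field names and record shape are mine, not the production evals):

```python
# Bottom-up: named checks for failure patterns we'd already seen in real calls.
# Each check compares what the AI booked against what the caller asked for.
FAILURE_CHECKS = {
    "wrong_service":  lambda c: c["booked"]["service"]  != c["requested"]["service"],
    "wrong_time":     lambda c: c["booked"]["time"]     != c["requested"]["time"],
    "wrong_provider": lambda c: c["booked"]["provider"] != c["requested"]["provider"],
}

def evaluate(calls):
    """Top-down: share of calls where the caller's intent was achieved.
    Bottom-up: count of each known failure pattern among booked calls."""
    intent = sum(c["intent_achieved"] for c in calls) / len(calls)
    failures = {
        name: sum(check(c) for c in calls if c["booked"])
        for name, check in FAILURE_CHECKS.items()
    }
    return {"intent_achieved": intent, "failures": failures}
```

The top-down number tells you whether the agent is getting better overall; the bottom-up counts tell you which prompt or tool call to fix next.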

I also managed the relationship with our testing and monitoring software vendor and onboarded the engineering team to their tooling.

Andrey Gargul

Want more details? I can walk you through the decisions, the dead ends, and what I'd do differently. Book a call.

Get in touch

The team doubled. New customer cohorts every two weeks. Time to retire the Airtable dashboard and build a real in-app experience.

In-App Experience

From Airtable
to inbox

Customers needed to know three things: what calls the AI picked up, what happened, and whether they needed to do anything. That was my Airtable dashboard: a call log with date, time, phone number, a summary, and a status (transferred, completed, or needs callback).

My first move was cleaning it up and sending it to our customer success team. They've worked salon jobs, so I use them as a quick gut check. They loved it. Mostly. One piece of feedback didn't sit right: if a staff member called the client back, they wanted to update the call status to completed.

They didn't care about the call status. They cared about whether the issue was resolved. Issues can take multiple calls. And not just calls. We were already texting clients secure links to complete bookings. The whole interaction spanned channels.

So instead of Calls, I shifted focus to Conversations. Conversation status maps to issue status far more cleanly. And we already had a home for them: the Messages inbox. Close a thread when you're done.
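The shift is easy to see as a data model. A hypothetical sketch, not the production schema: status hangs off the conversation, and calls, texts, and booking links are just events inside it.

```python
from dataclasses import dataclass, field
from enum import Enum

class Channel(Enum):
    TEXT = "text"
    CALL = "call"
    BOOKING_LINK = "booking_link"

class Status(Enum):
    OPEN = "open"
    CLOSED = "closed"

@dataclass
class Event:
    channel: Channel
    summary: str

@dataclass
class Conversation:
    """One thread per client issue; status tracks the issue, not any single call."""
    client_phone: str
    events: list = field(default_factory=list)
    status: Status = Status.OPEN

    def add(self, event: Event):
        self.events.append(event)

    def close(self):
        """Close the thread once the issue is resolved, however many calls it took."""
        self.status = Status.CLOSED

# Starts as a text, continues on a call, ends with a link to complete the booking.
convo = Conversation(client_phone="+15555550100")
convo.add(Event(Channel.TEXT, "Client asks to rebook"))
convo.add(Event(Channel.CALL, "AI finds a slot, texts a secure link"))
convo.add(Event(Channel.BOOKING_LINK, "Client completes the booking"))
convo.close()
```

A per-call status field can't represent this thread at all; a per-conversation status closes it in one move.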

One conversation across channels: starts as a text, continues on a call, ends with a link to complete the booking.

Design Details

That introduced a new problem. The inbox is a busy place, and my focus group worried AI actions would get lost in it. I needed to make the AI's presence explicitly visible without cluttering the experience.

Agent Actions

Explicitly show what action the agent took after the call: booked, transferred, or flagged.

Participant Avatars

Show everyone who participated in a conversation, including the AI receptionist.

Filters

Quickly surface conversations that need attention, without scanning the entire inbox.

Prototype

See it for yourself

It's the front-end, not the full agent. Click around, open a conversation. See if you like it.

Check it out
Call History

Once the team and pilot customers saw the potential, they got excited about the multichannel inbox. Some even started questioning whether we needed Call History at all.

It does a few things better than the inbox, though. You can see at a glance every call the AI picked up, and for customers just starting with AI tools, that visibility builds trust. We're also thinking about future outbound calls that won't fit naturally into a conversation inbox.

Dead ends included

Case studies show the path you picked. Not the three you tried first. Every decision here had versions that almost shipped. That part doesn't make the write-up, but it's where most of the thinking happened.

Call History is a good example. I stripped statuses and moved them to conversation threads. Right call. But I kept worrying customers would anchor on the summary and miss the full context behind it. First fix: make the whole row clickable, linking to the conversation. Slightly off. You click a call record and land in a message thread. Small mismatch. I call it a "bad smell": not broken, just structurally wrong. Two calls pointing to the same conversation made it obvious. Fixed with a separate, explicit link. A few UI passes to make it clean.

The in-app call experience is a slice of the project. Alongside it: a manager-facing setup flow, visibility changes across calendar and appointments so staff aren't flying blind, and a reports section for businesses tracking ROI. All built for general availability.

Results
48%

of calls handled without any human involvement

87%

adherence rate: the AI performing tasks correctly

2 weeks

new customer cohort launch cadence, building toward general availability

Next Case Study

Next-Gen Booking Experience

Redesigning how salons book appointments, from first tap to confirmed.

Andrey Gargul

Let's chat

Product designer, 12 years in vertical SaaS. Now doing it with AI in the mix. Always looking for the next interesting problem.

Email me · Reach out on LinkedIn