Alex Delgado
Senior Product Designer @ Amazon
Leveraging intent matching to automate and accelerate support associate case handling, and building the foundation of an external assistant on top of successful intent mapping
The outcome
We improved agent first contact resolution (FCR) by 10.4%, reduced average handle time (AHT) by 10.1%, and achieved a 95% resolution rate.
The problem
Our advertiser support team uses Paragon for communication and document storage, along with more than 40 other tools to diagnose issues and provide support. Following multi-step processes requires them to navigate various Amazon-owned tools, deal with browser incompatibilities, and copy and paste data between tools. This leads to longer wait times and a higher risk of human error, hurting the team's average handle time and first contact resolution rate.
The opportunity
With this problem in mind, there was an opportunity to simplify how agents interact with diagnostic tools. We explored and iterated on several ideas before landing on our final solution: an assistant. Along the way, I also identified that we could build an external assistant in tandem to further reduce time to resolution.
Talking to our associates
We started with multiple sessions with our support associates, having them share their screens, take cases as they came in, and talk us through their process and thinking. We immediately validated our goals by watching associates open 10+ browser tabs to troubleshoot cases. We also observed a higher rate of human error in case handling, as associates had to juggle multiple tasks at once.

Having a wealth of data allowed us to move fast through our validation research and dive directly into designing our solutions.
Content strategy first
Before diving into designs, we started on the content strategy for our virtual assistant, aligning on tenets to frame its content and responses.
  • Maintain Transparency: The assistant uses plain language and a casual tone to sound human, but never poses as a real person.
  • Engage the User: The assistant speaks to advertisers and associates in first person using active voice and concise sentence structure. They are energetic, but not over-spirited, and direct, but not dry.
  • Keep it Moving: When the assistant doesn’t understand an advertiser’s response, they rephrase and ask for clarification. If they still don’t understand, the conversation is marked for triage later.
  • Reduce Cognitive Load: If the assistant needs to present more than one piece of information, or information longer than four lines of text, they chunk the response into several consecutive messages.
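The chunking tenet above can be sketched in a few lines. This is only an illustration of the rule, not the production implementation; the four-line limit comes from the tenet, while the helper name is my own.

```python
MAX_LINES_PER_MESSAGE = 4  # tenet: never send more than four lines at once


def chunk_response(text: str, max_lines: int = MAX_LINES_PER_MESSAGE) -> list[str]:
    """Split a long response into consecutive messages of at most max_lines each."""
    lines = text.splitlines()
    return [
        "\n".join(lines[i:i + max_lines])
        for i in range(0, len(lines), max_lines)
    ]
```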
Simplifying workflows
Now that we had the content strategy figured out, we moved on to simplifying the multitude of steps agents needed to take to resolve a case. We started by selecting intents based on the information we gathered from advertisers, but that approach still left associates completing manual tasks.

We landed on an automated process that shows our agents the workflow in a centralized location. This would also query the SOPs for best next steps and guide associates on the path they needed to take.
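Conceptually, the guidance layer maps a resolved intent to its SOP's next steps. The intent names and steps below are invented for illustration; in production these were pulled from our internal SOP repository.

```python
# Hypothetical intent -> SOP workflow mapping (illustrative data only).
SOP_STEPS = {
    "billing_discrepancy": [
        "Verify the invoice in the billing console",
        "Compare charges against campaign spend",
        "Issue a correction or escalate to billing",
    ],
    "ad_rejected": [
        "Pull the moderation result for the ad",
        "Share the relevant policy with the advertiser",
        "Guide the advertiser through resubmission",
    ],
}


def next_steps(intent: str) -> list[str]:
    """Return the guided workflow for an intent, or flag the case for triage."""
    return SOP_STEPS.get(intent, ["No SOP found: route case to manual triage"])
```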

Launching this internal assistant decreased our AHT and improved our FCR.
Facing outwards to our advertisers
With a successful associate-facing assistant, we moved on to bringing this experience to our advertisers. We started by replacing our contact-us page with an assistant that collects advertiser intent, then built out all of the intents in Amazon Lex with corresponding responses. This was only temporary, allowing us to use these intents and mappings to train a model in Bedrock on our content. We used the internal virtual assistant to train intents so that our advertisers would experience minimal failures.

With any virtual assistant, we needed to be confident in our ability to fail gracefully. If we couldn't get the advertiser the answer they needed, or at least a best guess, we needed to seamlessly transfer them to an agent.
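The graceful-failure rule combines with the "Keep it Moving" tenet: rephrase once, then hand off to a human. The threshold values and function name here are assumptions for illustration, not the production logic.

```python
CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff for acting on a matched intent
MAX_CLARIFICATIONS = 1      # tenet: rephrase and ask once before escalating


def next_action(confidence: float, clarifications_asked: int) -> str:
    """Decide whether to answer, ask for clarification, or transfer to an agent."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return "answer"
    if clarifications_asked < MAX_CLARIFICATIONS:
        return "rephrase_and_clarify"
    return "transfer_to_agent"  # fail gracefully: seamless human handoff
```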
Could we be more proactive?
Having a virtual assistant that provides answers when needed is a valuable tool for our advertisers. But what if we could preempt them and know when they might need help before they even do? Or what if we could provide recommendations and suggest content proactively?

This question led us into the final stage of this project: providing proactive, in-UI help and suggestions. With models built to understand whether the products an advertiser selected would perform well, along with data trends drawn from their campaign performance, we were able to surface this proactive help.
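A minimal sketch of one proactive trigger: if a campaign metric declines for several consecutive days, surface help in the UI. The window size and the simple consecutive-decline check are assumptions for illustration; the production models were considerably more sophisticated.

```python
DECLINE_WINDOW = 3  # assumed: consecutive down days before we intervene


def should_offer_help(daily_clicks: list[int], window: int = DECLINE_WINDOW) -> bool:
    """True if the last `window` day-over-day changes are all declines."""
    if len(daily_clicks) < window + 1:
        return False
    recent = daily_clicks[-(window + 1):]
    return all(b < a for a, b in zip(recent, recent[1:]))
```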
Final thoughts
After everything launched, we had improved agent FCR by 10.4%, reduced AHT by 10.1%, and achieved a 95% resolution rate.
The whole process of learning about AI and different models has been illuminating, and I look forward to continuing to improve these experiences and seeing how the models adapt to our advertisers!
