Introduction
In today's marketing landscape, we have an unprecedented number of ways to engage customers across both traditional and digital channels, and offering a unified experience across those channels is critical to delivering customer delight. Meanwhile, despite the rise of alternative options, email remains the preferred communication channel for customer support, largely because it provides several unique benefits: it maintains a history of interactions, lets customers explain issues comprehensively and supports file attachments for additional context.
Because of email's popularity, support teams must prioritize and queue a significant volume of customer messages, which often leads to delayed responses and poor customer experiences. To mitigate this risk, our GenAI-powered email EAR (extract, act and respond) solution transforms the customer support process by automating the reading and analysis of incoming emails and the generation of thoughtful responses.
Specifically, our system can extract the core query, complaint or issue within an email and then identify and summarize the actions required to meet the customer's needs. Finally, it generates a user-friendly, detailed response explaining the steps taken to address their questions or concerns.
Solution overview
Our EAR solution provides several key capabilities, including:
- Email context extraction: EAR's large language model (LLM) powers the natural language processing required to extract the question or request from the email, classify the email type and determine the sentiment and tone of the sender (a prompt sketch follows this list)
- Agent routing: LLM-powered agents route emails to their respective handlers for subsequent actions
- Actions: Our EAR solution performs the following actions based on the extracted context:
  - Corpus search/FAQs: Relevant answers are retrieved from the knowledge base using a semantic search based on the customer inquiry. This is enabled through a retrieval-augmented generation (RAG) framework
  - Transactional retrieval: If transactional data is required for the response, the system triggers a database search
  - Log ticket/service request: If the email requires a ticket or service request, the system automatically logs it and extracts details like ticket number and description
- Response generation: The LLM's natural language generation capabilities compose a response that incorporates the results of the actions above in a well-structured format tailored to the email's context, classification and sentiment
- Explainability: To enhance transparency, the system can explain the reasoning and data flow behind its actions using capabilities like the built-in ReAct framework
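To make the extraction step concrete, below is a minimal sketch of how an LLM might be prompted to return the query, classification and sentiment as structured JSON. The prompt wording, the field names (`query`, `category`, `sentiment`) and the `llm` callable are illustrative assumptions, not the solution's production prompt:

```python
import json

# Hypothetical extraction prompt; the real solution's prompt is not published.
EXTRACTION_PROMPT = """You are a customer-support assistant.
Given the email below, return only a JSON object with keys:
  "query"     - the core question, complaint or request
  "category"  - one of: faq, transactional, service_request
  "sentiment" - one of: positive, neutral, negative

Email:
{email_text}
"""

def extract_context(email_text: str, llm) -> dict:
    """Ask the LLM for a structured summary of the email.

    `llm` is any callable that takes a prompt string and returns the
    model's text completion (e.g., a Bedrock invocation).
    """
    raw = llm(EXTRACTION_PROMPT.format(email_text=email_text))
    return json.loads(raw)  # assumes the model emits valid JSON
```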
Below is the high-level process flow of the EAR solution (an orchestration sketch in Python follows the list):
- The customer sends an inquiry, question, complaint, etc., via email to the respective support email address
- The email extractor and parser extracts the text from the email body/attachment(s) and passes it to the fine-tuned LLM to generate context
- The LLM understands the email's context and the email text's overall sentiment. Based on the context, the LLM agent/action router triggers one of the following actions: corpus search/FAQs, transactional retrieval, or raising a service request ticket
- Once the agent receives a response from one of the above actions, it embeds the email context along with the response and sends it to the generative LLM responder
- The LLM generates the final response from all these inputs and triggers the action to send the response back to the customer
- The customer can review the response and, if dissatisfied with the model-generated email response, can submit feedback explaining issues with the response
- The original response, along with the feedback, goes to a human admin for review. The admin can further review/edit the response and submit it to improve future responses
- The admin can also review the generated action plan for each response, which provides insights into how different actions get triggered based on customer queries
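The flow above can be condensed into a short orchestration sketch. It reuses the hypothetical `extract_context` helper from the earlier sketch; the handler names and the response prompt are likewise illustrative assumptions:

```python
def handle_email(email_text: str, llm, actions: dict) -> str:
    """End-to-end EAR pipeline sketch: extract, act, respond.

    `actions` maps a category to its handler, e.g.
    {"faq": corpus_search, "transactional": fetch_transaction,
     "service_request": log_ticket} -- names are illustrative.
    """
    context = extract_context(email_text, llm)               # extract query, category, sentiment
    result = actions[context["category"]](context["query"])  # route to the matching action
    response_prompt = (
        "Write a polite, well-structured reply to the customer.\n"
        f"Customer query: {context['query']}\n"
        f"Sender sentiment: {context['sentiment']}\n"
        f"Action result: {result}\n"
    )
    return llm(response_prompt)                              # generate the final response
```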
Technical architecture
The GenAI email EAR solution is built using AWS native services and LangChain agents. Below are the key services and frameworks leveraged in the solution:
LLM via Amazon Bedrock: This solution leverages LLMs accessed through Amazon Bedrock, which hides the complexity of managing the underlying hardware and model deployments.
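For illustration, a text model can be invoked through Bedrock with a few lines of boto3. The model ID, region and request-body shape below follow the Claude v2 `invoke_model` contract and are examples only; the source does not name the specific model used:

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region

def invoke_llm(prompt: str) -> str:
    """Send a prompt to a Bedrock-hosted model and return its completion."""
    body = json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": 512,
        "temperature": 0.2,
    })
    response = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
    return json.loads(response["body"].read())["completion"]
```

A callable like `invoke_llm` can serve as the `llm` parameter in the sketches above.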
LangChain Agent Framework: LangChain is a framework for developing LLM-powered applications. More details are available in the LangChain documentation.
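A minimal agent sketch, assuming a LangChain version that still exposes the classic `initialize_agent`/`Tool` interface (newer releases relocate these APIs); the tool bodies are stubs standing in for the real Chroma, database and ticketing handlers:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import Bedrock

llm = Bedrock(model_id="anthropic.claude-v2")  # example model ID

# Stub tools; the descriptions tell the agent when to route to each action.
tools = [
    Tool(name="corpus_search", func=lambda q: "stub FAQ answer",
         description="Answer general questions from the knowledge base/FAQs."),
    Tool(name="transactional_retrieval", func=lambda q: "stub transaction record",
         description="Look up transactional data needed for the reply."),
    Tool(name="log_ticket", func=lambda q: "stub ticket details",
         description="Log a service-request ticket and return its details."),
]

# ZERO_SHOT_REACT_DESCRIPTION provides the ReAct-style reasoning noted earlier.
agent = initialize_agent(tools, llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Customer asks: what is the status of my refund?")
```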
AWS Lambda: AWS Lambda is a serverless, event-driven compute service. In this solution, it serves several purposes: extracting email content from the designated mailbox, parsing that content into the defined output format, reading from and writing to Amazon DynamoDB, and sending notifications and email responses back to the customer.
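A sketch of the extractor/parser Lambda, assuming the common pattern of an SES receipt rule writing the raw MIME message to S3 and the bucket notification invoking the function; the return shape is illustrative:

```python
import email
from email import policy

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Fetch an inbound email from S3 and parse it into a structured dict."""
    record = event["Records"][0]["s3"]  # standard S3 event notification shape
    obj = s3.get_object(Bucket=record["bucket"]["name"],
                        Key=record["object"]["key"])
    msg = email.message_from_bytes(obj["Body"].read(), policy=policy.default)

    body = msg.get_body(preferencelist=("plain", "html"))
    return {
        "subject": msg["subject"],
        "sender": msg["from"],
        "text": body.get_content() if body else "",
        "attachments": [part.get_filename() for part in msg.iter_attachments()],
    }
```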
Chroma DB on Amazon EC2: Chroma is an open-source embedding database. To support the corpus and FAQ search functionality via embeddings, we deployed Chroma on an Amazon EC2 instance for storing and retrieving documents through embedding search.
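A minimal sketch of the corpus/FAQ search against a remote Chroma server; the host, collection name and sample document are placeholders:

```python
import chromadb

# Example endpoint for a Chroma server running on the EC2 instance.
client = chromadb.HttpClient(host="chroma.internal.example.com", port=8000)
collection = client.get_or_create_collection("support_faqs")

# One-time ingestion: Chroma embeds documents with its default embedder.
collection.add(ids=["faq-1"],
               documents=["Example FAQ text about refund processing."])

# At query time, the customer question is embedded and matched semantically.
results = collection.query(query_texts=["When will I get my refund?"], n_results=3)
print(results["documents"][0])  # top-matching documents for the first query
```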
Application Layer — Streamlit: The GUI for the solution is built with Streamlit in Python. Streamlit offers a fast way to develop and share apps and provides controls for a smooth navigation experience. The app container is deployed in a microservice-based architecture using Amazon ECS clusters and AWS Fargate.
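A skeletal Streamlit page for the console described above; the title, widget labels and the stubbed pipeline call are hypothetical:

```python
import streamlit as st

st.title("EAR - Email Review Console")  # hypothetical page title

email_text = st.text_area("Incoming email")
if st.button("Generate draft response"):
    # In the real app this would call the EAR pipeline, e.g. handle_email(...).
    draft = f"[draft reply for: {email_text[:60]}...]"
    st.subheader("Draft response")
    st.write(draft)
    st.text_area("Admin feedback")  # feedback loop from the process flow above
```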
Mermaid for Action Flow Graphs: To present the action flow graphically, this solution leverages Mermaid, a JavaScript-based diagramming and charting tool that renders markdown-inspired text definitions to create and modify diagrams dynamically.
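Because Mermaid renders plain-text definitions, the action flow graph can be generated directly from the recorded steps. A small sketch with illustrative step names:

```python
def action_flow_to_mermaid(steps):
    """Render an ordered list of action steps as a Mermaid flowchart definition."""
    lines = ["graph TD"]
    for i, (src, dst) in enumerate(zip(steps, steps[1:])):
        lines.append(f'    n{i}["{src}"] --> n{i + 1}["{dst}"]')
    return "\n".join(lines)

# Illustrative steps; the real graph is built from the agent's logged actions.
print(action_flow_to_mermaid(["Extract context", "Corpus search",
                              "Generate response"]))
```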
Industrial adoption
The goal of this solution is to improve overall customer experience by helping customers better understand the products and services an organization provides. The solution can potentially be adopted by organizations in different industries in the following ways:
| Industry | Solution Adoption |
|---|---|
| Financial | Can assist in answering questions about financial products and loan eligibility, as well as provide product recommendations based on email context |
| Education | Can help students get responses to queries related to the admission process, scholarship opportunities, enrollment procedures, etc. |
| Healthcare / Insurance | Can aid customers in understanding health insurance eligibility, claim processing status, etc. |
| Retail | Questions pertaining to product exchanges/returns can be managed more effectively |
Conclusion
This solution demonstrates how the generative power of large language models can be harnessed in an actionable workflow. Integrating Amazon Bedrock with LangChain, vector databases and related services enables fast, scalable LLM-based applications, and the design shows how queries received via email can be addressed in a timely, well-crafted manner. The solution can be further customized to enhance customer experience across a variety of use cases and scaled to listen to every customer email and respond with appropriate actions, delivering tailored, rapid responses to customer inquiries.
For more detailed information, a demonstration, or to implement this solution, please reach out to our team of experts: awsecosystembu@hcltech.com.