Build a generative AI Virtual Assistant with Amazon Bedrock, LangChain and Amazon ElastiCache
In this 15-minute YouTube session, we discuss how you can use Amazon Bedrock, LangChain, and Amazon ElastiCache together to implement a generative AI (GenAI) chatbot. We dive into two application patterns: chat history and message broker. We also show how ElastiCache simplifies the implementation of these patterns by leveraging built-in Redis data structures.
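To make these two patterns concrete, here is a minimal Python sketch of how they map onto built-in Redis data structures: a list for chat history and pub/sub for message brokering. The endpoint URL, the key name chat_history:user1, and the channel name are illustrative assumptions, not the workshop's exact code.

```python
import redis

# Connect to the ElastiCache (Redis) endpoint. The URL, key, and channel
# names below are placeholders for illustration only.
r = redis.Redis.from_url("rediss://ClusterURL:6379", decode_responses=True)

# Chat history pattern: append each conversation turn to a Redis list keyed by session.
r.rpush("chat_history:user1", "Human: Hello!", "AI: Hi, how can I help you today?")
print(r.lrange("chat_history:user1", 0, -1))  # read the full conversation back

# Message broker pattern: publish an event on a channel that other components subscribe to.
r.publish("chat_events", "user1 sent a new message")
```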
ElastiCache is a fully managed service delivering real-time, cost-optimized performance for modern applications. ElastiCache scales to hundreds of millions of operations per second with microsecond response times, and offers enterprise-grade security and reliability.
This guide will walk you through the steps to deploy a Python chatbot application using Streamlit on Cloud9. This is the architecture we will be implementing today.
The application is contained in the 'chatbot_app.py' file, and it requires specific packages listed in 'requirements.txt'.
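For orientation, the core Streamlit chat loop in an application like this typically looks like the sketch below. The actual 'chatbot_app.py' additionally wires the response to Bedrock and persists the history in ElastiCache, so treat this only as a structural outline; the title and placeholder reply are assumptions.

```python
import streamlit as st

st.title("GenAI Virtual Assistant")

# Keep the rendered conversation in Streamlit's session state.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far on each rerun.
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

# Accept a new prompt, echo it, and append the assistant's reply.
if prompt := st.chat_input("Ask me anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    reply = "(response from the Bedrock LLM chain goes here)"
    st.session_state.messages.append({"role": "assistant", "content": reply})
    st.chat_message("assistant").write(reply)
```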
Before you proceed, make sure you have the following prerequisites in place:
- An AWS Cloud9 development environment set up.
- Access to the Amazon Bedrock foundation models used in this workshop.
- Python and pip installed in your Cloud9 environment.
- Clone this repository to your Cloud9 environment:
git clone https://github.com/aws-samples/amazon-elasticache-samples.git
cd amazon-elasticache-samples/webinars/genai-chatbot
- Install the required packages using pip:
pip3 install -r requirements.txt -U
- Set the ElastiCache cluster endpoint as shown below. Use the redis:// scheme instead of rediss:// if in-transit encryption is not enabled on your cluster.
export ELASTICACHE_ENDPOINT_URL=rediss://ClusterURL:6379
- Run the chatbot application with Streamlit:
streamlit run chatbot_app.py --server.port 8080
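If the application cannot reach the cluster, a quick way to confirm that the exported endpoint is reachable from your Cloud9 environment is the short check below (a sketch that assumes the redis Python package is installed; add credentials if your cluster requires authentication).

```python
import os
import redis

# Read the endpoint exported earlier and issue a PING against the cluster.
url = os.environ["ELASTICACHE_ENDPOINT_URL"]
client = redis.Redis.from_url(url)
print(client.ping())  # True means the chatbot can reach ElastiCache
```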
The first step is to log in to the application. This creates a unique session whose data is stored in Amazon ElastiCache. On each turn, the session data is retrieved, summarized, and provided as context to the LLM so that the conversation stays coherent across messages.
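Under the hood, this retrieve-summarize-inject flow can be expressed with LangChain's Redis-backed message history and a summary buffer memory, roughly as sketched below. The model ID, region, session ID, key prefix, and token limit are assumptions for illustration and are not necessarily what chatbot_app.py uses.

```python
import os

from langchain.chains import ConversationChain
from langchain.memory import ConversationSummaryBufferMemory
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_community.llms import Bedrock

# Bedrock foundation model used both for answering and for summarizing history.
llm = Bedrock(model_id="anthropic.claude-v2", region_name="us-east-1")

# Per-user chat history persisted in ElastiCache as a Redis list.
history = RedisChatMessageHistory(
    session_id="user1",
    url=os.environ["ELASTICACHE_ENDPOINT_URL"],
    key_prefix="chat_history:",
)

# Older turns are summarized so the prompt stays within the model's context window.
memory = ConversationSummaryBufferMemory(
    llm=llm, chat_memory=history, max_token_limit=1024
)

chain = ConversationChain(llm=llm, memory=memory)
print(chain.predict(input="Can you help me draft an email about upcoming events?"))
```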
Here are some sample questions you can try to validate that the LLM stays in context while loading the previous conversation from ElastiCache.
1. Can you help me draft a 4 sentence email to highlight some fun upcoming events for my employees?
2. Can you add these events to the email: 1. Internal networking event on 4/20/2024, 2. Summer gift giveaway on 6/20/2024, 3. End of summer picnic on 8/15/2024, 4. Fall Formal on 10/10/2024, and 5. Christmas Party on 12/18/2024?
3. Can you reformat it with bullets for my events?
4. Can you please remove everything that happens after September 2024?
Here is how we can check the session data stored in ElastiCache using redis-cli. Replace ClusterURL with your cluster endpoint hostname and $ECUserName with your ElastiCache user name, and omit --tls if in-transit encryption is not enabled.
redis-cli -c --user $ECUserName --askpass -h ClusterURL -p 6379 --tls
LRANGE "chat_history:user1" 0 -1