Readme Updates #1
base: main
Conversation
* Run the pipeline
`NOTE:` The example requires at least 4xA100 GPUs to deploy all the required models locally. If you are using a 4xA100 system, ensure that the NIM LLM microservice runs on a dedicated GPU. Follow the steps below to do so.
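One common way to reserve a dedicated GPU for a container is to pin it by device ID in a Docker Compose file. The service name and structure below are illustrative, not taken from this repo; a sketch assuming the NIM LLM microservice is deployed via Compose on an NVIDIA runtime:

```yaml
# Hypothetical compose override: pin the LLM microservice to GPU 0,
# leaving the remaining GPUs free for the other models.
services:
  nim-llm:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]
```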
4xA100?
Also is this conflicting with the HW req stated above?
@angudadevops do we want to publish this?
@angudadevops the test pdf is not viewable pls check
"execution_count": 3,
"id": "8dea0e28-41e0-419d-acfb-9ae2c6b79b4f",
"metadata": {},
"outputs": [
@angudadevops let's clear the outputs? thanks
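Clearing outputs is usually done with `jupyter nbconvert --clear-output --inplace <notebook>.ipynb`. The sketch below (a hypothetical helper, not from this repo) shows the equivalent transformation on the notebook JSON, which is what the reviewer is asking for: empty `outputs` and a null `execution_count` on every code cell.

```python
import json

def clear_outputs(nb_json: str) -> str:
    """Strip outputs and execution counts from a Jupyter notebook's JSON."""
    nb = json.loads(nb_json)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []           # drop any captured output
            cell["execution_count"] = None  # reset the [n] prompt marker
    return json.dumps(nb, indent=1)

# Minimal notebook with one executed code cell, for illustration
nb = json.dumps({
    "cells": [{"cell_type": "code",
               "execution_count": 3,
               "metadata": {},
               "outputs": [{"output_type": "stream", "text": ["hi\n"]}],
               "source": ["print('hi')"]}],
    "metadata": {}, "nbformat": 4, "nbformat_minor": 5,
})
cleaned = json.loads(clear_outputs(nb))
```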
Refer to the [Notebooks](./notebooks) to evalute the MultiModel RAG with LangChain
*evaluate
Updated the ReadMe Instructions with Early Access bits