# Geekathon AI Starter Template

[Live Demo](https://geekathon-ai-starter-template.xgeeks.tech/)

<img width="1305" alt="ScreenShot" src="./assets/Demo_Screenshot.png">

## Stack

- App logic: [Next.js](https://nextjs.org/)
- VectorDB: [Pinecone](https://www.pinecone.io/)
- LLM Orchestration: [Langchain.js](https://js.langchain.com/docs/)
- Image Model: [Replicate](https://replicate.com/)
- Text Model: [OpenAI](https://platform.openai.com/docs/models)
- Text streaming: [ai sdk](https://github.com/vercel-labs/ai)
- Deployment: will be automatic on xgeeks platform

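To give a feel for how the text model and the streaming piece fit together, here is a minimal sketch of a Next.js route handler built with the ai sdk. The file path, model name, env variable name, and the use of the `openai-edge` client are assumptions for illustration; the actual route in this repo may look different.

```ts
// app/api/chat/route.ts -- hypothetical path, shown only to illustrate the stack
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'

// assumes an OPENAI_API_KEY entry in your local env file
const config = new Configuration({ apiKey: process.env.OPENAI_API_KEY })
const openai = new OpenAIApi(config)

export async function POST(req: Request) {
  const { messages } = await req.json()

  // ask OpenAI for a streaming chat completion
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  })

  // pipe the tokens back to the client as they arrive
  return new StreamingTextResponse(OpenAIStream(response))
}
```
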
## Overview

- 🚀 [Quickstart](#quickstart)

## Quickstart

The simplest way to try out this stack is to run it locally and walk through the code files to understand how each component works. Here are the steps to get started.

### 1. Fork and Clone repo

Fork the repo to your GitHub account, then run the following command to clone the repo:

```
git clone git@github.com:xgeekshq/xgeeks-ai-starter-demo.git
```

### 2. Run Install Script

```sh
cd xgeeks-ai-starter-demo

# if needed, make the script executable first: chmod +x run-install.sh
./run-install.sh
#
# the script will:
## - install npm packages
## - create a .env file
## - generate a random secret for Auth0 to encrypt tokens
```

### 3. Fill out secrets :construction: --wip--

All Geekathon participants will receive an email with API keys for the required AI tools (OpenAI, Pinecone, Replicate).

**NOTE:** you must update your `.env.local` file with the provided keys.

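For illustration only, a filled-in env file might look like the snippet below. The variable names are assumptions, not the project's confirmed names; check the env file generated by the install script for the exact entries it expects, and paste in the keys from the Geekathon email.

```
# hypothetical variable names -- confirm against the generated env file
OPENAI_API_KEY=sk-your-openai-key
PINECONE_API_KEY=your-pinecone-key
PINECONE_ENVIRONMENT=your-pinecone-environment
PINECONE_INDEX=your-index-name
REPLICATE_API_TOKEN=your-replicate-token
```
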
### 4. Generate embeddings

There are a few markdown files under the `/blogs` directory as examples so you can do Q&A on them. To generate embeddings and store them in the vector database for future queries, **you just have to add the new files to this folder and push a new commit.**
A GitHub Action will be triggered to create and upload the new embeddings.

**NOTE 1:** the included files are already embedded. You don't have to do anything for those, just for new ones.
**NOTE 2:** it may take a minute or two to complete. Be patient.

### 4.1. Optionally you can do it manually

#### If using Pinecone

Run the following command to generate embeddings and store them in Pinecone:

```bash
npm run generate-embeddings-pinecone
```

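For reference, the kind of work such a script does can be sketched with Langchain.js and the Pinecone client roughly as follows. This is a hedged illustration, not the repo's actual script: the env variable names, chunking parameters, and Document handling are assumptions.

```ts
import fs from 'node:fs/promises'
import path from 'node:path'
import { PineconeClient } from '@pinecone-database/pinecone'
import { OpenAIEmbeddings } from 'langchain/embeddings/openai'
import { PineconeStore } from 'langchain/vectorstores/pinecone'
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter'
import { Document } from 'langchain/document'

async function main() {
  // read every markdown file under /blogs (the directory mentioned above)
  const dir = path.join(process.cwd(), 'blogs')
  const files = (await fs.readdir(dir)).filter((f) => f.endsWith('.md'))
  const docs: Document[] = []
  for (const file of files) {
    const text = await fs.readFile(path.join(dir, file), 'utf8')
    docs.push(new Document({ pageContent: text, metadata: { source: file } }))
  }

  // split into chunks so each embedding stays within the model's context size
  const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 100 })
  const chunks = await splitter.splitDocuments(docs)

  // connect to Pinecone (hypothetical env variable names) and upsert the embeddings
  const client = new PineconeClient()
  await client.init({
    apiKey: process.env.PINECONE_API_KEY!,
    environment: process.env.PINECONE_ENVIRONMENT!,
  })
  const pineconeIndex = client.Index(process.env.PINECONE_INDEX!)

  await PineconeStore.fromDocuments(chunks, new OpenAIEmbeddings(), { pineconeIndex })
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})
```
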
### 5. Run app locally

Now you are ready to test the app locally! To do this, simply run `npm run dev` in the project root.

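Under the hood, answering questions over the embedded blog posts follows the retrieval QA pattern linked in the Refs below. A minimal sketch with Langchain.js (again, names and parameters are assumptions, not the repo's exact code):

```ts
import { PineconeClient } from '@pinecone-database/pinecone'
import { OpenAI } from 'langchain/llms/openai'
import { OpenAIEmbeddings } from 'langchain/embeddings/openai'
import { PineconeStore } from 'langchain/vectorstores/pinecone'
import { RetrievalQAChain } from 'langchain/chains'

async function ask(question: string) {
  // connect to the existing Pinecone index that already holds the blog embeddings
  const client = new PineconeClient()
  await client.init({
    apiKey: process.env.PINECONE_API_KEY!,        // hypothetical variable names
    environment: process.env.PINECONE_ENVIRONMENT!,
  })
  const pineconeIndex = client.Index(process.env.PINECONE_INDEX!)
  const vectorStore = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), { pineconeIndex })

  // retrieve the most relevant chunks and let the LLM answer from them
  const chain = RetrievalQAChain.fromLLM(new OpenAI({ temperature: 0 }), vectorStore.asRetriever())
  const { text } = await chain.call({ query: question })
  return text
}
```

Calling `ask('...')` from a script or an API route would return the answer text built from the retrieved chunks.
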
### 6. Deploy the app

Just write your code and commit. :rocket:
After a few seconds the changes will be live and running.
You can get some feedback by looking at the GitHub Actions pipelines to check whether they have finished.

## Refs

- https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pinecone
- https://js.langchain.com/docs/modules/models/llms/integrations#replicate
- https://js.langchain.com/docs/modules/chains/index_related_chains/retrieval_qa