This README explains how to set up and run the E&I Vertex AI pipeline, and how to configure and deploy the REST API Cloud Functions.
This figure schematizes the process of training and deploying a large language model (LLM) with Google's Vertex AI and PaLM 2. It shows the flow from training data in JSONL format, through training the model with Vertex AI, to integrating the trained model into a real-world software application.
- JSONL Training Data: The training data. JSONL is a text format in which each line is a standalone JSON object; each object contains an `input_text` and an `output_text` field that the model trains on (see the example after this list).
- Vertex AI: Part of the Google Cloud Platform, this is a service that builds, deploys, and manages large-scale machine learning models. This service is used to train and optimize models using training data you provide.
- PaLM 2 LLM: This refers to Google's large language model, which is used as the base model for "adapter-based fine-tuning". Adapter-based fine-tuning is a technique for fine-tuning a model for a specific task by inserting small additional networks into an existing model.
- Software Component: This represents the software component where the AI model built using Vertex AI and PaLM 2 LLM will actually be implemented. This component can be an AI application, service, client, or subsystem of a system.
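For illustration, two lines of JSONL training data in this format might look like the snippet below. The values are placeholders, not actual E&I training data:

```jsonl
{"input_text": "example user prompt goes here", "output_text": "expected model response goes here"}
{"input_text": "another user prompt", "output_text": "another expected response"}
```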
The figure shows the pipeline for fine-tuning Google's Pathways Language Model 2 (PaLM 2) with Vertex AI on Google Cloud. Its steps are described below, followed by a minimal sketch of how such a tuning job can be launched.
- validate_pipeline: This step validates the pipeline to ensure that all required components are set up correctly.
- tuning_graph: This represents the process of tuning the model to a specific task or dataset. This process typically involves tuning hyperparameters to optimize the performance of the model.
- export_managed_dataset: Exports the tuning dataset as a Vertex AI managed dataset, which is used for training or evaluation.
- dataset_encoder: Encodes the dataset and converts it into a format that the model can understand.
- evaluation-dataset-encoder: This is the process of encoding an evaluation dataset, which is used to evaluate how well the model performs.
- vertex-pipelines-prompt: Sets the pipeline prompts for Vertex AI. This can be a step to specify components or parameters for pipeline execution.
- compose-params-for-model: This is the step to configure parameters for the model, determining what settings or hyperparameters the model needs before it can start training.
- large_language_model_tuning: This is the actual tuning of the large language model. In this step, the model is trained for a specific task.
- tensorboard-uploader: Uploads the data generated during training to TensorBoard so that the training process can be visualized and monitored.
- deployment_graph: A pipeline for deploying the model. In this step, models are deployed to endpoints that users can access.
- Upload-LLM-Model: The process of uploading the tuned large language model to Vertex AI.
- create-endpoint-and-deploy: Creates an endpoint where the model is available and deploys the model.
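For orientation, the sketch below shows roughly how such a PaLM 2 tuning job can be launched with the public Vertex AI SDK for Python. It is not this project's pipeline code; the project ID, bucket path, base model version, and step count are placeholder assumptions.

```python
# Minimal sketch: adapter-based tuning of a PaLM 2 text model on Vertex AI.
# The project ID, bucket path, and hyperparameters below are placeholders.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")

# PaLM 2 text model used as the base for adapter-based fine-tuning.
base_model = TextGenerationModel.from_pretrained("text-bison@001")

# training_data points to a JSONL file of input_text / output_text pairs,
# as described above.
tuning_job = base_model.tune_model(
    training_data="gs://your-bucket/training_data.jsonl",
    train_steps=100,
    tuning_job_location="europe-west4",  # region where the tuning pipeline runs
    tuned_model_location="us-central1",  # region where the tuned model is hosted
)
# Depending on the SDK version, tune_model either blocks until tuning
# finishes or returns a job object that can be polled.
```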
Before running these functions, make sure you have the following:
- Node.js 18 installed
- Firebase CLI installed
- Firebase project set up
- Firestore set up
  - Create the `users`, `follows`, and `notification_logs` collections
  - Update the security rules as follows:
service cloud.firestore {
  match /databases/{database}/documents {
    // User profiles: anyone can read, only the signed-in owner can write.
    match /users/{userId} {
      allow read: if true;
      allow write: if request.auth != null && request.auth.uid == userId;
    }
    // Follow relationships: anyone can read, only the signed-in owner can write.
    match /follows/{userId} {
      allow read: if true;
      allow write: if request.auth != null && request.auth.uid == userId;
    }
    // Notification logs: no client access; Cloud Functions use the Admin SDK,
    // which bypasses security rules.
    match /notification_logs/{document=**} {
      allow read, write: if false;
    }
  }
}
- Clone this repository
git clone https://github.com/GDSC-DGU/2024-SolutionChallenge-EarthAndI.git
- Install npm dependencies
# Move to the 'server' directory.
cd ./2024-SolutionChallenge-EarthAndI/server/trigger_and_notification_api
# Move to the `functions` subdirectory.
cd functions
# Install all of the dependencies of the cloud functions.
npm install
# Move back to the parent directory.
cd ../
- Select the Firebase project you have created.
firebase use --add
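`firebase use --add` prompts you to pick one of your Firebase projects and assign it an alias; if you already know the project ID, `firebase use <project-id>` selects it directly.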
- Start the emulators
firebase emulators:start --only functions
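With the Functions emulator running, HTTP functions are served locally at `http://127.0.0.1:5001/<project-id>/<region>/<function-name>` by default, so the REST API can be exercised before deploying. The project ID, region, and function name below are placeholders, not actual endpoints of this project:

```bash
# Example only: substitute your own project ID, region, and function name.
curl "http://127.0.0.1:5001/your-project-id/us-central1/yourFunctionName"
```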
- Deploy the project
firebase deploy
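To redeploy only the Cloud Functions and skip other Firebase resources, the standard `firebase deploy --only functions` option can be used.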