
E&I Cloud Functions (FaaS) And Vertex AI

This README provides instructions on how to set up and run E&I's Vertex AI pipeline, and how to configure and deploy the REST API Cloud Functions.

Vertex AI Pipeline

How To Use Vertex AI?

(Figure: vertex-ai-image)

This figure schematizes the process of training and deploying a PaLM 2 large language model (LLM) with Google's Vertex AI. It shows the flow from training data in JSONL format, through training the model with Vertex AI, to integrating the tuned model into a real-world software application.

  1. JSONL Training Data: This represents the training data. The JSONL format is a text file where each line is a JSON object. Each object contains an input_text and an output_text field, providing data for the model to train on (see the sample after this list).
  2. Vertex AI: A Google Cloud Platform service for building, deploying, and managing large-scale machine learning models. It is used here to train and optimize models on the training data you provide.
  3. PaLM 2 LLM: Google's large language model, used as the base model for "adapter-based fine-tuning". Adapter-based fine-tuning adapts a model to a specific task by inserting small additional networks into the existing model.
  4. Software Component: The software component in which the model built with Vertex AI and the PaLM 2 LLM is actually used. This can be an AI application, service, client, or subsystem of a larger system.
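
For reference, a minimal JSONL training file in the format described above could look like the following; the two example rows are illustrative only and are not taken from the project's actual dataset:

```jsonl
{"input_text": "How can I reduce my daily carbon footprint?", "output_text": "Walk or cycle short trips, unplug idle devices, and avoid single-use plastics."}
{"input_text": "Why does standby power matter?", "output_text": "Devices left plugged in keep drawing power, so unplugging them saves electricity."}
```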

Pipeline

(Figure: vertex-ai-text)

The figure shows the pipeline for fine-tuning Google's Pathways Language Model 2 (PaLM 2) using Vertex AI on Google Cloud. Its steps are described below; a minimal sketch for launching such a pipeline follows the list.

  1. validate_pipeline: This step validates the pipeline to ensure that all required components are set up correctly.
  2. tuning_graph: This represents the process of tuning the model to a specific task or dataset. This process typically involves tuning hyperparameters to optimize the performance of the model.
  3. export_managed_dataset: This step exports the dataset as a Vertex AI managed dataset, which is then used for training or evaluation.
  4. dataset_encoder: Encodes the dataset and converts it into a format that the model can understand.
  5. evaluation-dataset-encoder: This is the process of encoding an evaluation dataset, which is used to evaluate how well the model performs.
  6. vertex-pipelines-prompt: Sets the pipeline prompts for Vertex AI. This can be a step to specify components or parameters for pipeline execution.
  7. compose-params-for-model: This is the step to configure parameters for the model, determining what settings or hyperparameters the model needs before it can start training.
  8. large_language_model_tuning: This is the actual tuning of the large language model. In this step, the model is trained for a specific task.
  9. tensorboard-uploader: Uploads the data generated during training to TensorBoard so that the training process can be visualized and monitored.
  10. deployment_graph: A pipeline for deploying the model. In this step, models are deployed to endpoints that users can access.
  11. Upload-LLM-Model: The process of uploading the tuned large language model to Vertex AI.
  12. create-endpoint-and-deploy: Creates an endpoint where the model is available and deploys the model.
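
As a rough illustration of how such a tuning pipeline is kicked off, the sketch below uses the Vertex AI Python SDK's adapter tuning API for PaLM 2 text models. The project ID, bucket path, and step count are placeholders rather than values from this repository:

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholders: substitute your own project and region.
vertexai.init(project="your-project-id", location="us-central1")

# PaLM 2 text model used as the base for adapter tuning.
model = TextGenerationModel.from_pretrained("text-bison@001")

# Launches the managed tuning pipeline shown above
# (validate_pipeline -> tuning_graph -> deployment_graph).
model.tune_model(
    training_data="gs://your-bucket/train.jsonl",  # JSONL with input_text/output_text
    train_steps=100,
    tuning_job_location="europe-west4",   # region where the tuning pipeline runs
    tuned_model_location="us-central1",   # region where the tuned model is deployed
)

# Once the pipeline has completed, tuned models can be listed and queried.
tuned_name = model.list_tuned_model_names()[0]
tuned_model = TextGenerationModel.get_tuned_model(tuned_name)
print(tuned_model.predict("How can I save energy at home?"))
```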

Prerequisites

Before running these functions, make sure you have the following Cloud Firestore security rules configured:

service cloud.firestore {
  match /databases/{database}/documents {
    // User profiles are publicly readable; only the owner can write.
    match /users/{userId} {
      allow read: if true;
      allow write: if request.auth != null && request.auth.uid == userId;
    }

    // Follow documents are publicly readable; only the owner can write.
    match /follows/{userId} {
      allow read: if true;
      allow write: if request.auth != null && request.auth.uid == userId;
    }

    // Notification logs are server-only; all client access is denied.
    match /notification_logs/{document=**} {
      allow read, write: if false;
    }
  }
}
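
These rules make user profiles and follow documents publicly readable but writable only by their authenticated owner, while notification_logs is closed to all client access. The Cloud Functions can still write to it because the Admin SDK bypasses security rules. A minimal sketch of such a server-side write follows; the payload fields are hypothetical, and Python is used purely for illustration (this repository's functions are written for Node.js):

```python
# pip install firebase-admin
import firebase_admin
from firebase_admin import firestore

# Uses Application Default Credentials, e.g. inside Cloud Functions.
firebase_admin.initialize_app()
db = firestore.client()

# The Admin SDK is not subject to security rules, so it can write to
# notification_logs even though clients are denied all access.
db.collection("notification_logs").add({
    "uid": "some-user-id",            # hypothetical field names
    "title": "New follower",
    "sent_at": firestore.SERVER_TIMESTAMP,
})
```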

Run And Deploy Trigger And Notification API Cloud Functions

  1. Clone this repository.

git clone https://github.com/GDSC-DGU/2024-SolutionChallenge-EarthAndI.git

  2. Install the npm dependencies.

# Move to the 'server' directory.
cd ./2024-SolutionChallenge-EarthAndI/server/trigger_and_notification_api

# Move to the `functions` subdirectory.
cd functions

# Install all of the dependencies of the cloud functions.
npm install

# Move back to the parent directory.
cd ../

  3. Select the Firebase project you have created.

firebase use --add

To run the sample app locally during development

  1. Start the emulators.

firebase emulators:start --only functions
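
With the emulator running, HTTP-triggered functions are served locally on port 5001 by default. A minimal sketch of calling one, assuming a hypothetical function named `helloWorld` deployed in `us-central1` (the project ID and function name below are placeholders, not values from this repository):

```python
# pip install requests
import requests

# The Functions emulator serves HTTP functions at
# http://localhost:5001/<project-id>/<region>/<function-name> by default.
# "your-project-id" and "helloWorld" are illustrative placeholders.
url = "http://localhost:5001/your-project-id/us-central1/helloWorld"

response = requests.get(url)
print(response.status_code, response.text)
```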

To deploy the application

  1. Deploy the project.
firebase deploy

References & Support