Management Dashboard for TorchServe (Python, updated Jan 31, 2023)
Serving PyTorch models with TorchServe 🔥
TorchServe server running a YOLOv5 model in Docker with GPU support and static batch inference, for production-ready, real-time inference.
Deploy DL/ML inference pipelines with minimal extra code.
Deploy FastAI Trained PyTorch Model in TorchServe and Host in Amazon SageMaker Inference Endpoint
A minimalistic and pluggable machine learning platform for Kubernetes.
TorchServe+Streamlit for easily serving your HuggingFace NER models
Deploy Swin Transformer using TorchServe
Slides and notebook for the workshop on serving BERT models in production.
Pushing text-to-speech models into production using TorchServe, Kubernetes, and a React web app 😄
FastAPI middleware for comparing different ML model serving approaches
Twin Neural Network Training with PyTorch and fast.ai and its Deployment with TorchServe on Amazon SageMaker
How to deploy TorchServe on an Amazon EKS cluster for inference.
Deploy FastAI Trained PyTorch Model in TorchServe and Host in GCP's AI Platform Prediction.
Project to implement, test, and evaluate different methods for deploying machine learning models in production.
Predicting musical valence of Spotify songs using PyTorch.
DET is an end-to-end tool for extracting Key-Value pairs from a variety of documents, built entirely on PyTorch and served using TorchServe.
This repo implements a minimalistic pytorch_lightning + neptune + torchserve flow for (computer vision) model training and deployment.
Quick and easy tutorial on serving a HuggingFace sentiment analysis model using TorchServe.
Serving BERT embeddings via Torchserve
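Most of the repositories above follow the same core TorchServe workflow: package a trained model into a `.mar` archive with `torch-model-archiver`, start the server against a model store, and query the inference API. A minimal sketch, where the model name, file paths, and handler choice are placeholders for your own project:

```shell
# Package a serialized model (model.pt) and a handler into a .mar archive.
# "image_classifier" is one of TorchServe's built-in handlers; a custom
# handler module path can be supplied instead.
torch-model-archiver \
  --model-name mymodel \
  --version 1.0 \
  --serialized-file model.pt \
  --handler image_classifier \
  --export-path model_store

# Start TorchServe and register the archive from the model store.
torchserve --start --ncs \
  --model-store model_store \
  --models mymodel=mymodel.mar

# Send an inference request to the default inference port (8080).
curl http://localhost:8080/predictions/mymodel -T input.jpg

# Stop the server when done.
torchserve --stop
```

The management API (port 8081 by default) can register, scale, and unregister models on a running server without a restart, which is what dashboard projects like the one listed above build on.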