TorchServe server using a YOLOv5 model, running in Docker with GPU support and static batch inference, for production-ready, real-time inference.
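The static batching mentioned above is configured in TorchServe itself rather than in model code. A minimal sketch of a `config.properties` fragment (model name, `.mar` file name, and values are illustrative placeholders, not taken from the repo):

```
# Hypothetical config.properties fragment: register a YOLOv5 model
# with static batching. TorchServe waits up to maxBatchDelay ms to
# collect batchSize requests before running one batched inference.
load_models=yolov5.mar
models={\
  "yolov5": {\
    "1.0": {\
      "defaultVersion": true,\
      "marName": "yolov5.mar",\
      "minWorkers": 1,\
      "maxWorkers": 1,\
      "batchSize": 8,\
      "maxBatchDelay": 100\
    }\
  }\
}
```

Equivalently, `batch_size` and `max_batch_delay` can be passed as query parameters when registering the model through the management API (port 8081 by default).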
Updated Feb 10, 2023 · Python
Deploy DL/ML inference pipelines with minimal extra code.
Serving PyTorch models with TorchServe 🔥
A minimalistic and pluggable machine learning platform for Kubernetes.
Slides and notebook for the workshop on serving BERT models in production
Deploy FastAI Trained PyTorch Model in TorchServe and Host in Amazon SageMaker Inference Endpoint
Management dashboard for TorchServe
Deploy Swin Transformer using TorchServe
How to deploy TorchServe on an Amazon EKS cluster for inference.
Twin Neural Network Training with PyTorch and fast.ai and its Deployment with TorchServe on Amazon SageMaker
TorchServe+Streamlit for easily serving your HuggingFace NER models
Project to implement, test, and evaluate different methods for deploying machine learning models in production.
Serving BERT embeddings via TorchServe
Predicting musical valence of Spotify songs using PyTorch.
DET is an end-to-end tool for extracting Key-Value pairs from a variety of documents, built entirely on PyTorch and served using TorchServe.
Pushing text-to-speech models into production using TorchServe, Kubernetes, and a React web app 😄
This repo implements a minimalistic pytorch_lightning + neptune + torchserve flow for (computer vision) model training and deployment
Simple example of using TorchServe to serve a PyTorch Object Detection model
ML Lifecycle project