This is a small app I built with HuggingFace Transformers and FastAPI to perform text classification with a pre-trained DistilBERT model. I mostly relied on the excellent tutorial by Venelin to build it (ref 1). I made a few key changes to his approach:
- Used the pre-trained model as-is instead of fine-tuning it (a minimal sketch of this is shown after this list)
- Used `requirements.txt` with pip instead of using pipenv
- Did not use a lot of extra code style packages
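For context, here is a minimal sketch of how a pre-trained DistilBERT checkpoint can be wrapped for classification without any fine-tuning. The checkpoint name, the `classify` function, and the label ordering are assumptions for illustration, not necessarily what this repo uses:

```python
# Sketch only: wrap a pre-trained DistilBERT sentiment checkpoint (no fine-tuning).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def classify(text: str) -> dict:
    """Return the sentiment label, per-class probabilities, and top confidence."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=1)[0]
    confidence, predicted = torch.max(probs, dim=0)
    return {
        "confidence": confidence.item(),
        "probabilities": {
            "negative": probs[0].item(),
            "positive": probs[1].item(),
        },
        "sentiment": model.config.id2label[predicted.item()].lower(),
    }
```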
How to use?
pip install -r requirements.txt
bash bin/run_server
Then make your API call:
http POST http://127.0.0.1:8000/classify text="Pre-trained DistilBERT seems to work quite well!"
You'll get an output like:
{
    "confidence": 0.9998160004615784,
    "probabilities": {
        "negative": 0.00018407008610665798,
        "positive": 0.9998160004615784
    },
    "sentiment": "positive"
}
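For reference, here is roughly how such a `/classify` endpoint can be wired up with FastAPI. The module layout is an assumption (the classifier sketch above in `classifier.py`, the app in `main.py`), not the repo's actual code:

```python
# Sketch only: expose the classify() helper from the sketch above via FastAPI.
from fastapi import FastAPI
from pydantic import BaseModel

from classifier import classify  # hypothetical module holding the classify() sketch

app = FastAPI()

class ClassificationRequest(BaseModel):
    text: str

@app.post("/classify")
def classify_text(request: ClassificationRequest) -> dict:
    # Returns sentiment, per-class probabilities, and confidence as JSON.
    return classify(request.text)
```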
TODO:
- Deploy on Heroku: will need to find a workaround for downloading pre-trained models, since Heroku's file system is ephemeral
- Add tests for the API calls (a rough test sketch follows below)
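As a starting point for the test item above, a sketch using FastAPI's `TestClient` might look like this; the `main` module path is a guess, matching the endpoint sketch above:

```python
# Sketch only: exercise the /classify endpoint with FastAPI's TestClient.
from fastapi.testclient import TestClient

from main import app  # hypothetical: the FastAPI app from the sketch above

client = TestClient(app)

def test_classify_returns_sentiment():
    response = client.post("/classify", json={"text": "This works great!"})
    assert response.status_code == 200
    body = response.json()
    assert body["sentiment"] in {"positive", "negative"}
    assert 0.0 <= body["confidence"] <= 1.0
```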