This guide walks through creating a basic sentiment analysis inference job using a lightweight model. The demo is ideal for showing how to structure a Lilypad module without requiring significant computational resources (in other words, no GPU needed).
Project structure:
sentiment-demo/
├── Dockerfile
├── run_inference.py
├── requirements.txt
└── README.md
requirements.txt:
transformers==4.36.0
torch==2.1.0
run_inference.py:
import os
import json

from transformers import pipeline


def main():
    # Get input from environment variable
    text = os.environ.get('INPUT_TEXT', 'Default text for analysis')

    # Initialize the sentiment analysis pipeline
    # This will use a small model suitable for CPU inference
    classifier = pipeline("sentiment-analysis",
                          model="distilbert-base-uncased-finetuned-sst-2-english",
                          device=-1)  # -1 forces CPU usage

    try:
        # Perform inference
        result = classifier(text)

        # Format output
        output = {
            'input_text': text,
            'sentiment': result[0]['label'],
            'confidence': float(result[0]['score']),
            'status': 'success'
        }
    except Exception as e:
        output = {
            'input_text': text,
            'error': str(e),
            'status': 'error'
        }

    # Save output to the designated output directory
    os.makedirs('/outputs', exist_ok=True)
    output_path = '/outputs/result.json'
    with open(output_path, 'w') as f:
        json.dump(output, f, indent=2)


if __name__ == "__main__":
    main()
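For reference, the `pipeline` call returns a list with one dict per input, each holding a `label` and a `score`. A minimal sketch of how the script maps that structure to its output (the sample values here are illustrative, not real model output):

```python
# Shape of a transformers sentiment-analysis pipeline result:
# one dict per input text, with 'label' and 'score' keys.
sample_result = [{"label": "POSITIVE", "score": 0.9998}]

# Mirror the mapping done in run_inference.py above.
output = {
    "input_text": "I love this amazing workshop!",
    "sentiment": sample_result[0]["label"],
    "confidence": float(sample_result[0]["score"]),
    "status": "success",
}
print(output["sentiment"], output["confidence"])
```

This is why the script indexes `result[0]`: even for a single input string, the pipeline wraps its prediction in a list.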
Dockerfile:
FROM python:3.9-slim
# Set working directory
WORKDIR /workspace
# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Create output directory
RUN mkdir -p /outputs
# Copy inference script
COPY run_inference.py .
# Set entrypoint
ENTRYPOINT ["python", "/workspace/run_inference.py"]
- Build the Docker image:
docker build -t sentiment-demo:latest .
- Test locally with a sample input:
docker run -e INPUT_TEXT="I love this amazing workshop!" \
-v $(pwd)/outputs:/outputs \
sentiment-demo:latest
- Check the results:
cat outputs/result.json
Expected output:
{
"input_text": "I love this amazing workshop!",
"sentiment": "POSITIVE",
"confidence": 0.9998,
"status": "success"
}
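If you want to sanity-check the result file during the demo, a small validation sketch; it checks the same fields shown in the expected output above (the inline sample here stands in for a real `outputs/result.json`):

```python
import json

def validate_result(payload: dict) -> None:
    # Every successful run should include these fields.
    assert payload["status"] == "success", payload.get("error")
    assert payload["sentiment"] in {"POSITIVE", "NEGATIVE"}
    assert 0.0 <= payload["confidence"] <= 1.0

# Validate the sample output; for a real run, replace `sample` with
# json.load(open("outputs/result.json")).
sample = json.loads(
    '{"input_text": "I love this amazing workshop!", '
    '"sentiment": "POSITIVE", "confidence": 0.9998, "status": "success"}'
)
validate_result(sample)
print("result looks valid")
```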
Here's a sequence of commands you can use during the live demo:
- Show the project structure:
tree sentiment-demo
- Explain key components:
- Show requirements.txt
- Walk through run_inference.py
- Explain Dockerfile structure
- Build the image:
docker build -t sentiment-demo:latest .
- Run inference with different examples:
# Positive example
docker run -e INPUT_TEXT="This is a fantastic demo!" \
-v $(pwd)/outputs:/outputs \
sentiment-demo:latest
# Negative example
docker run -e INPUT_TEXT="This demo is confusing and complicated" \
-v $(pwd)/outputs:/outputs \
sentiment-demo:latest
- Show real-time results:
cat outputs/result.json
- Docker Build Issues:
- Ensure Docker daemon is running
- Check internet connection for package downloads
- Verify Python version compatibility
- Runtime Issues:
- Verify the outputs directory exists and has proper permissions
- Check environment variable is being passed correctly
- Ensure enough system memory (at least 2GB recommended)
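To verify the 2 GB recommendation before a live run, a quick sketch using POSIX `sysconf` (works on Linux/macOS, not Windows):

```python
import os

# Total physical memory via POSIX sysconf (Linux/macOS only).
page_size = os.sysconf("SC_PAGE_SIZE")
phys_pages = os.sysconf("SC_PHYS_PAGES")
total_gb = page_size * phys_pages / (1024 ** 3)

print(f"Total RAM: {total_gb:.1f} GB")
if total_gb < 2:
    print("Warning: less than 2 GB RAM; model loading may fail.")
```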
After demonstrating the local inference job, you can show how to:
- Push the image to a registry:
docker tag sentiment-demo:latest your-registry/sentiment-demo:latest
docker push your-registry/sentiment-demo:latest
- Create lilypad_module.json.tmpl
- Initialize as a Git repository
- Create a tag for versioning
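For the `lilypad_module.json.tmpl` step, the sketch below is a rough illustration based on the Bacalhau-style job spec Lilypad modules have used; check the current Lilypad docs for the exact schema. The image name and the `.text` input variable are placeholders, not required names:

```json
{
  "machine": { "gpu": 0, "cpu": 1000, "ram": 2000 },
  "job": {
    "APIVersion": "V1beta1",
    "Spec": {
      "Deal": { "Concurrency": 1 },
      "Docker": {
        "Entrypoint": ["python", "/workspace/run_inference.py"],
        "EnvironmentVariables": ["INPUT_TEXT={{ js .text }}"],
        "Image": "your-registry/sentiment-demo:latest"
      },
      "Engine": "Docker",
      "Outputs": [{ "Name": "outputs", "Path": "/outputs" }]
    }
  }
}
```

Note how the template mirrors the pieces built earlier: the entrypoint and `/outputs` path match the Dockerfile, and `INPUT_TEXT` is the same environment variable the script reads.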
This provides a natural transition to the Lilypad module creation section of your workshop.