# Deploy SageMaker Endpoints & Bedrock Runtime Exploration
A comprehensive project for the Great AI Hackathon Malaysia 2025, demonstrating AWS SageMaker endpoint deployment and Amazon Bedrock model integration.
© 2025 Goodbye World team, for Great AI Hackathon Malaysia 2025 usage.
## SageMaker Model Deployment & Notebooks

- Jupyter notebooks for deploying and working with Hugging Face models
- `genai-llm.ipynb` - Generative AI Large Language Model experiments
- `qna-llm.ipynb` - Question & Answer system using LLM models
- Demonstrates deployment of models like `meta.llama3-8b-instruct-v1:0` and `distilbert-base-uncased-distilled-squad` for text generation and embeddings
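As a sketch of what the notebooks do once a model is deployed, a Hugging Face question-answering endpoint can be invoked with boto3. The endpoint name below is hypothetical, and the payload shape assumes the Hugging Face Inference Toolkit container format:

```python
import json


def build_qna_payload(question: str, context: str) -> str:
    """Build the JSON body a Hugging Face question-answering container expects."""
    return json.dumps({"inputs": {"question": question, "context": context}})


def ask_endpoint(question: str, context: str,
                 endpoint_name: str = "qna-distilbert-endpoint") -> dict:
    """Invoke a deployed SageMaker endpoint (endpoint_name is hypothetical)."""
    import boto3  # imported lazily; requires AWS credentials at call time

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_qna_payload(question, context),
    )
    return json.loads(response["Body"].read())
```

The actual endpoint name and response schema come from how the notebook deploys the model, so check `qna-llm.ipynb` for the exact values.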
## Amazon Bedrock Foundation Models

- Core Bedrock service integration and setup utilities
- `bedrock_wrapper.py` - Wrapper class for Bedrock operations
- `bedrock_studio_bootstrapper.py` - Automated setup for Bedrock Studio environments
- `hello_bedrock.py` - Basic Bedrock API introduction
- Foundation model management and configuration tools
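A thin wrapper along these lines could back the utilities above; this is a sketch using the Bedrock Converse API, and the real `bedrock_wrapper.py` may expose a different interface:

```python
class BedrockWrapper:
    """Minimal Bedrock wrapper sketch (the repository's class may differ)."""

    def __init__(self, model_id: str = "meta.llama3-8b-instruct-v1:0",
                 region: str = "us-east-1"):
        import boto3  # imported lazily; requires AWS credentials at call time

        self.model_id = model_id
        self.client = boto3.client("bedrock-runtime", region_name=region)

    @staticmethod
    def build_messages(prompt: str) -> list:
        """Shape a plain prompt into the Converse API message format."""
        return [{"role": "user", "content": [{"text": prompt}]}]

    def converse(self, prompt: str, max_tokens: int = 512) -> str:
        """Send one user turn and return the model's text reply."""
        response = self.client.converse(
            modelId=self.model_id,
            messages=self.build_messages(prompt),
            inferenceConfig={"maxTokens": max_tokens, "temperature": 0.5},
        )
        return response["output"]["message"]["content"][0]["text"]
```

The Converse API works across most Bedrock text models, which is why a single wrapper can serve the whole model lineup below.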
## Bedrock Runtime API Examples & Model Comparison
- Extensive collection of model-specific runtime examples
- Supported Models:
- 🔥 Amazon Nova (Text, Canvas, Reel)
- 🏛️ Amazon Titan (Text, Image, Embeddings)
- 🧬 Anthropic Claude (Various versions)
- 🎯 Cohere Command (R & Standard)
- 🦙 Meta LLaMA (3.1 variants)
- 🌪️ Mistral AI
- 🎨 Stability AI (Image generation)
Special Features:

- 📊 `comparison/` - Model comparison tools for testing different AI models with the same prompts
- 🔄 `cross-model-scenarios/` - Advanced scenarios like tool use demonstrations
- 🧪 `test/` - Comprehensive test suite for all model integrations
Located in `bedrock-runtime/comparison/`, this system allows you to:
- Compare responses from multiple AI models using identical prompts
- Measure performance and response times
- Export results in various formats
- Interactive and command-line interfaces available
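The comparison workflow above can be sketched in a few lines with boto3's Converse API. The model IDs are examples (use whichever models are enabled in your account), and the real `quick_compare.py` may differ:

```python
import time


def time_call(fn):
    """Run fn() and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start


def compare_models(prompt: str, model_ids):
    """Send the same prompt to each model and collect reply text and latency."""
    import boto3  # imported lazily; requires AWS credentials at call time

    client = boto3.client("bedrock-runtime")
    rows = []
    for model_id in model_ids:
        response, elapsed = time_call(lambda: client.converse(
            modelId=model_id,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        ))
        rows.append({
            "model": model_id,
            "seconds": round(elapsed, 2),
            "reply": response["output"]["message"]["content"][0]["text"],
        })
    return rows
```

Because every model receives an identical message list, differences in the `reply` and `seconds` columns reflect the models themselves rather than the prompt.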
Examples:

- Hello World programs for each AI service
- Streaming responses for real-time applications
- Document understanding capabilities
- Image generation workflows
- Cross-model tool usage scenarios
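The streaming responses mentioned above can be consumed through the Converse streaming API; this is a sketch under that assumption, not the repository's exact code, and the default model ID is only an example:

```python
def stream_reply(prompt: str, model_id: str = "amazon.nova-lite-v1:0"):
    """Yield text chunks as the model produces them (Converse streaming API)."""
    import boto3  # imported lazily; requires AWS credentials at call time

    client = boto3.client("bedrock-runtime")
    response = client.converse_stream(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    for event in response["stream"]:
        if "contentBlockDelta" in event:
            yield event["contentBlockDelta"]["delta"]["text"]


def print_stream(chunks) -> str:
    """Print chunks as they arrive and return the assembled text."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    print()
    return "".join(parts)
```

Printing each delta as it arrives is what gives real-time applications their typewriter effect instead of a single delayed reply.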
Testing:

- Automated integration tests for all models
- pytest-based testing framework
- Performance benchmarking tools
- Error handling and validation
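A pytest-style test can also run offline by stubbing the Bedrock client instead of calling AWS; the names here are illustrative, not the suite's actual test code:

```python
class FakeBedrockClient:
    """Offline stand-in for boto3's bedrock-runtime client."""

    def converse(self, modelId, messages, **kwargs):
        text = messages[0]["content"][0]["text"]
        return {"output": {"message": {"content": [{"text": f"echo:{text}"}]}}}


def converse_text(client, model_id: str, prompt: str) -> str:
    """Send one prompt and extract the text reply from a Converse response."""
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]


def test_converse_text_extracts_reply():
    assert converse_text(FakeBedrockClient(), "any-model-id", "hi") == "echo:hi"
```

Injecting the client as a parameter keeps the parsing logic testable without credentials, while the integration tests can pass a real boto3 client.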
## Getting Started

```bash
# Install Python dependencies
pip install -r bedrock/requirements.txt
pip install -r bedrock-runtime/requirements.txt

# Configure AWS credentials
aws configure
```

1. Explore Bedrock Models:

   ```bash
   cd bedrock-runtime
   python hello/hello_bedrock_runtime_converse.py
   ```

2. Compare AI Models:

   ```bash
   cd bedrock-runtime/comparison
   python quick_compare.py
   ```

3. SageMaker Experiments:

   ```bash
   cd sagemaker
   # Open genai-llm.ipynb in Jupyter
   jupyter lab genai-llm.ipynb
   ```

Each folder contains detailed README files with specific instructions:

- `bedrock-runtime/comparison/README.md` - Model comparison tools
- `bedrock-runtime/cross-model-scenarios/tool_use_demo/README.md` - Advanced scenarios
- Individual model folders contain usage examples and API documentation