Made with ❤️ from Gar's Bar

## Streamlit Random Apps
- Darts API Playground
- URL Scanner
- WSGI Stack vs Streamlit
- Guitar Tuner
- Streamlit Full Stack 3 Ways
- Fidelity / Personal Stock Account Dashboard
- Peak Weather: NH 4,000 Footers
- Pandas Power
- Text Recognition Dataset Generator App
- Github Lines of Code Analyzer
- AWS Textract Document Text Scan
- Roommate Spending Ledger Visualization
- Darts Intro
- Function to Streamlit Form
- PDF Merge and Split Utility
- Python Web Form Generator
- Basic File Drop
- Gif Maker
## Darts API Playground

Explore the datasets, metrics, and models of the Darts time series library.
See: Github Repo
## URL Scanner

Uses AWS Rekognition + Streamlit to provide interactive OCR / text extraction on real-world images fetched from a URL.
See: Github Repo
## WSGI Stack vs Streamlit

Compares an interactive web app built with bottle + htmx to the same idea built with streamlit. Code lives in the `wsgi_comparison` folder.

Watch: Youtube Breakdown · Read: Blog Post

Left: ~50 lines of Python and HTML. Right: ~15 lines of Python.
## Guitar Tuner

A simple guitar tuner powered by streamlit-webrtc.
## Streamlit Full Stack 3 Ways

A demo of the full-stack Streamlit concept, deployed with three increasingly complicated backends.
See: Github Repo
## Fidelity / Personal Stock Account Dashboard

Upload a CSV export from Fidelity investment account(s) and visualize profits and losses from selected tickers and accounts.

See: Github Repo

Upload a CSV or Excel file with at least a date column and a spending amount column to analyze maximum and average spending over different time periods.
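The time-period aggregation can be sketched with pandas `resample`; this is a minimal sketch that assumes columns named `date` and `amount` (the app lets you pick the actual columns from your file):

```python
import io
import pandas as pd

# Stand-in for an uploaded CSV with a date column and a spending column.
csv = io.StringIO(
    "date,amount\n"
    "2023-01-05,20\n"
    "2023-01-20,35\n"
    "2023-02-03,50\n"
    "2023-02-28,10\n"
)
df = pd.read_csv(csv, parse_dates=["date"]).set_index("date")

# Resample into monthly buckets, then take max and mean spending per bucket.
monthly = df["amount"].resample("MS").agg(["max", "mean"])
```

Swapping `"MS"` for `"W"` or `"QS"` gives weekly or quarterly buckets with no other changes.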
## Peak Weather: NH 4,000 Footers

Uses the async HTTP request library httpx to make 48 API calls roughly simultaneously in one Python process, feeding a dashboard of weather for all 4,000-foot mountains in New Hampshire.
See: Github Repo
## Pandas Power

Demos useful features of the Pandas library in a web app. Currently covered:

- `read_html`: parse dataframes from HTML (URL, upload, or raw copy + paste)
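The `read_html` feature boils down to one pandas call, which parses every `<table>` in the markup into a DataFrame (an HTML parser such as lxml or html5lib must be installed):

```python
import io
import pandas as pd

# Stand-in for HTML fetched from a URL, an upload, or a raw paste.
html = """
<table>
  <tr><th>city</th><th>pop</th></tr>
  <tr><td>Concord</td><td>44000</td></tr>
  <tr><td>Nashua</td><td>91000</td></tr>
</table>
"""

# read_html returns a list: one DataFrame per <table> found.
tables = pd.read_html(io.StringIO(html))
df = tables[0]
```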
## Text Recognition Dataset Generator App

Puts a frontend on the TRDG CLI tool. Primary goal: creating classic video game text screenshots with known ground-truth labels.
## Github Lines of Code Analyzer

Shallow clone a repo, then use unix + pandas tools to count how many lines of each file type are present.

```shell
streamlit run github_code_analyze.py
```
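The counting step (everything after the shallow `git clone --depth 1`) can be sketched in pure Python instead of the unix pipeline the app shells out to:

```python
from collections import Counter
from pathlib import Path
import tempfile

def lines_by_extension(root: Path) -> Counter:
    # Tally line counts per file extension, skipping the .git directory.
    counts: Counter = Counter()
    for path in root.rglob("*"):
        if path.is_file() and ".git" not in path.parts:
            text = path.read_text(errors="ignore")
            counts[path.suffix or "<no ext>"] += len(text.splitlines())
    return counts

# Quick demo against a throwaway directory standing in for the clone.
tmp = Path(tempfile.mkdtemp())
(tmp / "app.py").write_text("import this\nprint('hi')\n")
(tmp / "README.md").write_text("# title\n")
counts = lines_by_extension(tmp)
```

The resulting `Counter` drops straight into `pd.Series(counts)` for display.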
## AWS Textract Document Text Scan

Uses AWS Textract + S3 + Streamlit to provide an interactive OCR web app.
See: Github Repo
## Roommate Spending Ledger Visualization

Uses Pandas + Plotly + SQLite to show a full-stack use case for time series data: analyze spending over time per person (could be adapted to categories, tags, etc.).
See: Github Repo
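The per-person aggregation can be sketched with pandas alone, assuming a ledger with `date`, `person`, and `amount` columns (the repo backs this with SQLite and renders it with Plotly):

```python
import pandas as pd

# Stand-in for rows read out of the SQLite ledger table.
ledger = pd.DataFrame({
    "date": pd.to_datetime(
        ["2023-01-02", "2023-01-15", "2023-01-20", "2023-02-01"]
    ),
    "person": ["ana", "ben", "ana", "ben"],
    "amount": [40.0, 25.0, 10.0, 60.0],
})

# Monthly spending per person: group by person, resample on the date index.
monthly = (
    ledger.set_index("date")
          .groupby("person")["amount"]
          .resample("MS")
          .sum()
)
```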
## Darts Intro

A roughly 1-for-1 exploration of the Darts quick start with Streamlit, plus some interactive forecasting of an uploaded CSV.

Exploring Time Series in: Github Repo
## Function to Streamlit Form

Powered by Streamlit-Pydantic. Uses `inspect.signature` to build an input model from a function definition.

What's the point? Even more rapid development! Take any CLI or other functional API and create an input form for it. Encourages developers to write accurate type hints ;)
```python
import streamlit as st
import streamlit_pydantic as sp
from inspect import signature
from pydantic import create_model


def crunch_the_numbers(ticker: str, num_periods: int = 10) -> dict:
    return {"ticker": ticker, "value": num_periods * 100.00}


# Map each parameter to a (type, default) pair for pydantic;
# `...` marks parameters without a default as required fields.
pydantic_fields = {
    x.name: (x.annotation, ...) if x.default is x.empty else (x.annotation, x.default)
    for x in signature(crunch_the_numbers).parameters.values()
}
PydanticFormModel = create_model("PydanticFormModel", **pydantic_fields)

input_col, output_col = st.columns(2)
with input_col:
    data = sp.pydantic_form(key="some_form", model=PydanticFormModel)
if data:
    output_col.json(crunch_the_numbers(**data.dict()))
```
## PDF Merge and Split Utility

Powered by the pypdf2 library; feel free to use reportlab or something else. JPG or PNG output is optional, powered by pdf2image (requires `brew install poppler`, `apt install poppler-utils`, or `conda install -c conda-forge poppler`).

Combines multiple PDFs into a single PDF or splits a single PDF into multiple PDFs.
## Python Web Form Generator

Powered by Streamlit-Pydantic. Generate a Streamlit web form UI + Pydantic models from an example JSON data structure.

See: Github Repo
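The core idea can be sketched with pydantic's `create_model`: infer each field's type from an example JSON payload and use the example values as defaults. This is a hypothetical flat-object sketch; the repo's generator handles more structure than this:

```python
from pydantic import create_model

# Example JSON payload standing in for user input.
example = {"ticker": "AAPL", "shares": 10, "reinvest": True}

# One (type, default) pair per key, inferred from the example values.
fields = {name: (type(value), value) for name, value in example.items()}
FormModel = create_model("FormModel", **fields)

m = FormModel()  # every field defaults to its example value
```

Streamlit-Pydantic can then render `FormModel` directly with `sp.pydantic_form`.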
## Basic File Drop

Not live, as there is no point. Make a web frontend that you can access from the local network and drop any number of files onto the host computer, all in ~10 lines of code.
```python
import streamlit as st
from pathlib import Path

files = st.file_uploader("uploads", accept_multiple_files=True)
destination = Path("downloads")
destination.mkdir(exist_ok=True)

for f in files:
    bytes_data = f.read()
    st.write("filename:", f.name)
    st.write(f"{len(bytes_data) = }")
    new_file = destination / f.name
    new_file.write_bytes(bytes_data)
```