gca-sprint report outcome; uv docker config
N-Clerkx committed Dec 19, 2024
1 parent 9214026 commit aceb0e3
Showing 25 changed files with 6,495 additions and 166 deletions.
10 changes: 10 additions & 0 deletions App/functions/report-python-cloud-run/.dockerignore
@@ -0,0 +1,10 @@
.venv/
*__pycache__/
*.pyc
*.pyo
*.pyd
*.pyw
*.pyz
*.pyj
*.pyx
*.pyd
52 changes: 32 additions & 20 deletions App/functions/report-python-cloud-run/Dockerfile
@@ -12,23 +12,35 @@
# See the License for the specific language governing permissions and
# limitations under the License.

# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.11-slim

# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True

# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./

# Install production dependencies.
RUN pip install -r requirements.txt

# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
# Use the official uv python image
# Use a Python image with uv pre-installed
FROM ghcr.io/astral-sh/uv:python3.11-bookworm

# Install the project into `/app`
WORKDIR /app

# Enable bytecode compilation
ENV UV_COMPILE_BYTECODE=1

# Copy from the cache instead of linking since it's a mounted volume
ENV UV_LINK_MODE=copy
RUN apt-get update && apt-get install -y libgdal-dev libgl1

# Install the project's dependencies using the lockfile and settings
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=uv.lock,target=uv.lock \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
uv sync --frozen --no-install-project --no-dev

# Then, add the rest of the project source code and install it
# Installing separately from its dependencies allows optimal layer caching
ADD . /app
RUN --mount=type=cache,target=/root/.cache/uv \
uv sync --frozen --no-dev

# Place executables in the environment at the front of the path
ENV PATH="/app/.venv/bin:$PATH"

# CMD uv run --with waitress waitress-serve main:app
# CMD uv run python main.py
CMD uv run --with gunicorn gunicorn --bind :8080 --workers 1 --threads 8 --timeout 0 main:app
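As a side note, the gunicorn flags in the CMD above map one-to-one onto gunicorn's Python config file. A hypothetical gunicorn.conf.py (not part of this commit) expressing the same settings would look like:

```python
# gunicorn.conf.py — hypothetical equivalent of the CMD flags above, not part of this commit.
# gunicorn reads these module-level names when started with `gunicorn -c gunicorn.conf.py main:app`.
bind = ":8080"  # same port the Dockerfile CMD binds to
workers = 1     # one worker process; raise towards the number of available CPU cores
threads = 8     # threads per worker
timeout = 0     # disable the worker timeout, as in the CMD
```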
Empty file.
9 changes: 9 additions & 0 deletions App/functions/report-python-cloud-run/docker-compose.yml
@@ -0,0 +1,9 @@
services:
report-service:
build:
context: .
dockerfile: Dockerfile
env_file:
- .env.prod
ports:
- "8080:8080"
2 changes: 1 addition & 1 deletion App/functions/report-python-cloud-run/main.py
@@ -73,4 +73,4 @@ def return_html():


if __name__ == "__main__":
app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
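For context, the pattern in main.py is a standard Flask entry point: the module-level `app` is what gunicorn loads as `main:app`, while the `__main__` block only serves local runs. A minimal sketch follows; only `app`, `return_html` and the `app.run(...)` line appear in the diff, the route path and body are assumptions:

```python
# Minimal sketch of the main.py entry-point pattern; the real module builds
# the report HTML, this placeholder only illustrates the structure.
import os
from flask import Flask

app = Flask(__name__)  # the object gunicorn imports as `main:app`

@app.route("/")  # route path assumed
def return_html():
    return "<html><body>report placeholder</body></html>"

if __name__ == "__main__":
    # Local development fallback; in the container gunicorn binds :8080 instead.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```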
28 changes: 28 additions & 0 deletions App/functions/report-python-cloud-run/pyproject.toml
@@ -0,0 +1,28 @@
[project]
name = "gca-report"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
"flask>=3.1.0",
"geopandas>=1.0.1",
"ipykernel>=6.29.5",
"jinja2>=3.1.4",
"langchain>=0.3.11",
"matplotlib>=3.9.3",
"numba>=0.60.0",
"openai>=1.57.1",
"opencv-python>=4.10.0.84",
"pymupdf>=1.25.1",
"pystac-client>=0.7",
"resilientplotterclass",
"rioxarray>=0.16.0",
"sentence-transformers>=3.3.1",
"weasyprint>=63.1",
"xarray>=2024.4.0",
"zarr>=2.18.2",
]

[tool.uv.sources]
resilientplotterclass = { git = "https://github.com/Deltares-research/ResilientPlotterClass" }
39 changes: 39 additions & 0 deletions App/functions/report-python-cloud-run/report/app.html
@@ -0,0 +1,39 @@
<div class="section-grid">
<h>The State of the Coast Report, <span style="color: rgb(0, 204, 150)">explained</span></h>
<p>The data of this report is obtained from various STACs and the content of this report is generated by the Azure AI.
This page explains the approach to process individual data from the STACs and relevant literature regarding the datasets.
The datasets can be found in the following STACs:</p>
<p>https://raw.githubusercontent.com/openearth/global-coastal-atlas/subsidence_etienne/STAC/data/current/catalog.json</p>
<p>https://raw.githubusercontent.com/openearth/coclicodata/main/current/catalog.json</p>

<p>Coastal Types: The dataset indicates the sediment composition of a beach at certain transects,
which are globally spaced shore-normal transects in every 500m. The transects can either labelled as sandy, muddy, vegetated, coastal cliff or other materials.
This dataset is derived from satellite images and other parameters and generated by using a supervised random forest classifier.
Details of the methodology can be referred to Breiman et al. (2001).</p>

<p>The Population: The dataset provides a global population count per pixel at approximately 100m resolution
and it is based on the United Nation Development Programme (UNDP) 2020 estimates for in total 183 countries.</p>

<p>Historical Shoreline Change: The dataset provides the annual shoreline position over the period from 1984 to 2021 along 1.8 million transects in the world.
The transects are 500m spaced shore-normal transects. The position of shoreline is derived from satellite images.
Details of the methodology can be referred to XXXX.</p>

<p>SSP: SSP stands for Shared Socioeconomic Pathway, which was introduced in the IPCC 6th Assessment Report (AR6).
This indicator describes different socioeconomic assumptions, such as population, economic growth, technological development and etc.
Details can be referred to the AR6.</p>

<p>RCP: RCP stands for Representative Concentration Pathway, which was introduced in IPCC 5th Assessment Report (AR5).
RCP4.5 and 8.5 describe scenarios with intermediate or very high greenhouse gas (GHG) emission and other radiative forcings.
RCP4.5 describes a scenario that the CO2 emission will remain around current levels until 2050 and fall but not reach net zero by 2100.
RCP8.5 describes a scenario that the CO2 emission will become triple by 2075.</p>

<p>Sea Level Rise Projection: This is median projections of regional sea level rise from 2020 to 2150, relative to a 1995-2014 baseline.
This projection data is originally from the AR6. The projections are based on several scenarios, including SSP126, SSP245 and SSP585.
Details of the projection can be found in XXX.</p>

<p>Future Shoreline Projections: The average shoreline change rate is based on projection of locations of sandy shorelines relative to their reference locations in RCP4.5 and RCP8.5 scenarios.
According to Luijendijk et al. (XXXX), the shoreline locations in 2021 and the projected shoreline locations in 2050 and 2100 are defined.
The average shoreline change rate is calculated based on the spatial distance difference of the shoreline location between two referenced years and divided by the year difference.
The future shoreline location is estimated based on XXXXX. Details can be found in “XXXX” (Luijendijk et al., XXXX ).
The erosion and accretion classification system is the same as the one in the historical shoreline change.</p>
</div>
@@ -1,13 +1,30 @@
from typing import Optional
import xarray as xr

from .datasetcontent import DatasetContent
from .esl import get_esl_content
from report.datasets.datasetcontent import DatasetContent

from report.datasets.shoremon import (
get_sedclass_content,
get_shoremon_content,
get_shoremon_fut_content,
)
from report.datasets.popgpd import get_world_pop_content
# from .subtreat import get_sub_threat_content


def get_dataset_content(dataset_id: str, xarr: xr.Dataset) -> Optional[DatasetContent]:
match dataset_id:
case "esl_gwl":
return get_esl_content(xarr)
# case "esl_gwl":
# return get_esl_content(xarr)
case "sed_class":
return get_sedclass_content(xarr)
case "shore_mon":
return get_shoremon_content(xarr)
case "shore_mon_fut":
return get_shoremon_fut_content(xarr)
case "world_pop":
return get_world_pop_content(xarr)
# case "sub_threat":
# return None
case _:
return None
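For context, the dispatcher above is presumably called once per dataset when assembling the report; a hedged usage sketch follows (the module that exposes get_dataset_content is not named in this diff, so the import path is an assumption):

```python
# Hypothetical caller of the dataset-content dispatcher shown above.
import xarray as xr
from report.datasets.dataset import get_dataset_content  # module path assumed

xarr = xr.Dataset()  # placeholder; the real dataset is opened from a STAC asset
content = get_dataset_content("world_pop", xarr)
if content is None:
    print("no report section registered for this dataset id")
```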