diff --git a/wildfires_wise_jupyter_demo.ipynb b/wildfires_wise_jupyter_demo.ipynb
new file mode 100644
index 0000000..3823484
--- /dev/null
+++ b/wildfires_wise_jupyter_demo.ipynb
@@ -0,0 +1,803 @@
+{
+ "cells": [
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "dd697ec6-36d6-4378-8eff-e7ed9b70de85",
+ "metadata": {
+ "editable": true,
+ "slideshow": {
+ "slide_type": ""
+ },
+ "tags": []
+ },
+ "source": [
+ "# Use case wildfires: WISE\n",
+ "\n",
+ "### Joonas Kolstela, FMI\n",
+ "\n",
+ "### April 26, 2024\n",
+ "\n",
+ "## The Canadian Wildfire Intelligence and Simulation Engine (WISE)\n",
+ "\n",
+ "A deterministic fire spread model based on the Canadian Prometheus fire growth model,\n",
+ "built on the Canadian Forest Fire Danger Rating System (CFFDRS) Fire Weather Index (FWI) and Fire Behaviour Prediction (FBP) systems.\n",
+ "Development is still ongoing, but the model is already in operational use, e.g. by the Government of the Northwest Territories. A version has also been adapted at the Finnish Meteorological Institute.\n",
+ "\n",
+ "### Fuel information + topography + meteorological conditions = Fire Behaviour Prediction information\n",
+ "- FWI: Estimates the moisture of different fuel types.\n",
+ "- FBP: Estimates fire spread rate and type of fire in different fuel classes.\n",
+ "\n",
+ "\n",
+ "# Fire spread calculations in the WISE system\n",
+ "\n",
+ "\n",
+ "\n",
+ "![WISE propagation](wise_propagation.jpg)\n",
+ "\n",
+ "#### Huygens' principle is used to simulate fire growth:\n",
+ "\n",
+ "#### a - Model selects propagation points along the fire perimeter\n",
+ "\n",
+ "#### b - Fire propagation calculations are done using fuel, topography and weather data\n",
+ "\n",
+ "#### c - A new fire perimeter is formed, fire behaviour characteristics are calculated, and the loop is repeated\n",
+ "\n",
+ "- Fuel classes have different parameters for fire spread rates, crown base heights, etc.\n",
+ "- On level terrain with no wind, fire spreads uniformly in all directions.\n",
+ "- Longer drought periods increase the buildup effect in different fuels, raising their fire spread rates.\n",
+ "- Hourly Fine Fuel Moisture Code (HFFMC) and Initial Spread Index (ISI) values are used in the hourly spread calculations.\n",
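+ "\n",
+ "The no-wind case above can be sketched as a Huygens-style perimeter update: each vertex on the fire front advances along its outward normal by the local rate of spread. This is a minimal illustrative sketch (constant spread rate, circular front), not the WISE implementation:\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "def propagate(perimeter, ros, dt):\n",
+ "    # advance each perimeter vertex along its outward normal by ros * dt\n",
+ "    centre = perimeter.mean(axis=0)\n",
+ "    normals = perimeter - centre\n",
+ "    normals /= np.linalg.norm(normals, axis=1, keepdims=True)\n",
+ "    return perimeter + ros * dt * normals\n",
+ "\n",
+ "theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)\n",
+ "ring = np.column_stack([np.cos(theta), np.sin(theta)])  # 1 m radius\n",
+ "new_ring = propagate(ring, ros=0.5, dt=60.0)  # 0.5 m/min for one hour\n",
+ "# under uniform conditions the front stays circular; radius grows by ros * dt\n",
+ "```\n",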
+ " "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7d110dc3-4f2d-4bdd-a1a8-a16dae9a66ad",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "markdown",
+ "id": "150c1827-2300-427e-98ef-3f5ff01e63eb",
+ "metadata": {},
+ "source": [
+ "# Code\n",
+ "\n",
+ "### The WISE model is an open-source project hosted on GitHub at https://github.com/WISE-Developers. The model, along with the Python scripts used, was installed in a Singularity container and moved to LUMI for use in the workflow.\n",
+ "\n",
+ "# Running the model\n",
+ "\n",
+ "Input variables consist of:\n",
+ "- Digital Elevation Model (DEM) (16 x 16 m) (National Land Survey of Finland).\n",
+ "- Fuel classification information (e.g. class 2 = spruce-dominated boreal forest, 3 = pine-dominated, 101 = non-burning) (16 x 16 m). Fuel classes have been derived from the National Forest Inventory of Finland (Natural Resources Institute Finland).\n",
+ "\n",
+ "![c6 fuel class example](c6_fuel.jpg)\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "- Meteorological data at an hourly temporal resolution (the nearest grid point to the ignition serves as a virtual weather station)\n",
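+ "\n",
+ "The virtual weather station is simply nearest-grid-point selection. A minimal sketch with an assumed 0.1 degree grid and the Kalajoki ignition point used later in this workflow:\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "# illustrative 0.1 degree grid covering Finland\n",
+ "lats = np.arange(60.0, 70.0, 0.1)\n",
+ "lons = np.arange(20.0, 32.0, 0.1)\n",
+ "ign_lat, ign_lon = 64.007044, 24.152986  # Kalajoki test area ignition\n",
+ "\n",
+ "# indices of the grid point closest to the ignition\n",
+ "i = np.abs(lats - ign_lat).argmin()\n",
+ "j = np.abs(lons - ign_lon).argmin()\n",
+ "# (lats[i], lons[j]) then serves as the virtual weather station\n",
+ "```\n",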
+ "\n",
+ "Output variables consist of:\n",
+ "- Fire spread at an hourly temporal resolution\n",
+ "- Maximum flame length in each cell\n",
+ "- Maximum fire intensity in each cell\n",
+ "- Percent canopy burned in each cell\n",
+ "\n",
+ "\n",
+ "![WISE workflow](wise_workflow.jpg)\n",
+ "\n",
+ "## Requested data from the GSV\n",
+ "\n",
+ "- Temperature\n",
+ "- Dewpoint temperature\n",
+ "- Wind U and V components\n",
+ "- Precipitation\n",
+ "- 0.1 degree spatial resolution, hourly temporal resolution\n",
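+ "\n",
+ "Before they reach WISE, these fields are converted into fire-weather inputs: temperatures from Kelvin to Celsius, relative humidity from temperature and dewpoint via the Magnus approximation, and wind speed from the U and V components. A small sketch with sample values, using the same formulas as ncdf_edits_multiarea.py below:\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "t2m_k, d2m_k = 293.15, 283.15  # 2 m temperature and dewpoint, Kelvin\n",
+ "u10, v10 = 3.0, 4.0            # 10 m wind components, m/s\n",
+ "\n",
+ "temp_c = t2m_k - 273.15        # 20.0 C\n",
+ "dew_c = d2m_k - 273.15         # 10.0 C\n",
+ "\n",
+ "# Magnus approximation for relative humidity (%)\n",
+ "rh = 100 * (np.exp(17.625 * dew_c / (243.04 + dew_c))\n",
+ "            / np.exp(17.625 * temp_c / (243.04 + temp_c)))\n",
+ "\n",
+ "wind_speed = np.hypot(u10, v10)  # 5.0 m/s\n",
+ "```\n",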
+ "\n",
+ "## run_wildfires_wise.py\n",
+ "\n",
+ "- Setting run start and end dates\n",
+ "- Running the wise.sif container"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a019431a-4385-43ed-8531-76a6f683ef63",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "running run_wildfires.py\n",
+ "Dates formatted, running wise container\n",
+ "launching WISE runs\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "INFO: underlay of /etc/localtime required more than 50 (114) bind mounts\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "running .fgmj modifier\n",
+ "fgmj file modified\n",
+ "fgmj file modified\n",
+ "fgmj file modified\n",
+ "modify_fgmj.py done\n",
+ "running ncdf_edits_multiarea.py\n",
+ "ncdf_edits_multiarea.py done, starting modify_fgmj.py\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Warning 1: Self-intersection at or near point 24.154789463039751 64.005224576456456\n"
+ ]
+ }
+ ],
+ "source": [
+ "# import modules\n",
+ "print('running run_wildfires.py')\n",
+ "import sys\n",
+ "import argparse\n",
+ "import os\n",
+ "import subprocess\n",
+ "import csv\n",
+ "\n",
+ "# parser used in autosubmit workflow\n",
+ "\n",
+ "# creating the parser\n",
+ "#parser = argparse.ArgumentParser(description='Runscript for data notifier job.')\n",
+ "\n",
+ "# adding year, month, day and experiment id arguments\n",
+ "#parser.add_argument('-year_start', required=True, help='Input year start', default=1)\n",
+ "#parser.add_argument('-month_start', required=True, help='Input month start', default=2)\n",
+ "#parser.add_argument('-day_start', required=True, help='Input day start', default=3)\n",
+ "\n",
+ "#parser.add_argument('-year_end', required=True, help='Input year end', default=4)\n",
+ "#parser.add_argument('-month_end', required=True, help='Input month end', default=5)\n",
+ "#parser.add_argument('-day_end', required=True, help='Input day end', default=6)\n",
+ "\n",
+ "#parser.add_argument('-expid', required=True, help='experiment id', default=7)\n",
+ "\n",
+ "# parsing the arguments\n",
+ "#args = parser.parse_args()\n",
+ "\n",
+ "# combining all dates\n",
+ "#all_dates = ','.join([args.year_start, args.month_start, args.day_start, args.year_end, args.month_end, args.day_end])\n",
+ "\n",
+ "# placeholder values for manual runs\n",
+ "year_start = \"1990\"\n",
+ "year_end = \"1990\"\n",
+ "month_start = \"06\"\n",
+ "month_end = \"06\"\n",
+ "day_start = \"06\"\n",
+ "day_end = \"06\"\n",
+ "\n",
+ "# create combined variable from start and end dates\n",
+ "all_dates = ','.join([year_start,month_start,day_start,year_end,month_end,day_end])\n",
+ "\n",
+ "# creating an environment variable with the dates\n",
+ "os.environ['ALL_DATES'] = all_dates\n",
+ "\n",
+ "\n",
+ "print(\"Dates formatted, running wise container\")\n",
+ "#print(ALL_DATES)\n",
+ "# build the command for running the singularity container wise.sif\n",
+ "cmd = [\n",
+ " 'singularity',\n",
+ " 'run',\n",
+ " '--env', f'ALL_DATES={all_dates}',\n",
+ " '--bind', '/mnt/d/DESTINE_CATS/wildfire_wise_demo/wise_testset/wise_testset/wise_lumi_files:/testjobs',\n",
+ " '--bind', '/mnt/d/DESTINE_CATS/wildfire_wise_demo/wise_testset/wise_testset/wise_outputs:/testjobs/testjobs/area1/Outputs',\n",
+ " '--bind', '/mnt/d/DESTINE_CATS/wildfire_wise_demo/wise_testset/wise_testset/wise_outputs:/testjobs/testjobs/area2/Outputs',\n",
+ " '--bind', '/mnt/d/DESTINE_CATS/wildfire_wise_demo/wise_testset/wise_testset/wise_outputs:/testjobs/testjobs/area3/Outputs',\n",
+ " '--bind', '/mnt/d/DESTINE_CATS/wildfire_wise_demo/wise_testset/wise_testset/temp:/input_data',\n",
+ " '/mnt/d/DESTINE_CATS/wildfire_wise_demo/wise_testset/wise_testset/wise_tester.sif'\n",
+ "]\n",
+ "\n",
+ "# run the container wise.sif\n",
+ "print('launching WISE runs')\n",
+ "subprocess.run(cmd)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e8474ba1-3083-4c9a-a698-ae7d4c0f8951",
+ "metadata": {},
+ "source": [
+ "### run_wise.py\n",
+ "\n",
+ "- Combining netcdf files and passing them to the data preprocessing script ncdf_edits_multiarea.py\n",
+ "- Running the WISE model for the three test areas"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7b40560b-8894-4d1a-a8b0-4298fe532fb6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#!/usr/bin/python3\n",
+ "# import modules\n",
+ "print('running run_wise.py')\n",
+ "import sys\n",
+ "import argparse\n",
+ "import os\n",
+ "import xarray as xr\n",
+ "import subprocess\n",
+ "import csv\n",
+ "\n",
+ "# defining file input / output paths\n",
+ "in_path = '/input_data/'\n",
+ "out_path = '/input_data/'\n",
+ "\n",
+ "# reading the run dates file\n",
+ "#with open('/testjobs/run_dates.txt', 'r') as file:\n",
+ "# lines = file.read().splitlines()\n",
+ "\n",
+ "# using the environment variable to get run dates\n",
+ "dates_str = os.getenv('ALL_DATES')\n",
+ "print(dates_str)\n",
+ "if dates_str:\n",
+ " year_start, month_start, day_start, year_end, month_end, day_end = dates_str.split(',')\n",
+ "else:\n",
+ " print(\"Environment variable 'ALL_DATES' not found or is invalid.\")\n",
+ " sys.exit(1)\n",
+ " \n",
+ "\n",
+ "# Provide the data file name for all variables (weekly)\n",
+ "temp_name = f'{year_start}_{month_start}_{day_start}_T00_to_{year_end}_{month_end}_{day_end}_T23_2t_hourly_mean.nc' # temperature\n",
+ "dewpoint_name = f'{year_start}_{month_start}_{day_start}_T00_to_{year_end}_{month_end}_{day_end}_T23_2d_hourly_mean.nc' # dewpoint temperature\n",
+ "uwind_name = f'{year_start}_{month_start}_{day_start}_T00_to_{year_end}_{month_end}_{day_end}_T23_10u_hourly_mean.nc' # u wind\n",
+ "vwind_name = f'{year_start}_{month_start}_{day_start}_T00_to_{year_end}_{month_end}_{day_end}_T23_10v_hourly_mean.nc' # v wind\n",
+ "precip_name = f'{year_start}_{month_start}_{day_start}_T00_to_{year_end}_{month_end}_{day_end}_T23_tp_hourly_mean.nc' # precipitation\n",
+ "\n",
+ "# read the netcdf files and take variables\n",
+ "temp_nc = xr.open_dataset(in_path+temp_name)\n",
+ "dewpoint_nc = xr.open_dataset(in_path+dewpoint_name)\n",
+ "windu_nc = xr.open_dataset(in_path+uwind_name)\n",
+ "windv_nc = xr.open_dataset(in_path+vwind_name)\n",
+ "precip_nc = xr.open_dataset(in_path+precip_name)\n",
+ "\n",
+ "windu_var = windu_nc['10u']\n",
+ "windv_var = windv_nc['10v']\n",
+ "temp_var = temp_nc['2t']\n",
+ "dewpoint_var = dewpoint_nc['2d']\n",
+ "precip_var = precip_nc['tp']\n",
+ "\n",
+ "# combine all variables into singular file\n",
+ "combined_nc = xr.Dataset({\n",
+ " '10u': windu_var,\n",
+ " '10v': windv_var,\n",
+ " '2t': temp_var,\n",
+ " '2d': dewpoint_var,\n",
+ " 'tp': precip_var,\n",
+ "})\n",
+ "\n",
+ "file_name = out_path+'combined_ncdf.nc'\n",
+ "\n",
+ "# write the new netcdf file\n",
+ "combined_nc.to_netcdf(file_name)\n",
+ "\n",
+ "# current working dir\n",
+ "current_directory = os.getcwd()\n",
+ "\n",
+ "# get the group id\n",
+ "directory_stat = os.stat(current_directory)\n",
+ "\n",
+ "# get group ownership\n",
+ "group_owner_gid = directory_stat.st_gid\n",
+ "\n",
+ "parent_directory = os.path.dirname(file_name)\n",
+ "parent_gid = os.stat(parent_directory).st_gid\n",
+ "\n",
+ "# change group ownership\n",
+ "os.chown(file_name, -1, parent_gid)\n",
+ "\n",
+ "\n",
+ "# run the ncdf_edits_multiarea.py script\n",
+ "cmd = ['python3','/python_scripts/ncdf_edits_multiarea.py']\n",
+ "print('starting ncdf_edits_multiarea.py')\n",
+ "#subprocess.run(cmd + [out_path+'combined_ncdf.nc'])\n",
+ "\n",
+ "# run the WISE model for the three test areas in Finland\n",
+ "print('launching WISE runs')\n",
+ "cmd = ['wise','-r', '4', '-f', '0', '-t', '/testjobs/testjobs/area1/job.fgmj']\n",
+ "#subprocess.run(cmd)\n",
+ "cmd = ['wise','-r', '4', '-f', '0', '-t', '/testjobs/testjobs/area2/job.fgmj']\n",
+ "#subprocess.run(cmd)\n",
+ "cmd = ['wise','-r', '4', '-f', '0', '-t', '/testjobs/testjobs/area3/job.fgmj']\n",
+ "#subprocess.run(cmd)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "819b3130-25a1-4836-9d2b-57ce7ace277b",
+ "metadata": {},
+ "source": [
+ "### ncdf_edits_multiarea.py\n",
+ "\n",
+ "- Weather data preprocessing (unit conversions, relative humidity, and wind speed and direction calculations) and creation of the weather.txt files for the model runs\n",
+ "- Running the modify_fgmj.py script"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "31833912-8975-47c5-8a25-ca2ec56f704e",
+ "metadata": {
+ "editable": true,
+ "slideshow": {
+ "slide_type": ""
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "#!/usr/bin/python3\n",
+ "# import modules\n",
+ "print('running ncdf_edits_multiarea.py')\n",
+ "import os\n",
+ "import argparse\n",
+ "import numpy as np\n",
+ "import xarray as xr\n",
+ "import pandas as pd\n",
+ "import subprocess\n",
+ "import sys\n",
+ "from datetime import datetime\n",
+ "\n",
+ "# load netcdf dataset\n",
+ "dataset = xr.open_dataset('/input_data/combined_ncdf.nc')\n",
+ "\n",
+ "# calculate wind speed and direction from 10u and 10v components\n",
+ "wind_speed = np.sqrt(dataset['10u']**2 + dataset['10v']**2)\n",
+ "dataset['wind_speed'] = wind_speed\n",
+ "\n",
+ "wind_direction_rad = np.arctan2(dataset['10v'],dataset['10u'])\n",
+ "wind_direction_deg = np.degrees(wind_direction_rad)\n",
+ "wind_direction_deg = (wind_direction_deg + 360) % 360\n",
+ "dataset['wind_direction'] = wind_direction_deg\n",
+ "\n",
+ "# calculate relative humidity and convert temperatures to Celsius\n",
+ "temperature_celsius = dataset['2t'] - 273.15 # Convert from Kelvin to Celsius\n",
+ "dewpoint_celsius = dataset['2d'] - 273.15 # Convert from Kelvin to Celsius\n",
+ "relative_humidity = 100 * (np.exp((17.625 * dewpoint_celsius) / (243.04 + dewpoint_celsius)) / np.exp((17.625 * temperature_celsius) / (243.04 + temperature_celsius)))\n",
+ "\n",
+ "dataset['relative_humidity'] = relative_humidity\n",
+ "dataset['temperature'] = temperature_celsius\n",
+ "\n",
+ "# set the ignition coordinates for the three test areas\n",
+ "area1_lat = 64.007044\n",
+ "area1_lon = 24.152986\n",
+ "\n",
+ "area2_lat = 63.050609\n",
+ "area2_lon = 29.889436\n",
+ "\n",
+ "area3_lat = 63.433700\n",
+ "area3_lon = 30.540338\n",
+ "\n",
+ "# select only closest cell from netcdf to each ignition location\n",
+ "nearest_cell1 = dataset.sel(lat=area1_lat,lon=area1_lon,method='nearest')\n",
+ "nearest_cell2 = dataset.sel(lat=area2_lat,lon=area2_lon,method='nearest')\n",
+ "nearest_cell3 = dataset.sel(lat=area3_lat,lon=area3_lon,method='nearest')\n",
+ "\n",
+ "df1 = nearest_cell1.to_dataframe()\n",
+ "df2 = nearest_cell2.to_dataframe()\n",
+ "df3 = nearest_cell3.to_dataframe()\n",
+ "\n",
+ "# make required dataframe edits\n",
+ "df1.reset_index(inplace=True)\n",
+ "df1.set_index('time',inplace=True)\n",
+ "df2.reset_index(inplace=True)\n",
+ "df2.set_index('time',inplace=True)\n",
+ "df3.reset_index(inplace=True)\n",
+ "df3.set_index('time',inplace=True)\n",
+ "\n",
+ "df1['date'] = df1.index.date\n",
+ "df1['hour'] = df1.index.time\n",
+ "df2['date'] = df2.index.date\n",
+ "df2['hour'] = df2.index.time\n",
+ "df3['date'] = df3.index.date\n",
+ "df3['hour'] = df3.index.time\n",
+ "\n",
+ "# remove unused variables\n",
+ "variables_to_drop = ['10v','10u','2t','2d']\n",
+ "df1 = df1.drop(variables_to_drop, axis = 1)\n",
+ "df2 = df2.drop(variables_to_drop, axis = 1)\n",
+ "df3 = df3.drop(variables_to_drop, axis = 1)\n",
+ "\n",
+ "# create datetime series for scenario start and end times (start at each day 10:00 and end same day 21:00)\n",
+ "combined_datetime_series = pd.to_datetime(df1.index.date) + pd.to_timedelta([time.hour for time in df1.index], unit='h')\n",
+ "combined_datetime_series = pd.Series(combined_datetime_series)\n",
+ "\n",
+ "# reset the index to default integer index\n",
+ "combined_datetime_series = combined_datetime_series.reset_index(drop=True)\n",
+ "#print(combined_datetime_series)\n",
+ "# select scenario start and end dates\n",
+ "scenario_start = str(combined_datetime_series.iloc[1])\n",
+ "scenario_end = str(combined_datetime_series.iloc[-2])\n",
+ "scenario_start = scenario_start.replace(' ','T')\n",
+ "scenario_end = scenario_end.replace(' ','T')\n",
+ "scenario_start = scenario_start+':00'\n",
+ "scenario_end = scenario_end+':00'\n",
+ "\n",
+ "dates_at_10 = combined_datetime_series[combined_datetime_series.apply(lambda x: x.time() == pd.to_datetime('10:00:00').time())]\n",
+ "dates_at_21 = combined_datetime_series[combined_datetime_series.apply(lambda x: x.time() == pd.to_datetime('21:00:00').time())]\n",
+ "\n",
+ "# take the first 10:00 and the last 21:00 timestamps for the model run\n",
+ "dates_at_10 = str(dates_at_10.iloc[0])\n",
+ "dates_at_10 = dates_at_10.replace(' ','T')\n",
+ "dates_at_21 = str(dates_at_21.iloc[-1])\n",
+ "dates_at_21 = dates_at_21.replace(' ','T')\n",
+ "dates_at_10 = dates_at_10+':00'\n",
+ "dates_at_21 = dates_at_21+':00'\n",
+ "\n",
+ "df1.reset_index(inplace=True)\n",
+ "df2.reset_index(inplace=True)\n",
+ "df3.reset_index(inplace=True)\n",
+ "\n",
+ "# set column order\n",
+ "new_column_order = ['date', 'hour', 'temperature', 'relative_humidity', 'wind_direction', 'wind_speed', 'tp']\n",
+ "df1 = df1[new_column_order]\n",
+ "df2 = df2[new_column_order]\n",
+ "df3 = df3[new_column_order]\n",
+ "\n",
+ "# Rename the columns\n",
+ "df1.rename(columns={\n",
+ " 'date': 'HOURLY',\n",
+ " 'hour': 'HOUR',\n",
+ " 'temperature': 'TEMP',\n",
+ " 'relative_humidity': 'RH',\n",
+ " 'wind_direction': 'WD',\n",
+ " 'wind_speed': 'WS',\n",
+ " 'tp': 'PRECIP',\n",
+ "}, inplace=True)\n",
+ "\n",
+ "df2.rename(columns={\n",
+ " 'date': 'HOURLY',\n",
+ " 'hour': 'HOUR',\n",
+ " 'temperature': 'TEMP',\n",
+ " 'relative_humidity': 'RH',\n",
+ " 'wind_direction': 'WD',\n",
+ " 'wind_speed': 'WS',\n",
+ " 'tp': 'PRECIP',\n",
+ "}, inplace=True)\n",
+ "\n",
+ "df3.rename(columns={\n",
+ " 'date': 'HOURLY',\n",
+ " 'hour': 'HOUR',\n",
+ " 'temperature': 'TEMP',\n",
+ " 'relative_humidity': 'RH',\n",
+ " 'wind_direction': 'WD',\n",
+ " 'wind_speed': 'WS',\n",
+ " 'tp': 'PRECIP',\n",
+ "}, inplace=True)\n",
+ "\n",
+ "# convert 'date' to datetime format\n",
+ "df1['HOURLY'] = pd.to_datetime(df1['HOURLY'], format='%d/%m/%Y')\n",
+ "df2['HOURLY'] = pd.to_datetime(df2['HOURLY'], format='%d/%m/%Y')\n",
+ "df3['HOURLY'] = pd.to_datetime(df3['HOURLY'], format='%d/%m/%Y')\n",
+ "\n",
+ "# convert 'hour' to integers\n",
+ "df1['HOUR'] = df1['HOUR'].apply(lambda x: x.hour).astype(int)\n",
+ "df2['HOUR'] = df2['HOUR'].apply(lambda x: x.hour).astype(int)\n",
+ "df3['HOUR'] = df3['HOUR'].apply(lambda x: x.hour).astype(int)\n",
+ "\n",
+ "# round all values to one decimal place\n",
+ "df1 = df1.round(1)\n",
+ "df2 = df2.round(1)\n",
+ "df3 = df3.round(1)\n",
+ "\n",
+ "# format the 'date' column as 'dd/mm/yyyy'\n",
+ "df1['HOURLY'] = df1['HOURLY'].dt.strftime('%d/%m/%Y')\n",
+ "df2['HOURLY'] = df2['HOURLY'].dt.strftime('%d/%m/%Y')\n",
+ "df3['HOURLY'] = df3['HOURLY'].dt.strftime('%d/%m/%Y')\n",
+ "\n",
+ "# save the new .txt format weather files to their designated job folders for WISE runs\n",
+ "file_path = '/testjobs/testjobs/'\n",
+ "file_name1 = f'{file_path}area1/Inputs/weather.txt'\n",
+ "file_name2 = f'{file_path}area2/Inputs/weather.txt'\n",
+ "file_name3 = f'{file_path}area3/Inputs/weather.txt'\n",
+ "df1.to_csv((file_name1), sep =',', index =False)\n",
+ "df2.to_csv((file_name2), sep =',', index =False)\n",
+ "df3.to_csv((file_name3), sep =',', index =False)\n",
+ "\n",
+ "# current working dir\n",
+ "current_directory = os.getcwd()\n",
+ "\n",
+ "# get the group id\n",
+ "directory_stat = os.stat(current_directory)\n",
+ "\n",
+ "# get group ownership\n",
+ "group_owner_gid = directory_stat.st_gid\n",
+ "\n",
+ "parent_directory = os.path.dirname(file_name1)\n",
+ "parent_gid = os.stat(parent_directory).st_gid\n",
+ "\n",
+ "# change group ownership\n",
+ "os.chown(file_name1, -1, parent_gid)\n",
+ "os.chown(file_name2, -1, parent_gid)\n",
+ "os.chown(file_name3, -1, parent_gid)\n",
+ "\n",
+ "\n",
+ "# run the modify_fgmj.py script\n",
+ "cmd = ['python3','/python_scripts/modify_fgmj.py']\n",
+ "arguments = [str(scenario_start),str(scenario_end),str(dates_at_10),str(dates_at_21),str(area1_lat),str(area1_lon),str(area2_lat),str(area2_lon),str(area3_lat),str(area3_lon)]\n",
+ "print('ncdf_edits_multiarea.py done, starting modify_fgmj.py')\n",
+ "#subprocess.run(cmd + arguments)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "feef3ae2-6619-49a2-990d-7e3f9ffd411e",
+ "metadata": {},
+ "source": [
+ "### modify_fgmj.py\n",
+ "\n",
+ "- Defining the necessary settings for each model run (ignition locations and times, file locations, fuel types used, requested output files)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6b4edeee-50fd-44a7-9f04-6bf30ca366fa",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#!/usr/bin/python3\n",
+ "# import modules\n",
+ "print('running .fgmj modifier')\n",
+ "import os\n",
+ "import sys\n",
+ "import json\n",
+ "from datetime import datetime\n",
+ "\n",
+ "# take the time and ignition lat lon variables\n",
+ "scenario_start = sys.argv[1]\n",
+ "scenario_end = sys.argv[2]\n",
+ "ignition_start = sys.argv[3]\n",
+ "ignition_end = sys.argv[4]\n",
+ "ignition_y_1 = float(sys.argv[5])\n",
+ "ignition_x_1 = float(sys.argv[6])\n",
+ "ignition_y_2 = float(sys.argv[7])\n",
+ "ignition_x_2 = float(sys.argv[8])\n",
+ "ignition_y_3 = float(sys.argv[9])\n",
+ "ignition_x_3 = float(sys.argv[10])\n",
+ "\n",
+ "# set scenario names\n",
+ "scen_name_1 = 'scen_kalajoki'\n",
+ "scen_name_2 = 'scen_koli'\n",
+ "scen_name_3 = 'scen_lieksa'\n",
+ "\n",
+ "# set input fgmj path and read the fgmj files\n",
+ "fgmj_path = '/testjobs/testjobs/job.fgmj'\n",
+ "\n",
+ "\n",
+ "with open(fgmj_path, 'r') as f:\n",
+ " fgmj_data1 = json.load(f)\n",
+ "\n",
+ "with open(fgmj_path, 'r') as f:\n",
+ " fgmj_data2 = json.load(f)\n",
+ "\n",
+ "with open(fgmj_path, 'r') as f:\n",
+ " fgmj_data3 = json.load(f)\n",
+ "\n",
+ "# set time variables (the scenario and outputs are aligned with the ignition window)\n",
+ "scenario_start = ignition_start\n",
+ "local_start_time = ignition_start\n",
+ "start_time = ignition_start\n",
+ "end_time = scenario_end\n",
+ "output_time = scenario_end\n",
+ "\n",
+ "# function for replacing values in dictionary\n",
+ "def replace_in_dict(data, find, replace):\n",
+ " if isinstance(data, dict):\n",
+ " for key, value in data.items():\n",
+ " if isinstance(value, (dict, list)):\n",
+ " replace_in_dict(value, find, replace)\n",
+ " elif isinstance(value, str):\n",
+ " data[key] = value.replace(find, replace)\n",
+ "\n",
+ " elif isinstance(data, list):\n",
+ " for index, value in enumerate(data):\n",
+ " if isinstance(value, (dict, list)):\n",
+ " replace_in_dict(value, find, replace)\n",
+ " elif isinstance(value, str):\n",
+ " data[index] = value.replace(find, replace)\n",
+ "\n",
+ "# function for editing the job.fgmj files\n",
+ "def create_job(data_in, job_name, scen_name, ign_lon, ign_lat):\n",
+ "\n",
+ " data_in['project']['scenarios']['scenarioData'][0]['startTime']['time'] = scenario_start\n",
+ "\n",
+ " data_in['project']['scenarios']['scenarioData'][0]['endTime']['time'] = scenario_end\n",
+ "\n",
+ " data_in['project']['scenarios']['scenarioData'][0]['temporalConditions']['daily'][0]['localStartTime']['time'] = local_start_time\n",
+ "\n",
+ " data_in['project']['scenarios']['scenarioData'][0]['temporalConditions']['daily'][0]['startTime']['time'] = start_time\n",
+ "\n",
+ " data_in['project']['scenarios']['scenarioData'][0]['temporalConditions']['daily'][0]['endTime']['time'] = end_time\n",
+ "\n",
+ " data_in['project']['ignitions']['ignitionData'][0]['startTime']['time'] = ignition_start\n",
+ "\n",
+ " data_in['project']['ignitions']['ignitionData'][0]['ignitions']['ignitions'][0]['polygon']['polygon']['points'][0]['x']['value'] = ign_lon\n",
+ "\n",
+ " data_in['project']['ignitions']['ignitionData'][0]['ignitions']['ignitions'][0]['polygon']['polygon']['points'][0]['y']['value'] = ign_lat\n",
+ "\n",
+ "    # the export time is the same for all requested output grids\n",
+ "    for grid in data_in['project']['outputs']['grids']:\n",
+ "        grid['exportTime']['time'] = output_time\n",
+ "\n",
+ " data_in['project']['outputs']['vectors'][0]['perimeterTime']['startTime']['time'] = ignition_start\n",
+ "\n",
+ " data_in['project']['outputs']['vectors'][0]['perimeterTime']['endTime']['time'] = output_time\n",
+ "\n",
+ " data_in['project']['stations']['wxStationData'][0]['streams'][0]['condition']['startTime']['time'] = scenario_start\n",
+ "\n",
+ " replace_in_dict(data_in, 'scen0', scen_name+'_'+ignition_start[0:10])\n",
+ "\n",
+ " with open(job_name, 'w') as f:\n",
+ " json.dump(data_in, f, indent=2)\n",
+ " print('fgmj file modified')\n",
+ "\n",
+ "# current date for filename\n",
+ "current_datetime = datetime.now()\n",
+ "formatted_datetime = current_datetime.strftime(\"%Y-%m-%d_%H:%M\")\n",
+ "\n",
+ "scen_name_1 = scen_name_1 + \"_\" + str(formatted_datetime)\n",
+ "scen_name_2 = scen_name_2 + \"_\" + str(formatted_datetime)\n",
+ "scen_name_3 = scen_name_3 + \"_\" + str(formatted_datetime)\n",
+ "\n",
+ "\n",
+ "# edit the job.fgmj files and save them in their respective directories\n",
+ "file_name1 = '/testjobs/testjobs/area1/job.fgmj'\n",
+ "file_name2 = '/testjobs/testjobs/area2/job.fgmj'\n",
+ "file_name3 = '/testjobs/testjobs/area3/job.fgmj'\n",
+ "create_job(fgmj_data1,file_name1,scen_name_1,ignition_x_1,ignition_y_1)\n",
+ "create_job(fgmj_data2,file_name2,scen_name_2,ignition_x_2,ignition_y_2)\n",
+ "create_job(fgmj_data3,file_name3,scen_name_3,ignition_x_3,ignition_y_3)\n",
+ "\n",
+ "# current working dir\n",
+ "current_directory = os.getcwd()\n",
+ "\n",
+ "# get the group id\n",
+ "directory_stat = os.stat(current_directory)\n",
+ "\n",
+ "# get group ownership\n",
+ "group_owner_gid = directory_stat.st_gid\n",
+ "\n",
+ "parent_directory = os.path.dirname(file_name1)\n",
+ "parent_gid = os.stat(parent_directory).st_gid\n",
+ "\n",
+ "# change group ownership\n",
+ "os.chown(file_name1, -1, parent_gid)\n",
+ "os.chown(file_name2, -1, parent_gid)\n",
+ "os.chown(file_name3, -1, parent_gid)\n",
+ "\n",
+ "\n",
+ "#print('modify_fgmj.py done')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9327e16f-5197-439a-8b1a-8d0489d59202",
+ "metadata": {},
+ "source": [
+ "# Results\n",
+ "\n",
+ "- Hourly fire propagation vector files (.kml)\n",
+ "- Maximum flame length (m), maximum fire intensity (kW), and maximum crown fraction burned (%) in each cell, as raster files (.tif) at 16 x 16 m resolution"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1cce05df-c730-4a43-aaaa-313c8534dd65",
+ "metadata": {},
+ "source": [
+ "# Test result from the Koli test area in Eastern Finland"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0a94df14-f2cd-47c9-a188-2be86450c25c",
+ "metadata": {},
+ "source": [
+ "![WISE koli example](wise.jpg)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d44f59e9-ed86-46ef-b2a0-48e096364fe6",
+ "metadata": {},
+ "source": [
+ "## Test results from daily fire spread simulations over 1.6.2000 - 31.8.2000 in the Kalajoki test area in Western Finland\n",
+ "\n",
+ "a - Fuel map\n",
+ "\n",
+ "b - Number of times each cell burned\n",
+ "\n",
+ "c - Maximum flame length in each cell (m)\n",
+ "\n",
+ "d - Maximum fire intensity in each cell (kW)\n",
+ "\n",
+ "e - Example fire spread scenario from 23.06.2000"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a299e368-058d-41aa-b53b-354acba83ec4",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f293091d-7bf3-4d3e-8b42-86f51819f896",
+ "metadata": {},
+ "source": [
+ "# Some development goals for Phase 2:\n",
+ "- Users can bring their own fuel and topography information.\n",
+ "- Possibility to add fire breaks.\n",
+ "- Better capabilities for modelling areal fire risk, e.g. with randomized ignition locations and different climate and/or land-use scenarios."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "80275364-9137-474c-92a1-a9dcb97c16ca",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}