diff --git a/.buildinfo b/.buildinfo
new file mode 100644
index 00000000..bb6cdf85
--- /dev/null
+++ b/.buildinfo
@@ -0,0 +1,4 @@
+# Sphinx build info version 1
+# This file records the configuration used when building these files. When it is not found, a full rebuild will be done.
+config: 4724ebffb4b01f919207576d4d776997
+tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/10min.html b/10min.html
new file mode 100644
index 00000000..e671e271
--- /dev/null
+++ b/10min.html
@@ -0,0 +1,465 @@

10 Minutes to Modflow-setup — modflow-setup 0.5.0.post59+g65803fd documentation
10 Minutes to Modflow-setup

+

This is a short introduction to help get you up and running with Modflow-setup. A complete workflow can be found in the Pleasant Lake Example; additional examples of working configuration files can be found in the Configuration File Gallery.

+
+

1) Define the model active area and coordinate reference system

+

Depending on the problem, the model area might simply be a box enclosing features of interest and any relevant hydrologic boundaries, or an irregular shape surrounding a watershed or other feature. In either case, it may be helpful to download hydrography first, to ensure that the model area includes all important features. The model should be referenced to a projected coordinate reference system (CRS), ideally with length units of meters and an authority code (such as an EPSG code) that unambiguously defines it.
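If in doubt, a candidate CRS can be checked quickly with the pyproj package (part of the Python geospatial stack that Modflow-setup builds on). A minimal sketch, using EPSG 5070 as an example:

from pyproj import CRS

crs = CRS.from_epsg(5070)  # NAD83 / CONUS Albers
print(crs.is_projected)            # True
print(crs.axis_info[0].unit_name)  # 'metre'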

+

Modflow-setup provides two ways to define a model grid:

  • x and y coordinates of the model origin (lower left or upper left corner), grid spacing, number of rows and columns, rotation, and CRS

  • As a rectangular area of specified discretization surrounding a polygon shapefile of the model active area (traced by hand or developed by some other means) or a feature of interest buffered by a specified distance.

The active model area is defined subsequently in the DIS package.

+
+
+

Note

+

Don’t forget about the farfield! It is usually advisable to include important competing sinks outside of the immediate area of interest (the nearfield), so that the solution is not over-specified by the perimeter boundary condition, and because the surface watershed boundary doesn’t always coincide exactly with the groundwatershed boundary. See Haitjema (1995) and Anderson and others (2015) for more info.

+
+
+

Note

+

Need a polygon defining a watershed? In the United States, the Watershed Boundary Dataset provides watershed delineations at various scales.

+
+
+
+
+

2) Create a setup script and configuration file

+

Usually creating the desired grid requires some iteration. We can get started on this by making a model setup script and corresponding configuration file.

+

An initial model setup script for making the model grid:

from mfsetup import MF6model


def setup_grid(cfg_file):
    """Just set up (a shapefile of) the model grid.
    For trying different grid configurations."""
    m = MF6model(cfg=cfg_file)
    m.setup_grid()
    m.modelgrid.write_shapefile('postproc/shps/grid.shp')

if __name__ == '__main__':

    setup_grid('initial_config_poly.yaml')

Download the file: initial_grid_setup.py

+
+

An initial configuration file for developing a model grid around a pre-defined active area:

simulation:
  sim_name: 'shellmound'
  version: 'mf6'
  sim_ws: 'model'

model:
  simulation: 'shellmound'
  modelname: 'shellmound'
  options:
    print_input: True
    save_flows: True
    newton: True
  packages: [
  ]

setup_grid:
  source_data:
    features_shapefile:
      filename: '../mfsetup/tests/data/shellmound/tmr_parent/gis/irregular_boundary.shp'
  buffer: 0
  dxy: 1000  # Uniform x, y spacing in meters
  rotation: 0.
  crs: 5070  # EPSG code for NAD83 CONUS Albers (meters)
  snap_to_NHG: True  # option to snap to the USGS National Hydrogeologic Grid

Download the file: initial_config_poly.yaml

+
+

To define a model grid using an origin, grid spacing, and dimensions, a setup_grid: block like this one could be substituted above:

setup_grid:
  xoff: 501405  # lower left x-coordinate
  yoff: 1175835  # lower left y-coordinate
  nrow: 30
  ncol: 35
  dxy: 1000
  rotation: 0.
  epsg: 5070
  snap_to_NHG: True

Download the file: initial_config_box.yaml

+
+

Now initial_grid_setup.py can be run repeatedly to explore different grids.
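For example, several candidate grid spacings could be compared by editing the configuration in a loop. This is just an illustrative sketch, not part of Modflow-setup; the trial file names are hypothetical:

import os

import yaml

from mfsetup import MF6model

with open('initial_config_poly.yaml') as src:
    cfg = yaml.safe_load(src)

for dxy in 500, 1000, 2000:  # candidate uniform spacings, in meters
    cfg['setup_grid']['dxy'] = dxy
    trial_file = f'trial_config_{dxy}m.yaml'
    with open(trial_file, 'w') as dest:
        yaml.dump(cfg, dest)
    cwd = os.getcwd()
    m = MF6model(cfg=trial_file)
    m.setup_grid()
    m.modelgrid.write_shapefile(f'postproc/shps/grid_{dxy}m.shp')
    # Modflow-setup changes the working directory to the model
    # workspace; change it back before the next trial
    os.chdir(cwd)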

+
+
+

3) Develop flowlines to represent streams

+

Next, let’s get some data for setting up boundary conditions. For streams, Modflow-setup can accept any linestring shapefile that has a routing column indicating how the lines connect to one another (a quick sanity check of such a column is sketched after the list below). This can be created by hand or, in the United States, obtained from the National Hydrography Dataset Plus (NHDPlus). There are two types of NHDPlus:

  • NHDPlus version 2 is mapped at the 1:100,000 scale, and is therefore suitable for larger regional models with cell sizes of ~100s of meters to ~1 km. NHDPlus version 2 can be the best choice for larger model areas (greater than approximately 1,000 km²), where NHDPlus HR might have too many lines. NHDPlus version 2 can be obtained from the EPA.

  • NHDPlus High Resolution (HR) is mapped at the finer 1:24,000 scale, and may therefore work better for smaller problems (discretizations of ~100 meters or less) where better alignment between the mapped lines and the stream channel in the DEM is desired, and where the number of linestring features to manage won’t be prohibitive. NHDPlus HR can be accessed via the National Map Downloader.
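Whatever the source, the routing column should form a closed system of identifiers. A minimal sanity check with geopandas, assuming the 'COMID' and 'tocomid' column names used later in this workflow:

import geopandas as gpd

flowlines = gpd.read_file('flowlines.shp')
ids = set(flowlines['COMID'])
# 0 is commonly used to mark outlets (no downstream line)
downstream = set(flowlines['tocomid']) - {0}
missing = downstream - ids
print(f'{len(missing)} routing targets not found in the shapefile')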

Preprocessing NHDPlus HR

+

Currently, NHDPlus HR data, which comes in a file geodatabase (GDB), must be preprocessed into a shapefile for input to Modflow-setup and SFRmaker (which Modflow-setup uses to build the stream network). In many cases, multiple GDBs may need to be combined, and undesired line features such as storm sewers culled. The SFRmaker documentation has examples of how to read and preprocess NHDPlus HR.
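For orientation, the lines can be pulled out of a GDB with geopandas; a minimal sketch (the GDB name here is an example, but 'NHDFlowline' is the standard flowlines layer name):

import geopandas as gpd

flowlines = gpd.read_file('NHDPLUS_HR_1.gdb', layer='NHDFlowline')
# cull undesired features (e.g., storm sewers) here, then:
flowlines.to_file('nhdplus_hr_flowlines.shp')

In practice, the SFRmaker examples referenced above cover the full preprocessing workflow.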

+
+
+

Preprocessing NHDPlus version 2

+

Depending on the application, NHDPlus version 2 may not need to be preprocessed. Reasons to preprocess include:

  • the model area is large, and

      • read times for one or more NHDPlus drainage basins are slowing the model build

      • the DEM being used for the model top is relatively coarse, and sampling a fine DEM during the model build is prohibitive for time or space reasons.

  • the stream network is too dense, with too many model cells containing SFR reaches (especially a problem in the eastern US at the 1 km resolution); or there are too many ephemeral streams represented.

  • the stream network has divergences where one or more distributary lines are downstream of a confluence.

The preprocessing module in SFRmaker can resolve these issues, producing a single set of culled flowlines with width and elevation information and divergences removed. The elevation functionality in the preprocessing module requires a DEM.
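As a flavor of what culling can look like, here is an illustrative geopandas sketch (not the SFRmaker preprocessing module itself); 'ArbolateSu' is the NHDPlus value-added attribute giving total upstream channel length in km:

import geopandas as gpd

flowlines = gpd.read_file('flowlines.shp')
# keep only streams draining more than ~50 km of upstream channel;
# an appropriate threshold is problem-specific
keep = flowlines['ArbolateSu'] > 50
flowlines.loc[keep].to_file('culled_flowlines.shp')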

+
+
+
+

4) Get a DEM

+

The National Map Downloader has 10-meter DEMs for the United States, with finer resolutions available in many areas. Typically, these come in 1-degree by 1-degree tiles. If many tiles are needed, the uGet Download Manager linked to on the National Map site can automate downloading them. Alternatively, links to the files follow a consistent format, and are therefore amenable to scripted or manual downloads. For example, the tile located between 88°W and 87°W longitude and 43°N and 44°N latitude is available at:

+

https://prd-tnm.s3.amazonaws.com/StagedProducts/Elevation/13/TIFF/current/n44w088/USGS_13_n44w088.tif
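A scripted download might look like the following sketch. Tiles are labeled by the latitude of their north edge and the longitude (in degrees west) of their west edge, so the n44w088 tile spans 43–44 N and 87–88 W; the tile list here is hypothetical:

import urllib.request

url_fmt = ('https://prd-tnm.s3.amazonaws.com/StagedProducts/Elevation/'
           '13/TIFF/current/n{n:02d}w{w:03d}/USGS_13_n{n:02d}w{w:03d}.tif')

for n in (44, 45):      # north edge of each 1 x 1 degree tile
    for w in (88, 89):  # west edge, in degrees west
        tile = f'USGS_13_n{n:02d}w{w:03d}.tif'
        urllib.request.urlretrieve(url_fmt.format(n=n, w=w), tile)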

+
+

Making a virtual raster

+

Once all of the tiles are downloaded, a virtual raster can be made that allows them to be treated as a single file, without any modifications to the original data. A single continuous raster is required for input to SFRmaker and Modflow-setup. For example, in QGIS (a scripted alternative is sketched after these steps):

  1. Load all of the tiles to verify that they are correct and cover the whole model active area.

  2. From the Raster menu, select Miscellaneous > Build Virtual Raster. This will make a virtual raster file with a .vrt extension that points to the original set of GeoTIFFs, but allows them to be treated as a single continuous raster.
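Equivalently, the virtual raster can be built with the GDAL Python bindings (assuming GDAL is installed; file names are examples):

from glob import glob

from osgeo import gdal

tiles = sorted(glob('dem_tiles/USGS_13_*.tif'))
vrt = gdal.BuildVRT('dem.vrt', tiles)
vrt = None  # dereference to flush the .vrt file to disk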

5) Make a minimum working configuration file and model build script

+

Now that we have a set of flowlines and a DEM (and perhaps shapefiles for other surface water boundaries), we can fill out the rest of the configuration file to get an initial working model. Later, additional details such as more layers, a well package, observations, or other features can be added in a stepwise approach (Haitjema, 1995).

simulation:
  sim_name: 'shellmound'
  version: 'mf6'
  sim_ws: 'model'

model:
  simulation: 'shellmound'
  modelname: 'shellmound'
  options:
    print_input: True
    save_flows: True
    newton: True
  packages:
    - dis
    - ic
    - npf
    - oc
    - sto
    - rch
    - sfr
    - wel

setup_grid:
  source_data:
    features_shapefile:
      filename: '../mfsetup/tests/data/shellmound/tmr_parent/gis/irregular_boundary.shp'
  buffer: 0
  dxy: 1000  # Uniform x, y spacing in meters
  rotation: 0.
  crs: 5070  # EPSG code for NAD83 CONUS Albers (meters)
  snap_to_NHG: True  # option to snap to the USGS National Hydrogeologic Grid

dis:
  remake_top: True
  options:
    length_units: 'meters'
  dimensions:
    nlay: 1
  source_data:
    top:
      filename: '../mfsetup/tests/data/shellmound/rasters/meras_100m_dem.tif'
      elevation_units: 'feet'
    botm:
      filenames:
        0: '../mfsetup/tests/data/shellmound/rasters/mdwy_surf.tif'
      elevation_units: 'feet'
    idomain:
      # polygon shapefile of model active area
      filename: '../mfsetup/tests/data/shellmound/tmr_parent/gis/irregular_boundary.shp'

tdis:
  options:
    time_units: 'days'
    start_date_time: '2020-01-01'
  perioddata:
    group 1:
      perlen: 1
      nper: 1
      nstp: 1
      steady: True

npf:
  options:
    save_flows: True
    rewet: True
  griddata:
    icelltype: 1
    k: 30.
    k33: 0.3

sto:
  options:
    save_flows: True
  griddata:
    iconvert: 1  # convertible layers
    sy: 0.2
    ss: 1.e-6

rch:
  options:
    print_input: True
    print_flows: False
    save_flows: True
    readasarrays: True
  recharge: 0.00025  # 0.00025 m/d ~ 3.5 inches/year

sfr:
  options:
    save_flows: True
  source_data:
    flowlines:
      filename: '../mfsetup/tests/data/shellmound/shps/flowlines.shp'
      id_column: 'COMID'  # arguments to sfrmaker.lines.from_shapefile
      routing_column: 'tocomid'
      width1_column: 'width1'
      width2_column: 'width2'
      up_elevation_column: 'elevupsmo'
      dn_elevation_column: 'elevdnsmo'
      name_column: 'GNIS_NAME'
      width_units: 'feet'  # units of flowline widths
      elevation_units: 'feet'  # units of flowline elevations
  sfrmaker_options:
    one_reach_per_cell: True  # consolidate SFR reaches to one per i, j location
    to_riv:  # convert this line and all downstream lines to the RIV package
      - 18047206

oc:
  period_options:
    0: ['save head last','save budget last']

ims:
  options:
    print_option: 'all'
    complexity: 'complex'
    csv_output_filerecord: 'solver_out.csv'
  nonlinear:
    outer_dvclose: 1.  # m3/d in SFR package
    outer_maximum: 50
  linear:
    inner_maximum: 100
    inner_dvclose: 0.01
    rcloserecord: [0.001, 'relative_rclose']

Download the file: initial_config_full.yaml

+
+

A setup script for making a minimum working model. Additional functions can be added later to further customize the model outside of the Modflow-setup build step.

import os

from mfsetup import MF6model


def setup_grid(cfg_file):
    """Just set up (a shapefile of) the model grid.
    For trying different grid configurations."""
    cwd = os.getcwd()
    m = MF6model(cfg=cfg_file)
    m.setup_grid()
    m.modelgrid.write_shapefile('postproc/shps/grid.shp')
    # Modflow-setup changes the working directory
    # to the model workspace; change it back
    os.chdir(cwd)


def setup_model(cfg_file):
    """Set up the whole model."""
    cwd = os.getcwd()
    m = MF6model.setup_from_yaml(cfg_file)
    m.write_input()
    os.chdir(cwd)
    return m


if __name__ == '__main__':

    #setup_grid('initial_config_poly.yaml')
    setup_model('initial_config_full.yaml')

Download the file: initial_model_setup.py
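Once the build script has run, the written model can be loaded and run with Flopy (assuming a MODFLOW 6 executable named 'mf6' is on the system path):

import flopy

sim = flopy.mf6.MFSimulation.load(sim_ws='model')
success, buff = sim.run_simulation()
assert success, 'model did not converge'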

\ No newline at end of file
diff --git a/_downloads/509f6a4d0dd4389c5a7ae0a232072191/initial_config_full.yaml b/_downloads/509f6a4d0dd4389c5a7ae0a232072191/initial_config_full.yaml
new file mode 100644
index 00000000..27436c53
--- /dev/null
+++ b/_downloads/509f6a4d0dd4389c5a7ae0a232072191/initial_config_full.yaml
@@ -0,0 +1,122 @@
+simulation:
+  sim_name: 'shellmound'
+  version: 'mf6'
+  sim_ws: 'model'
+
+model:
+  simulation: 'shellmound'
+  modelname: 'shellmound'
+  options:
+    print_input: True
+    save_flows: True
+    newton: True
+  packages:
+    - dis
+    - ic
+    - npf
+    - oc
+    - sto
+    - rch
+    - sfr
+    - wel
+
+setup_grid:
+  source_data:
+    features_shapefile:
+      filename: '../mfsetup/tests/data/shellmound/tmr_parent/gis/irregular_boundary.shp'
+  buffer: 0
+  dxy: 1000  # Uniform x, y spacing in meters
+  rotation: 0.
+  crs: 5070  # EPSG code for NAD83 CONUS Albers (meters)
+  snap_to_NHG: True  # option to snap to the USGS National Hydrogeologic Grid
+
+dis:
+  remake_top: True
+  options:
+    length_units: 'meters'
+  dimensions:
+    nlay: 1
+  source_data:
+    top:
+      filename: '../mfsetup/tests/data/shellmound/rasters/meras_100m_dem.tif'
+      elevation_units: 'feet'
+    botm:
+      filenames:
+        0: '../mfsetup/tests/data/shellmound/rasters/mdwy_surf.tif'
+      elevation_units: 'feet'
+    idomain:
+      # polygon shapefile of model active area
+      filename: '../mfsetup/tests/data/shellmound/tmr_parent/gis/irregular_boundary.shp'
+
+tdis:
+  options:
+    time_units: 'days'
+    start_date_time: '2020-01-01'
+  perioddata:
+    group 1:
+      perlen: 1
+      nper: 1
+      nstp: 1
+      steady: True
+
+npf:
+  options:
+    save_flows: True
+    rewet: True
+  griddata:
+    icelltype: 1
+    k: 30.
+    k33: 0.3
+
+sto:
+  options:
+    save_flows: True
+  griddata:
+    iconvert: 1  # convertible layers
+    sy: 0.2
+    ss: 1.e-6
+
+rch:
+  options:
+    print_input: True
+    print_flows: False
+    save_flows: True
+    readasarrays: True
+  recharge: 0.00025  # 0.00025 m/d ~ 3.5 inches/year
+
+sfr:
+  options:
+    save_flows: True
+  source_data:
+    flowlines:
+      filename: '../mfsetup/tests/data/shellmound/shps/flowlines.shp'
+      id_column: 'COMID'  # arguments to sfrmaker.lines.from_shapefile
+      routing_column: 'tocomid'
+      width1_column: 'width1'
+      width2_column: 'width2'
+      up_elevation_column: 'elevupsmo'
+      dn_elevation_column: 'elevdnsmo'
+      name_column: 'GNIS_NAME'
+      width_units: 'feet'  # units of flowline widths
+      elevation_units: 'feet'  # units of flowline elevations
+  sfrmaker_options:
+    one_reach_per_cell: True  # consolidate SFR reaches to one per i, j location
+    to_riv:  # convert this line and all downstream lines to the RIV package
+      - 18047206
+
+oc:
+  period_options:
+    0: ['save head last','save budget last']
+
+ims:
+  options:
+    print_option: 'all'
+    complexity: 'complex'
+    csv_output_filerecord: 'solver_out.csv'
+  nonlinear:
+    outer_dvclose: 1.  # m3/d in SFR package
+    outer_maximum: 50
+  linear:
+    inner_maximum: 100
+    inner_dvclose: 0.01
+    rcloserecord: [0.001, 'relative_rclose']
diff --git a/_downloads/6ba4574d50dc50b6efce72feea6132c7/update_starting_heads_from_previous.py b/_downloads/6ba4574d50dc50b6efce72feea6132c7/update_starting_heads_from_previous.py
new file mode 100644
index 00000000..def5112c
--- /dev/null
+++ b/_downloads/6ba4574d50dc50b6efce72feea6132c7/update_starting_heads_from_previous.py
@@ -0,0 +1,20 @@
+"""Update model starting heads from previous (initial steady-state) solution
+"""
+from pathlib import Path
+
+import numpy as np
+from flopy.utils import binaryfile as bf
+
+model_ws = Path('.')
+headfile = model_ws / 'model.hds'
+starting_heads_file_fmt = str(model_ws / 'external/strt_{:03d}.dat')
+
+
+hdsobj = bf.HeadFile(headfile)
+print(f'reading {headfile}...')
+
+initial_ss_heads = hdsobj.get_data(kstpkper=(0, 0))
+for layer, layer_heads in enumerate(initial_ss_heads):
+    outfile = starting_heads_file_fmt.format(layer)
+    np.savetxt(outfile, layer_heads, fmt='%.2f')
+    print(f"updated {outfile}")
diff --git a/_downloads/ad797c943aa792619c7a8c98cfbd6638/initial_config_poly.yaml b/_downloads/ad797c943aa792619c7a8c98cfbd6638/initial_config_poly.yaml
new file mode 100644
index 00000000..f4ddca8f
--- /dev/null
+++ b/_downloads/ad797c943aa792619c7a8c98cfbd6638/initial_config_poly.yaml
@@ -0,0 +1,24 @@
+simulation:
+  sim_name: 'shellmound'
+  version: 'mf6'
+  sim_ws: 'model'
+
+model:
+  simulation: 'shellmound'
+  modelname: 'shellmound'
+  options:
+    print_input: True
+    save_flows: True
+    newton: True
+  packages: [
+  ]
+
+setup_grid:
+  source_data:
+    features_shapefile:
+      filename: '../mfsetup/tests/data/shellmound/tmr_parent/gis/irregular_boundary.shp'
+  buffer: 0
+  dxy: 1000  # Uniform x, y spacing in meters
+  rotation: 0.
+  crs: 5070  # EPSG code for NAD83 CONUS Albers (meters)
+  snap_to_NHG: True  # option to snap to the USGS National Hydrogeologic Grid
diff --git a/_downloads/b6ae5d6478362c1168ffc009e617fa38/initial_grid_setup.py b/_downloads/b6ae5d6478362c1168ffc009e617fa38/initial_grid_setup.py
new file mode 100644
index 00000000..850071b7
--- /dev/null
+++ b/_downloads/b6ae5d6478362c1168ffc009e617fa38/initial_grid_setup.py
@@ -0,0 +1,13 @@
+from mfsetup import MF6model
+
+
+def setup_grid(cfg_file):
+    """Just set up (a shapefile of) the model grid.
+    For trying different grid configurations."""
+    m = MF6model(cfg=cfg_file)
+    m.setup_grid()
+    m.modelgrid.write_shapefile('postproc/shps/grid.shp')
+
+if __name__ == '__main__':
+
+    setup_grid('initial_config_poly.yaml')
diff --git a/_downloads/baf13b4007664691088da3dfada3c732/initial_config_box.yaml b/_downloads/baf13b4007664691088da3dfada3c732/initial_config_box.yaml
new file mode 100644
index 00000000..83458708
--- /dev/null
+++ b/_downloads/baf13b4007664691088da3dfada3c732/initial_config_box.yaml
@@ -0,0 +1,24 @@
+simulation:
+  sim_name: 'shellmound'
+  version: 'mf6'
+  sim_ws: 'model'
+
+model:
+  simulation: 'shellmound'
+  modelname: 'shellmound'
+  options:
+    print_input: True
+    save_flows: True
+    newton: True
+  packages: [
+  ]
+
+setup_grid:
+  xoff: 501405  # lower left x-coordinate
+  yoff: 1175835  # lower left y-coordinate
+  nrow: 30
+  ncol: 35
+  dxy: 1000
+  rotation: 0.
+  epsg: 5070
+  snap_to_NHG: True
diff --git a/_downloads/e1fbf3d202706a1435f28708d4ab74a5/initial_model_setup.py b/_downloads/e1fbf3d202706a1435f28708d4ab74a5/initial_model_setup.py
new file mode 100644
index 00000000..9c327190
--- /dev/null
+++ b/_downloads/e1fbf3d202706a1435f28708d4ab74a5/initial_model_setup.py
@@ -0,0 +1,30 @@
+import os
+
+from mfsetup import MF6model
+
+
+def setup_grid(cfg_file):
+    """Just set up (a shapefile of) the model grid.
+    For trying different grid configurations."""
+    cwd = os.getcwd()
+    m = MF6model(cfg=cfg_file)
+    m.setup_grid()
+    m.modelgrid.write_shapefile('postproc/shps/grid.shp')
+    # Modflow-setup changes the working directory
+    # to the model workspace; change it back
+    os.chdir(cwd)
+
+
+def setup_model(cfg_file):
+    """Set up the whole model."""
+    cwd = os.getcwd()
+    m = MF6model.setup_from_yaml(cfg_file)
+    m.write_input()
+    os.chdir(cwd)
+    return m
+
+
+if __name__ == '__main__':
+
+    #setup_grid('initial_config_poly.yaml')
+    setup_model('initial_config_full.yaml')
diff --git a/_images/notebooks_Pleasant_lake_lgr_example_24_1.png b/_images/notebooks_Pleasant_lake_lgr_example_24_1.png
new file mode 100644
index 00000000..3ee50298
Binary files /dev/null and b/_images/notebooks_Pleasant_lake_lgr_example_24_1.png differ
diff --git a/_images/notebooks_Pleasant_lake_lgr_example_26_0.png b/_images/notebooks_Pleasant_lake_lgr_example_26_0.png
new file mode 100644
index 00000000..a4bc0d31
Binary files /dev/null and b/_images/notebooks_Pleasant_lake_lgr_example_26_0.png differ
diff --git a/_images/notebooks_Pleasant_lake_lgr_example_36_0.png b/_images/notebooks_Pleasant_lake_lgr_example_36_0.png
new file mode 100644
index 00000000..5bbf2650
Binary files /dev/null and b/_images/notebooks_Pleasant_lake_lgr_example_36_0.png differ
diff --git a/_images/pleasant_lgr.png b/_images/pleasant_lgr.png
new file mode 100644
index 00000000..b75bb66f
Binary files /dev/null and b/_images/pleasant_lgr.png differ
diff --git a/_images/pleasant_lgr_xsection.png b/_images/pleasant_lgr_xsection.png
new file mode 100644
index 00000000..a4bc0d31
Binary files /dev/null and b/_images/pleasant_lgr_xsection.png differ
diff --git a/_images/pleasant_vlgr_xsection.png b/_images/pleasant_vlgr_xsection.png
new file mode 100644
index 00000000..d418b025
Binary files /dev/null and b/_images/pleasant_vlgr_xsection.png differ
diff --git a/_modules/index.html b/_modules/index.html
new file mode 100644
index 00000000..b71f253a
--- /dev/null
+++ b/_modules/index.html
@@ -0,0 +1,141 @@

Overview: module code — modflow-setup 0.5.0.post59+g65803fd documentation
\ No newline at end of file
diff --git a/_modules/mfsetup/discretization.html b/_modules/mfsetup/discretization.html
new file mode 100644
index 00000000..34019882
--- /dev/null
+++ b/_modules/mfsetup/discretization.html
@@ -0,0 +1,956 @@

mfsetup.discretization — modflow-setup 0.5.0.post59+g65803fd documentation
Source code for mfsetup.discretization

+"""
+Functions related to the Discretization Package.
+"""
+import time
+
+import flopy
+import numpy as np
+from flopy.mf6.data.mfdatalist import MFList
+from scipy import ndimage
+from scipy.signal import convolve2d
+
+
+
+[docs] +class ModflowGwfdis(flopy.mf6.ModflowGwfdis): + def __init__(self, *args, **kwargs): + flopy.mf6.ModflowGwfdis.__init__(self, *args, **kwargs) + + @property + def thickness(self): + return -1 * np.diff(np.stack([self.top.array] + + [b for b in self.botm.array]), axis=0)
+ + + +
+[docs] +def adjust_layers(dis, minimum_thickness=1): + """ + Adjust bottom layer elevations to maintain a minimum thickness. + + Parameters + ---------- + dis : flopy.modflow.ModflowDis instance + + Returns + ------- + new_layer_elevs : ndarray of shape (nlay, ncol, nrow) + New layer bottom elevations + """ + nrow, ncol, nlay, nper = dis.parent.nrow_ncol_nlay_nper + new_layer_elevs = np.zeros((nlay+1, nrow, ncol)) + new_layer_elevs[0] = dis.top.array + new_layer_elevs[1:] = dis.botm.array + + # constrain everything to model top + for i in np.arange(1, nlay + 1): + thicknesses = new_layer_elevs[0] - new_layer_elevs[i] + too_thin = thicknesses < minimum_thickness * i + new_layer_elevs[i, too_thin] = new_layer_elevs[0, too_thin] - minimum_thickness * i + + # constrain to underlying botms + for i in np.arange(1, nlay)[::-1]: + thicknesses = new_layer_elevs[i] - new_layer_elevs[i + 1] + too_thin = thicknesses < minimum_thickness + new_layer_elevs[i, too_thin] = new_layer_elevs[i + 1, too_thin] + minimum_thickness + + return new_layer_elevs[1:]
+ + + +
+[docs] +def deactivate_idomain_above(idomain, packagedata): + """Sets ibound to 0 for all cells above active SFR cells. + + Parameters + ---------- + packagedata : MFList, recarray or DataFrame + SFR package reach data + + Notes + ----- + This routine updates the ibound array of the flopy.model.ModflowBas6 instance. To produce a + new BAS6 package file, model.write() or flopy.model.ModflowBas6.write() + must be run. + """ + if isinstance(packagedata, MFList): + packagedata = packagedata.array + idomain = idomain.copy() + if isinstance(packagedata, np.recarray): + packagedata.columns = packagedata.dtype.names + if 'cellid' in packagedata.columns: + k, i, j = cellids_to_kij(packagedata['cellid']) + else: + k, i, j = packagedata['k'], packagedata['i'], packagedata['j'] + deact_lays = [list(range(ki)) for ki in k] + for ks, ci, cj in zip(deact_lays, i, j): + for ck in ks: + idomain[ck, ci, cj] = 0 + return idomain
+ + + +
+[docs] +def find_remove_isolated_cells(array, minimum_cluster_size=10): + """Identify clusters of isolated cells in a binary array. + Remove clusters less than a specified minimum cluster size. + """ + if len(array.shape) == 2: + arraylist = [array] + else: + arraylist = array + + # exclude diagonal connections + structure = np.zeros((3, 3)) + structure[1, :] = 1 + structure[:, 1] = 1 + + retained_arraylist = [] + for arr in arraylist: + + # for each cell in the binary array arr (i.e. representing active cells) + # take the sum of the cell and 4 immediate neighbors (excluding diagonal connections) + # values > 2 in the output array indicate cells with at least two connections + convolved = convolve2d(arr, structure, mode='same') + # taking union with (arr == 1) prevents inactive cells from being activated + atleast_2_connections = (arr == 1) & (convolved > 2) + + # then apply connected component analysis + # to identify small clusters of isolated cells to exclude + labeled, ncomponents = ndimage.measurements.label(atleast_2_connections, + structure=structure) + retain_areas = [c for c in range(1, ncomponents+1) + if (labeled == c).sum() >= minimum_cluster_size] + retain = np.in1d(labeled.ravel(), retain_areas) + retained = np.reshape(retain, arr.shape).astype(array.dtype) + retained_arraylist.append(retained) + if len(array.shape) == 3: + return np.array(retained_arraylist, dtype=array.dtype) + return retained_arraylist[0]
+ + + +
+[docs] +def cellids_to_kij(cellids, drop_inactive=True): + """Unpack tuples of MODFLOW-6 cellids (k, i, j) to + lists of k, i, j values; ignoring instances + where cellid is None (unconnected cells). + + Parameters + ---------- + cellids : sequence of (k, i, j) tuples + drop_inactive : bool + If True, drop cellids == 'none'. If False, + distribute these to k, i, j. + + Returns + ------- + k, i, j : 1D numpy arrays of integers + """ + active = np.array(cellids) != 'none' + if drop_inactive: + k, i, j = map(np.array, zip(*cellids[active])) + else: + k = np.array([cid[0] if cid != 'none' else None for cid in cellids]) + i = np.array([cid[1] if cid != 'none' else None for cid in cellids]) + j = np.array([cid[2] if cid != 'none' else None for cid in cellids]) + return k, i, j
+ + + +
+[docs] +def create_vertical_pass_through_cells(idomain): + """Replaces inactive cells with vertical pass-through cells at locations that have an active cell + above and below by setting these cells to -1. + + Parameters + ---------- + idomain : np.ndarray with 2 or 3 dimensions. 2D arrays are returned as-is. + + Returns + ------- + revised : np.ndarray + idomain with -1s added at locations that were previous <= 0 + that have an active cell (idomain=1) above and below. + """ + if len(idomain.shape) == 2: + return idomain + revised = idomain.copy() + for i in range(1, idomain.shape[0]-1): + has_active_above = np.any(idomain[:i] > 0, axis=0) + has_active_below = np.any(idomain[i+1:] > 0, axis=0) + bounded = has_active_above & has_active_below + pass_through = (idomain[i] <= 0) & bounded + assert not np.any(revised[i][pass_through] > 0) + revised[i][pass_through] = -1 + + # scrub any pass through cells that aren't bounded by active cells + revised[i][(idomain[i] <= 0) & ~bounded] = 0 + for i in (0, -1): + revised[i][revised[i] < 0] = 0 + return revised
+ + + +
+[docs] +def fill_empty_layers(array): + """Fill empty layers in a 3D array by linearly interpolating + between the values above and below. Layers are defined + as empty if they contain all nan values. In the example of + model layer elevations, this would create equal layer thicknesses + between layer surfaces with values. + + Parameters + ---------- + array : 3D numpy.ndarray + + Returns + ------- + filled : ndarray of same shape as array + """ + def get_next_below(seq, value): + for item in sorted(seq): + if item > value: + return item + + def get_next_above(seq, value): + for item in sorted(seq)[::-1]: + if item < value: + return item + + array = array.copy() + nlay = array.shape[0] + layers_with_values = [k for k in range(nlay) if not np.all(np.isnan(array[k]), axis=(0, 1))] + empty_layers = [k for k in range(nlay) if k not in layers_with_values] + + for k in empty_layers: + nextabove = get_next_above(layers_with_values, k) + nextbelow = get_next_below(layers_with_values, k) + + # linearly interpolate layer values between next layers + # above and below that have values + # (in terms of elevation + n = nextbelow - nextabove + diff = (array[nextbelow] - array[nextabove]) / n + for i in range(k, nextbelow): + array[i] = array[i - 1] + diff + k = i + return array
+ + + +
+[docs]
+def fill_cells_vertically(top, botm):
+    """In MODFLOW 6, cells where idomain < 1 are excluded from the solution.
+    However, in the botm array, values are needed in overlying cells to
+    compute layer thickness (cells with idomain != 1 overlying cells with idomain >= 1 need
+    values in botm). Given a 3D numpy array with nan values indicating excluded cells,
+    fill in the nans with the overlying values. For example, given the column of cells
+    [10, nan, 8, nan, nan, 5, nan, nan, nan, 1], fill the nan values to make
+    [10, 10, 8, 8, 8, 5, 5, 5, 5, 1], so that layers 2, 5, and 9 (zero-based)
+    all have valid thicknesses (and all other layers have zero thicknesses).
+
+    algorithm:
+    * given a top and botm array (top of the model and layer bottom elevations),
+      get the layer thicknesses (accounting for any nodata values); idomain != 1 cells in the
+      thickness array must be set to np.nan
+    * set thickness to zero in nan cells; take the cumulative sum of the thickness array
+      along the 0th (depth) axis, from the bottom of the array to the top
+      (going backwards in a depth-positive sense)
+    * add the cumulative sum to the array bottom elevations. The backward difference in
+      bottom elevations should be zero in inactive cells, and representative of the
+      desired thickness in the active cells.
+    * append the model bottom elevations (excluded in bottom-up difference)
+
+    Parameters
+    ----------
+    top : 2D numpy array; model top elevations
+    botm : 3D (nlay, nrow, ncol) array; model bottom elevations
+
+    Returns
+    -------
+    botm : filled botm array
+    """
+    thickness = get_layer_thicknesses(top, botm)
+    assert np.all(np.isnan(thickness[np.isnan(thickness)]))
+    thickness[np.isnan(thickness)] = 0
+    # cumulative sum from bottom to top
+    filled = np.cumsum(thickness[::-1], axis=0)[::-1]
+    # add in the model bottom elevations
+    # use the minimum values instead of the bottom layer,
+    # in case there are nans in the bottom layer
+    # include the top, in case there are nans in all botms
+    # introducing nans into the top can cause issues
+    # with partial vertical LGR
+    all_surfaces = np.stack([top] + [arr2d for arr2d in botm])
+    filled += np.nanmin(all_surfaces, axis=0)  # botm[-1]
+    # append the model bottom elevations
+    filled = np.append(filled, [np.nanmin(all_surfaces, axis=0)], axis=0)
+    return filled[1:].copy()
+ + + +
+[docs]
+def fix_model_layer_conflicts(top_array, botm_array,
+                              ibound_array=None,
+                              minimum_thickness=3):
+    """Compare model layer elevations; adjust layer bottoms downward
+    as necessary to maintain a minimum thickness.
+
+    Parameters
+    ----------
+    top_array : 2D numpy array (nrow * ncol)
+        Model top elevations
+    botm_array : 3D numpy array (nlay * nrow * ncol)
+        Model bottom elevations
+    minimum_thickness : scalar
+        Minimum layer thickness to enforce
+
+    Returns
+    -------
+    new_botm_array : 3D numpy array of new layer bottom elevations
+    """
+    top = top_array.copy()
+    botm = botm_array.copy()
+    nlay, nrow, ncol = botm.shape
+    if ibound_array is None:
+        ibound_array = np.ones(botm.shape, dtype=int)
+    # fix thin layers in the DIS package
+    new_layer_elevs = np.empty((nlay + 1, nrow, ncol))
+    new_layer_elevs[1:, :, :] = botm
+    new_layer_elevs[0] = top
+    for i in np.arange(1, nlay + 1):
+        active = ibound_array[i - 1] > 0.
+        thicknesses = new_layer_elevs[i - 1] - new_layer_elevs[i]
+        with np.errstate(invalid='ignore'):
+            too_thin = active & (thicknesses < minimum_thickness)
+        new_layer_elevs[i, too_thin] = new_layer_elevs[i - 1, too_thin] - minimum_thickness * 1.001
+    # verify that the minimum thickness is now maintained in active cells
+    assert np.nanmax(np.diff(new_layer_elevs, axis=0)[ibound_array > 0]) * -1 >= minimum_thickness
+    return new_layer_elevs[1:]
+ + + +
+[docs]
+def get_layer(botm_array, i, j, elev):
+    """Return the layers for elevations at i, j locations.
+
+    Parameters
+    ----------
+    botm_array : 3D numpy array of layer bottom elevations
+    i : scalar or sequence
+        row index (zero-based)
+    j : scalar or sequence
+        column index
+    elev : scalar or sequence
+        elevation (in same units as model)
+
+    Returns
+    -------
+    k : np.ndarray (1-D) or scalar
+        zero-based layer index
+    """
+    def to_array(arg):
+        if np.isscalar(arg):
+            return np.array([arg])
+        else:
+            return np.array(arg)
+
+    i = to_array(i)
+    j = to_array(j)
+    nlay = botm_array.shape[0]
+    elev = to_array(elev)
+    botms = botm_array[:, i, j].tolist()
+    layers = np.sum(((botms - elev) > 0), axis=0)
+    # force elevations below model bottom into bottom layer
+    layers[layers > nlay - 1] = nlay - 1
+    layers = np.atleast_1d(np.squeeze(layers))
+    if len(layers) == 1:
+        layers = layers[0]
+    return layers
+ + + +
+[docs] +def verify_minimum_layer_thickness(top, botm, isactive, minimum_layer_thickness): + """Verify that model layer thickness is equal to or + greater than a minimum thickness.""" + top = top.copy() + botm = botm.copy() + isactive = isactive.copy().astype(bool) + nlay, nrow, ncol = botm.shape + all_layers = np.zeros((nlay+1, nrow, ncol)) + all_layers[0] = top + all_layers[1:] = botm + isvalid = np.nanmax(np.diff(all_layers, axis=0)[isactive]) * -1 + 1e-4 >= \ + minimum_layer_thickness + return isvalid
+ + + +
+[docs] +def make_ibound(top, botm, nodata=-9999, + minimum_layer_thickness=1, + drop_thin_cells=True, tol=1e-4): + """Make the ibound array that specifies + cells that will be excluded from the simulation. Cells are + excluded based on: + + + Parameters + ---------- + model : mfsetup.MFnwtModel model instance + + Returns + ------- + idomain : np.ndarray (int) + + """ + top = top.copy() + botm = botm.copy() + top[top == nodata] = np.nan + botm[botm == nodata] = np.nan + criteria = np.isnan(botm) + + # compute layer thicknesses, considering pinched cells (nans) + b = get_layer_thicknesses(top, botm) + all_cells_thin = np.all(b < minimum_layer_thickness + tol, axis=0) + criteria = criteria | np.isnan(b) # cells without thickness values + + if drop_thin_cells: + criteria = criteria | all_cells_thin + #all_layers = np.stack([top] + [b for b in botm]) + #min_layer_thickness = minimum_layer_thickness + #isthin = np.diff(all_layers, axis=0) * -1 < min_layer_thickness + tol + #criteria = criteria | isthin + idomain = np.abs(~criteria).astype(int) + return idomain
+ + + +
+[docs]
+def make_lgr_idomain(parent_modelgrid, inset_modelgrid,
+                     ncppl):
+    """Inactivate cells in parent_modelgrid that coincide
+    with area of inset_modelgrid."""
+    if parent_modelgrid.rotation != inset_modelgrid.rotation:
+        raise ValueError('LGR parent and inset models must have same rotation.'
+                         f'\nParent rotation: {parent_modelgrid.rotation}'
+                         f'\nInset rotation: {inset_modelgrid.rotation}'
+                         )
+    # upper left corner of inset model in parent model
+    # use the cell centers, to avoid edge situation
+    # where neighboring parent cell is accidentally selected
+    x0 = inset_modelgrid.xcellcenters[0, 0]
+    y0 = inset_modelgrid.ycellcenters[0, 0]
+    pi0, pj0 = parent_modelgrid.intersect(x0, y0, forgive=True)
+    # lower right corner of inset model
+    x1 = inset_modelgrid.xcellcenters[-1, -1]
+    y1 = inset_modelgrid.ycellcenters[-1, -1]
+    pi1, pj1 = parent_modelgrid.intersect(x1, y1, forgive=True)
+    idomain = np.ones(parent_modelgrid.shape, dtype=int)
+    if any(np.isnan([pi0, pj0])):
+        raise ValueError(f"LGR model upper left corner {pi0}, {pj0} "
+                         "is outside of the parent model domain! "
+                         "Check the grid offset and dimensions."
+                         )
+    if any(np.isnan([pi1, pj1])):
+        raise ValueError(f"LGR model lower right corner {pi1}, {pj1} "
+                         "is outside of the parent model domain! "
+                         "Check the grid offset and dimensions."
+                         )
+    idomain[0:(np.array(ncppl) > 0).sum(),
+            pi0:pi1+1, pj0:pj1+1] = 0
+    return idomain
+ + + +
+[docs] +def make_idomain(top, botm, nodata=-9999, + minimum_layer_thickness=1, + drop_thin_cells=True, tol=1e-4): + """Make the idomain array for MODFLOW 6 that specifies + cells that will be excluded from the simulation. Cells are + excluded based on: + 1) np.nans or nodata values in the botm array + 2) np.nans or nodata values in the top array (applies to the highest cells with valid botm elevations; + in other words, these cells have no thicknesses) + 3) layer thicknesses less than the specified minimum thickness plus a tolerance (tol) + + Parameters + ---------- + model : mfsetup.MF6model model instance + + Returns + ------- + idomain : np.ndarray (int) + + """ + top = top.copy() + botm = botm.copy() + top[top == nodata] = np.nan + botm[botm == nodata] = np.nan + criteria = np.isnan(botm) + + # compute layer thicknesses, considering pinched cells (nans) + b = get_layer_thicknesses(top, botm) + criteria = criteria | np.isnan(b) # cells without thickness values + + if drop_thin_cells: + criteria = criteria | (b < minimum_layer_thickness + tol) + #all_layers = np.stack([top] + [b for b in botm]) + #min_layer_thickness = minimum_layer_thickness + #isthin = np.diff(all_layers, axis=0) * -1 < min_layer_thickness + tol + #criteria = criteria | isthin + idomain = np.abs(~criteria).astype(int) + return idomain
+ + + +
+[docs] +def get_highest_active_layer(idomain, null_value=-9999): + """Get the highest active model layer at each + i, j location, accounting for inactive and + vertical pass-through cells.""" + idm = idomain.copy() + # reset all inactive/passthrough values to large positive value + # for min calc + idm[idm < 1] = 9999 + highest_active_layer = np.argmin(idm, axis=0) + # set locations with all inactive cells to null values + highest_active_layer[(idm == 9999).all(axis=0)] = null_value + return highest_active_layer
+ + + +
+[docs] +def make_irch(idomain): + """Make an irch array for the MODFLOW 6 Recharge Package, + which specifies the highest active model layer at each + i, j location, accounting for inactive and + vertical pass-through cells. Set all i, j locations + with no active layers to 1 (MODFLOW 6 only allows + valid layer numbers in the irch array). + """ + irch = get_highest_active_layer(idomain, null_value=-9999) + # set locations where all layers are inactive back to 0 + irch[irch == -9999] = 0 + irch += 1 # set to one-based + return irch
+ + + +
+[docs] +def get_layer_thicknesses(top, botm, idomain=None): + """For each i, j location in the grid, get thicknesses + between pairs of subsequent valid elevation values. Make + a thickness array of the same shape as the model grid, assign the + computed thicknesses for each pair of valid elevations to the + position of the elevation representing the cell botm. For example, + given the column of cells [nan nan 8. nan nan nan nan nan 2. nan], + a thickness of 6 would be assigned to the second to last layer + (position -2). + + Parameters + ---------- + top : nrow x ncol array of model top elevations + botm : nlay x nrow x ncol array of model botm elevations + idomain : nlay x nrow x ncol array indicating cells to be + included in the model solution. idomain=0 are converted to np.nans + in the example column of cells above. (optional) + If idomain is not specified, excluded cells are expected to be + designated in the top and botm arrays as np.nans. + + Examples + -------- + Make a fake model grid with 7 layers, but only top and two layer bottoms specified: + >>> top = np.reshape([[10]]* 4, (2, 2)) + >>> botm = np.reshape([[np.nan, 8., np.nan, np.nan, np.nan, 2., np.nan]]*4, (2, 2, 7)).transpose(2, 0, 1) + >>> result = get_layer_thicknesses(top, botm) + >>> result[:, 0, 0] + array([nan 2. nan nan nan 6. nan]) + + example with all layer elevations specified + note: this is the same result that np.diff(... axis=0) would produce; + except positive in the direction of the zero axis + >>> top = np.reshape([[10]] * 4, (2, 2)) + >>> botm = np.reshape([[9, 8., 8, 6, 3, 2., -10]] * 4, (2, 2, 7)).transpose(2, 0, 1) + >>> result = get_layer_thicknesses(top, botm) + array([1., 1., 0., 2., 3., 1., 12.]) + """ + print('computing cell thicknesses...') + t0 = time.time() + top = top.copy() + botm = botm.copy() + if idomain is not None: + idomain = idomain >= 1 + top[~idomain[0]] = np.nan + botm[~idomain] = np.nan + all_layers = np.stack([top] + [b for b in botm]) + thicknesses = np.zeros_like(botm) * np.nan + nrow, ncol = top.shape + for i in range(nrow): + for j in range(ncol): + cells = all_layers[:, i, j] + valid_b = list(-np.diff(cells[~np.isnan(cells)])) + b_ij = np.zeros_like(cells[1:]) * np.nan + has_top = False + for k, elev in enumerate(cells): + if not has_top and not np.isnan(elev): + has_top = True + elif has_top and not np.isnan(elev): + b_ij[k-1] = valid_b.pop(0) + thicknesses[:, i, j] = b_ij + thicknesses[thicknesses == 0] = 0 # get rid of -0. + print("finished in {:.2f}s\n".format(time.time() - t0)) + return thicknesses
+ + + +
+[docs] +def weighted_average_between_layers(arr0, arr1, weight0=0.5): + """""" + weights = [weight0, 1-weight0] + return np.average([arr0, arr1], axis=0, weights=weights)
+ + + +
+[docs] +def populate_values(values_dict, array_shape=None): + """Given an input dictionary with non-consecutive keys, + make a second dictionary with consecutive keys, with values + that are linearly interpolated from the first dictionary, + based on the key values. For example, given {0: 1.0, 2: 2.0}, + {0: 1.0, 1: 1.5, 2: 2.0} would be returned. + + Examples + -------- + >>> populate_values({0: 1.0, 2: 2.0}, array_shape=None) + {0: 1.0, 1: 1.5, 2: 2.0} + >>> populate_values({0: 1.0, 2: 2.0}, array_shape=(2, 2)) + {0: array([[1., 1.], + [1., 1.]]), + 1: array([[1.5, 1.5], + [1.5, 1.5]]), + 2: array([[2., 2.], + [2., 2.]])} + """ + sorted_layers = sorted(list(values_dict.keys())) + values = {} + for i in range(len(sorted_layers[:-1])): + l1 = sorted_layers[i] + l2 = sorted_layers[i+1] + v1 = values_dict[l1] + v2 = values_dict[l2] + layers = np.arange(l1, l2+1) + interp_values = dict(zip(layers, np.linspace(v1, v2, len(layers)))) + + # if an array shape is given, fill an array of that shape + # or reshape to that shape + if array_shape is not None: + for k, v in interp_values.items(): + if np.isscalar(v): + v = np.ones(array_shape, dtype=float) * v + else: + v = np.reshape(v, array_shape) + interp_values[k] = v + values.update(interp_values) + return values
+ + + +
+[docs]
+def voxels_to_layers(voxel_array, z_edges, model_top=None, model_botm=None, no_data_value=0,
+                     extend_top=True, extend_botm=False, tol=0.1,
+                     minimum_frac_active_cells=0.01):
+    """Combine a voxel array (voxel_array), with no-data values and either uniform or non-uniform top
+    and bottom elevations, with land-surface elevations (model_top; to form the top of the grid), and
+    additional elevation surfaces forming layering below the voxel grid (model_botm).
+
+    * In places where the model_botm elevations are above the lowest voxel elevations,
+      the voxels are given priority, and the model_botm elevations reset to equal the lowest voxel elevations
+      (effectively giving the underlying layer zero-thickness).
+    * Voxels with no_data_value(s) are also given zero-thickness. Typically these would be cells beyond a
+      no-flow boundary, or below the depth of investigation (for example, in an airborne electromagnetic survey
+      of aquifer electrical resistivity). The vertical extent of the layering representing the voxel data then spans the highest and lowest valid voxels.
+    * In places where the model_top (typically land-surface) elevations are higher than the highest valid voxel,
+      the voxel layer can either be extended to the model_top (extend_top=True), or an additional layer
+      can be created between the top edge of the highest voxel and model_top (extend_top=False).
+    * Similarly, in places where elevations in model_botm are below the lowest valid voxel, the lowest voxel
+      elevation can be extended to the highest underlying layer (extend_botm=True), or an additional layer can fill
+      the gap between the lowest voxel and highest model_botm (extend_botm=False).
+
+    Parameters
+    ----------
+    voxel_array : 3D numpy array
+        3D array of voxel data- could be zones or actually aquifer properties. Empty voxels
+        can be marked with a no_data_value. Voxels are assumed to have the same horizontal
+        discretization as the model_top and model_botm layers.
+    z_edges : 3D numpy array or sequence
+        Top and bottom edges of the voxels (length is voxel_array.shape[0] + 1). A sequence
+        can be used to specify uniform voxel edge elevations; non-uniform top and bottom
+        elevations can be specified with a 3D numpy array (similar to the botm array in MODFLOW).
+    model_top : 2D numpy array
+        Top elevations of the model at each row/column location.
+    model_botm : 2D or 3D numpy array
+        Model layer(s) underlying the voxel grid.
+    no_data_value : scalar, optional
+        Indicates empty voxels in voxel_array.
+    extend_top : bool, optional
+        Option to extend the top voxel layer to the model_top, by default True.
+    extend_botm : bool, optional
+        Option to extend the bottom voxel layer to the next layer below in model_botm,
+        by default False.
+    tol : float, optional
+        Depth tolerance used in comparing the voxel edges to model_top and model_botm.
+        For example, if model_top - z_edges[0] is less than tol, the model_top and top voxel
+        edge will be considered equal, and no additional layer will be added, regardless of extend_top.
+        by default 0.1
+    minimum_frac_active_cells : float
+        Minimum fraction of cells with a thickness of > 0 for a layer to be retained,
+        by default 0.01.
+
+    Returns
+    -------
+    layers : 3D numpy array of shape (nlay + 1, nrow, ncol)
+        Model layer elevations (vertical edges of cells), including the model top.
+ + + Raises + ------ + ValueError + If z_edges is not 1D or 3D + """ + model_top = model_top.copy() + model_botm = model_botm.copy() + if len(model_botm.shape) == 2: + model_botm = np.reshape(model_botm, (1, *model_botm.shape)) + if np.any(np.isnan(z_edges)): + raise NotImplementedError("Nan values in z_edges array not allowed!") + z_values = np.array(z_edges)[1:] + + # convert nodata values to nans + hasdata = voxel_array.astype(float).copy() + hasdata[hasdata == no_data_value] = np.nan + hasdata[~np.isnan(hasdata)] = 1 + thicknesses = -np.diff(z_edges, axis=0) + + # apply nodata to thicknesses and botm elevations + if len(z_values.shape) == 3: + z = hasdata * z_values + b = hasdata * thicknesses + elif len(z_values.shape) == 1: + z = (hasdata.transpose(1, 2, 0) * z_values).transpose(2, 0, 1) + b = (hasdata.transpose(1, 2, 0) * thicknesses).transpose(2, 0, 1) + else: + msg = 'z_edges.shape = {}; z_edges must be a 3D or 1D numpy array' + raise ValueError(msg.format(z_edges.shape)) + + assert np.all(np.isnan(b[np.isnan(b)])) + b[np.isnan(b)] = 0 + # cumulative sum from bottom to top + layers = np.cumsum(b[::-1], axis=0)[::-1] + # add in the model bottom elevations + # use the minimum values instead of the bottom layer, + # in case there are nans in the bottom layer + layers += np.nanmin(z, axis=0) # botm[-1] + # append the model bottom elevations + layers = np.append(layers, [np.nanmin(z, axis=0)], axis=0) + + # set all voxel edges greater than land surface to land surface + k, i, j = np.where(layers > model_top) + layers[k, i, j] = model_top[i, j] + + # reset model bottom to lowest valid voxels, where they are lower than model bottom + lowest_valid_edges = np.nanmin(layers, axis=0) + for i, layer_botm in enumerate(model_botm): + loc = layer_botm > lowest_valid_edges + model_botm[i][loc] = lowest_valid_edges[loc] + + # option to add another layer on top of voxel sequence, + # if any part of the model top is above the highest valid voxel edges + if np.any(layers[0] < model_top - tol) and not extend_top: + layers = np.vstack([np.reshape(model_top, (1, *model_top.shape)), layers]) + # otherwise set the top edges of the voxel sequence to be consistent with model top + else: + layers[0] = model_top + + # option to add additional layers below the voxel sequence, + # if any part of those layers in model botm array are below the lowest valid voxel edges + if not extend_botm: + new_botms = [layers] + for layer_botm in model_botm: + # get the percentage of active cells with > 0 thickness + pct_cells = np.sum(layers[-1] > layer_botm + tol)/layers[-1].size + if pct_cells > minimum_frac_active_cells: + new_botms.append(np.reshape(layer_botm, (1, *layer_botm.shape))) + layers = np.vstack(new_botms) + # otherwise just set the lowest voxel edges to the highest layer in model botm + # (model botm was already set to lowest valid voxels that were lower than the model botm; + # this extends any voxels that were above the model botm to the model botm) + else: + layers[-1] = model_botm[0] + + # finally, fill any remaining nans with next layer elevation (going upward) + # might still have nans in areas where there are no voxel values, but model top and botm values + botm = fill_cells_vertically(layers[0], layers[1:]) + layers = np.vstack([np.reshape(layers[0], (1, *layers[0].shape)), botm]) + return layers
\ No newline at end of file
diff --git a/_modules/mfsetup/fileio.html b/_modules/mfsetup/fileio.html
new file mode 100644
index 00000000..492c31e4
--- /dev/null
+++ b/_modules/mfsetup/fileio.html
@@ -0,0 +1,1449 @@

mfsetup.fileio — modflow-setup 0.5.0.post59+g65803fd documentation

Source code for mfsetup.fileio

+"""Functions for reading and writing stuff to disk, and working with file paths.
+"""
+import datetime as dt
+import inspect
+import json
+import os
+import shutil
+import sys
+import time
+from pathlib import Path
+
+import flopy
+import numpy as np
+import pandas as pd
+import yaml
+from flopy.mf6.data import mfstructure
+from flopy.mf6.mfbase import (
+    ExtFileAction,
+    FlopyException,
+    MFDataException,
+    MFFileMgmt,
+    PackageContainer,
+    PackageContainerType,
+    VerbosityLevel,
+)
+from flopy.mf6.modflow import mfims, mftdis
+from flopy.modflow.mf import ModflowGlobal
+from flopy.utils import TemporalReference, mfreadnam
+
+import mfsetup
+from mfsetup.grid import MFsetupGrid
+from mfsetup.utils import get_input_arguments, update
+
+
+
+[docs] +def check_source_files(fileslist): + """Check that the files in fileslist exist. + """ + if isinstance(fileslist, str): + fileslist = [fileslist] + for f in fileslist: + f = Path(f) + if not f.exists(): + raise IOError(f'Cannot find {f.absolute()}')
+ + + +
+[docs] +def load(filename): + """Load a configuration file.""" + filename = Path(filename) + if set(filename.suffixes).intersection({'.yml', '.yaml'}): + return load_yml(filename) + elif filename.suffix == '.json': + return load_json(filename)
+ + + +
+[docs]
+def dump(filename, data):
+    """Write a dictionary to a configuration file."""
+    if str(filename).endswith('.yml') or str(filename).endswith('.yaml'):
+        return dump_yml(filename, data)
+    elif str(filename).endswith('.json'):
+        return dump_json(filename, data)
+ + + +
+[docs] +def load_json(jsonfile): + """Convenience function to load a json file; replacing + some escaped characters.""" + with open(jsonfile) as f: + return json.load(f)
+ + + +
+[docs] +def dump_json(jsonfile, data): + """Write a dictionary to a json file.""" + with open(jsonfile, 'w') as output: + json.dump(data, output, indent=4, sort_keys=True) + print('wrote {}'.format(jsonfile))
+ + + +
+[docs] +def load_modelgrid(filename): + """Create a MFsetupGrid instance from model config json file.""" + cfg = load(filename) + rename = {'xll': 'xoff', + 'yll': 'yoff', + } + for k, v in rename.items(): + if k in cfg: + cfg[v] = cfg.pop(k) + if np.isscalar(cfg['delr']): + cfg['delr'] = np.ones(cfg['ncol'])* cfg['delr'] + if np.isscalar(cfg['delc']): + cfg['delc'] = np.ones(cfg['nrow']) * cfg['delc'] + kwargs = get_input_arguments(cfg, MFsetupGrid) + return MFsetupGrid(**kwargs)
+ + + +
+[docs] +def load_yml(yml_file): + """Load yaml file into a dictionary.""" + with open(yml_file) as src: + cfg = yaml.load(src, Loader=yaml.Loader) + return cfg
+ + + +
+[docs] +def dump_yml(yml_file, data): + """Write a dictionary to a yaml file.""" + with open(yml_file, 'w') as output: + yaml.dump(data, output)#, Dumper=yaml.Dumper) + print('wrote {}'.format(yml_file))
+ + + +
+[docs] +def load_array(filename, shape=None, nodata=-9999): + """Load an array, ensuring the correct shape.""" + t0 = time.time() + if not isinstance(filename, list): + filename = [filename] + shape2d = shape + if shape is not None and len(shape) == 3: + shape2d = shape[1:] + + arraylist = [] + for f in filename: + if isinstance(f, dict): + f = f['filename'] + txt = 'loading {}'.format(f) + if shape2d is not None: + txt += ', shape={}'.format(shape2d) + print(txt, end=', ') + # arr = np.loadtxt + # pd.read_csv is >3x faster than np.load_txt + arr = pd.read_csv(f, delim_whitespace=True, header=None).values + if shape2d is not None: + if arr.shape != shape2d: + if arr.size == np.prod(shape2d): + arr = np.reshape(arr, shape2d) + else: + raise ValueError("Data in {} have size {}; should be {}" + .format(f, arr.shape, shape2d)) + arraylist.append(arr) + array = np.squeeze(arraylist) + if issubclass(array.dtype.type, np.floating): + array[array == nodata] = np.nan + print("took {:.2f}s".format(time.time() - t0)) + return array
+ + + +
+[docs] +def save_array(filename, arr, nodata=-9999, + **kwargs): + """Save and array and print that it was written.""" + if isinstance(filename, dict) and 'filename' in filename.keys(): + filename = filename.copy().pop('filename') + t0 = time.time() + if np.issubdtype(arr.dtype, np.unsignedinteger): + arr = arr.copy() + arr = arr.astype(int) + arr[np.isnan(arr)] = nodata + np.savetxt(filename, arr, **kwargs) + print('wrote {}'.format(filename), end=', ') + print("took {:.2f}s".format(time.time() - t0))
+ + + +
+[docs] +def append_csv(filename, df, **kwargs): + """Read data from filename, + append to dataframe, and write appended dataframe + back to filename.""" + if os.path.exists(filename): + written = pd.read_csv(filename) + df = pd.concat([df, written], axis=0) + df.to_csv(filename, **kwargs)
+ + + +
+[docs]
+def load_cfg(cfgfile, verbose=False, default_file=None):
+    """This method loads a YAML or JSON configuration file,
+    applies configuration defaults from a default_file if specified,
+    adds the absolute file path of the configuration file
+    to the configuration dictionary, and converts any
+    relative paths in the configuration dictionary to
+    absolute paths, assuming the paths are relative to
+    the configuration file location.
+
+    Parameters
+    ----------
+    cfgfile : str
+        Path to MFsetup configuration file (json or yaml)
+
+    Returns
+    -------
+    cfg : dict
+        Dictionary of configuration data
+
+    Notes
+    -----
+    This function is used by the model instance load and setup_from_yaml
+    classmethods, so that configuration defaults can be applied to the
+    simulation and model blocks before they are passed to the flopy simulation
+    constructor and the model constructor.
+    """
+    print('loading configuration file {}...'.format(cfgfile))
+    source_path = Path(__file__).parent
+
+    # default configuration
+    default_cfg = {}
+    if default_file is not None:
+        # only convert to a Path if specified; Path(None) raises a TypeError
+        default_file = Path(default_file)
+        check_source_files([cfgfile, source_path / default_file])
+        default_cfg = load(source_path / default_file)
+        default_cfg['filename'] = source_path / default_file
+    else:
+        check_source_files([cfgfile])
+
+    # for now, only apply defaults for the model and simulation blocks
+    # which are needed for the model instance constructor
+    # other defaults are applied in _set_cfg,
+    # which is called by model.__init__
+    # intermediate_data is needed by some tests
+    apply_defaults = {'simulation', 'model', 'intermediate_data'}
+    default_cfg = {k: v for k, v in default_cfg.items()
+                   if k in apply_defaults}
+
+    # recursively update defaults with information from yamlfile
+    cfg = default_cfg.copy()
+    user_specified_cfg = load(cfgfile)
+
+    update(cfg, user_specified_cfg)
+    cfg['model'].update({'verbose': verbose})
+    cfg['filename'] = os.path.abspath(cfgfile)
+
+    # convert relative paths in the configuration dictionary
+    # to absolute paths, based on the location of the config file
+    config_file_location = os.path.split(os.path.abspath(cfgfile))[0]
+    cfg = set_cfg_paths_to_absolute(cfg, config_file_location)
+    return cfg
+ + + +
+[docs] +def set_cfg_paths_to_absolute(cfg, config_file_location): + version = None + if 'simulation' in cfg: + version = 'mf6' + else: + version = cfg['model'].get('version') + if version == 'mf6': + file_path_keys_relative_to_config = [ + 'simulation.sim_ws', + 'parent.model_ws', + 'parent.simulation.sim_ws', + 'parent.headfile', + #'setup_grid.lgr.config_file' + ] + model_ws = os.path.normpath(os.path.join(config_file_location, + cfg['simulation']['sim_ws'])) + else: + file_path_keys_relative_to_config = [ + 'model.model_ws', + 'parent.model_ws', + 'parent.simulation.sim_ws', + 'parent.headfile', + 'nwt.use_existing_file' + ] + model_ws = os.path.normpath(os.path.join(config_file_location, + cfg['model']['model_ws'])) + file_path_keys_relative_to_model_ws = [ + 'setup_grid.grid_file' + ] + # add additional paths by looking for source_data + # within these input blocks, convert file paths to absolute + look_for_files_in = ['source_data', + 'perimeter_boundary', + 'lgr', + 'sfrmaker_options' + ] + for pckgname, pckg in cfg.items(): + if isinstance(pckg, dict): + for input_block in look_for_files_in: + if input_block in pckg.keys(): + # handle LGR sub-blocks separately + # if LGR configuration is specified within the yaml file + # (or as a dictionary), we don't want to touch it at this point + # (just convert filepaths to configuration files for sub-models) + if input_block == 'lgr': + for model_name, config in pckg[input_block].items(): + if 'filename' in config: + file_keys = _parse_file_path_keys_from_source_data( + {model_name: config}) + else: + file_keys = _parse_file_path_keys_from_source_data(pckg[input_block]) + for key in file_keys: + file_path_keys_relative_to_config. \ + append('.'.join([pckgname, input_block, key])) + for loc in ['output_files', + 'output_folders', + 'output_folder', + 'output_path']: + if loc in pckg.keys(): + file_keys = _parse_file_path_keys_from_source_data(pckg[loc], paths=True) + for key in file_keys: + file_path_keys_relative_to_model_ws. \ + append('.'.join([pckgname, loc, key]).strip('.')) + + # set locations that are relative to configuration file + cfg = _set_absolute_paths_to_location(file_path_keys_relative_to_config, + config_file_location, cfg) + + # set locations that are relative to model_ws + cfg = _set_absolute_paths_to_location(file_path_keys_relative_to_model_ws, + model_ws, + cfg) + return cfg
+ + + +def _set_path(keys, abspath, cfg): + """From a sequence of keys that point to a file + path in a nested dictionary, convert the file + path at that location from relative to absolute, + based on a provided absolute path. + + Parameters + ---------- + keys : sequence or str of dict keys separated by '.' + that point to a relative path + Example: 'parent.model_ws' for cfg['parent']['model_ws'] + abspath : absolute path + cfg : dictionary + + Returns + ------- + updates cfg with an absolute path based on abspath, + at the location in the dictionary specified by keys. + """ + if isinstance(keys, str): + keys = keys.split('.') + d = cfg.get(keys[0]) + if d is not None: + for level in range(1, len(keys)): + if level == len(keys) - 1: + k = keys[level] + if k in d: + if d[k] is not None: + d[k] = os.path.normpath(os.path.join(abspath, d[k])) + elif k.isdigit(): + k = int(k) + if d[k] is not None: + d[k] = os.path.join(abspath, d[k]) + else: + key = keys[level] + if key in d: + d = d[keys[level]] + return cfg + + +def _set_absolute_paths_to_location(paths, location, cfg): + """Set relative file paths in a configuration dictionary + to a specified location. + + Parameters + ---------- + paths : sequence + Sequence of dictionary keys read by set_path. + e.g. ['parent.model_ws', 'parent.headfile'] + location : str (path to folder) + cfg : configuration dictionary (as read in by load_cfg) + + """ + for keys in paths: + cfg = _set_path(keys, location, cfg) + return cfg + + +def _parse_file_path_keys_from_source_data(source_data, prefix=None, paths=False): + """Parse a source data entry in the configuration file. + + pseudo code: + For each key or item in source_data, + If it is a string that ends with a valid extension, + a file is expected. + If it is a dict or list, + it is expected to be a file or set of files with metadata. + For each item in the dict or list, + If it is a string that ends with a valid extension, + a file is expected. + If it is a dict or list, + A set of files corresponding to + model layers or stress periods is expected. + + valid source data file extensions: csv, shp, tif, asc + + Parameters + ---------- + source_data : dict + prefix : str + text to prepend to results, e.g. + keys = prefix.keys + paths = Bool + if True, overrides check for valid extension + + Returns + ------- + keys + """ + valid_extensions = ['csv', 'shp', 'tif', + 'ref', 'dat', + 'nc', + 'yml', 'json', + 'hds', 'cbb', 'cbc', + 'grb'] + file_keys = ['filename', + 'filenames', + 'binaryfile', + 'nhdplus_paths'] + keys = [] + if source_data is None: + return [] + if isinstance(source_data, str): + return [''] + if isinstance(source_data, list): + items = enumerate(source_data) + elif isinstance(source_data, dict): + items = source_data.items() + for k0, v in items: + if isinstance(v, str): + if k0 in file_keys: + keys.append(k0) + elif v[-3:] in valid_extensions or paths: + keys.append(k0) + elif 'output' in source_data: + keys.append(k0) + elif isinstance(v, list): + for i, v1 in enumerate(v): + if k0 in file_keys: + keys.append('.'.join([str(k0), str(i)])) + elif paths or isinstance(v1, str) and v1[-3:] in valid_extensions: + keys.append('.'.join([str(k0), str(i)])) + elif isinstance(v, dict): + keys += _parse_file_path_keys_from_source_data(v, prefix=k0, paths=paths) + if prefix is not None: + keys = ['{}.{}'.format(prefix, k) for k in keys] + return keys + + +
[docs]
def setup_external_filepaths(model, package, variable_name,
                             filename_format, file_numbers=None,
                             relative_external_paths=True):
    """Set up external file paths for a MODFLOW package variable. Sets paths
    for intermediate files, which are written from the (processed) source data.
    Intermediate files are supplied to Flopy as external files for a given package
    variable. Flopy writes external files to a specified location when the MODFLOW
    package file is written. This function gets the external file paths that
    will be written by FloPy, and puts them in the configuration dictionary
    under their respective variables.

    Parameters
    ----------
    model : mfsetup.MF6model or mfsetup.MFnwtModel instance
        Model with cfg attribute to update.
    package : str
        Three-letter package abbreviation (e.g. 'DIS' for discretization)
    variable_name : str
        FloPy name of variable represented by external files (e.g. 'top' or 'botm')
    filename_format : str
        File path to the external file(s). Can be a string representing a single file
        (e.g. 'top.dat'), or for variables where a file is written for each layer or
        stress period, a format string that will be formatted with the zero-based layer
        number (e.g. 'botm{}.dat') for files botm0.dat, botm1.dat, ...
    file_numbers : list of ints
        List of numbers for the external files. Usually these represent zero-based
        layers or stress periods.

    Returns
    -------
    filepaths : list
        List of external file paths

    Adds intermediate file paths to model.cfg[<package>]['intermediate_data'].
    For MODFLOW-6 models, adds external file paths to model.cfg[<package>][<variable_name>].
    """
    package = package.lower()
    if file_numbers is None:
        file_numbers = [0]

    # in lieu of a way to get these from Flopy somehow
    griddata_variables = ['top', 'botm', 'idomain', 'strt',
                          'k', 'k33', 'sy', 'ss']
    transient2D_variables = {'rech', 'recharge',
                             'finf', 'pet', 'extdp', 'extwc',
                             }
    transient3D_variables = {'lakarr', 'bdlknc'}
    tabular_variables = {'connectiondata'}
    transient_tabular_variables = {'stress_period_data'}
    transient_variables = transient2D_variables | transient3D_variables | transient_tabular_variables

    model.get_package(package)
    # intermediate data
    filename_format = os.path.split(filename_format)[-1]
    if not relative_external_paths:
        intermediate_files = [os.path.normpath(os.path.join(model.tmpdir,
                              filename_format).format(i)) for i in file_numbers]
    else:
        intermediate_files = [os.path.join(model.tmpdir,
                              filename_format).format(i) for i in file_numbers]

    if variable_name in transient2D_variables or variable_name in transient_tabular_variables:
        model.cfg['intermediate_data'][variable_name] = {per: f for per, f in
                                                         zip(file_numbers, intermediate_files)}
    elif variable_name in transient3D_variables:
        model.cfg['intermediate_data'][variable_name] = {0: intermediate_files}
    elif variable_name in tabular_variables:
        model.cfg['intermediate_data']['{}_{}'.format(package, variable_name)] = intermediate_files
    else:
        model.cfg['intermediate_data'][variable_name] = intermediate_files

    # external array(s) read by MODFLOW
    # (set to reflect expected locations where flopy will save them)
    if not relative_external_paths:
        external_files = [os.path.normpath(os.path.join(model.model_ws,
                          model.external_path,
                          filename_format.format(i))) for i in file_numbers]
    else:
        external_files = [os.path.join(model.model_ws,
                          model.external_path,
                          filename_format.format(i)) for i in file_numbers]

    if variable_name in
transient2D_variables or variable_name in transient_tabular_variables: + model.cfg['external_files'][variable_name] = {per: f for per, f in + zip(file_numbers, external_files)} + elif variable_name in transient3D_variables: + model.cfg['external_files'][variable_name] = {0: external_files} + else: + model.cfg['external_files'][variable_name] = external_files + + if model.version == 'mf6': + # skip these for now (not implemented yet for MF6) + if variable_name in transient3D_variables: + return + ext_files_key = 'external_files' + if variable_name not in transient_variables: + filepaths = [{'filename': f} for f in model.cfg[ext_files_key][variable_name]] + else: + filepaths = {per: {'filename': f} + for per, f in model.cfg[ext_files_key][variable_name].items()} + # set package variable input (to Flopy) + if variable_name in griddata_variables: + model.cfg[package]['griddata'][variable_name] = filepaths + elif variable_name in tabular_variables: + model.cfg[package][variable_name] = filepaths[0] + model.cfg[ext_files_key]['{}_{}'.format(package, variable_name)] = model.cfg[ext_files_key].pop(variable_name) + #elif variable_name in transient_variables: + # filepaths = {per: {'filename': f} for per, f in + # zip(file_numbers, model.cfg[ext_files_key][variable_name])} + # model.cfg[package][variable_name] = filepaths + elif variable_name in transient_tabular_variables: + model.cfg[package][variable_name] = filepaths + model.cfg[ext_files_key]['{}_{}'.format(package, variable_name)] = model.cfg[ext_files_key].pop(variable_name) + else: + model.cfg[package][variable_name] = filepaths # {per: d for per, d in zip(file_numbers, filepaths)} + else: + filepaths = model.cfg['intermediate_data'][variable_name] + model.cfg[package][variable_name] = filepaths + + return filepaths
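A hedged sketch of intended use, assuming an existing mfsetup model instance m with a DIS package and three layers:

    # register external files for the bottom elevations of each layer;
    # 'botm{}.dat' is formatted with the zero-based layer number
    filepaths = setup_external_filepaths(
        m, 'dis', 'botm', 'botm{}.dat', file_numbers=list(range(3)))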
+ + + +
+[docs] +def flopy_mf2005_load(m, load_only=None, forgive=False, check=False): + """Execute the code in flopy.modflow.Modflow.load on an existing + flopy.modflow.Modflow instance.""" + version = m.version + verbose = m.verbose + model_ws = m.model_ws + + # similar to modflow command: if file does not exist , try file.nam + namefile_path = os.path.join(model_ws, m.namefile) + if (not os.path.isfile(namefile_path) and + os.path.isfile(namefile_path + '.nam')): + namefile_path += '.nam' + if not os.path.isfile(namefile_path): + raise IOError('cannot find name file: ' + str(namefile_path)) + + files_successfully_loaded = [] + files_not_loaded = [] + + # set the reference information + attribs = mfreadnam.attribs_from_namfile_header(namefile_path) + + #ref_attributes = SpatialReference.load(namefile_path) + + # read name file + ext_unit_dict = mfreadnam.parsenamefile( + namefile_path, m.mfnam_packages, verbose=verbose) + if m.verbose: + print('\n{}\nExternal unit dictionary:\n{}\n{}\n' + .format(50 * '-', ext_unit_dict, 50 * '-')) + + # create a dict where key is the package name, value is unitnumber + ext_pkg_d = {v.filetype: k for (k, v) in ext_unit_dict.items()} + + # reset version based on packages in the name file + if "NWT" in ext_pkg_d or "UPW" in ext_pkg_d: + version = "mfnwt" + if "GLOBAL" in ext_pkg_d: + if version != "mf2k": + m.glo = ModflowGlobal(m) + version = "mf2k" + if "SMS" in ext_pkg_d: + version = "mfusg" + if "DISU" in ext_pkg_d: + version = "mfusg" + m.structured = False + # update the modflow version + m.set_version(version) + + # reset unit number for glo file + if version == "mf2k": + if "GLOBAL" in ext_pkg_d: + unitnumber = ext_pkg_d["GLOBAL"] + filepth = os.path.basename(ext_unit_dict[unitnumber].filename) + m.glo.unit_number = [unitnumber] + m.glo.file_name = [filepth] + else: + # TODO: is this necessary? it's not done for LIST. 
+ m.glo.unit_number = [0] + m.glo.file_name = [""] + + # reset unit number for list file + if 'LIST' in ext_pkg_d: + unitnumber = ext_pkg_d['LIST'] + filepth = os.path.basename(ext_unit_dict[unitnumber].filename) + m.lst.unit_number = [unitnumber] + m.lst.file_name = [filepth] + + # look for the free format flag in bas6 + bas_key = ext_pkg_d.get('BAS6') + if bas_key is not None: + bas = ext_unit_dict[bas_key] + start = bas.filehandle.tell() + line = bas.filehandle.readline() + while line.startswith("#"): + line = bas.filehandle.readline() + if "FREE" in line.upper(): + m.free_format_input = True + bas.filehandle.seek(start) + if verbose: + print("ModflowBas6 free format:{0}\n".format(m.free_format_input)) + + # load dis + dis_key = ext_pkg_d.get('DIS') or ext_pkg_d.get('DISU') + if dis_key is None: + raise KeyError('discretization entry not found in nam file') + disnamdata = ext_unit_dict[dis_key] + dis = disnamdata.package.load( + disnamdata.filename, m, + ext_unit_dict=ext_unit_dict, check=False) + files_successfully_loaded.append(disnamdata.filename) + if m.verbose: + print(' {:4s} package load...success'.format(dis.name[0])) + m.setup_grid() # reset model grid now that DIS package is loaded + assert m.pop_key_list.pop() == dis_key + ext_unit_dict.pop(dis_key) #.filehandle.close() + #start_datetime = attribs.pop("start_datetime", "01-01-1970") + #itmuni = attribs.pop("itmuni", 4) + #ref_source = attribs.pop("source", "defaults") + # if m.structured: + # # get model units from usgs.model.reference, if provided + # if ref_source == 'usgs.model.reference': + # pass + # # otherwise get them from the DIS file + # else: + # itmuni = dis.itmuni + # ref_attributes['lenuni'] = dis.lenuni + # sr = SpatialReference(delr=m.dis.delr.array, delc=ml.dis.delc.array, + # **ref_attributes) + # else: + # sr = None + # + #dis.sr = m.sr + #dis.tr = TemporalReference(itmuni=itmuni, start_datetime=start_datetime) + #dis.start_datetime = start_datetime + + if load_only is None: + # load all packages/files + load_only = ext_pkg_d.keys() + else: # check items in list + if not isinstance(load_only, list): + load_only = [load_only] + not_found = [] + for i, filetype in enumerate(load_only): + load_only[i] = filetype = filetype.upper() + if filetype not in ext_pkg_d: + not_found.append(filetype) + if not_found: + raise KeyError( + "the following load_only entries were not found " + "in the ext_unit_dict: " + str(not_found)) + + # zone, mult, pval + if "PVAL" in ext_pkg_d: + m.mfpar.set_pval(m, ext_unit_dict) + assert m.pop_key_list.pop() == ext_pkg_d.get("PVAL") + if "ZONE" in ext_pkg_d: + m.mfpar.set_zone(m, ext_unit_dict) + assert m.pop_key_list.pop() == ext_pkg_d.get("ZONE") + if "MULT" in ext_pkg_d: + m.mfpar.set_mult(m, ext_unit_dict) + assert m.pop_key_list.pop() == ext_pkg_d.get("MULT") + + # try loading packages in ext_unit_dict + for key, item in ext_unit_dict.items(): + if item.package is not None: + if item.filetype in load_only: + if forgive: + try: + package_load_args = \ + list(inspect.getfullargspec(item.package.load))[0] + if "check" in package_load_args: + item.package.load( + item.filename, m, + ext_unit_dict=ext_unit_dict, check=False) + else: + item.package.load( + item.filename, m, + ext_unit_dict=ext_unit_dict) + files_successfully_loaded.append(item.filename) + if m.verbose: + print(' {:4s} package load...success' + .format(item.filetype)) + except Exception as e: + m.load_fail = True + if m.verbose: + print(' {:4s} package load...failed\n {!s}' + .format(item.filetype, e)) + 
files_not_loaded.append(item.filename) + else: + package_load_args = \ + list(inspect.getfullargspec(item.package.load))[0] + if "check" in package_load_args: + item.package.load( + item.filename, m, + ext_unit_dict=ext_unit_dict, check=False) + else: + item.package.load( + item.filename, m, + ext_unit_dict=ext_unit_dict) + files_successfully_loaded.append(item.filename) + if m.verbose: + print(' {:4s} package load...success' + .format(item.filetype)) + else: + if m.verbose: + print(' {:4s} package load...skipped' + .format(item.filetype)) + files_not_loaded.append(item.filename) + elif "data" not in item.filetype.lower(): + files_not_loaded.append(item.filename) + if m.verbose: + print(' {:4s} package load...skipped' + .format(item.filetype)) + elif "data" in item.filetype.lower(): + if m.verbose: + print(' {} file load...skipped\n {}' + .format(item.filetype, + os.path.basename(item.filename))) + if key not in m.pop_key_list: + # do not add unit number (key) if it already exists + if key not in m.external_units: + m.external_fnames.append(item.filename) + m.external_units.append(key) + m.external_binflag.append("binary" + in item.filetype.lower()) + m.external_output.append(False) + else: + raise KeyError('unhandled case: {}, {}'.format(key, item)) + + # pop binary output keys and any external file units that are now + # internal + for key in m.pop_key_list: + try: + m.remove_external(unit=key) + ext_unit_dict.pop(key) + except KeyError: + if m.verbose: + print('Warning: external file unit {} does not exist in ' + 'ext_unit_dict.'.format(key)) + + # write message indicating packages that were successfully loaded + if m.verbose: + print('') + print(' The following {0} packages were successfully loaded.' + .format(len(files_successfully_loaded))) + for fname in files_successfully_loaded: + print(' ' + os.path.basename(fname)) + if len(files_not_loaded) > 0: + print(' The following {0} packages were not loaded.' + .format(len(files_not_loaded))) + for fname in files_not_loaded: + print(' ' + os.path.basename(fname)) + if check: + m.check(f='{}.chk'.format(m.name), verbose=m.verbose, level=0) + + # return model object + return m
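A sketch of intended use, assuming an existing mfsetup.MFnwtModel instance m whose name file already exists in m.model_ws:

    # attach just the DIS and BAS6 packages to the existing model instance
    m = flopy_mf2005_load(m, load_only=['DIS', 'BAS6'])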
+ + + +
+[docs] +def flopy_mfsimulation_load(sim, model, strict=True, load_only=None, + verify_data=False): + """Execute the code in flopy.mf6.MFSimulation.load on + existing instances of flopy.mf6.MFSimulation and flopy.mf6.MF6model""" + + instance = sim + if not isinstance(model, list): + model_instances = [model] + else: + model_instances = model + version = sim.version + exe_name = sim.exe_name + verbosity_level = instance.simulation_data.verbosity_level + + if verbosity_level.value >= VerbosityLevel.normal.value: + print('loading simulation...') + + # build case consistent load_only dictionary for quick lookups + load_only = PackageContainer._load_only_dict(load_only) + + # load simulation name file + if verbosity_level.value >= VerbosityLevel.normal.value: + print(' loading simulation name file...') + instance.name_file.load(strict) + + # load TDIS file + tdis_pkg = 'tdis{}'.format(mfstructure.MFStructure(). + get_version_string()) + tdis_attr = getattr(instance.name_file, tdis_pkg) + instance._tdis_file = mftdis.ModflowTdis(instance, + filename=tdis_attr.get_data()) + + instance._tdis_file._filename = instance.simulation_data.mfdata[ + ('nam', 'timing', tdis_pkg)].get_data() + if verbosity_level.value >= VerbosityLevel.normal.value: + print(' loading tdis package...') + instance._tdis_file.load(strict) + + # load models + try: + model_recarray = instance.simulation_data.mfdata[('nam', 'models', + 'models')] + models = model_recarray.get_data() + except MFDataException as mfde: + message = 'Error occurred while loading model names from the ' \ + 'simulation name file.' + raise MFDataException(mfdata_except=mfde, + model=instance.name, + package='nam', + message=message) + for item in models: + # resolve model working folder and name file + path, name_file = os.path.split(item[1]) + + # get the existing model instance + # corresponding to its entry in the simulation name file + # (in flopy the model instance is obtained from PackageContainer.model_factory below) + model_obj = [m for m in model_instances if m.namefile == name_file] + if len(model_obj) == 0: + print('model {} attached to {} not found in {}'.format(item, instance, model_instances)) + return + model_obj = model_obj[0] + #model_obj = PackageContainer.model_factory(item[0][:-1].lower()) + + # load model + if verbosity_level.value >= VerbosityLevel.normal.value: + print(' loading model {}...'.format(item[0].lower())) + + instance._models[item[2]] = flopy_mf6model_load(instance, model_obj, + strict=strict, + model_rel_path=path, + load_only=load_only) + + # original flopy code to load model + #instance._models[item[2]] = model_obj.load( + # instance, + # instance.structure.model_struct_objs[item[0].lower()], item[2], + # name_file, version, exe_name, strict, path, load_only) + + # load exchange packages and dependent packages + try: + exchange_recarray = instance.name_file.exchanges + has_exch_data = exchange_recarray.has_data() + except MFDataException as mfde: + message = 'Error occurred while loading exchange names from the ' \ + 'simulation name file.' + raise MFDataException(mfdata_except=mfde, + model=instance.name, + package='nam', + message=message) + if has_exch_data: + try: + exch_data = exchange_recarray.get_data() + except MFDataException as mfde: + message = 'Error occurred while loading exchange names from ' \ + 'the simulation name file.' 
+ raise MFDataException(mfdata_except=mfde, + model=instance.name, + package='nam', + message=message) + for exgfile in exch_data: + if load_only is not None and not \ + PackageContainer._in_pkg_list(load_only, exgfile[0], + exgfile[2]): + if instance.simulation_data.verbosity_level.value >= \ + VerbosityLevel.normal.value: + print(' skipping package {}..' + '.'.format(exgfile[0].lower())) + continue + # get exchange type by removing numbers from exgtype + exchange_type = ''.join([char for char in exgfile[0] if + not char.isdigit()]).upper() + # get exchange number for this type + if exchange_type not in instance._exg_file_num: + exchange_file_num = 0 + instance._exg_file_num[exchange_type] = 1 + else: + exchange_file_num = instance._exg_file_num[exchange_type] + instance._exg_file_num[exchange_type] += 1 + + exchange_name = '{}_EXG_{}'.format(exchange_type, + exchange_file_num) + # find package class the corresponds to this exchange type + package_obj = PackageContainer.package_factory( + exchange_type.replace('-', '').lower(), '') + if not package_obj: + message = 'An error occurred while loading the ' \ + 'simulation name file. Invalid exchange type ' \ + '"{}" specified.'.format(exchange_type) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException(instance.name, + 'nam', + 'nam', + 'loading simulation name file', + exchange_recarray.structure.name, + inspect.stack()[0][3], + type_, value_, traceback_, message, + instance._simulation_data.debug) + + # build and load exchange package object + exchange_file = package_obj(instance, exgtype=exgfile[0], + exgmnamea=exgfile[2], + exgmnameb=exgfile[3], + filename=exgfile[1], + pname=exchange_name, + loading_package=True) + if verbosity_level.value >= VerbosityLevel.normal.value: + print(' loading exchange package {}..' + '.'.format(exchange_file._get_pname())) + exchange_file.load(strict) + # Flopy>=3.9 + if hasattr(instance, '_package_container'): + instance._package_container.add_package(exchange_file) + instance._exchange_files[exgfile[1]] = exchange_file + + # load simulation packages + solution_recarray = instance.simulation_data.mfdata[('nam', + 'solutiongroup', + 'solutiongroup' + )] + + try: + solution_group_dict = solution_recarray.get_data() + except MFDataException as mfde: + message = 'Error occurred while loading solution groups from ' \ + 'the simulation name file.' + raise MFDataException(mfdata_except=mfde, + model=instance.name, + package='nam', + message=message) + for solution_group in solution_group_dict.values(): + for solution_info in solution_group: + if load_only is not None and not PackageContainer._in_pkg_list( + load_only, solution_info[0], solution_info[2]): + if instance.simulation_data.verbosity_level.value >= \ + VerbosityLevel.normal.value: + print(' skipping package {}..' + '.'.format(solution_info[0].lower())) + continue + ims_file = mfims.ModflowIms(instance, filename=solution_info[1], + pname=solution_info[2]) + if verbosity_level.value >= VerbosityLevel.normal.value: + print(' loading ims package {}..' + '.'.format(ims_file._get_pname())) + ims_file.load(strict) + + instance.simulation_data.mfpath.set_last_accessed_path() + if verify_data: + instance.check() + return instance
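A sketch of intended use, assuming existing flopy.mf6.MFSimulation and MF6model instances (sim and model) whose input files have already been written:

    # re-load the simulation's input onto the existing instances
    sim = flopy_mfsimulation_load(sim, model)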
+ + + +
+[docs] +def flopy_mf6model_load(simulation, model, strict=True, model_rel_path='.', + load_only=None): + """Execute the code in flopy.mf6.MFmodel.load_base on an + existing instance of MF6model.""" + + instance = model + modelname = model.name + structure = model.structure + + # build case consistent load_only dictionary for quick lookups + load_only = PackageContainer._load_only_dict(load_only) + + # load name file + instance.name_file.load(strict) + + # order packages + vnum = mfstructure.MFStructure().get_version_string() + # FIX: Transport - Priority packages maybe should not be hard coded + priority_packages = {'dis{}'.format(vnum): 1, 'disv{}'.format(vnum): 1, + 'disu{}'.format(vnum): 1} + packages_ordered = [] + package_recarray = instance.simulation_data.mfdata[(modelname, 'nam', + 'packages', + 'packages')] + for item in package_recarray.get_data(): + if item[0] in priority_packages: + packages_ordered.insert(0, (item[0], item[1], item[2])) + else: + packages_ordered.append((item[0], item[1], item[2])) + + # load packages + sim_struct = mfstructure.MFStructure().sim_struct + instance._ftype_num_dict = {} + for ftype, fname, pname in packages_ordered: + ftype_orig = ftype + ftype = ftype[0:-1].lower() + if ftype in structure.package_struct_objs or ftype in \ + sim_struct.utl_struct_objs: + if ( + load_only is not None + and not PackageContainer._in_pkg_list( + priority_packages, ftype_orig, pname + ) + and not PackageContainer._in_pkg_list(load_only, ftype_orig, pname) + ): + if ( + simulation.simulation_data.verbosity_level.value + >= VerbosityLevel.normal.value + ): + print(f" skipping package {ftype}...") + continue + if model_rel_path and model_rel_path != '.': + # strip off model relative path from the file path + filemgr = simulation.simulation_data.mfpath + fname = filemgr.strip_model_relative_path(modelname, + fname) + if simulation.simulation_data.verbosity_level.value >= \ + VerbosityLevel.normal.value: + print(' loading package {}...'.format(ftype)) + # load package + instance.load_package(ftype, fname, pname, strict, None) + + # load referenced packages + if modelname in instance.simulation_data.referenced_files: + for ref_file in \ + instance.simulation_data.referenced_files[modelname].values(): + if (ref_file.file_type in structure.package_struct_objs or + ref_file.file_type in sim_struct.utl_struct_objs) and \ + not ref_file.loaded: + instance.load_package(ref_file.file_type, + ref_file.file_name, None, strict, + ref_file.reference_path) + ref_file.loaded = True + + # TODO: fix jagged lists where appropriate + + return instance
+ + + +
[docs]
def which(program):
    """Check for existence of an executable.
    https://stackoverflow.com/questions/377017/test-if-executable-exists-in-python
    """
    def is_exe(fpath):
        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

    fpath, fname = os.path.split(program)
    if fpath:
        if is_exe(program):
            return program
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file
+ + + +
[docs]
def exe_exists(exe_name):
    exe_path = which(exe_name)
    if exe_path is not None:
        # which() already resolved the path, so just verify it
        return os.path.exists(exe_path) and \
               os.access(exe_path, os.X_OK)
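For example:

    from mfsetup.fileio import exe_exists, which

    if exe_exists('mf6'):
        print('MODFLOW 6 executable found at', which('mf6'))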
+ + + +
+[docs] +def read_mf6_block(filename, blockname): + blockname = blockname.lower() + data = {} + read = False + per = None + with open(filename) as src: + for line in src: + line = line.lower() + if 'begin' in line and blockname in line: + if blockname == 'period': + per = int(line.strip().split()[-1]) + data[per] = [] + elif blockname == 'continuous': + fname = line.strip().split()[-1] + data[fname] = [] + elif blockname == 'packagedata': + data['packagedata'] = [] + else: + blockname = line.strip().split()[-1] + data[blockname] = [] + read = blockname + continue + if 'end' in line and blockname in line: + per = None + read = False + #break + if read == 'options': + line = line.strip().split() + data[line[0]] = line[1:] + elif read == 'packages': + pckg, fname, ext = line.strip().split() + data[pckg] = fname + elif read == 'period': + data[per].append(' '.join(line.strip().split())) + elif read == 'continuous': + data[fname].append(' '.join(line.strip().split())) + elif read == 'packagedata': + data['packagedata'].append(' '.join(line.strip().split())) + elif read == blockname: + data[blockname].append(' '.join(line.strip().split())) + return data
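For example, to read the options block of a hypothetical MODFLOW 6 SFR package file:

    from mfsetup.fileio import read_mf6_block

    options = read_mf6_block('shellmound.sfr', 'options')
    # returns a dict keyed by option name,
    # e.g. {'options': [], 'unit_conversion': ['1.0'], ...}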
+ + + +
+[docs] +def read_lak_ggo(f, model, + start_datetime='1970-01-01', + keep_only_last_timestep=True): + lake, hydroid = os.path.splitext(os.path.split(f)[1])[0].split('_') + lak_number = int(lake.strip('lak')) + df = read_ggofile(f, model=model, + start_datetime=start_datetime, + keep_only_last_timestep=keep_only_last_timestep) + df['lake'] = lak_number + df['hydroid'] = hydroid + return df
+ + + +
+[docs] +def read_ggofile(gagefile, model, + start_datetime='1970-01-01', + keep_only_last_timestep=True): + with open(gagefile) as src: + next(src) + namesstr = next(src) + names = namesstr.replace('DATA:', '').replace('.', '')\ + .replace('-', '_').replace('(', '').replace(')', '')\ + .replace('"','').strip().split() + names = [n.lower() for n in names] + df = pd.read_csv(src, skiprows=0, + header=None, + delim_whitespace=True, + names=names + ) + kstp = [] + kper = [] + for i, nstp in enumerate(model.dis.nstp.array): + for j in range(nstp): + kstp.append(j) + kper.append(i) + if len(df) == len(kstp) + 1: + df = df.iloc[1:].copy() + if df.time.iloc[0] == 1: + df['time'] -= 1 + df['kstp'] = kstp + df['kper'] = kper + if keep_only_last_timestep: + df = df.groupby('kper').last() + + start_ts = pd.Timestamp(start_datetime) + df['datetime'] = pd.to_timedelta(df.time, unit='D') + start_ts + df.index = df.datetime + return df
+ + + +
+[docs] +def add_version_to_fileheader(filename, model_info=None): + """Add modflow-setup, flopy and optionally model + version info to an existing file header denoted by + the comment characters ``#``, ``!``, or ``//``. + """ + tempfile = str(filename) + '.temp' + shutil.copy(filename, tempfile) + with open(tempfile) as src: + with open(filename, 'w') as dest: + if model_info is None: + header = '' + else: + header = f'# {model_info}\n' + read_header = True + for line in src: + if read_header and len(line.strip()) > 0 and \ + line.strip()[0] in {'#', '!', '//'}: + if model_info is None or model_info not in line: + header += line + elif read_header: + if 'modflow-setup' not in header: + headerlist = header.strip().split('\n') + if 'flopy' in header.lower(): + pos, flopy_info = [(i, s) for i, s in enumerate(headerlist) + if 'flopy' in s.lower()][0] + #flopy_info = header.strip().split('\n')[-1] + if 'version' not in flopy_info.lower(): + flopy_version = f'flopy version {flopy.__version__}' + flopy_info = flopy_info.lower().replace('flopy', + flopy_version) + headerlist[pos] = flopy_info + + #header = '\n'.join(header.split('\n')[:-2] + + # [flopy_info + '\n']) + mfsetup_text = '# via ' + pos += 1 # insert mfsetup header after flopy + else: + mfsetup_text = '# File created by ' + pos = -1 # insert mfsetup header at end + mfsetup_text += 'modflow-setup version {}'.format(mfsetup.__version__) + mfsetup_text += ' at {:%Y-%m-%d %H:%M:%S}'.format(dt.datetime.now()) + headerlist.insert(pos, mfsetup_text) + header = '\n'.join(headerlist) + '\n' + dest.write(header) + read_header = False + dest.write(line) + else: + dest.write(line) + os.remove(tempfile)
+ + + +
[docs]
def remove_file_header(filename):
    """Remove the header of a MODFLOW input file,
    for example, to allow comparison between files that have
    different headers but are otherwise the same."""
    backup_file = str(filename) + '.backup'
    shutil.copy(filename, backup_file)
    with open(backup_file) as src:
        with open(filename, 'w') as dest:
            for line in src:
                if not line.strip().startswith('#'):
                    dest.write(line)
    os.remove(backup_file)
\ No newline at end of file
diff --git a/_modules/mfsetup/grid.html b/_modules/mfsetup/grid.html
new file mode 100644
index 00000000..ac8bb342
--- /dev/null
+++ b/_modules/mfsetup/grid.html
@@ -0,0 +1,1545 @@
+mfsetup.grid — modflow-setup 0.5.0.post59+g65803fd documentation

Source code for mfsetup.grid

+"""
+Code for creating and working with regular (structured) grids. Focus is on the 2D representation of
+the grid in the Cartesian plane. For methods involving layering (in the vertical dimension), see
+the discretization module.
+"""
+import collections
+import time
+import warnings
+from pathlib import Path
+
+import geopandas as gpd
+import gisutils
+import numpy as np
+import pandas as pd
+import pyproj
+import shapely
+from flopy.discretization import StructuredGrid
+from flopy.mf6.utils.binarygrid_util import MfGrdFile
+from geopandas.geodataframe import GeoDataFrame
+from gisutils import df2shp, get_proj_str, project, shp2df
+from packaging import version
+from rasterio import Affine
+from scipy import spatial
+from shapely.geometry import MultiPolygon, Point, Polygon, box
+
+from mfsetup import fileio as fileio
+
+from .mf5to6 import get_model_length_units
+from .units import convert_length_units
+from .utils import get_input_arguments
+
+
+
[docs]
class MFsetupGrid(StructuredGrid):
    """Class representing a structured grid. Extends flopy.discretization.StructuredGrid
    to facilitate GIS operations in a projected (real-world) coordinate reference system (CRS).

    Parameters
    ----------
    delc : ndarray
        1D numpy array of grid spacing along a column (len nrow), in CRS units.
    delr : ndarray
        1D numpy array of grid spacing along a row (len ncol), in CRS units.
    top : ndarray
        2D numpy array of model top elevations
    botm : ndarray
        3D numpy array of model bottom elevations
    idomain : ndarray
        3D numpy array of model idomain values
    laycbd : ndarray
        (Modflow 2005 and earlier style models only):
        LAYCBD is a flag, with one value for each model layer,
        that indicates whether or not a layer has a Quasi-3D
        confining bed below it. 0 indicates no confining bed;
        a nonzero value indicates a confining bed.
        LAYCBD for the bottom layer must be 0.
    lenuni : int, optional
        MODFLOW length units variable. See
        `the Online Guide to MODFLOW <https://water.usgs.gov/ogw/modflow-nwt/MODFLOW-NWT-Guide/index.html?beginners_guide_to_modflow.htm>`_
    epsg : int, optional
        EPSG code for the model CRS
    proj_str : str, optional
        PROJ string for model CRS. In general, a spatial reference ID
        (such as an EPSG code) or Well-Known Text (WKT) string is preferred
        over a PROJ string (see References)
    prj : str, optional
        Filepath for ESRI projection file (containing wkt) describing model CRS
    wkt : str, optional
        Well-known text string describing model CRS.
    crs : obj, optional
        A Python int, dict, str, or pyproj.crs.CRS instance
        passed to :meth:`pyproj.crs.CRS.from_user_input`
        Can be any of:

        - PROJ string
        - Dictionary of PROJ parameters
        - PROJ keyword arguments for parameters
        - JSON string with PROJ parameters
        - CRS WKT string
        - An authority string [i.e. 'epsg:4326']
        - An EPSG integer code [i.e. 4326]
        - A tuple of ("auth_name": "auth_code") [i.e ('epsg', '4326')]
        - An object with a `to_wkt` method.
        - A :class:`pyproj.crs.CRS` class

    xoff, yoff : float, float, optional
        Model grid offset (location of lower left corner), by default 0.0, 0.0
    xul, yul : float, float, optional
        Model grid offset (location of upper left corner), by default 0.0, 0.0
    angrot : float, optional
        Rotation of the model grid, in degrees counter-clockwise about the lower left corner.
        Non-zero rotation values require input of xoff, yoff (xul, yul not supported).
+ By default 0.0 + + References + ---------- + https://proj.org/faq.html#what-is-the-best-format-for-describing-coordinate-reference-systems + + """ + + def __init__(self, delc, delr, top=None, botm=None, idomain=None, + laycbd=None, lenuni=None, binary_grid_file=None, + epsg=None, proj_str=None, prj=None, wkt=None, crs=None, + xoff=0.0, yoff=0.0, xul=None, yul=None, angrot=0.0): + super(MFsetupGrid, self).__init__(delc=np.array(delc), delr=np.array(delr), + top=top, botm=botm, idomain=idomain, + laycbd=laycbd, lenuni=lenuni, + epsg=epsg, proj4=proj_str, prj=prj, + xoff=xoff, yoff=yoff, angrot=angrot + ) + + # properties + self._crs = None + # pass all CRS representations through pyproj.CRS.from_user_input + # to convert to pyproj.CRS instance + self.crs = get_crs(crs=crs, epsg=epsg, prj=prj, wkt=wkt, proj_str=proj_str) + + # other CRS-related properties are set in the flopy Grid base class + self._vertices = None + self._polygons = None + self._dataframe = None + + # MODFLOW 6 binary grid file, for getting intercell connections + # (needed for reading cell budget files) + self.binary_grid_file = binary_grid_file + + # if no epsg, set from proj4 string if possible + #if epsg is None and proj_str is not None and 'epsg' in proj_str.lower(): + # self.epsg = int(proj_str.split(':')[1]) + + # in case the upper left corner is known but the lower left corner is not + if xul is not None and yul is not None: + xll = self._xul_to_xll(xul) + yll = self._yul_to_yll(yul) + self.set_coord_info(xoff=xll, yoff=yll, epsg=epsg, proj4=proj_str, angrot=angrot) + + def __eq__(self, other): + if not isinstance(other, StructuredGrid): + return False + if not np.allclose(other.xoffset, self.xoffset): + return False + if not np.allclose(other.yoffset, self.yoffset): + return False + if not np.allclose(other.angrot, self.angrot): + return False + if not other.crs == self.crs: + return False + if not np.array_equal(other.delr, self.delr): + return False + if not np.array_equal(other.delc, self.delc): + return False + return True + + def __repr__(self): + txt = '' + if self.nlay is not None: + txt += f'{self.nlay:d} layer(s), ' + txt += f'{self.nrow:d} row(s), {self.ncol:d} column(s)\n' + txt += (f'delr: [{self.delr[0]:.2f}...{self.delr[-1]:.2f}]' + f' {self.units}\n' + f'delc: [{self.delc[0]:.2f}...{self.delc[-1]:.2f}]' + f' {self.units}\n' + ) + txt += f'CRS: {self.crs}\n' + txt += f'length units: {self.length_units}\n' + txt += f'xll: {self.xoffset}; yll: {self.yoffset}; rotation: {self.rotation}\n' + txt += 'Bounds: {}\n'.format(self.extent) + return txt + + def __str__(self): + return StructuredGrid.__repr__(self) + + @property + def xul(self): + x0 = self.xyedges[0][0] + y0 = self.xyedges[1][0] + x0r, y0r = self.get_coords(x0, y0) + return x0r + + @property + def yul(self): + x0 = self.xyedges[0][0] + y0 = self.xyedges[1][0] + x0r, y0r = self.get_coords(x0, y0) + return y0r + + @property + def bbox(self): + """Shapely polygon bounding box of the model grid.""" + return get_grid_bounding_box(self) + + @property + def bounds(self): + """Grid bounding box in order used by shapely. + """ + x0, x1, y0, y1 = self.extent + return x0, y0, x1, y1 + + @property + def size(self): + if self.nlay is None: + return self.nrow * self.ncol + return self.nlay * self.nrow * self.ncol + + @property + def transform(self): + """Rasterio Affine object (same as transform attribute of rasters). 
+ """ + return get_transform(self) + + @property + def crs(self): + """pyproj.crs.CRS instance describing the coordinate reference system + for the model grid. + """ + return self._crs + + @crs.setter + def crs(self, crs): + """Get a pyproj CRS instance from various inputs + (epsg, proj string, wkt, etc.). + + crs : obj, optional + Coordinate reference system for model grid. + A Python int, dict, str, or pyproj.crs.CRS instance + passed to the pyproj.crs.from_user_input + See http://pyproj4.github.io/pyproj/stable/api/crs/crs.html#pyproj.crs.CRS.from_user_input. + Can be any of: + - PROJ string + - Dictionary of PROJ parameters + - PROJ keyword arguments for parameters + - JSON string with PROJ parameters + - CRS WKT string + - An authority string [i.e. 'epsg:4326'] + - An EPSG integer code [i.e. 4326] + - A tuple of ("auth_name": "auth_code") [i.e ('epsg', '4326')] + - An object with a `to_wkt` method. + - A :class:`pyproj.crs.CRS` class + """ + crs = get_crs(crs=crs) + self._crs = crs + + @property + def proj_str(self): + if self.crs is not None: + return self.crs.to_proj4() + + @property + def wkt(self): + if self.crs is not None: + return self.crs.to_wkt(pretty=True) + + @property + def length_units(self): + return get_crs_length_units(self.crs) + + @property + def vertices(self): + """Vertices for grid cell polygons.""" + if self._vertices is None: + self._set_vertices() + return self._vertices + + @property + def polygons(self): + """Vertices for grid cell polygons.""" + if self._polygons is None: + self._set_polygons() + return self._polygons + + @property + def dataframe(self): + """Pandas DataFrame of grid cell polygons + with i, j locations.""" + if self._dataframe is None: + self._dataframe = self.get_dataframe(layers=True) + return self._dataframe + + @property + def intercell_connections(self): + """Pandas DataFrame of flow connections between grid cells.""" + if self._intercell_connections is None: + self._intercell_connections = self.get_intercell_connections() + return self._intercell_connections + + @property + def top(self): + return self._top + + @top.setter + def top(self, top): + self._top = top + + @property + def botm(self): + return self._botm + + @botm.setter + def botm(self, botm): + if (self._StructuredGrid__nrow, self._StructuredGrid__ncol) != botm.shape[1:]: + raise ValueError("botm array shape is inconsistent with the model grid") + self._StructuredGrid__nlay = botm.shape[0] + if self._laycbd.size != botm.shape[0]: + self._laycbd = np.zeros(botm.shape[0], dtype=int) + self._botm = botm + +
[docs]
    def get_intercell_connections(self, binary_grid_file=None):
        """Get a DataFrame of intercell flow connections
        (needed for reading cell budget files), from a
        MODFLOW 6 binary grid file.

        Parameters
        ----------
        binary_grid_file : str or pathlike
            MODFLOW 6 binary grid file

        Returns
        -------
        df : DataFrame
            Intercell connections, with the following columns:

            === =============================================================
            n   from zero-based node number
            kn  from zero-based layer
            in  from zero-based row
            jn  from zero-based column
            m   to zero-based node number
            km  to zero-based layer
            im  to zero-based row
            jm  to zero-based column
            === =============================================================

        Raises
        ------
        ValueError
            If no binary grid file has been supplied or attached to the grid.
        """
        if binary_grid_file is not None:
            self.binary_grid_file = binary_grid_file
        if self.binary_grid_file is None:
            raise ValueError("A MODFLOW 6 binary_grid_file "
                             "is needed to get intercell connections. "
                             "Either supply a binary_grid_file argument to "
                             "get_intercell_connections, or "
                             "re-instantiate the grid with a binary_grid_file argument.")
        self._intercell_connections = get_intercell_connections(self.binary_grid_file)
        return self._intercell_connections
+ + +
[docs]
    def get_dataframe(self, layers=True):
        """Get a pandas DataFrame of grid cell polygons
        with i, j locations.

        Parameters
        ----------
        layers : bool
            If True, return a row for each k, i, j location
            and a 'k' column; if False, only return i, j
            locations with no 'k' column. By default, True

        Returns
        -------
        df : DataFrame
            Pandas DataFrame with k, i, j and geometry column
            with a shapely polygon representation of each model cell.
        """
        # get dataframe of model grid cells, in the grid's CRS
        i, j = np.indices((self.nrow, self.ncol))
        geoms = self.polygons
        df = gpd.GeoDataFrame({'i': i.ravel(),
                               'j': j.ravel(),
                               'geometry': geoms}, crs=self.crs)
        if layers and self.nlay is not None:
            # add layer information
            dfs = []
            for k in range(self.nlay):
                layer_df = df.copy()
                layer_df['k'] = k
                dfs.append(layer_df)
            df = pd.concat(dfs)
            df = df[['k', 'i', 'j', 'geometry']].copy()
        return df
+ + +
+[docs] + def write_bbox_shapefile(self, filename='grid_bbox.shp'): + write_bbox_shapefile(self, filename)
+ + +
+[docs] + def write_shapefile(self, filename='grid.shp'): + i, j = np.indices((self.nrow, self.ncol)) + df = pd.DataFrame({'node': list(range(len(self.polygons))), + 'i': i.ravel(), + 'j': j.ravel(), + 'geometry': self.polygons + }) + df2shp(df, filename, epsg=self.epsg, proj_str=self.proj_str)
+ + + def _set_polygons(self): + """ + Create shapely polygon for each grid cell + """ + print('creating shapely Polygons of grid cells...') + t0 = time.time() + self._polygons = [Polygon(verts) for verts in self.vertices] + print("finished in {:.2f}s\n".format(time.time() - t0)) + + # stuff to conform to sr + @property + def length_multiplier(self): + return convert_length_units(self.lenuni, + 2) + + @property + def rotation(self): + return self.angrot + +
[docs]
    def get_vertices(self, i, j):
        """Get vertices for a single cell or sequence of i, j locations."""
        return self._cell_vert_list(i, j)
+ + + def _set_vertices(self): + """ + Populate vertices for the whole grid + """ + jj, ii = np.meshgrid(range(self.ncol), range(self.nrow)) + jj, ii = jj.ravel(), ii.ravel() + self._vertices = self._cell_vert_list(ii, jj)
+ + + +# definition of national hydrogeologic grid +national_hydrogeologic_grid_parameters = { + 'xul': -2553045.0, # upper left corner + 'yul': 3907285.0, + 'height': 4000, + 'width': 4980, + 'dx': 1000, + 'dy': 1000, + 'rotation': 0. +} + + +
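A minimal sketch instantiating MFsetupGrid (spacing, offsets and CRS are hypothetical):

    import numpy as np
    from mfsetup.grid import MFsetupGrid

    # 10 x 10 grid of 1,000-meter cells in EPSG 5070 (CONUS Albers)
    grid = MFsetupGrid(delc=np.ones(10) * 1000., delr=np.ones(10) * 1000.,
                       xoff=500000., yoff=1200000., epsg=5070)
    print(grid.bounds)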
+[docs] +def get_crs(crs=None, epsg=None, prj=None, wkt=None, proj_str=None): + """Get a pyproj CRS instance from various CRS representations. + """ + if crs is not None: + crs = pyproj.CRS.from_user_input(crs) + elif epsg is not None: + crs = pyproj.CRS.from_epsg(epsg) + elif prj is not None: + with open(prj) as src: + wkt = src.read() + crs = pyproj.CRS.from_wkt(wkt) + elif wkt is not None: + crs = pyproj.CRS.from_wkt(wkt) + elif proj_str is not None: + crs = pyproj.CRS.from_string(proj_str) + else: # crs is None + return + # if possible, have pyproj try to find the closest + # authority name and code matching the crs + # so that input from epsg codes, proj strings, and prjfiles + # results in equal pyproj_crs instances + authority = crs.to_authority() + if authority is not None: + crs = pyproj.CRS.from_user_input(crs.to_authority()) + return crs
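For example, a PROJ string input may resolve to its EPSG authority code:

    from mfsetup.grid import get_crs

    crs = get_crs(proj_str='+proj=utm +zone=15 +datum=NAD83 +units=m +no_defs')
    print(crs)  # expected to resolve to EPSG:26915 (NAD83 / UTM zone 15N)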
+ + + +
+[docs] +def get_crs_length_units(crs): + length_units = crs.axis_info[0].unit_name + if 'foot' in length_units.lower() or 'feet' in length_units.lower(): + length_units = 'feet' + elif 'metre' in length_units.lower() or 'meter' in length_units.lower(): + length_units = 'meters' + return length_units
+ + + +
+[docs] +def get_ij(grid, x, y, local=False): + """Return the row and column of a point or sequence of points + in real-world coordinates. + + Parameters + ---------- + grid : flopy.discretization.StructuredGrid instance + x : scalar or sequence of x coordinates + y : scalar or sequence of y coordinates + local: bool (optional) + If True, x and y are in local coordinates (defaults to False) + + Returns + ------- + i : row or sequence of rows (zero-based) + j : column or sequence of columns (zero-based) + """ + xc, yc = grid.xcellcenters, grid.ycellcenters + if local: + x, y = grid.get_coords(x, y) + print('getting i, j locations...') + t0 = time.time() + xyc = np.array([xc.ravel(), yc.ravel()]).transpose() + pxy = np.array([x, y]).transpose() + kdtree = spatial.KDTree(xyc) + distance, loc = kdtree.query(pxy) + i, j = np.unravel_index(loc, (grid.nrow, grid.ncol)) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return i, j
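A small self-contained sketch (grid parameters are hypothetical):

    import numpy as np
    from mfsetup.grid import MFsetupGrid, get_ij

    grid = MFsetupGrid(delc=np.ones(10) * 1000., delr=np.ones(10) * 1000.,
                       xoff=0., yoff=0., epsg=5070)
    # nearest cell center to (2500, 7500) is row 2, column 2
    # (row 0 is at the top of the grid)
    i, j = get_ij(grid, 2500., 7500.)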
+ + + +
[docs]
def get_kij_from_node3d(node3d, nrow, ncol):
    """For a consecutive cell number in layer-row-column order
    (row-major within each layer), get the zero-based
    layer, row, column position.
    """
    node2d = node3d % (nrow * ncol)
    k = node3d // (nrow * ncol)
    i = node2d // ncol
    j = node2d % ncol
    return k, i, j
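A worked example for a grid with 10 rows and 10 columns:

    from mfsetup.grid import get_kij_from_node3d

    # node 215: layer = 215 // 100 = 2; cell 15 within the layer
    # -> row 15 // 10 = 1, column 15 % 10 = 5
    k, i, j = get_kij_from_node3d(215, nrow=10, ncol=10)
    assert (k, i, j) == (2, 1, 5)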
+ + + +
+[docs] +def get_grid_bounding_box(modelgrid): + """Get bounding box of potentially rotated modelgrid + as a shapely Polygon object. + + Parameters + ---------- + modelgrid : flopy.discretization.StructuredGrid instance + """ + mg = modelgrid + #x0 = mg.xedge[0] + #x1 = mg.xedge[-1] + #y0 = mg.yedge[0] + #y1 = mg.yedge[-1] + + x0 = mg.xyedges[0][0] + x1 = mg.xyedges[0][-1] + y0 = mg.xyedges[1][0] + y1 = mg.xyedges[1][-1] + + # upper left point + #x0r, y0r = mg.transform(x0, y0) + x0r, y0r = mg.get_coords(x0, y0) + + # upper right point + #x1r, y1r = mg.transform(x1, y0) + x1r, y1r = mg.get_coords(x1, y0) + + # lower right point + #x2r, y2r = mg.transform(x1, y1) + x2r, y2r = mg.get_coords(x1, y1) + + # lower left point + #x3r, y3r = mg.transform(x0, y1) + x3r, y3r = mg.get_coords(x0, y1) + + return Polygon([(x0r, y0r), + (x1r, y1r), + (x2r, y2r), + (x3r, y3r), + (x0r, y0r)])
+ + + +
[docs]
def get_nearest_point_on_grid(x, y, transform=None,
                              xul=None, yul=None,
                              dx=None, dy=None, rotation=0.,
                              offset='center', op=None):
    """

    Parameters
    ----------
    x : float
        x-coordinate of point
    y : float
        y-coordinate of point
    transform : Affine instance, optional
        Affine object instance describing grid
    xul : float
        x-coordinate of upper left corner of the grid
    yul : float
        y-coordinate of upper left corner of the grid
    dx : float
        grid spacing in the x-direction (along rows)
    dy : float
        grid spacing in the y-direction (along columns)
    rotation : float
        grid rotation about the upper left corner, in degrees clockwise from the x-axis
    offset : str, {'center', 'edge'}
        Whether the point on the grid represents a cell center or corner (edge). This
        argument is only used if xul, yul, dx, dy and rotation are supplied. If
        an Affine transform instance is supplied, it is assumed to already incorporate
        the offset.
    op : function, optional
        Function to convert fractional pixels to whole numbers (np.round, np.floor, np.ceil).
        Defaults to np.round if offset == 'center'; otherwise defaults to np.floor.

    Returns
    -------
    x_nearest, y_nearest : float
        Coordinates of nearest grid cell center.

    """
    # get the closest (fractional) grid cell location
    # (in case the grid is rotated)
    if transform is None:
        transform = Affine(dx, 0., xul,
                           0., dy, yul) * \
                    Affine.rotation(rotation)
        if offset == 'center':
            transform *= Affine.translation(0.5, 0.5)
    x_raster, y_raster = ~transform * (x, y)

    # only apply the default if no op was supplied
    if op is None:
        op = np.round if offset == 'center' else np.floor

    j = int(op(x_raster))
    i = int(op(y_raster))

    x_nearest, y_nearest = transform * (j, i)
    return x_nearest, y_nearest
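A sketch with hypothetical grid parameters (note dy is negative because y decreases moving down the rows from the upper left corner):

    from mfsetup.grid import get_nearest_point_on_grid

    x_near, y_near = get_nearest_point_on_grid(
        1350., 7810., xul=0., yul=10000., dx=1000., dy=-1000.)
    # nearest cell center: (1500.0, 7500.0)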
+ + + +
+[docs] +def get_point_on_national_hydrogeologic_grid(x, y, offset='edge', **kwargs): + """Given an x, y location representing the upper left + corner of a model grid, return the upper left corner + of the cell in the National Hydrogeologic Grid that + contains it.""" + params = get_input_arguments(national_hydrogeologic_grid_parameters, get_nearest_point_on_grid) + params.update(kwargs) + return get_nearest_point_on_grid(x, y, offset=offset, **params)
+ + + +
+[docs] +def write_bbox_shapefile(modelgrid, outshp): + outline = get_grid_bounding_box(modelgrid) + gdf = gpd.GeoDataFrame({'desc': ['model bounding box'], + 'geometry': [outline]}, + crs=modelgrid.crs) + gdf.to_file(outshp, index=False)
+ + + +
[docs]
def rasterize(feature, grid, id_column=None,
              include_ids=None, exclude_ids=None, names_column=None,
              crs=None, **kwargs):
    """Rasterize a feature onto the model grid, using
    the rasterio.features.rasterize method. Features are intersected
    if they contain the cell center.

    Parameters
    ----------
    feature : str (shapefile path), list of shapely objects,
              or dataframe with geometry column
    id_column : str
        Column with unique integer identifying each feature; values
        from this column will be assigned to the output raster.
    include_ids : sequence
        Subset of IDs in id_column to include
    exclude_ids : sequence
        Subset of IDs in id_column to exclude
    names_column : str, optional
        By default, the IDs in id_column, or sequential integers
        are returned. This option allows another column of strings
        to be specified (i.e. feature names); in which case
        an array of the strings will be returned.
    grid : grid.StructuredGrid instance
    crs : obj
        A Python int, dict, str, or pyproj.crs.CRS instance
        passed to :meth:`pyproj.crs.CRS.from_user_input`
        Can be any of:

        - PROJ string
        - Dictionary of PROJ parameters
        - PROJ keyword arguments for parameters
        - JSON string with PROJ parameters
        - CRS WKT string
        - An authority string [i.e. 'epsg:4326']
        - An EPSG integer code [i.e. 4326]
        - A tuple of ("auth_name": "auth_code") [i.e ('epsg', '4326')]
        - An object with a `to_wkt` method.
        - A :class:`pyproj.crs.CRS` class
    **kwargs : keyword arguments to rasterio.features.rasterize()
        https://rasterio.readthedocs.io/en/stable/api/rasterio.features.html

    Returns
    -------
    2D numpy array with intersected values

    """
    try:
        from rasterio import Affine, features
    except ImportError:
        print('This method requires rasterio.')
        return

    if crs is not None:
        if version.parse(gisutils.__version__) < version.parse('0.2.0'):
            raise ValueError("The rasterize function requires gisutils >= 0.2")
        from gisutils import get_authority_crs
        crs = get_authority_crs(crs)

    trans = get_transform(grid)

    if isinstance(feature, (str, Path)):
        df = gpd.read_file(feature)
    elif isinstance(feature, pd.DataFrame):
        df = feature.copy()
        df = gpd.GeoDataFrame(df, crs=crs)
    elif isinstance(feature, collections.abc.Iterable):
        # list of shapefiles
        if isinstance(feature[0], (str, Path)):
            # use shp2df to read multiple shapefiles
            # then convert to gdf
            df = shp2df(feature, dest_crs=grid.crs)
            df = gpd.GeoDataFrame(df, crs=grid.crs)
        else:
            df = pd.DataFrame({'geometry': feature})
            df = gpd.GeoDataFrame(df, crs=crs)
    elif not isinstance(feature, collections.abc.Iterable):
        df = pd.DataFrame({'geometry': [feature]})
        df = gpd.GeoDataFrame(df, crs=crs)
    else:
        print('unrecognized feature input')
        return

    # reproject to grid crs
    if df.crs is not None:
        orig_crs = df.crs
        try:
            df.to_crs(grid.crs, inplace=True)
        except Exception:
            # retry once; reprojection can fail intermittently,
            # for example on networks that intercept SSL traffic
            df.to_crs(grid.crs, inplace=True)
        if not df['geometry'].is_valid.all():
            df['geometry'] = [g.buffer(0) for g in df.geometry]
        geoms_are_valid = df['geometry'].is_valid.all() & \
                          (not df.geometry.is_empty.any()) & \
                          np.isfinite(df.geometry.bounds.sum().sum())
        if not geoms_are_valid:
            raise ValueError('Something went wrong with reprojecting '
                             f'the input features from\n{orig_crs}\nto\n{grid.crs}\n'
                             'Check the input feature and model grid projections. '
                             'If you are on a network that requires special '
                             'SSL authentication, try running this operation '
                             'again off-network.'
+ ) + + # subset to include_ids + if id_column is not None and include_ids is not None: + df = df.loc[df[id_column].isin(include_ids)].copy() + if id_column is not None and exclude_ids is not None: + df = df.loc[~df[id_column].isin(exclude_ids)].copy() + # create list of GeoJSON features, with unique value for each feature + if id_column is None: + numbers = list(range(1, len(df)+1)) + # if IDs are strings, get a number for each one + # pd.DataFrame.unique() generally preserves order + elif df[id_column].dtype == object: + unique_values = df[id_column].unique() + values = dict(zip(unique_values, range(1, len(unique_values) + 1))) + numbers = [values[n] for n in df[id_column]] + else: + # enforce integers; very long NHDPlusIDs + # can cause trouble if they are in float64 format + numbers = df[id_column].values.astype('int64') + # add one if the lowest number is 0 + # (zero indicates non-intersected raster cells) + if np.min(numbers) == 0: + numbers += 1 + elif np.min(numbers) < 0: + raise ValueError("id_column must have positive integers!") + numbers = list(numbers) + + geoms = list(zip(df.geometry, numbers)) + result = features.rasterize(geoms, + out_shape=(grid.nrow, grid.ncol), + transform=trans, **kwargs) + assert result.sum(axis=(0, 1)) != 0, "Nothing was intersected!" + if names_column is not None: + names_lookup = dict(zip(numbers, df[names_column])) + result = [names_lookup.get(n, '') for n in result.flat] + result = np.reshape(result, (grid.nrow, grid.ncol)) + result = result.astype(object) + return result
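A hedged sketch, assuming a MFsetupGrid instance grid and a hypothetical shapefile of irrigated fields with a field_id attribute:

    # cells whose centers fall inside a field polygon receive that
    # field's ID; all other cells are 0
    field_ids = rasterize('fields.shp', grid, id_column='field_id')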
+[docs]
+def snap_to_cell_corner(x, y, modelgrid, corner='nearest'):
+    """Move an x, y location to the nearest cell corner on
+    a rectilinear modelgrid.
+
+    Parameters
+    ----------
+    x : float
+        x coordinate in coordinate reference system of modelgrid.
+    y : float
+        y coordinate in coordinate reference system of modelgrid.
+    modelgrid : Flopy StructuredGrid instance
+    corner : str, optional
+        'upper left', 'lower right' or 'nearest', by default 'nearest'
+
+    Returns
+    -------
+    x_corner, y_corner
+        x, y location of cell corner in coordinate reference system
+        of modelgrid.
+
+    Raises
+    ------
+    ValueError
+        If x, y are outside of the model domain, or if an invalid
+        cell corner is specified.
+    """
+    if corner == 'nearest':
+        vx, vy, vz = modelgrid.xyzvertices
+        loc = np.argmin(np.sqrt((x-vx)**2 + (y-vy)**2))
+        x_corner, y_corner = vx.flat[loc], vy.flat[loc]
+        return x_corner, y_corner
+
+    x_model, y_model = modelgrid.get_local_coords(x, y)
+
+    # back the point slightly away from the corner of a cell;
+    # we may not be able to get the i, j location from Flopy
+    # if the x, y is initially very close to the cell corner
+    if corner == 'upper left':
+        x_model += 1e-6
+        y_model -= 1e-6
+    elif corner == 'lower right':
+        x_model -= 1e-6
+        y_model += 1e-6
+    else:
+        raise ValueError("Only snapping to 'upper left' and "
+                         "'lower right' corners is supported")
+    # get the corresponding cell
+    pi, pj = modelgrid.intersect(x_model, y_model, local=True, forgive=True)
+    if any(np.isnan([pi, pj])):
+        raise ValueError(f"Point {x:.2f}, {y:.2f} "
+                         "is outside of the model domain!")
+    # find the vertices of that cell
+    verts = np.array(modelgrid.get_cell_vertices(pi, pj))
+    # flip to model space to easily locate the corner
+    verts_model_space = np.array([modelgrid.get_local_coords(xv, yv)
+                                  for xv, yv in verts])
+    if corner == 'upper left':
+        x_corner_model = verts_model_space[:, 0].min()
+        y_corner_model = verts_model_space[:, 1].max()
+    elif corner == 'lower right':
+        x_corner_model = verts_model_space[:, 0].max()
+        y_corner_model = verts_model_space[:, 1].min()
+    # finally, back to world space
+    x_corner, y_corner = modelgrid.get_coords(x_corner_model, y_corner_model)
+    return x_corner, y_corner
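For example, a sketch with illustrative coordinates and spacing; by default the point is simply moved to the nearest grid vertex:

import numpy as np
from flopy.discretization import StructuredGrid
from mfsetup.grid import snap_to_cell_corner

# an unrotated 20 x 20 parent grid with 500-unit spacing (illustrative values)
parent_grid = StructuredGrid(delc=np.full(20, 500.), delr=np.full(20, 500.),
                             top=np.zeros((20, 20)),
                             botm=np.zeros((1, 20, 20)) - 10.,
                             xoff=0., yoff=0.)
x_corner, y_corner = snap_to_cell_corner(1234., 5678., parent_grid)
# nearest vertex of the 500-unit grid: (1000.0, 5500.0)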
+[docs]
+def setup_structured_grid(xoff=None, yoff=None, xul=None, yul=None,
+                          nrow=None, ncol=None, nlay=None,
+                          dxy=None, delr=None, delc=None,
+                          top=None, botm=None,
+                          rotation=0.,
+                          parent_model=None, snap_to_parent=True, snap_to_NHG=False,
+                          features=None, features_shapefile=None,
+                          id_column=None, include_ids=None,
+                          buffer=1000,
+                          crs=None, epsg=None, prj=None, wkt=None,
+                          model_length_units=None,
+                          grid_file='grid.json',
+                          bbox_shapefile=None, **kwargs):
+    """Set up a structured grid, either from an origin and specified
+    dimensions, or from a rectangular area of specified discretization
+    surrounding one or more features of interest.
+
+    Parameters
+    ----------
+    xoff, yoff : float, optional
+        x and y coordinates of the lower left corner of the grid,
+        in the grid coordinate reference system, by default None
+    xul, yul : float, optional
+        x and y coordinates of the upper left corner of the grid,
+        in the grid coordinate reference system, by default None
+    nrow, ncol, nlay : int, optional
+        Number of rows, columns and layers, by default None
+    dxy : float, optional
+        Specified uniform row/column spacing, in model grid
+        (coordinate reference system) units, by default None
+    delr : scalar or sequence, optional
+        Column spacing along a row, in model grid
+        (coordinate reference system) units,
+        by default None
+    delc : scalar or sequence, optional
+        Row spacing along a column, in model grid
+        (coordinate reference system) units,
+        by default None
+    top : array-like, optional
+        Model top elevations, by default None
+    botm : array-like, optional
+        Model layer bottom elevations, by default None
+    rotation : float, optional
+        Grid rotation, in degrees counter-clockwise
+        about the lower left corner, by default 0.
+    parent_model : flopy model instance, optional
+        Parent model to align the grid with, by default None
+    snap_to_parent : bool, optional
+        Whether to align the grid with the parent model grid,
+        by default True
+    snap_to_NHG : bool, optional
+        Whether to align the grid with the USGS National
+        Hydrogeologic Grid, by default False
+    features : shapely geometry or list of geometries, optional
+        Feature(s) defining the area of interest, by default None
+    features_shapefile : str or pathlike, optional
+        Shapefile with feature(s) defining the area of interest,
+        by default None
+    id_column : str, optional
+        Column in features_shapefile identifying the features,
+        by default None
+    include_ids : sequence, optional
+        Feature IDs in id_column to include in the area of interest,
+        by default None
+    buffer : float, optional
+        Distance around the feature(s) of interest to include in the
+        grid, in coordinate reference system units, by default 1000
+    crs : obj, optional
+        Coordinate reference system input accepted by
+        :meth:`pyproj.crs.CRS.from_user_input`, by default None
+    epsg : int, optional
+        EPSG code (alternative to crs), by default None
+    prj : str, optional
+        Projection file (alternative to crs), by default None
+    wkt : str, optional
+        Well-known text (alternative to crs), by default None
+    model_length_units : str, optional
+        Model length units, by default None
+    grid_file : str, optional
+        JSON output file for the grid information, by default 'grid.json'
+    bbox_shapefile : str, optional
+        Output shapefile for the grid bounding box, by default None
+
+    Returns
+    -------
+    modelgrid : MFsetupGrid instance
+
+    Raises
+    ------
+    ValueError
+        If the supplied grid specifications are incomplete, inconsistent,
+        or incompatible with the snap_to_NHG or parent grid alignment options.
+    """
+    print('setting up model grid...')
+    t0 = time.time()
+
+    if parent_model is None:
+        snap_to_parent = False
+    elif not np.allclose(parent_model.modelgrid.rotation, rotation):
+        snap_to_parent = False
+
+    # make sure crs is populated, then get CRS units for the grid
+    crs = get_crs(crs=crs, epsg=epsg, prj=prj, wkt=wkt)
+    if crs is None and parent_model is not None:
+        crs = parent_model.modelgrid.crs
+
+    grid_units = get_crs_length_units(crs)
+    if grid_units not in {'feet', 'meters'}:
+        raise ValueError(f'unrecognized CRS units {grid_units}: CRS must be projected in feet or meters')
+
+    # conversion from model length units
+    # to model grid (coordinate reference system) units
+    to_grid_units_inset = convert_length_units(model_length_units, grid_units)
+
+    regular = True
+    if dxy is not None:
+        delr_grid = np.round(dxy, 4)  # dxy is specified in CRS units
+        delc_grid = delr_grid
+    if delr is not None:
+        # delr is expected to be in model grid (CRS) units
+        delr_grid = np.round(np.array(delr), 4)
+        if not np.isscalar(delr_grid):
+            if len(set(delr_grid)) == 1:
+                delr_grid = delr_grid[0]
+            else:
+                regular = False
+    if delc is not None:
+        delc_grid = np.round(np.array(delc), 4)
+        if not np.isscalar(delc_grid):
+            if len(set(delc_grid)) == 1:
+                delc_grid = delc_grid[0]
+            else:
+                regular = False
+
+    # option 1: make grid from xoff, yoff and specified dimensions
+    if xoff is not None and yoff is not None:
+        assert nrow is not None and ncol is not None, \
+            "Need to specify nrow and ncol if specifying xoffset and yoffset."
+        if regular:
+            height_grid = np.round(delc_grid * nrow, 4)
+            width_grid = np.round(delr_grid * ncol, 4)
+        else:
+            height_grid = np.sum(delc_grid)
+            width_grid = np.sum(delr_grid)
+
+        # optionally align the grid with the national hydrogeologic grid;
+        # grids snapping to the NHG must have spacings that are a factor of 1 km
+        if snap_to_NHG:
+            if rotation != 0:
+                raise ValueError(f'rotation = {rotation}: snap_to_NHG option '
+                                 'is only compatible with unrotated grids!')
+            if not (regular and np.allclose(1000 % delc_grid, 0, atol=1e-4)):
+                raise ValueError('snap_to_NHG option is only compatible with '
+                                 'uniformly spaced structured grids, with '
+                                 'spacings that are a factor of 1 km!')
+            x, y = get_point_on_national_hydrogeologic_grid(xoff, yoff,
+                                                            offset='edge', op=np.floor)
+            xoff = x
+            yoff = y
+
+        # make a bounding box so that other important corners can be specified
+        lower_left_corner = Point(xoff, yoff)
+        unrotated_bbox = box(xoff, yoff, xoff + width_grid, yoff + height_grid)
+        # get the upper left corner
+        ul = shapely.affinity.rotate(Point(xoff, yoff + height_grid), rotation,
+                                     origin=lower_left_corner)
+        xul, yul = ul.x, ul.y
+
+    # option 2: make grid using buffered feature bounding box
+    else:
+        # read in the feature from a shapefile
+        if features is None and features_shapefile is not None:
+            bbox_filter = None
+            if parent_model is not None:
+                pmg_l, pmg_r, pmg_b, pmg_t = parent_model.modelgrid.extent
+                bbox_filter = gpd.GeoSeries(box(pmg_l, pmg_b, pmg_r, pmg_t),
+                                            crs=parent_model.modelgrid.crs)
+            df = gpd.read_file(features_shapefile, bbox=bbox_filter)
+            if id_column is not None and include_ids is not None:
+                datatype = set(type(s) for s in include_ids)
+                if len(datatype) > 1:
+                    raise ValueError(f"Inconsistent datatypes in include_ids: {include_ids}")
+                datatype = datatype.pop()
+                dtype = {id_column: datatype}
+                df = df.loc[df[id_column].astype(dtype).isin(include_ids)]
+            # inexplicable shapely.errors.GEOSException: IllegalArgumentException:
+            # Points of LinearRing do not form a closed linestring
+            # error resolved by calling to_crs twice
+            # (for mfsetup/tests/test_grid.py::test_grid_crs_units[3696-feet-meters])
+            try:
+                df.to_crs(crs, inplace=True)
+            except Exception:
+                df.to_crs(crs, inplace=True)
+            # use all features by default
+            features = df.geometry.tolist()
+        elif features is None and features_shapefile is None:
+            raise ValueError(
+                "setup_grid: need one of xoff/yoff, xul/yul, features_shapefile or "
+                "features inputs")
+        # alternatively, accept features as an argument;
+        # convert multiple features to a MultiPolygon
+        if isinstance(features, list):
+            if len(features) > 1:
+                features = MultiPolygon(features)
+            else:
+                features = features[0]
+
+        # size the grid based on the bbox for features:
+        # buffer and then unrotate the feature
+        buffered_features = features.buffer(buffer)
+        unrotated_features = shapely.affinity.rotate(buffered_features, -rotation,
+                                                     origin=buffered_features.centroid)
+        unrotated_bbox = box(*unrotated_features.bounds)
+
+        # get the initial grid height and width
+        height_grid = np.round(unrotated_bbox.bounds[3] - unrotated_bbox.bounds[1])
+        width_grid = np.round(unrotated_bbox.bounds[2] - unrotated_bbox.bounds[0])
+        # initial rows and columns (prior to snapping, if specified)
+        nrow = int(np.ceil(height_grid / delc_grid))
+        ncol = int(np.ceil(width_grid / delr_grid))
+        # correct the height and width to be consistent with nrow, ncol
+        height_grid = nrow * delc_grid
+        width_grid = ncol * delr_grid
+        # make a new box with the corrected height
+        unrotated_bbox = box(unrotated_bbox.bounds[0], unrotated_bbox.bounds[1],
+                             unrotated_bbox.bounds[0] + width_grid,
+                             unrotated_bbox.bounds[1] + height_grid)
+        # get important corners
+        # upper left corner
+        xul_ur, yul_ur = unrotated_bbox.bounds[0], unrotated_bbox.bounds[3]
+        ul = shapely.affinity.rotate(Point(xul_ur, yul_ur), rotation,
+                                     origin=buffered_features.centroid)
+        xul, yul = ul.x, ul.y
+        # lower left corner
+        xll_ur, yll_ur = unrotated_bbox.bounds[0], unrotated_bbox.bounds[1]
+        lower_left_corner = shapely.affinity.rotate(
+            Point(xll_ur, yll_ur), rotation, origin=buffered_features.centroid)
+        # lower right corner
+        xlr_ur, ylr_ur = unrotated_bbox.bounds[2], unrotated_bbox.bounds[1]
+        lower_right_corner = shapely.affinity.rotate(
+            Point(xlr_ur, ylr_ur), rotation, origin=buffered_features.centroid)
+        # xoff, yoff here for consistency with flopy model grid language
+        xoff, yoff = lower_left_corner.x, lower_left_corner.y
+
+    # align model with parent grid if there is a parent model
+    # (and not snapping to national hydrogeologic grid);
+    # for grids created from a buffer around a feature
+    # (without a pre-defined number of rows and columns)
+    # this likely means increasing nrow and ncol
+    if parent_model is not None and (snap_to_parent and not snap_to_NHG):
+
+        if features is not None:
+            # snap the upper left corner
+            # to ensure that grid perimeter is at least buffer distance from feature(s)
+            xul, yul = snap_to_cell_corner(xul, yul, parent_model.modelgrid,
+                                           corner='upper left')
+            ul_ur = shapely.affinity.rotate(Point(xul, yul),
+                                            -rotation,
+                                            origin=buffered_features.centroid)
+            # snap the lower right corner for the same reason
+            xlr, ylr = snap_to_cell_corner(lower_right_corner.x, lower_right_corner.y,
+                                           parent_model.modelgrid,
+                                           corner='lower right')
+            lr_ur = shapely.affinity.rotate(Point(xlr, ylr),
+                                            -rotation,
+                                            origin=buffered_features.centroid)
+            grid_height = ul_ur.y - lr_ur.y
+            grid_width = lr_ur.x - ul_ur.x
+            assert np.round(grid_height) % delc_grid == 0.
+            assert np.round(grid_width) % delr_grid == 0.
+            nrow = int(round(grid_height / delc_grid))
+            ncol = int(round(grid_width / delr_grid))
+
+            # get revised lower left corner (offset)
+            ll = shapely.affinity.rotate(Point(ul_ur.x, lr_ur.y),
+                                         rotation,
+                                         origin=buffered_features.centroid)
+            xoff, yoff = ll.x, ll.y
+
+        else:
+            xoff, yoff = snap_to_cell_corner(xoff, yoff, parent_model.modelgrid,
+                                             corner='nearest')
+            grid_height = unrotated_bbox.bounds[3] - unrotated_bbox.bounds[1]
+            xul_ur, yul_ur = xoff, yoff + grid_height
+            upper_left_corner = shapely.affinity.rotate(Point(xul_ur, yul_ur), rotation,
+                                                        origin=Point(xoff, yoff))
+            xul, yul = upper_left_corner.x, upper_left_corner.y
+
+    assert xoff is not None
+    assert yoff is not None
+    # check that the top left and bottom left corners are consistent with discretization
+    if not np.isscalar(delr_grid):
+        assert np.allclose(np.sqrt((yul - yoff)**2 + (xul - xoff)**2),
+                           delc_grid.sum())
+    # set the grid configuration dictionary
+    grid_cfg = {'nrow': int(nrow), 'ncol': int(ncol),
+                'nlay': nlay,
+                'delr': delr_grid, 'delc': delc_grid,
+                'xoff': xoff, 'yoff': yoff,
+                'xul': xul, 'yul': yul,
+                'rotation': rotation,
+                'structured': True
+                }
+
+    if regular:
+        grid_cfg['delr'] = np.ones(grid_cfg['ncol'], dtype=float) * grid_cfg['delr']
+        grid_cfg['delc'] = np.ones(grid_cfg['nrow'], dtype=float) * grid_cfg['delc']
+    grid_cfg['delr'] = grid_cfg['delr'].tolist()  # for serializing to json
+    grid_cfg['delc'] = grid_cfg['delc'].tolist()
+
+    # renames for flopy modelgrid
+    renames = {'rotation': 'angrot'}
+    for k, v in renames.items():
+        if k in grid_cfg:
+            grid_cfg[v] = grid_cfg.pop(k)
+
+    # add epsg or wkt if there isn't an epsg
+    if crs is not None:
+        grid_cfg['crs'] = crs
+    elif epsg is not None:
+        grid_cfg['epsg'] = epsg
+    else:
+        warnings.warn("Coordinate Reference System information must be supplied via "
+                      "the 'crs' argument.")
+
+    # set up the model grid instance
+    grid_cfg['top'] = top
+    grid_cfg['botm'] = botm
+    grid_cfg.update(kwargs)  # update with any kwargs from function call
+    kwargs = get_input_arguments(grid_cfg, MFsetupGrid)
+    modelgrid = MFsetupGrid(**kwargs)
+    modelgrid.cfg = grid_cfg
+
+    # write grid info to json, and shapefile of bbox;
+    # omit top and botm arrays from json representation of grid
+    # (just for horizontal disc.)
+    del grid_cfg['top']
+    del grid_cfg['botm']
+
+    # crs needs to be cast to epsg or wkt to be serialized
+    if isinstance(crs, pyproj.CRS):
+        grid_cfg['epsg'] = grid_cfg['crs'].to_epsg()
+        if grid_cfg['epsg'] is None:
+            grid_cfg['wkt'] = grid_cfg['crs'].to_wkt()
+        del grid_cfg['crs']
+
+    fileio.dump(grid_file, grid_cfg)
+    if bbox_shapefile is not None:
+        write_bbox_shapefile(modelgrid, bbox_shapefile)
+    print("finished in {:.2f}s\n".format(time.time() - t0))
+    return modelgrid
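A hedged sketch of the first way to specify a grid (origin plus dimensions); the coordinates, EPSG code, and file name here are illustrative only:

from mfsetup.grid import setup_structured_grid

# EPSG 5070 (NAD83 / Conus Albers) is projected in meters
modelgrid = setup_structured_grid(xoff=520000., yoff=1190000.,
                                  nrow=100, ncol=120, nlay=3,
                                  dxy=500., rotation=0., crs=5070,
                                  model_length_units='meters',
                                  grid_file='grid.json')
# the grid information is also written to 'grid.json' for later re-use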
+[docs] +def get_cellface_midpoint(grid, k, i, j, direction): + """Return the midpoint of vertical cell face within a structured grid. + For example, the midpoint for the right cell face is halfway between + the upper and lower right corners of the cell, halfway between the + top and bottom edges.""" + if np.isscalar(k): + k = [k] + if np.isscalar(i): + i = [i] + if np.isscalar(j): + j = [j] + k = np.array(k).astype(int) + i = np.array(i).astype(int) + j = np.array(j).astype(int) + if isinstance(direction, str): + direction = [direction] * len(k) + x_edges_model = grid.xyedges[0] + x_centers_model = grid.xycenters[0] + y_edges_model = grid.xyedges[1] + y_centers_model = grid.xycenters[1] + model_x = [] + model_y = [] + for ii, jj, dn in zip(i, j, direction): + if dn == 'right': + x = x_edges_model[jj+1] + y = y_centers_model[ii] + elif dn == 'left': + x = x_edges_model[jj] + y = y_centers_model[ii] + elif dn == 'top': + x = x_centers_model[jj] + y = y_edges_model[ii] + elif dn == 'bottom': + x = x_centers_model[jj] + y = y_edges_model[ii+1] + else: + raise ValueError("direction needs to be right, left, top or bottom") + model_x.append(x) + model_y.append(y) + x, y = grid.get_coords(model_x, model_y) + z = grid.zcellcenters[k, i, j] + return x, y, z
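A small sketch (illustrative grid geometry): the x, y returned for the 'right' face of cell (0, 0, 0) fall on the edge shared with column 1, and z is the cell-center elevation:

import numpy as np
from flopy.discretization import StructuredGrid
from mfsetup.grid import get_cellface_midpoint

grid = StructuredGrid(delc=np.full(10, 100.), delr=np.full(10, 100.),
                      top=np.full((10, 10), 10.), botm=np.zeros((1, 10, 10)),
                      xoff=0., yoff=0.)
x, y, z = get_cellface_midpoint(grid, 0, 0, 0, 'right')
# x, y lie halfway along the right edge of the cell; z = 5.0 here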
+[docs]
+def get_intercell_connections(binary_grid_file):
+    """Get all of the connections between cells in a
+    MODFLOW 6 structured grid.
+
+    Parameters
+    ----------
+    binary_grid_file : str or pathlike
+        MODFLOW 6 binary grid file
+
+    Returns
+    -------
+    df : DataFrame
+        Intercell connections, with the following columns:
+
+        ==== =============================================================
+        n    from zero-based node number
+        kn   from zero-based layer
+        in   from zero-based row
+        jn   from zero-based column
+        m    to zero-based node number
+        km   to zero-based layer
+        im   to zero-based row
+        jm   to zero-based column
+        qidx index position of flow in cell budget file
+        ==== =============================================================
+    """
+    print('Getting intercell connections...')
+    ta = time.time()
+    bgf = MfGrdFile(binary_grid_file)
+    nrow = bgf.nrow
+    ncol = bgf.ncol
+    # the IA array holds the (one-based) position of each cell's
+    # diagonal entry in the JA array; the entries between ia[n] + 1
+    # and ia[n+1] are cell n's connections
+    ia = bgf._datadict['IA'] - 1
+    # connections in the JA array correspond directly with the
+    # FLOW-JA-FACE record that is written to the budget file
+    ja = bgf._datadict['JA'] - 1  # cell connections
+
+    all_n = []
+    m = []
+    qidx = []
+    for n in range(len(ia)-1):
+        for ipos in range(ia[n] + 1, ia[n+1]):
+            all_n.append(n)
+            m.append(ja[ipos])  # m is the cell that n connects to
+            qidx.append(ipos)
+    df = pd.DataFrame({'n': all_n, 'm': m, 'qidx': qidx})
+    k, i, j = get_kij_from_node3d(df['n'].values, nrow, ncol)
+    df['kn'], df['in'], df['jn'] = k, i, j
+    k, i, j = get_kij_from_node3d(df['m'].values, nrow, ncol)
+    df['km'], df['im'], df['jm'] = k, i, j
+    df.reset_index(drop=True, inplace=True)
+    print(f"Getting intercell connections took {time.time() - ta:.2f}s\n")
+    return df
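A brief sketch of working with the returned table; the binary grid file path here is hypothetical:

from mfsetup.grid import get_intercell_connections

cn = get_intercell_connections('model.dis.grb')  # hypothetical file path
# e.g. select the vertical connections between layers 1 and 2
vertical = cn.loc[(cn['kn'] == 0) & (cn['km'] == 1)]
# 'qidx' gives each connection's position in the FLOW-JA-FACE
# record of the cell budget file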
+[docs] +def get_transform(modelgrid): + """Get a rasterio Affine object from a Flopy modelgrid + (same as transform attribute of rasters). + """ + if not isinstance(modelgrid, StructuredGrid): + raise ValueError( + f"{type(modelgrid)}: Input needs to be a flopy.discretization.StructuredGrid") + x0 = modelgrid.xyedges[0][0] + y0 = modelgrid.xyedges[1][0] + xul, yul = modelgrid.get_coords(x0, y0) + return Affine(modelgrid.delr[0], 0., xul, + 0., -modelgrid.delc[0], yul) * \ + Affine.rotation(-modelgrid.angrot)
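For instance (illustrative grid values), multiplying the returned Affine by (column, row) positions maps them to world coordinates:

import numpy as np
from flopy.discretization import StructuredGrid
from mfsetup.grid import get_transform

grid = StructuredGrid(delc=np.full(10, 100.), delr=np.full(10, 100.),
                      xoff=520000., yoff=1190000.)
transform = get_transform(grid)
x, y = transform * (0, 0)  # world coordinates of the grid's upper left corner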
\ No newline at end of file
diff --git a/_modules/mfsetup/interpolate.html b/_modules/mfsetup/interpolate.html
new file mode 100644
index 00000000..419dee14
--- /dev/null
+++ b/_modules/mfsetup/interpolate.html
@@ -0,0 +1,560 @@
+mfsetup.interpolate — modflow-setup 0.5.0.post59+g65803fd documentation

Source code for mfsetup.interpolate

+import itertools
+import time
+
+import flopy
+import numpy as np
+from scipy.interpolate import griddata
+from scipy.spatial import qhull as qhull
+
+
+
+[docs]
+def get_source_dest_model_xys(source_model, dest_model,
+                              source_mask=None):
+    """Get the xyz and uvw inputs to the interp_weights function.
+
+    Parameters
+    ----------
+    source_model : flopy.modflow.Modflow, flopy.mf6.MFModel, or MFsetupGrid instance
+    dest_model : mfsetup.MFnwtModel, mfsetup.MF6model instance
+    """
+    source_modelgrid = source_model
+    if isinstance(source_model, flopy.mbase.ModelInterface):
+        source_modelgrid = source_model.modelgrid
+    dest_modelgrid = dest_model.modelgrid
+
+    if source_mask is None:
+        if dest_model.parent_mask.shape == source_modelgrid.xcellcenters.shape:
+            source_mask = dest_model.parent_mask
+        else:
+            source_mask = np.ones(source_modelgrid.xcellcenters.shape, dtype=bool)
+    else:
+        if source_mask.shape != source_modelgrid.xcellcenters.shape:
+            msg = 'source mask of shape {} incompatible with source grid of shape {}'
+            raise ValueError(msg.format(source_mask.shape,
+                                        source_modelgrid.xcellcenters.shape))
+    x = source_modelgrid.xcellcenters[source_mask].flatten()
+    y = source_modelgrid.ycellcenters[source_mask].flatten()
+    x2, y2 = dest_modelgrid.xcellcenters.ravel(), \
+        dest_modelgrid.ycellcenters.ravel()
+    source_model_xy = np.array([x, y]).transpose()
+    dest_model_xy = np.array([x2, y2]).transpose()
+    return source_model_xy, dest_model_xy
+[docs]
+def interp_weights(xyz, uvw, d=2, mask=None):
+    """Speed up interpolation vs scipy.interpolate.griddata (method='linear'),
+    by only computing the weights once:
+    https://stackoverflow.com/questions/20915502/speedup-scipy-griddata-for-multiple-interpolations-between-two-irregular-grids
+
+    Parameters
+    ----------
+    xyz : ndarray or tuple
+        x, y, z, ... locations of source data.
+        (shape n source points x ndims)
+    uvw : ndarray or tuple
+        x, y, z, ... locations of where source data will be interpolated
+        (shape n destination points x ndims)
+    d : int
+        Number of dimensions (2 for 2D, 3 for 3D, etc.)
+
+    Returns
+    -------
+    indices : ndarray of shape n destination points x (d + 1)
+        Index positions in flattened (1D) xyz array
+    weights : ndarray of shape n destination points x (d + 1)
+        Fractional weights for each row position
+        in indices. Weights in each row sum to 1
+        across the d + 1 columns.
+    """
+    print(f'Calculating {d}D interpolation weights...')
+    # convert input to ndarrays of the right shape
+    uvw = np.array(uvw)
+    if uvw.shape[-1] != d:
+        uvw = uvw.T
+    xyz = np.array(xyz)
+    if xyz.shape[-1] != d:
+        xyz = xyz.T
+    t0 = time.time()
+    tri = qhull.Delaunay(xyz)
+    simplex = tri.find_simplex(uvw)
+    vertices = np.take(tri.simplices, simplex, axis=0)
+    temp = np.take(tri.transform, simplex, axis=0)
+    delta = uvw - temp[:, d]
+    bary = np.einsum('njk,nk->nj', temp[:, :d, :], delta)
+    weights = np.hstack((bary, 1 - bary.sum(axis=1, keepdims=True)))
+    # round the weights,
+    # so that the weights for each simplex sum to 1;
+    # sums not exactly == 1 seem to cause spurious values
+    weights = np.round(weights, 6)
+    print("finished in {:.2f}s\n".format(time.time() - t0))
+    return vertices, weights
+[docs]
+def interpolate(values, vtx, wts, fill_value='mean'):
+    """Apply the interpolation weights to a set of values.
+
+    Parameters
+    ----------
+    values : 1D array of length n source points (same as xyz in interp_weights)
+    vtx : indices returned by interp_weights
+    wts : weights returned by interp_weights
+    fill_value : float, 'mean' or None
+        Value used to fill in for requested points outside of the convex hull
+        of the input points (i.e., those with at least one negative weight).
+        By default 'mean', which fills with the mean of the interpolated values;
+        if None, values at points outside of the convex hull are not filled.
+
+    Returns
+    -------
+    interpolated values
+    """
+    result = np.einsum('nj,nj->n', np.take(values, vtx), wts)
+
+    # fill values at child grid points that are outside
+    # of the convex hull of the parent grid
+    if fill_value == 'mean':
+        fill_value = np.nanmean(result)
+    if fill_value is not None:
+        result[np.any(wts < 0, axis=1)] = fill_value
+    return result
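interp_weights and interpolate are designed to be used as a pair, so that the triangulation is done once; a minimal 2D sketch with synthetic points:

import numpy as np
from scipy.interpolate import griddata
from mfsetup.interpolate import interp_weights, interpolate

rng = np.random.default_rng(0)
xyz = rng.random((1000, 2))              # scattered source locations
uvw = 0.1 + 0.8 * rng.random((200, 2))   # destinations inside the source hull
values = np.sin(10 * xyz[:, 0]) + xyz[:, 1]

vtx, wts = interp_weights(xyz, uvw, d=2)  # compute the weights once
result = interpolate(values, vtx, wts)
# re-using vtx, wts gives (nearly) the same answer as griddata
assert np.allclose(result, griddata(xyz, values, uvw), atol=1e-3)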
+[docs]
+def regrid(arr, grid, grid2, mask1=None, mask2=None, method='linear'):
+    """Interpolate array values from one model grid to another,
+    using scipy.interpolate.griddata.
+
+    Parameters
+    ----------
+    arr : 2D numpy array
+        Source data
+    grid : flopy.discretization.StructuredGrid instance
+        Source grid
+    grid2 : flopy.discretization.StructuredGrid instance
+        Destination grid (to interpolate onto)
+    mask1 : boolean array
+        Mask for source grid. Areas that are masked will be converted to
+        nans, and not included in the interpolation.
+    mask2 : boolean array
+        Mask denoting active area for destination grid.
+        The mean value will be applied to inactive areas if linear interpolation
+        is used (not for integer/categorical arrays).
+    method : str
+        Interpolation method ('nearest', 'linear', or 'cubic')
+    """
+    try:
+        from scipy.interpolate import griddata
+    except ImportError:
+        print('scipy not installed\ntry pip install scipy')
+        return None
+
+    arr = arr.copy()
+    # only include points specified by mask
+    x, y = grid.xcellcenters, grid.ycellcenters
+    if mask1 is not None:
+        mask1 = mask1.astype(bool)
+        arr = arr[mask1]
+        x = x[mask1]
+        y = y[mask1]
+
+    points = np.array([x.ravel(), y.ravel()]).transpose()
+
+    arr2 = griddata(points, arr.flatten(),
+                    (grid2.xcellcenters, grid2.ycellcenters),
+                    method=method, fill_value=np.nan)
+
+    # fill any areas that are nan
+    # (where the destination grid extends past the source grid)
+    fill = np.isnan(arr2)
+
+    # if new active area is supplied, fill areas outside of that too
+    if mask2 is not None:
+        mask2 = mask2.astype(bool)
+        fill = ~mask2 | fill
+
+    # only fill with mean value if linear interpolation used
+    # (floating point arrays)
+    if method == 'linear':
+        arr2[fill] = np.nanmean(arr2[~fill])
+    return arr2
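A sketch of regridding between two overlapping grids (illustrative geometry; edge cells of the finer grid fall outside the source cell centers and are filled with the mean):

import numpy as np
from flopy.discretization import StructuredGrid
from mfsetup.interpolate import regrid

# coarse 10 x 10 source grid and finer 20 x 20 destination grid, same footprint
coarse = StructuredGrid(delc=np.full(10, 100.), delr=np.full(10, 100.),
                        xoff=0., yoff=0.)
fine = StructuredGrid(delc=np.full(20, 50.), delr=np.full(20, 50.),
                      xoff=0., yoff=0.)
arr = np.arange(100, dtype=float).reshape(10, 10)
arr2 = regrid(arr, coarse, fine, method='linear')
assert arr2.shape == (20, 20)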
+[docs]
+def regrid3d(arr, grid, grid2, mask1=None, mask2=None, method='linear'):
+    """Interpolate array values from one model grid to another,
+    using scipy.interpolate.griddata.
+
+    Parameters
+    ----------
+    arr : 3D numpy array
+        Source data
+    grid : flopy.discretization.StructuredGrid instance
+        Source grid
+    grid2 : flopy.discretization.StructuredGrid instance
+        Destination grid (to interpolate onto)
+    mask1 : boolean array
+        Mask for source grid. Areas that are masked will be converted to
+        nans, and not included in the interpolation.
+    mask2 : boolean array
+        Mask denoting active area for destination grid.
+        The mean value will be applied to inactive areas if linear interpolation
+        is used (not for integer/categorical arrays).
+    method : str
+        Interpolation method ('nearest', 'linear', or 'cubic')
+
+    Returns
+    -------
+    arr : 3D numpy array
+        Interpolated values at the x, y, z locations in grid2.
+    """
+    try:
+        from scipy.interpolate import griddata
+    except ImportError:
+        print('scipy not installed\ntry pip install scipy')
+        return None
+
+    assert len(arr.shape) == 3, "input array must be 3d"
+    if grid2.botm is None:
+        raise ValueError('regrid3d: grid2.botm is None; grid2 must have cell bottom elevations')
+
+    # source model grid points
+    px, py, pz = grid.xyzcellcenters
+
+    # pad z cell centers to avoid nans
+    # from dest cells that are above or below source cells;
+    # pad top by top layer thickness
+    b1 = grid.top - grid.botm[0]
+    top = pz[0] + b1
+    # pad botm by bottom layer thickness
+    if grid.shape[0] > 1:
+        b2 = -np.diff(grid.botm[-2:], axis=0)[0]
+    else:
+        b2 = b1
+    botm = pz[-1] - b2
+    pz = np.vstack([[top], pz, [botm]])
+    nlay, nrow, ncol = pz.shape
+    px = np.tile(px, (nlay, 1, 1))
+    py = np.tile(py, (nlay, 1, 1))
+
+    # pad the source array (and mask) on the top and bottom
+    # so that dest cells above and below the top/bottom cell centers
+    # will be within the interpolation space
+    # (source x, y, z locations already contain this pad)
+    arr = np.pad(arr, pad_width=1, mode='edge')[:, 1:-1, 1:-1]
+
+    # apply the mask
+    if mask1 is not None:
+        mask1 = mask1.astype(bool)
+        # tile the mask to nlay x nrow x ncol
+        if len(mask1.shape) == 2:
+            mask1 = np.tile(mask1, (nlay, 1, 1))
+        # pad the mask vertically to match the source array
+        elif (len(mask1.shape) == 3) and (mask1.shape[0] == (nlay - 2)):
+            mask1 = np.pad(mask1, pad_width=1, mode='edge')[:, 1:-1, 1:-1]
+        arr = arr[mask1]
+        px = px[mask1]
+        py = py[mask1]
+        pz = pz[mask1]
+    else:
+        px = px.ravel()
+        py = py.ravel()
+        pz = pz.ravel()
+        arr = arr.ravel()
+
+    # dest modelgrid points
+    x, y, z = grid2.xyzcellcenters
+    nlay, nrow, ncol = z.shape
+    x = np.tile(x, (nlay, 1, 1))
+    y = np.tile(y, (nlay, 1, 1))
+
+    # interpolate inset boundary heads from 3D parent head solution
+    arr2 = griddata((px, py, pz), arr,
+                    (x, y, z), method='linear')
+    # get the locations of any bad values
+    bk, bi, bj = np.where(np.isnan(arr2))
+    bx = x[bk, bi, bj]
+    by = y[bk, bi, bj]
+    bz = z[bk, bi, bj]
+    # tweak the result slightly to resolve any apparent triangulation errors
+    fixed = griddata((px, py, pz), arr,
+                     (bx+0.0001, by+0.0001, bz+0.0001), method='linear')
+    arr2[bk, bi, bj] = fixed
+
+    # fill any remaining areas that are nan
+    # (where the destination grid extends past the source grid)
+    fill = np.isnan(arr2)
+
+    # if new active area is supplied, fill areas outside of that too
+    if mask2 is not None:
+        mask2 = mask2.astype(bool)
+        fill = ~mask2 | fill
+
+    # only fill with mean value if linear interpolation used
+    # (floating point arrays)
+    if method == 'linear':
+        arr2[fill] = np.nanmean(arr2[~fill])
+    return arr2
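And a 3D counterpart; a sketch in which both grids have two layers and the same footprint (all geometry illustrative):

import numpy as np
from flopy.discretization import StructuredGrid
from mfsetup.interpolate import regrid3d

top = np.full((10, 10), 10.)
botm = np.stack([np.full((10, 10), 5.), np.zeros((10, 10))])
source = StructuredGrid(delc=np.full(10, 100.), delr=np.full(10, 100.),
                        top=top, botm=botm, xoff=0., yoff=0.)
dest = StructuredGrid(delc=np.full(20, 50.), delr=np.full(20, 50.),
                      top=np.full((20, 20), 10.),
                      botm=np.stack([np.full((20, 20), 5.), np.zeros((20, 20))]),
                      xoff=0., yoff=0.)
heads = np.random.rand(2, 10, 10)
heads2 = regrid3d(heads, source, dest, method='linear')
assert heads2.shape == (2, 20, 20)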
+[docs] +class Interpolator: + """Speed up barycentric interpolation similar to scipy.interpolate.griddata + (method='linear'), by computing the weights once and then re-using them for + successive interpolation with the same source and destination points. + + Parameters + ---------- + xyz : ndarray or tuple + x, y, z, ... locations of source data. + (shape n source points x ndims) + uvw : ndarray or tuple + x, y, z, ... locations of where source data will be interpolated + (shape n destination points x ndims) + d : int + Number of dimensions (2 for 2D, 3 for 3D, etc.) + source_values_mask : boolean array + Boolean array of same structure as the `source_values` array + input to the :meth:`~mfsetup.interpolate.Interpolator.interpolate` method, + with the same number of active values as the size of `xyz`. + + Notes + ----- + The methods employed are based on this Stack Overflow post: + https://stackoverflow.com/questions/20915502/speedup-scipy-griddata-for-multiple-interpolations-between-two-irregular-grids + + """ + def __init__(self, xyz, uvw, d=2, source_values_mask=None): + + self.xyz = xyz + self.uvw = uvw + self.d = d + + # properties + self._interp_weights = None + self._source_values_mask = None + self.source_values_mask = source_values_mask + + @property + def interp_weights(self): + """Calculate the interpolation weights.""" + if self._interp_weights is None: + self._interp_weights = interp_weights(self.xyz, self.uvw, self.d) + return self._interp_weights + + @property + def source_values_mask(self): + return self._source_values_mask + + @source_values_mask.setter + def source_values_mask(self, source_values_mask): + if source_values_mask is not None and \ + np.sum(source_values_mask) != len(self.xyz[0]): + raise ValueError('source_values_mask must contain the same number ' + 'of True (active) values as there are source (xyz) points') + self._source_values_mask = source_values_mask + +
+[docs]
+    def interpolate(self, source_values, method='linear'):
+        """Interpolate values in source_values to the destination points
+        in the *uvw* attribute.
+
+        Parameters
+        ----------
+        source_values : ndarray
+            Values to be interpolated to destination points. Array must be the same size as
+            the number of source points, or the number of active points within source points,
+            as defined by the `source_values_mask` array input to the :class:`~mfsetup.interpolate.Interpolator`.
+        method : str ('linear', 'nearest')
+            Interpolation method. With 'linear' a triangular mesh is discretized around
+            the source points, and barycentric weights representing the influence of the *d* + 1
+            source points on each destination point (where *d* is the number of dimensions),
+            are computed. With 'nearest', the input is simply passed to :meth:`scipy.interpolate.griddata`.
+
+        Returns
+        -------
+        interpolated : 1D numpy array
+            Array of interpolated values at the destination locations.
+        """
+        if self.source_values_mask is not None:
+            source_values = source_values.flatten()[self.source_values_mask.flatten()]
+        if method == 'linear':
+            interpolated = interpolate(source_values, *self.interp_weights,
+                                       fill_value=None)
+        elif method == 'nearest':
+            interpolated = griddata(self.xyz, source_values,
+                                    self.uvw, method=method)
+        else:
+            raise ValueError(f"Unsupported interpolation method: {method}")
+        return interpolated
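A usage sketch with synthetic points: the weights are computed on the first interpolate() call and re-used on later calls, which is the advantage over repeated griddata calls:

import numpy as np
from mfsetup.interpolate import Interpolator

rng = np.random.default_rng(0)
xyz = rng.random((500, 2))               # source point locations
uvw = 0.1 + 0.8 * rng.random((100, 2))   # destination locations
interp = Interpolator(xyz, uvw, d=2)
kh = interp.interpolate(rng.random(500))  # weights computed here
kv = interp.interpolate(rng.random(500))  # weights re-used here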
+if __name__ == '__main__':
+    """Example from stack overflow. In this example, both
+    xyz and uvw have points in 3 dimensions. (npoints x ndim)"""
+    m, n, d = int(3.5e4), int(3e3), 3
+    # make sure no new grid point is extrapolated
+    bounding_cube = np.array(list(itertools.product([0, 1], repeat=d)))
+    xyz = np.vstack((bounding_cube,
+                     np.random.rand(int(m - len(bounding_cube)), d)))
+    f = np.random.rand(m)
+    g = np.random.rand(m)
+    uvw = np.random.rand(n, d)
+
+    vtx, wts = interp_weights(xyz, uvw)
+
+    np.allclose(interpolate(f, vtx, wts), griddata(xyz, f, uvw))
\ No newline at end of file
diff --git a/_modules/mfsetup/mf6model.html b/_modules/mfsetup/mf6model.html
new file mode 100644
index 00000000..27e79d2c
--- /dev/null
+++ b/_modules/mfsetup/mf6model.html
@@ -0,0 +1,1347 @@
+mfsetup.mf6model — modflow-setup 0.5.0.post59+g65803fd documentation

Source code for mfsetup.mf6model

+import copy
+import os
+import shutil
+import time
+from pathlib import Path
+
+import flopy
+import numpy as np
+import pandas as pd
+
+fm = flopy.modflow
+mf6 = flopy.mf6
+from flopy.utils.lgrutil import Lgr
+from gisutils import get_values_at_points
+
+from mfsetup.bcs import remove_inactive_bcs
+from mfsetup.discretization import (
+    ModflowGwfdis,
+    create_vertical_pass_through_cells,
+    deactivate_idomain_above,
+    find_remove_isolated_cells,
+    make_idomain,
+    make_irch,
+    make_lgr_idomain,
+)
+from mfsetup.fileio import add_version_to_fileheader, flopy_mfsimulation_load
+from mfsetup.fileio import load as load_config
+from mfsetup.fileio import load_cfg
+from mfsetup.ic import setup_strt
+from mfsetup.lakes import (
+    get_lakeperioddata,
+    setup_lake_connectiondata,
+    setup_lake_fluxes,
+    setup_lake_info,
+    setup_lake_tablefiles,
+    setup_mf6_lake_obs,
+)
+from mfsetup.mfmodel import MFsetupMixin
+from mfsetup.mover import get_mover_sfr_package_input
+from mfsetup.obs import remove_inactive_obs, setup_head_observations
+from mfsetup.oc import parse_oc_period_input
+from mfsetup.tdis import add_date_comments_to_tdis, setup_perioddata
+from mfsetup.units import convert_time_units
+from mfsetup.utils import flatten, get_input_arguments
+
+
+
+[docs] +class MF6model(MFsetupMixin, mf6.ModflowGwf): + """Class representing a MODFLOW-6 model. + """ + default_file = 'mf6_defaults.yml' + + def __init__(self, simulation=None, modelname='model', parent=None, cfg=None, + exe_name='mf6', load=False, + version='mf6', lgr=False, **kwargs): + defaults = {'simulation': simulation, + 'parent': parent, + 'modelname': modelname, + 'exe_name': exe_name, + 'version': version, + 'lgr': lgr} + # load configuration, if supplied + if cfg is not None: + if not isinstance(cfg, dict): + cfg = self.load_cfg(cfg) + cfg = self._parse_model_kwargs(cfg) + defaults.update(cfg['model']) + kwargs = {k: v for k, v in kwargs.items() if k not in defaults} + # otherwise, pass arguments on to flopy constructor + args = get_input_arguments(defaults, mf6.ModflowGwf, + exclude='packages') + mf6.ModflowGwf.__init__(self, **args, **kwargs) + #mf6.ModflowGwf.__init__(self, simulation, + # modelname, exe_name=exe_name, version=version, + # **kwargs) + MFsetupMixin.__init__(self, parent=parent) + + self._is_lgr = lgr + self._package_setup_order = ['tdis', 'dis', 'ic', 'npf', 'sto', 'rch', 'oc', + 'chd', 'drn', 'ghb', 'sfr', 'lak', 'riv', + 'wel', 'maw', 'obs'] + # set up the model configuration dictionary + # start with the defaults + self.cfg = load_config(self.source_path / self.default_file) #'mf6_defaults.yml') + self.relative_external_paths = self.cfg.get('model', {}).get('relative_external_paths', True) + # set the model workspace and change working directory to there + self.model_ws = self._get_model_ws(cfg=cfg) + # update defaults with user-specified config. (loaded above) + # set up and validate the model configuration dictionary + self._load = load # whether the model is being created or loaded + self._set_cfg(cfg) + + # property attributes + self._idomain = None + + # other attributes + self._features = {} # dictionary for caching shapefile datasets in memory + self._drop_thin_cells = self.cfg.get('dis', {}).get('drop_thin_cells', True) + + # arrays remade during this session + self.updated_arrays = set() + + # delete the temporary 'original-files' folder + # if it already exists, to avoid side effects from stale files + if not self._is_lgr: + shutil.rmtree(self.tmpdir, ignore_errors=True) + + def __repr__(self): + return MFsetupMixin.__repr__(self) + + def __str__(self): + return MFsetupMixin.__repr__(self) + + @property + def nlay(self): + return self.cfg['dis']['dimensions'].get('nlay', 1) + + @property + def length_units(self): + return self.cfg['dis']['options']['length_units'] + + @property + def time_units(self): + return self.cfg['tdis']['options']['time_units'] + + + @property + def perioddata(self): + """DataFrame summarizing stress period information. 
+ Columns: + ============== ========================================= + start_datetime Start date of each model stress period + end_datetime End date of each model stress period + time MODFLOW elapsed time, in days* + per Model stress period number + perlen Stress period length (days) + nstp Number of timesteps in stress period + tsmult Timestep multiplier + steady Steady-state or transient + oc Output control setting for MODFLOW + parent_sp Corresponding parent model stress period + ============== ========================================= + """ + if self._perioddata is None: + # check first for already loaded time discretization info + try: + tdis_perioddata_config = {col: getattr(self.modeltime, col) + for col in ['perlen', 'nstp', 'tsmult']} + nper = self.modeltime.nper + steady = self.modeltime.steady_state + default_start_datetime = self.modeltime.start_datetime + except: + tdis_perioddata_config = self.cfg['tdis']['perioddata'] + default_start_datetime = self.cfg['tdis']['options'].get('start_date_time', + '1970-01-01') + #tdis_dimensions_config = self.cfg['tdis']['dimensions'] + nper = self.cfg['tdis']['dimensions'].get('nper') + # steady can be input in either the tdis or sto input blocks + steady = self.cfg['tdis'].get('steady') + if steady is None: + steady = self.cfg['sto'].get('steady') + + parent_stress_periods = self.cfg.get('parent').get('copy_stress_periods') + perioddata = setup_perioddata( + self, + tdis_perioddata_config=tdis_perioddata_config, + default_start_datetime=default_start_datetime, + nper=nper, steady=steady, + time_units=self.time_units, + parent_model=self.parent, + parent_stress_periods=parent_stress_periods, + ) + self._perioddata = perioddata + # reset nper property so that it will reference perioddata table + self._nper = None + self._perioddata.to_csv(f'{self._tables_path}/stress_period_data.csv', index=False) + # update the model configuration + if 'parent_sp' in perioddata.columns: + self.cfg['parent']['copy_stress_periods'] = perioddata['parent_sp'].tolist() + + return self._perioddata + + @property + def idomain(self): + """3D array indicating which cells will be included in the simulation. + Made a property so that it can be easily updated when any packages + it depends on change. + """ + if self._idomain is None and 'DIS' in self.get_package_list(): + self._set_idomain() + return self._idomain + + def _set_idomain(self): + """Remake the idomain array from the source data, + no data values in the top and bottom arrays, and + so that cells above SFR reaches are inactive. + + Also remakes irch for the recharge package""" + print('(re)setting the idomain array...') + # loop thru LGR models and inactivate area of parent grid for each one + lgr_idomain = np.ones(self.dis.idomain.array.shape, dtype=int) + if isinstance(self.lgr, dict): + for k, v in self.lgr.items(): + lgr_idomain[v.idomain == 0] = 0 + self._lgr_idomain2d = lgr_idomain[0] + idomain_from_layer_elevations = make_idomain(self.dis.top.array, + self.dis.botm.array, + nodata=self._nodata_value, + minimum_layer_thickness=self.cfg['dis'].get('minimum_layer_thickness', 1), + drop_thin_cells=self._drop_thin_cells, + tol=1e-4) + # include cells that are active in the existing idomain array + # and cells inactivated on the basis of layer elevations + idomain = (self.dis.idomain.array >= 1) & \ + (idomain_from_layer_elevations >= 1) & \ + (lgr_idomain >= 1) + idomain = idomain.astype(int) + + # remove cells that conincide with lakes + # idomain[self.isbc == 1] = 0. 
+ + # remove cells that are above stream cells + if self.get_package('sfr') is not None: + idomain = deactivate_idomain_above(idomain, self.sfr.packagedata) + + # inactivate any isolated cells that could cause problems with the solution + idomain = find_remove_isolated_cells(idomain, minimum_cluster_size=20) + + # create pass-through cells in inactive cells that have an active cell above and below + # by setting these cells to -1 + idomain = create_vertical_pass_through_cells(idomain) + + self._idomain = idomain + + # take the updated idomain array and set cells != 1 to np.nan in layer botm array + # including lake cells + # effect is that the layer thicknesses in these cells will be set to zero + # fill_cells_vertically will be run in the setup_array routine, + # to collapse the nan cells to zero-thickness + # (assign their layer botm to the next valid layer botm above) + botm = self.dis.botm.array.copy() + botm[(idomain != 1)] = np.nan + + # re-write the input files + # todo: integrate this better with setup_dis + # to reduce the number of times the arrays need to be remade + self._setup_array('dis', 'botm', + data={i: arr for i, arr in enumerate(botm)}, + datatype='array3d', resample_method='linear', + write_fmt='%.2f', dtype=float) + self.dis.botm = self.cfg['dis']['griddata']['botm'] + self._setup_array('dis', 'idomain', + data={i: arr for i, arr in enumerate(idomain)}, + datatype='array3d', resample_method='nearest', + write_fmt='%d', dtype=int) + self.dis.idomain = self.cfg['dis']['griddata']['idomain'] + self._mg_resync = False + self.setup_grid() # reset the model grid + + # rebuild irch to keep it in sync with idomain changes + irch = make_irch(idomain) + self._setup_array('rch', 'irch', + data={0: irch}, + datatype='array2d', + write_fmt='%d', dtype=int) + #self.dis.irch = self.cfg['dis']['irch'] + + def _update_grid_configuration_with_dis(self): + """Update grid configuration with any information supplied to dis package + (so that settings specified for DIS package have priority). This method + is called by MFsetupMixin.setup_grid. + """ + for param in ['nlay', 'nrow', 'ncol']: + if param in self.cfg['dis']['dimensions']: + self.cfg['setup_grid'][param] = self.cfg['dis']['dimensions'][param] + for param in ['delr', 'delc']: + if param in self.cfg['dis']['griddata']: + self.cfg['setup_grid'][param] = self.cfg['dis']['griddata'][param] + +
+[docs]
+    def get_flopy_external_file_input(self, var):
+        """Repath intermediate external file input to the
+        external file path that MODFLOW will use. Copy the
+        file because MF6 flopy reads and writes to the same location.
+
+        Currently a placeholder (not implemented).
+
+        Parameters
+        ----------
+        var : str
+            key in self.cfg['intermediate_data'] dict
+
+        Returns
+        -------
+        input : dict or list of dicts
+            MODFLOW6 external file input format
+            {'filename': <filename>}
+        """
+        pass
+[docs] + def get_package_list(self): + """Replicate this method in flopy.modflow.Modflow. + """ + # TODO: this should reference namfile dict + return [p.name[0].upper() for p in self.packagelist]
+[docs] + def get_raster_values_at_cell_centers(self, raster, out_of_bounds_errors='coerce'): + """Sample raster values at centroids + of model grid cells.""" + values = get_values_at_points(raster, + x=self.modelgrid.xcellcenters.ravel(), + y=self.modelgrid.ycellcenters.ravel(), + points_crs=self.modelgrid.crs, + out_of_bounds_errors=out_of_bounds_errors) + if self.modelgrid.grid_type == 'structured': + values = np.reshape(values, (self.nrow, self.ncol)) + return values
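For example, a hedged sketch of sampling a DEM onto the model grid; the configuration file and raster paths here are hypothetical:

from mfsetup import MF6model

m = MF6model(cfg='config.yaml')  # hypothetical configuration file
m.setup_grid()
top = m.get_raster_values_at_cell_centers('dem_meters.tif')  # hypothetical raster
# top is a (nrow, ncol) array of raster values at the cell centers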
+[docs] + def get_raster_statistics_for_cells(self, top, stat='mean'): + """Compute zonal statics for raster pixels within + each model cell. + """ + raise NotImplementedError()
+ + + def create_lgr_models(self): + for k, v in self.cfg['setup_grid']['lgr'].items(): + # load the config file for lgr inset model + if 'filename' in v: + inset_cfg = load_cfg(v['filename'], + default_file='mf6_defaults.yml') + elif 'cfg' in v: + inset_cfg = copy.deepcopy(v['cfg']) + else: + raise ValueError('Unrecognized input in subblock lgr: ' + 'Supply either a configuration filename: ' + 'or additional yaml configuration under cfg:' + ) + # if lgr inset has already been created + if inset_cfg['model']['modelname'] in self.simulation._models: + return + inset_cfg['model']['simulation'] = self.simulation + if 'ims' in inset_cfg['model']['packages']: + inset_cfg['model']['packages'].remove('ims') + # set parent configuation dictionary here + # (even though parent model is explicitly set below) + # so that the LGR grid is snapped to the parent grid + inset_cfg['parent'] = {'namefile': self.namefile, + 'model_ws': self.model_ws, + 'version': 'mf6', + 'hiKlakes_value': self.cfg['model']['hiKlakes_value'], + 'default_source_data': True, + 'length_units': self.length_units, + 'time_units': self.time_units + } + inset_cfg = MF6model._parse_model_kwargs(inset_cfg) + kwargs = get_input_arguments(inset_cfg['model'], mf6.ModflowGwf, + exclude='packages') + kwargs['parent'] = self # otherwise will try to load parent model + inset_model = MF6model(cfg=inset_cfg, lgr=True, load=self._load, **kwargs) + #inset_model._load = self._load # whether model is being made or loaded from existing files + inset_model.setup_grid() + del inset_model.cfg['ims'] + inset_model.cfg['tdis'] = self.cfg['tdis'] + if self.inset is None: + self.inset = {} + self.lgr = {} + + self.inset[inset_model.name] = inset_model + #self.inset[inset_model.name]._is_lgr = True + + # establish inset model layering within parent model + parent_start_layer = v.get('parent_start_layer', 0) + # parent_end_layer is specified as the last zero-based + # parent layer that includes LGR refinement (not as a slice end) + parent_end_layer = v.get('parent_end_layer', self.nlay - 1) + # the layer refinement can be specified as an int, a list or a dict + ncppl_input = v.get('layer_refinement', 1) + if np.isscalar(ncppl_input): + ncppl = np.array([0] * self.modelgrid.nlay) + ncppl[parent_start_layer:parent_end_layer+1] = ncppl_input + elif isinstance(ncppl_input, list): + if not len(ncppl_input) == self.modelgrid.nlay: + raise ValueError( + "Configuration input: layer_refinement specified as" + "a list must include a value for every layer." 
+ ) + ncppl = ncppl_input.copy() + elif isinstance(ncppl_input, dict): + ncppl = [ncppl_input.get(i, 0) for i in range(self.modelgrid.nlay)] + else: + raise ValueError("Configuration input: Unsupported input for " + "layer_refinement: supply an int, list or dict.") + + # refined layers must be consecutive, starting from layer 1 + is_refined = (np.array(ncppl) > 0).astype(int) + last_refined_layer = max(np.where(is_refined > 0)[0]) + consecutive = all(np.diff(is_refined)[:last_refined_layer] == 0) + if (is_refined[0] != 1) | (not consecutive): + raise ValueError("Configuration input: layer_refinement must " + "include consecutive sequence of layers, " + "starting with the top layer.") + # check the specified DIS package input is consistent + # with the specified layer_refinement + specified_nlay_dis = inset_cfg['dis']['dimensions'].get('nlay') + # skip this check if nlay hasn't been entered into the configuration file yet + if specified_nlay_dis and (np.sum(ncppl) != specified_nlay_dis): + raise ValueError( + f"Configuration input: layer_refinement of {ncppl} " + f"implies {is_refined.sum()} inset model layers.\n" + f"{specified_nlay_dis} inset model layers specified in DIS package.") + # mapping between parent and inset model layers + # that is used for copying input from parent model + inset_parent_layer_mapping = dict() + inset_k = -1 + for parent_k, n_inset_lay in enumerate(ncppl): + for i in range(n_inset_lay): + inset_k += 1 + inset_parent_layer_mapping[inset_k] = parent_k + self.inset[inset_model.name].cfg['parent']['inset_layer_mapping'] =\ + inset_parent_layer_mapping + # create idomain indicating area of parent grid that is LGR + lgr_idomain = make_lgr_idomain(self.modelgrid, self.inset[inset_model.name].modelgrid, + ncppl) + + # inset model horizontal refinement from parent resolution + refinement = self.modelgrid.delr[0] / self.inset[inset_model.name].modelgrid.delr[0] + if not np.round(refinement, 4).is_integer(): + raise ValueError(f"LGR inset model spacing must be a factor of the parent model spacing.") + ncpp = int(refinement) + self.lgr[inset_model.name] = Lgr(self.nlay, self.nrow, self.ncol, + self.dis.delr.array, self.dis.delc.array, + self.dis.top.array, self.dis.botm.array, + lgr_idomain, ncpp, ncppl) + inset_model._perioddata = self.perioddata + # set parent model top in LGR area to bottom of LGR area + # this is an initial draft; + # bottom elevations are readjusted in sourcedata.py + # when inset model DIS package botm array is set up + # (set to mean of inset model bottom elevations + # within each parent cell) + # number of layers in parent model with LGR + n_parent_lgr_layers = np.sum(np.array(ncppl) > 0) + lgr_area = self.lgr[inset_model.name].idomain == 0 + self.dis.top[lgr_area[0]] =\ + self.lgr[inset_model.name].botmp[n_parent_lgr_layers -1][lgr_area[0]] + # set parent model layers in LGR area to zero-thickness + new_parent_botm = self.dis.botm.array.copy() + for k in range(n_parent_lgr_layers): + new_parent_botm[k][lgr_area[0]] = self.dis.top[lgr_area[0]] + self.dis.botm = new_parent_botm + self._update_top_botm_external_files() + + + def _update_top_botm_external_files(self): + """Update the external files after assigning new elevations to the + Discretization Package top and botm arrays; adjust idomain as needed.""" + # reset the model top + # (this step may not be needed if the "original top" functionality + # is limited to cases where there is a lake package, + # or if the "original top"/"lake bathymetry" functionality is eliminated + # and we instead 
require the top to be pre-processed) + original_top_file = Path(self.external_path, + f"{self.name}_{self.cfg['dis']['top_filename_fmt']}.original") + original_top_file.unlink(missing_ok=True) + self._setup_array('dis', 'top', + data={0: self.dis.top.array}, + datatype='array2d', resample_method='linear', + write_fmt='%.2f', dtype=float) + # _set_idomain() regerates external files for bottom array + self._set_idomain() + + + def setup_lgr_exchanges(self): + + for inset_name, inset_model in self.inset.items(): + + # update cell information for computing any bottom exchanges + self.lgr[inset_name].top = inset_model.dis.top.array + self.lgr[inset_name].botm = inset_model.dis.botm.array + # update only the layers of the parent model below the child model + parent_top_below_child = np.sum(self.lgr[inset_name].ncppl > 0) -1 + self.lgr[inset_name].botmp[parent_top_below_child:] =\ + self.dis.botm.array[parent_top_below_child:] + + # get the exchange data + exchangelist = self.lgr[inset_name].get_exchange_data(angldegx=True, cdist=True) + + # make a dataframe for concise unpacking of cellids + columns = ['cellidm1', 'cellidm2', 'ihc', 'cl1', 'cl2', 'hwva', 'angldegx', 'cdist'] + exchangedf = pd.DataFrame(exchangelist, columns=columns) + + # unpack the cellids and get their respective ibound values + k1, i1, j1 = zip(*exchangedf['cellidm1']) + k2, i2, j2 = zip(*exchangedf['cellidm2']) + # limit connections to + active1 = self.idomain[k1, i1, j1] >= 1 + + active2 = inset_model.idomain[k2, i2, j2] >= 1 + + # screen out connections involving an inactive cell + active_connections = active1 & active2 + nexg = active_connections.sum() + active_exchangelist = [l for i, l in enumerate(exchangelist) if active_connections[i]] + + # arguments to ModflowGwfgwf + kwargs = {'exgtype': 'gwf6-gwf6', + 'exgmnamea': self.name, + 'exgmnameb': inset_name, + 'nexg': nexg, + 'auxiliary': [('angldegx', 'cdist')], + 'exchangedata': active_exchangelist + } + kwargs = get_input_arguments(kwargs, mf6.ModflowGwfgwf) + + # set up the exchange package + gwfgwf = mf6.ModflowGwfgwf(self.simulation, **kwargs) + + # set up a Mover Package if needed + self.setup_simulation_mover(gwfgwf) + + + def setup_dis(self, **kwargs): + """""" + package = 'dis' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + + # resample the top from the DEM + if self.cfg['dis']['remake_top']: + self._setup_array(package, 'top', datatype='array2d', + resample_method='linear', + write_fmt='%.2f') + + # make the botm array + self._setup_array(package, 'botm', datatype='array3d', + resample_method='linear', + write_fmt='%.2f') + + # set number of layers to length of the created bottom array + # this needs to be set prior to setting up the idomain, + # otherwise idomain may have wrong number of layers + self.cfg['dis']['dimensions']['nlay'] = len(self.cfg['dis']['griddata']['botm']) + + # initial idomain input for creating a dis package instance + self._setup_array(package, 'idomain', datatype='array3d', write_fmt='%d', + resample_method='nearest', + dtype=int) + + # put together keyword arguments for dis package + kwargs = self.cfg['grid'].copy() # nrow, ncol, delr, delc + kwargs.update(self.cfg['dis']) + kwargs.update(self.cfg['dis']['dimensions']) # nper, nlay, etc. 
+ kwargs.update(self.cfg['dis']['griddata']) + + # modelgrid: dis arguments + remaps = {'xoff': 'xorigin', + 'yoff': 'yorigin', + 'rotation': 'angrot'} + + for k, v in remaps.items(): + if v not in kwargs: + kwargs[v] = kwargs.pop(k) + kwargs['length_units'] = self.length_units + # get the arguments for the flopy version of ModflowGwfdis + # but instantiate with modflow-setup subclass of ModflowGwfdis + kwargs = get_input_arguments(kwargs, mf6.ModflowGwfdis) + dis = ModflowGwfdis(model=self, **kwargs) + self._mg_resync = False + self._reset_bc_arrays() + self._set_idomain() + print("finished in {:.2f}s\n".format(time.time() - t0)) + return dis + + #def setup_tdis(self): +
+[docs] + def setup_tdis(self, **kwargs): + """ + Sets up the TDIS package. + """ + package = 'tdis' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + perioddata = mf6.ModflowTdis.perioddata.empty(self, self.nper) + for col in ['perlen', 'nstp', 'tsmult']: + perioddata[col] = self.perioddata[col].values + kwargs = self.cfg['tdis']['options'] + kwargs['nper'] = self.nper + kwargs['perioddata'] = perioddata + kwargs = get_input_arguments(kwargs, mf6.ModflowTdis) + tdis = mf6.ModflowTdis(self.simulation, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return tdis
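The package setup methods are normally driven by the configuration file rather than called individually; still, a hedged sketch of direct use (the configuration file is hypothetical and is assumed to define the tdis: block):

from mfsetup import MF6model

m = MF6model(cfg='config.yaml')  # hypothetical configuration file
m.setup_grid()
tdis = m.setup_tdis()  # perioddata are assembled from the tdis: block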
+[docs] + def setup_ic(self, **kwargs): + """ + Sets up the IC package. + """ + package = 'ic' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + + kwargs = self.cfg[package] + kwargs.update(self.cfg[package]['griddata']) + kwargs['source_data_config'] = kwargs['source_data'] + kwargs['filename_fmt'] = kwargs['strt_filename_fmt'] + + # make the starting heads array + strt = setup_strt(self, package, **kwargs) + + ic = mf6.ModflowGwfic(self, strt=strt) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return ic
+ + +
+[docs] + def setup_npf(self, **kwargs): + """ + Sets up the NPF package. + """ + package = 'npf' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + hiKlakes_value = float(self.cfg['parent'].get('hiKlakes_value', 1e4)) + + # make the k array + self._setup_array(package, 'k', vmin=0, vmax=hiKlakes_value, + resample_method='linear', + datatype='array3d', write_fmt='%.6e') + + # make the k33 array (kv) + self._setup_array(package, 'k33', vmin=0, vmax=hiKlakes_value, + resample_method='linear', + datatype='array3d', write_fmt='%.6e') + + kwargs = self.cfg[package]['options'].copy() + kwargs.update(self.cfg[package]['griddata'].copy()) + kwargs = get_input_arguments(kwargs, mf6.ModflowGwfnpf) + npf = mf6.ModflowGwfnpf(self, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return npf
+ + +
+[docs] + def setup_sto(self, **kwargs): + """ + Sets up the STO package. + """ + + if np.all(self.perioddata['steady']): + print('Skipping STO package, no transient stress periods...') + return + + package = 'sto' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + + # make the sy array + self._setup_array(package, 'sy', datatype='array3d', resample_method='linear', + write_fmt='%.6e') + + # make the ss array + self._setup_array(package, 'ss', datatype='array3d', resample_method='linear', + write_fmt='%.6e') + + kwargs = self.cfg[package]['options'].copy() + kwargs.update(self.cfg[package]['griddata'].copy()) + # get steady/transient info from perioddata table + # which parses it from either DIS or STO input (to allow consistent input structure with mf2005) + kwargs['steady_state'] = {k: v for k, v in zip(self.perioddata['per'], self.perioddata['steady']) if v} + kwargs['transient'] = {k: not v for k, v in zip(self.perioddata['per'], self.perioddata['steady'])} + kwargs = get_input_arguments(kwargs, mf6.ModflowGwfsto) + sto = mf6.ModflowGwfsto(self, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return sto
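+
+# Illustrative sketch: the steady_state/transient dictionaries built above are
+# simple comprehensions over the period table. Hypothetical three-period model
+# with an initial steady-state period:
+import pandas as _pd
+
+_perioddata = _pd.DataFrame({'per': [0, 1, 2], 'steady': [True, False, False]})
+_steady_state = {per: steady for per, steady
+                 in zip(_perioddata['per'], _perioddata['steady']) if steady}
+_transient = {per: not steady for per, steady
+              in zip(_perioddata['per'], _perioddata['steady'])}
+assert _steady_state == {0: True}
+assert _transient == {0: False, 1: True, 2: True}
+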
+ + +
+[docs] + def setup_rch(self, **kwargs): + """ + Sets up the RCH package. + """ + package = 'rch' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + + # make the irch array + irch = make_irch(self.idomain) + + self._setup_array('rch', 'irch', + data={0: irch}, + datatype='array2d', + write_fmt='%d', dtype=int) + + # make the rech array + self._setup_array(package, 'recharge', datatype='transient2d', + resample_method='nearest', write_fmt='%.6e', + write_nodata=0.) + + kwargs = self.cfg[package].copy() + kwargs.update(self.cfg[package]['options']) + kwargs = get_input_arguments(kwargs, mf6.ModflowGwfrcha) + rch = mf6.ModflowGwfrcha(self, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return rch
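+
+# Illustrative stand-alone approximation of make_irch(): recharge is applied
+# to the highest active cell in each (i, j) column. The one-based convention
+# here follows the MODFLOW-6 irch input and is an assumption about make_irch.
+import numpy as _np
+
+_idomain = _np.array([[[0, 1]],
+                      [[1, 1]]])               # (nlay=2, nrow=1, ncol=2)
+_irch = _np.argmax(_idomain >= 1, axis=0) + 1  # first active layer, one-based
+assert _irch.tolist() == [[2, 1]]
+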
+ + +
+[docs] + def setup_lak(self, **kwargs): + """ + Sets up the Lake package. + """ + package = 'lak' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + if self.lakarr.sum() == 0: + print("lakes_shapefile not specified, or no lakes in model area") + return + + # option to write connectiondata to external file + external_files = self.cfg['lak']['external_files'] + horizontal_connections = self.cfg['lak']['horizontal_connections'] + + + # source data + source_data = self.cfg['lak']['source_data'] + + # munge lake package input + # returns dataframe with information for each lake + self.lake_info = setup_lake_info(self) + + # returns dataframe with connection information + connectiondata = setup_lake_connectiondata(self, for_external_file=external_files, + include_horizontal_connections=horizontal_connections) + # lakeno column will have # in front if for_external_file=True + lakeno_col = [c for c in connectiondata.columns if 'lakeno' in c][0] + nlakeconn = connectiondata.groupby(lakeno_col).count().iconn.to_dict() + offset = 0 if external_files else 1 + self.lake_info['nlakeconn'] = [nlakeconn[id - offset] for id in self.lake_info['lak_id']] + + # set up the tab files + if 'stage_area_volume_file' in source_data: + tab_files = setup_lake_tablefiles(self, source_data['stage_area_volume_file']) + + # tabfiles aren't rewritten by flopy on package write + self.cfg['lak']['tab_files'] = tab_files + # kludge to deal with ugliness of lake package external file handling + # (need to give path relative to model_ws, not folder that flopy is working in) + tab_files_argument = [os.path.relpath(f) for f in tab_files] + else: + tab_files = None + # todo: implement lake outlets with SFR + + # perioddata + self.lake_fluxes = setup_lake_fluxes(self) + lakeperioddata = get_lakeperioddata(self.lake_fluxes) + + # set up external files + connectiondata_cols = [lakeno_col, 'iconn', 'k', 'i', 'j', 'claktype', 'bedleak', + 'belev', 'telev', 'connlen', 'connwidth'] + if external_files: + # get the file path (allowing for different external file locations, specified name format, etc.) 
+            filepath = self.setup_external_filepaths(package, 'connectiondata',
+                                                     self.cfg[package]['connectiondata_filename_fmt'])
+            connectiondata[connectiondata_cols].to_csv(filepath[0]['filename'], index=False, sep=' ')
+            # make a copy for the intermediate data folder, for consistency with mf-2005
+            shutil.copy(filepath[0]['filename'], self.cfg['intermediate_data']['output_folder'])
+        else:
+            connectiondata_cols = connectiondata_cols[:2] + ['cellid'] + connectiondata_cols[5:]
+            self.cfg[package]['connectiondata'] = connectiondata[connectiondata_cols].values.tolist()
+
+        # set up input arguments
+        kwargs = self.cfg[package].copy()
+        options = self.cfg[package]['options'].copy()
+        renames = {'budget_fileout': 'budget_filerecord',
+                   'stage_fileout': 'stage_filerecord'}
+        for k, v in renames.items():
+            if k in options:
+                options[v] = options.pop(k)
+        kwargs.update(options)
+        kwargs['time_conversion'] = convert_time_units(self.time_units, 'seconds')
+        kwargs['length_conversion'] = convert_length_units(self.length_units, 'meters')
+        kwargs['nlakes'] = len(self.lake_info)
+        kwargs['noutlets'] = 0  # not implemented
+        # [lakeno, strt, nlakeconn, aux, boundname]
+        packagedata_cols = ['lak_id', 'strt', 'nlakeconn']
+        if kwargs.get('boundnames'):
+            packagedata_cols.append('name')
+        packagedata = self.lake_info[packagedata_cols].copy()
+        packagedata['lak_id'] -= 1  # convert to zero-based
+        kwargs['packagedata'] = packagedata.values.tolist()
+        if tab_files is not None:
+            kwargs['ntables'] = len(tab_files)
+            kwargs['tables'] = [(i, f)  #, 'junk', 'junk')
+                                for i, f in enumerate(tab_files)]
+        kwargs['outlets'] = None  # not implemented
+        #kwargs['outletperioddata'] = None  # not implemented
+        kwargs['perioddata'] = lakeperioddata
+
+        # observations
+        kwargs['observations'] = setup_mf6_lake_obs(kwargs)
+
+        kwargs = get_input_arguments(kwargs, mf6.ModflowGwflak)
+        lak = mf6.ModflowGwflak(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return lak
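+
+# Illustrative sketch: with external_files=False, the column swap above folds
+# (k, i, j) into a single cellid tuple, so each flopy connectiondata row has
+# the form below (all values hypothetical):
+# [lakeno, iconn, cellid, claktype, bedleak, belev, telev, connlen, connwidth]
+_connectiondata_row = [0, 0, (0, 10, 12), 'vertical',
+                       0.001, 0.0, 0.0, 0.0, 50.0]
+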
+ + + +
+[docs] + def setup_chd(self, **kwargs): + """Set up the CHD Package. + """ + return self._setup_basic_stress_package( + 'chd', mf6.ModflowGwfchd, ['head'], **kwargs)
+ + + +
+[docs] + def setup_drn(self, **kwargs): + """Set up the Drain Package. + """ + return self._setup_basic_stress_package( + 'drn', mf6.ModflowGwfdrn, ['elev', 'cond'], **kwargs)
+ + + +
+[docs] + def setup_ghb(self, **kwargs): + """Set up the General Head Boundary Package. + """ + return self._setup_basic_stress_package( + 'ghb', mf6.ModflowGwfghb, ['bhead', 'cond'], **kwargs)
+ + + +
+[docs] + def setup_riv(self, rivdata=None, **kwargs): + """Set up the River Package. + """ + return self._setup_basic_stress_package( + 'riv', mf6.ModflowGwfriv, ['stage', 'cond', 'rbot'], + rivdata=rivdata, **kwargs)
+ + + +
+[docs] + def setup_wel(self, **kwargs): + """Set up the Well Package. + """ + return self._setup_basic_stress_package( + 'wel', mf6.ModflowGwfwel, ['q'], **kwargs)
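+
+# Illustrative usage sketch: the five wrappers above all delegate to
+# _setup_basic_stress_package(), differing only in the flopy class and the
+# required period-data columns. 'shellmound.yml' is a hypothetical
+# configuration file containing chd, ghb and wel blocks.
+from mfsetup import MF6model as _MF6model
+
+_m = _MF6model(cfg='shellmound.yml')
+_m.setup_grid()
+_m.setup_dis()
+_chd = _m.setup_chd()   # requires 'head'
+_ghb = _m.setup_ghb()   # requires 'bhead', 'cond'
+_wel = _m.setup_wel()   # requires 'q'
+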
+ + + +
+[docs] + def setup_obs(self, **kwargs): + """ + Sets up the OBS utility. + """ + package = 'obs' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + + iobs_domain = None + if not kwargs['mfsetup_options']['allow_obs_in_bc_cells']: + # for now, discard any head observations in same (i, j) column of cells + # as a non-well boundary condition + # including lake package lakes and non lake, non well BCs + # (high-K lakes are excluded, since we may want head obs at those locations, + # to serve as pseudo lake stage observations) + iobs_domain = ~((self.isbc == 1) | np.any(self.isbc > 2, axis=0)) + + # munge the observation data + df = setup_head_observations(self, + obs_package=package, + obsname_column='obsname', + iobs_domain=iobs_domain, + **kwargs['source_data'], + **kwargs['mfsetup_options']) + + # reformat to flopy input format + obsdata = df[['obsname', 'obstype', 'id']].to_records(index=False) + filename = self.cfg[package]['mfsetup_options']['filename_fmt'].format(self.name) + obsdata = {filename: obsdata} + + kwargs = self.cfg[package].copy() + kwargs.update(self.cfg[package]['options']) + kwargs['continuous'] = obsdata + kwargs = get_input_arguments(kwargs, mf6.ModflowUtlobs) + obs = mf6.ModflowUtlobs(self, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return obs
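+
+# Illustrative numpy sketch of the iobs_domain test above: observations are
+# excluded from Lake Package cells (isbc == 1) and from any (i, j) column with
+# a non-lake, non-well boundary (isbc > 2), but high-K lake cells (isbc == 2)
+# remain available. Hypothetical 2-layer, 3x3 grid:
+import numpy as _np
+
+_isbc = _np.zeros((2, 3, 3), dtype=int)
+_isbc[0, 1, 1] = 1   # lak package lake
+_isbc[1, 2, 2] = 4   # sfr
+_isbc[0, 0, 0] = 2   # high-K lake
+_iobs_domain = ~((_isbc == 1) | _np.any(_isbc > 2, axis=0))
+assert not _iobs_domain[0, 1, 1]   # lake cell masked
+assert not _iobs_domain[0, 2, 2]   # entire sfr column masked
+assert _iobs_domain[0, 0, 0]       # high-K lake still allowed
+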
+ + +
+[docs] + def setup_oc(self, **kwargs): + """ + Sets up the OC package. + """ + package = 'oc' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + kwargs = self.cfg[package] + kwargs['budget_filerecord'] = self.cfg[package]['budget_fileout_fmt'].format(self.name) + kwargs['head_filerecord'] = self.cfg[package]['head_fileout_fmt'].format(self.name) + + period_input = parse_oc_period_input(kwargs) + kwargs.update(period_input) + + kwargs = get_input_arguments(kwargs, mf6.ModflowGwfoc) + oc = mf6.ModflowGwfoc(self, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return oc
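+
+# Illustrative sketch: parse_oc_period_input() reduces the configuration's
+# output control settings to flopy's period-keyed save records; the end state
+# resembles this direct flopy call (file names and settings hypothetical):
+import flopy.mf6 as _mf6
+
+_sim = _mf6.MFSimulation(sim_name='sketch', sim_ws='.')
+_tdis = _mf6.ModflowTdis(_sim, nper=1, perioddata=[(1.0, 1, 1.0)])
+_gwf = _mf6.ModflowGwf(_sim, modelname='sketch')
+_dis = _mf6.ModflowGwfdis(_gwf)
+_oc = _mf6.ModflowGwfoc(_gwf,
+                        head_filerecord='sketch.hds',
+                        budget_filerecord='sketch.cbc',
+                        saverecord={0: [('HEAD', 'LAST'), ('BUDGET', 'LAST')]})
+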
+ + +
+[docs]
+    def setup_ims(self):
+        """
+        Sets up the IMS package.
+        """
+        package = 'ims'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+        kwargs = flatten(self.cfg[package])
+        # renames to map MODFLOW-6 input names to flopy argument names
+        renames = {'csv_outer_output': 'csv_outer_output_filerecord',
+                   'csv_inner_output': 'csv_inner_output_filerecord'
+                   }
+        for k, v in renames.items():
+            if k in kwargs:
+                kwargs[v] = kwargs[k]
+        kwargs = get_input_arguments(kwargs, mf6.ModflowIms)
+        ims = mf6.ModflowIms(self.simulation, **kwargs)
+        #self.simulation.register_ims_package(ims, [self.name])
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return ims
+ + +
+[docs] + def setup_simulation_mover(self, gwfgwf): + """Set up the MODFLOW-6 water mover package at the simulation level. + Automate set-up of the mover between SFR packages in LGR parent and inset models. + todo: automate set-up of mover between SFR and lakes (within a model). + + Parameters + ---------- + gwfgwf : Flopy :class:`~flopy.mf6.modflow.mfgwfgwf.ModflowGwfgwf` package instance + + Notes + ------ + Other uses of the water mover need to be configured manually using flopy. + """ + package = 'mvr' + print('\nSetting up the simulation water mover package...') + t0 = time.time() + + perioddata_dfs = [] + if self.get_package('sfr') is not None: + if self.inset is not None: + for inset_name, inset in self.inset.items(): + if inset.get_package('sfr'): + inset_perioddata = get_mover_sfr_package_input( + self, inset, gwfgwf.exchangedata.array) + perioddata_dfs.append(inset_perioddata) + # for each SFR reach with a connection + # to a reach in another model + # set the SFR Package downstream connection to 0 + for i, r in inset_perioddata.iterrows(): + rd = self.simulation.get_model(r['mname1']).sfrdata.reach_data + rd.loc[rd['rno'] == r['id1']+1, 'outreach'] = 0 + # fix flopy connectiondata as well + sfr_package = self.simulation.get_model(r['mname1']).sfr + cd = sfr_package.connectiondata.array.tolist() + # there should be no downstream reaches + # (indicated by negative numbers) + cd[r['id1']] = tuple(v for v in cd[r['id1']] if v > 0) + sfr_package.connectiondata = cd + # re-write the shapefile exports with corrected routing + inset.sfrdata.write_shapefiles(f'{inset._shapefiles_path}/{inset_name}') + + self.sfrdata.write_shapefiles(f'{self._shapefiles_path}/{self.name}') + + + if len(perioddata_dfs) > 0: + perioddata = pd.concat(perioddata_dfs) + if len(perioddata) > 0: + kwargs = flatten(self.cfg[package]) + # modelnames (boolean) keyword to indicate that all package names will + # be preceded by the model name for the package. Model names are + # required when the Mover Package is used with a GWF-GWF Exchange. The + # MODELNAME keyword should not be used for a Mover Package that is for + # a single GWF Model. + # this argument will need to be adapted for implementing a mover package within a model + # (between lakes and sfr) + kwargs['modelnames'] = True + kwargs['maxmvr'] = len(perioddata) # assumes that input for period 0 applies to all periods + packages = set(list(zip(perioddata.mname1, perioddata.pname1)) + + list(zip(perioddata.mname2, perioddata.pname2))) + kwargs['maxpackages'] = len(packages) + kwargs['packages'] = list(packages) + kwargs['perioddata'] = {0: perioddata.values.tolist()} # assumes that input for period 0 applies to all periods + kwargs = get_input_arguments(kwargs, mf6.ModflowGwfmvr) + mvr = mf6.ModflowMvr(gwfgwf, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return mvr + else: + print("no packages with mover information\n")
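+
+# Illustrative sketch: with modelnames=True, each mover period-data row follows
+# the MODFLOW-6 MVR layout, connecting an SFR reach in one model to a reach in
+# another (model/package names and reach numbers hypothetical):
+# [mname1, pname1, id1, mname2, pname2, id2, mvrtype, value]
+_mvr_perioddata = {0: [['parent', 'sfr_0', 24, 'inset', 'sfr_0', 0,
+                        'factor', 1.0]]}
+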
+ + +
+[docs] + def write_input(self): + """Write the model input. + """ + # prior to writing output + # remove any BCs in inactive cells + # handle cases of single model or multi-model LGR simulation + # by working with the simulation-level model dictionary + for model_name, model in self.simulation.model_dict.items(): + pckgs = ['chd', 'drn', 'ghb', 'riv', 'wel'] + for pckg in pckgs: + package_instance = getattr(model, pckg.lower(), None) + if package_instance is not None: + external_files = model.cfg[pckg.lower()]['stress_period_data'] + remove_inactive_bcs(package_instance, + external_files=external_files) + if hasattr(model, 'obs'): + # handle case of single obs package, in which case model.obs + # will be a ModflowUtlobs package instance + try: + len(model.obs) + obs_packages = model.obs + except: + obs_packages = [model.obs] + for obs_package_instance in obs_packages: + remove_inactive_obs(obs_package_instance) + + # write the model with flopy + # but skip the sfr package + # by monkey-patching the write method + def skip_write(**kwargs): + pass + if hasattr(model, 'sfr'): + model.sfr.write = skip_write + self.simulation.write_simulation() + + # post-flopy write actions + for model_name, model in self.simulation.model_dict.items(): + # write the sfr package with SFRmaker + if 'SFR' in ' '.join(model.get_package_list()): + options = [] + for k, b in model.cfg['sfr']['options'].items(): + options.append(k) + if 'save_flows' in options: + budget_fileout = '{}.{}'.format(model_name, + model.cfg['sfr']['budget_fileout']) + stage_fileout = '{}.{}'.format(model_name, + model.cfg['sfr']['stage_fileout']) + options.append('budget fileout {}'.format(budget_fileout)) + options.append('stage fileout {}'.format(stage_fileout)) + if len(model.sfrdata.observations) > 0: + options.append('obs6 filein {}.{}'.format(model_name, + model.cfg['sfr']['obs6_filein_fmt']) + ) + model.sfrdata.write_package(idomain=model.idomain, + version='mf6', + options=options, + external_files_path=model.external_path + ) + # add version info to package file headers + files = [model.namefile] + files += [p.filename for p in model.packagelist] + files += [p[0].filename for k, p in model.simulation.package_key_dict.items()] + for f in files: + add_version_to_fileheader(f, model_info=model.header) + + if not model.cfg['mfsetup_options']['keep_original_arrays']: + shutil.rmtree(model.tmpdir) + + # label stress periods in tdis file with comments + self.perioddata.sort_values(by='per', inplace=True) + add_date_comments_to_tdis(self.simulation.tdis.filename, + self.perioddata.start_datetime, + self.perioddata.end_datetime + )
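+
+# Illustrative usage sketch: write_input() is typically the last call in a
+# workflow script, after the model is built ('shellmound.yml' hypothetical):
+from mfsetup import MF6model as _MF6model
+
+_m = _MF6model.setup_from_yaml('shellmound.yml')
+_m.write_input()
+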
+ + + + + @staticmethod + def _parse_model_kwargs(cfg): + + if isinstance(cfg['model']['simulation'], str): + # assume that simulation for model + # is the one simulation specified in configuration + # (regardless of the name specified in model configuration) + cfg['model']['simulation'] = cfg['simulation'] + if isinstance(cfg['model']['simulation'], dict): + # create simulation from simulation block in config dict + kwargs = cfg['simulation'].copy() + kwargs.update(cfg['simulation']['options']) + kwargs = get_input_arguments(kwargs, mf6.MFSimulation) + sim = flopy.mf6.MFSimulation(**kwargs) + cfg['model']['simulation'] = sim + sim_ws = cfg['simulation']['sim_ws'] + # if a simulation has already been created, get the path from the instance + elif isinstance(cfg['model']['simulation'], mf6.MFSimulation): + sim_ws = cfg['model']['simulation'].simulation_data.mfpath._sim_path + else: + raise TypeError('unrecognized configuration input for simulation.') + + # listing file + cfg['model']['list'] = os.path.join(cfg['model']['list_filename_fmt'] + .format(cfg['model']['modelname'])) + + # newton options + if cfg['model']['options'].get('newton', False): + cfg['model']['options']['newtonoptions'] = [''] + if cfg['model']['options'].get('newton_under_relaxation', False): + cfg['model']['options']['newtonoptions'] = ['under_relaxation'] + cfg['model'].update(cfg['model']['options']) + return cfg + + +
+[docs] + @classmethod + def load_from_config(cls, yamlfile, load_only=None): + """Load a model from a configuration file and set of MODFLOW files. + + Parameters + ---------- + yamlfile : pathlike + Modflow setup YAML format configuration file + load_only : list + List of package abbreviations or package names corresponding to + packages that flopy will load. default is None, which loads all + packages. the discretization packages will load regardless of this + setting. subpackages, like time series and observations, will also + load regardless of this setting. + example list: ['ic', 'maw', 'npf', 'oc', 'ims', 'gwf6-gwf6'] + + Returns + ------- + m : mfsetup.MF6model instance + """ + print('\nLoading simulation in {}\n'.format(yamlfile)) + t0 = time.time() + + #cfg = load_cfg(yamlfile, verbose=verbose, default_file=cls.default_file) # 'mf6_defaults.yml') + #cfg = cls._parse_model_kwargs(cfg) + #kwargs = get_input_arguments(cfg['model'], mf6.ModflowGwf, + # exclude='packages') + #model = cls(cfg=cfg, **kwargs) + model = cls(cfg=yamlfile, load=True) + if 'grid' not in model.cfg.keys(): + model.setup_grid() + sim = model.cfg['model']['simulation'] # should be a flopy.mf6.MFSimulation instance + models = [model] + if isinstance(model.inset, dict): + for inset_name, inset in model.inset.items(): + models.append(inset) + + # execute the flopy load code on the pre-defined simulation and model instances + # (so that the end result is a MFsetup.MF6model instance) + # (kludgy) + sim = flopy_mfsimulation_load(sim, models, load_only=load_only) + + # just return the parent model (inset models should be attached through the inset attribute, + # in addition to through the .simulation flopy attribute) + m = sim.get_model(model_name=model.name) + print('finished loading model in {:.2f}s'.format(time.time() - t0)) + return m
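+
+# Illustrative usage sketch for load_from_config, following the docstring
+# above ('shellmound.yml' is hypothetical; the load_only list repeats the
+# docstring's example):
+from mfsetup import MF6model as _MF6model
+
+_m = _MF6model.load_from_config('shellmound.yml',
+                                load_only=['ic', 'maw', 'npf', 'oc', 'ims',
+                                           'gwf6-gwf6'])
+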
\ No newline at end of file
diff --git a/_modules/mfsetup/mfmodel.html b/_modules/mfsetup/mfmodel.html
new file mode 100644
index 00000000..b799b9db
--- /dev/null
+++ b/_modules/mfsetup/mfmodel.html
@@ -0,0 +1,2018 @@
+mfsetup.mfmodel — modflow-setup 0.5.0.post59+g65803fd documentation

Source code for mfsetup.mfmodel

+import os
+import time
+import warnings
+from collections import defaultdict
+from pathlib import Path
+
+import flopy
+import geopandas as gpd
+import numpy as np
+import pandas as pd
+import pyproj
+from packaging import version
+
+fm = flopy.modflow
+mf6 = flopy.mf6
+import gisutils
+import sfrmaker
+from gisutils import get_shapefile_crs, get_values_at_points, project
+from sfrmaker import Lines
+from sfrmaker.utils import assign_layers
+
+from mfsetup.bcs import (
+    get_bc_package_cells,
+    setup_basic_stress_data,
+    setup_flopy_stress_period_data,
+)
+from mfsetup.config import validate_configuration
+from mfsetup.fileio import (
+    check_source_files,
+    load,
+    load_array,
+    load_cfg,
+    save_array,
+    set_cfg_paths_to_absolute,
+    setup_external_filepaths,
+)
+from mfsetup.grid import MFsetupGrid, get_ij, rasterize, setup_structured_grid
+from mfsetup.interpolate import (
+    get_source_dest_model_xys,
+    interp_weights,
+    interpolate,
+    regrid,
+)
+from mfsetup.lakes import make_lakarr2d, setup_lake_fluxes, setup_lake_info
+from mfsetup.mf5to6 import (
+    get_model_length_units,
+    get_model_time_units,
+    get_package_name,
+)
+from mfsetup.model_version import get_versions
+from mfsetup.sourcedata import TransientTabularSourceData, setup_array
+from mfsetup.tdis import (
+    concat_periodata_groups,
+    get_parent_stress_periods,
+    parse_perioddata_groups,
+    setup_perioddata,
+    setup_perioddata_group,
+)
+from mfsetup.tmr import Tmr
+from mfsetup.units import convert_length_units, lenuni_text, lenuni_values
+from mfsetup.utils import flatten, get_input_arguments, get_packages, update
+from mfsetup.wells import setup_wel_data
+
+if version.parse(gisutils.__version__) < version.parse('0.2.2'):
+    warnings.warn('Automatic reprojection functionality requires gis-utils >= 0.2.2'
+                  '\nPlease pip install --upgrade gis-utils')
+if version.parse(sfrmaker.__version__) < version.parse('0.6'):
+    warnings.warn('sfr: sfrmaker_options: add_outlet functionality requires sfrmaker >= 0.6'
+                  '\nPlease pip install --upgrade sfrmaker')
+
+
+
+[docs] +class MFsetupMixin(): + """Mixin class for shared functionality between MF6model and MFnwtModel. + Meant to be inherited by both those classes and not be called directly. + + https://stackoverflow.com/questions/533631/what-is-a-mixin-and-why-are-they-useful + """ + source_path = Path(__file__).parent + """ -1 : well + 0 : no lake + 1 : lak package lake (lakarr > 0) + 2 : high-k lake + 3 : ghb + 4 : sfr""" + # package variable name: number + bc_numbers = {'wel': -1, + 'lak': 1, + 'high-k lake': 2, + 'ghb': 3, + 'sfr': 4, + 'riv': 5 + } + model_type = "mfsetup" + + def __init__(self, parent): + + # property attributes + self._cfg = None + self._nper = None + self._perioddata = None + self._sr = None + self._modelgrid = None + self._bbox = None + self._parent = parent + self._parent_layers = None + self._parent_default_source_data = False + self._parent_mask = None + self._lakarr_2d = None + self._isbc_2d = None + self._lakarr = None + self._isbc = None + self._lake_bathymetry = None + self._high_k_lake_recharge = None + self._nodata_value = -9999 + self._model_ws = None + self._abs_model_ws = None + self._model_version = None # semantic version of model + self._longname = None # long name for model (short name is self.name) + self._header = None # header for files and repr + self.inset = None # dictionary of inset models attached to LGR parent + self._is_lgr = False # flag for lgr inset models + self.lgr = None # holds flopy Lgr utility object + self._lgr_idomain2d = None # array of Lgr inset model locations within parent grid + self.tmr = None # holds TMR class instance for TMR-type perimeter boundaries + self._load = False # whether model is being made or loaded from existing files + self.lake_info = None + self.lake_fluxes = None + + # flopy settings + self._mg_resync = False + + self._features = {} # dictionary for caching shapefile datasets in memory + + # arrays remade during this session + self.updated_arrays = set() + + # cache of interpolation weights to speed up regridding + self._interp_weights = None + + + def __repr__(self): + header = f'{self.header}\n' + txt = '' + if self.parent is not None: + txt += 'Parent model: {}/{}\n'.format(self.parent.model_ws, self.parent.name) + if self._modelgrid is not None: + txt += f'{self._modelgrid.__repr__()}' + txt += 'Packages:' + for pkg in self.get_package_list(): + txt += ' {}'.format(pkg.lower()) + txt += '\n' + txt += f'{self.nper:d} period(s):\n' + if self._perioddata is not None: + cols = ['per', 'start_datetime', 'end_datetime', 'perlen', 'steady', 'nstp'] + txt += self.perioddata[cols].head(3).to_string(index=False) + txt += '\n ...\n' + tail = self.perioddata[cols].tail(1).to_string(index=False) + txt += tail.split('\n')[1] + txt = header + txt + return txt + + def __eq__(self, other): + """Test for equality to another model object.""" + if not isinstance(other, self.__class__): + return False + # kludge: skip obs packages for now + # - obs packages aren't read in with same name under which they were created + # - also SFR_OBS package is handled by SFRmaker instead of Flopy; + # a loaded version of a model might have SFR_OBS, + # where a freshly made version may not (even though SFRmaker will write it) + # + all_packages = set(self.get_package_list()).union(other.get_package_list()) + exceptions = {p for p in all_packages if p.lower().startswith('obs') + or p.lower().endswith('obs')} + other_packages = [s for s in sorted(other.get_package_list()) + if s not in exceptions] + packages = [s for s in 
sorted(self.get_package_list()) + if s not in exceptions] + if other_packages != packages: + return False + if other.modelgrid != self.modelgrid: + return False + if other.nlay != self.nlay: + return False + if not np.array_equal(other.perioddata, self.perioddata): + return False + # TODO: add checks of actual array values and other parameters + for k, v in self.__dict__.items(): + if k in ['cfg', + 'sfrdata', + '_load', + '_packagelist', + '_package_paths', + 'package_key_dict', + 'package_type_dict', + 'package_name_dict', + '_ftype_num_dict']: + continue + elif k not in other.__dict__: + return False + elif type(v) == bool: + if not v == other.__dict__[k]: + return False + elif k == 'cfg': + continue + elif type(v) in [str, int, float, dict, list]: + if v != other.__dict__[k]: + pass + continue + return True + + @property + def nper(self): + if self.perioddata is not None: + return len(self.perioddata) + + @property + def nrow(self): + if self.modelgrid.grid_type == 'structured': + return self.modelgrid.nrow + + @property + def ncol(self): + if self.modelgrid.grid_type == 'structured': + return self.modelgrid.ncol + + @property + def modelgrid(self): + if self._modelgrid is None: + self.setup_grid() + # trap for instance where default (base) modelgrid + # instance is attached to the flopy model + # (because the grid hasn't been set up with) + # self._modelgrid.nlay will error in this case + # because of NotImplementedError in base class + elif self._modelgrid.grid_type is None: + pass + # add layer tops and bottoms and idomain to the model grid + # if they haven't been yet + elif self._modelgrid.nlay is None and 'DIS' in self.get_package_list(): + self._modelgrid._top = self.dis.top.array + self._modelgrid._botm = self.dis.botm.array + if self.version == 'mf6': + self._modelgrid._idomain = self.dis.idomain.array + elif 'bas6' in self.get_package_list(): + self._modelgrid._idomain = self.bas6.ibound.array + #self.setup_grid() + return self._modelgrid + + @property + def bbox(self): + if self._bbox is None and self.modelgrid is not None: + self._bbox = self.modelgrid.bbox + return self._bbox + + #@property + #def perioddata(self): + # """DataFrame summarizing stress period information. +# + # Columns: +# + # start_date_time : pandas datetimes; start date/time of each stress period + # (does not include steady-state periods) + # end_date_time : pandas datetimes; end date/time of each stress period + # (does not include steady-state periods) + # time : float; cumulative MODFLOW time (includes steady-state periods) + # per : zero-based stress period + # perlen : stress period length in model time units + # nstp : number of timesteps in the stress period + # tsmult : timestep multiplier for stress period + # steady : True=steady-state, False=Transient + # oc : MODFLOW-6 output control options + # """ + # if self._perioddata is None: + # perioddata = setup_perioddata(self) + # return self._perioddata + + @property + def parent(self): + return self._parent + + @property + def parent_layers(self): + """Mapping between layers in source model and + layers in destination model. 
+ + Returns + ------- + parent_layers : dict + {inset layer : parent layer} + """ + if self._parent_layers is None: + parent_layers = None + botm_source_data = self.cfg['dis'].get('source_data', {}).get('botm', {}) + nlay = self.modelgrid.nlay + if nlay is None: + nlay = self.cfg['dis']['dimensions']['nlay'] + if self.cfg['parent'].get('inset_layer_mapping') is not None: + parent_layers = self.cfg['parent'].get('inset_layer_mapping') + elif isinstance(botm_source_data, dict) and 'from_parent' in botm_source_data: + parent_layers = botm_source_data.get('from_parent') + elif self.parent is not None and (self.parent.modelgrid.nlay == nlay): + parent_layers = dict(zip(range(self.parent.modelgrid.nlay), + range(nlay))) + else: + #parent_layers = dict(zip(range(self.parent.modelgrid.nlay), range(self.parent.modelgrid.nlay))) + parent_layers = None + self._parent_layers = parent_layers + return self._parent_layers + + @property + def parent_stress_periods(self): + """Mapping between stress periods in source model and + stress periods in destination model. + + Returns + ------- + parent_stress_periods : dict + {inset stress period : parent stress period} + """ + return dict(zip(self.perioddata['per'], self.perioddata['parent_sp'])) + + @property + def package_list(self): + """Definitive list of packages. Get from namefile input first + (as in mf6 input), then look under model input. + """ + packages = self.cfg.get('nam', {}).get('packages', []) + if len(packages) == 0: + packages = self.cfg['model'].get('packages', []) + return [p for p in self._package_setup_order + if p in packages] + + @property + def perimeter_bc_type(self): + """Dictates how perimeter boundaries are set up. + + if 'head'; a constant head package is created + from the parent model starting heads + if 'flux'; a specified flux boundary is created + from parent model cell by cell flow output + """ + perimeter_boundary_type = self.cfg['model'].get('perimeter_boundary_type') + if perimeter_boundary_type is not None: + if 'head' in perimeter_boundary_type: + return 'head' + if 'flux' in perimeter_boundary_type: + return 'flux' + + @property + def model_ws(self): + if self._model_ws is None: + self._model_ws = Path(self._get_model_ws()) + return self._model_ws + + @model_ws.setter + def model_ws(self, model_ws): + self._model_ws = model_ws + self._abs_model_ws = os.path.normpath(os.path.abspath(model_ws)) + + @property + def model_version(self): + """Semantic version of model, using a hacked version of the versioneer. + Version is reported using git tags for the model repository + or a start_version: key specified in the configuration file (default 0). + The start_version or tag is then appended by the remaining information + in a pep440-post style version tag (e.g. 
most recent git commit hash + for the model repository + "dirty" if the model repository has uncommited changes) + + References + ---------- + https://github.com/warner/python-versioneer + https://github.com/warner/python-versioneer/blob/master/details.md + """ + if self._model_version is None: + self._model_version = get_versions(path=self.model_ws, + start_version=str(self.cfg['metadata']['start_version'])) + return self._model_version + + @property + def longname(self): + if self._longname is None: + longname = self.cfg['metadata'].get('longname') + if longname is None: + longname = f'{self.name} model' + self._longname = longname + return self._longname + + @property + def header(self): + if self._header is None: + version_str = self.model_version['version'] + header = f'{self.longname} version {version_str}' + self._header = header + return self._header + + @property + def tmpdir(self): + #abspath = os.path.abspath( + # self.cfg['intermediate_data']['output_folder']) + abspath = self.model_ws / 'original-arrays' + self.cfg['intermediate_data']['output_folder'] = str(abspath) + abspath.mkdir(exist_ok=True) + #if not os.path.isdir(abspath): + # os.makedirs(abspath) + tmpdir = abspath + if self.relative_external_paths: + #tmpdir = os.path.relpath(abspath) + tmpdir = abspath.relative_to(self.model_ws) + #else: + # do we need to normalize with Pathlib?? + # tmpdir = os.path.normpath(abspath) + return tmpdir + + @property + def external_path(self): + abspath = os.path.abspath( + self.cfg.get('model', {}).get('external_path', 'external')) + if not os.path.isdir(abspath): + os.makedirs(abspath) + if self.relative_external_paths: + ext_path = os.path.relpath(abspath) + else: + ext_path = os.path.normpath(abspath) + return ext_path + + @external_path.setter + def external_path(self, x): + pass # bypass any setting in parent class + + @property + def interp_weights(self): + """For a given parent, only calculate interpolation weights + once to speed up re-gridding of arrays to pfl_nwt.""" + if self._interp_weights is None: + parent_xy, inset_xy = get_source_dest_model_xys(self.parent, + self) + self._interp_weights = interp_weights(parent_xy, inset_xy) + return self._interp_weights + + @property + def parent_mask(self): + """Boolean array indicating window in parent model grid (subset of cells) + that encompass the inset model domain, with a surrounding buffer. + Used to speed up interpolation of parent grid values onto inset model grid.""" + if self._parent_mask is None: + x, y = np.squeeze(self.bbox.exterior.coords.xy) + pi, pj = get_ij(self.parent.modelgrid, x, y) + pad = 3 + i0 = np.max([pi.min() - pad, 0]) + i1 = np.min([pi.max() + pad + 1, self.parent.modelgrid.nrow]) + j0 = np.max([pj.min() - pad, 0]) + j1 = np.min([pj.max() + pad + 1, self.parent.modelgrid.ncol]) + mask = np.zeros((self.parent.modelgrid.nrow, self.parent.modelgrid.ncol), dtype=bool) + mask[i0:i1, j0:j1] = True + self._parent_mask = mask + return self._parent_mask + + @property + def nlakes(self): + if self.lakarr is not None: + return int(np.max(self.lakarr)) + else: + return 0 + + @property + def _lakarr2d(self): + """2-D array of areal extent of lakes. Non-zero values + correspond to lak package IDs.""" + if self._lakarr_2d is None: + self._set_lakarr2d() + return self._lakarr_2d + + @property + def lakarr(self): + """3-D array of lake extents in each layer. Non-zero values + correspond to lak package IDs. Extent of lake in + each layer is based on bathymetry and model layer thickness. 
+ """ + if self._lakarr is None: + self.setup_external_filepaths('lak', 'lakarr', + self.cfg['lak']['{}_filename_fmt'.format('lakarr')], + file_numbers=list(range(self.nlay))) + if self.isbc is None: + return None + else: + self._set_lakarr() + return self._lakarr + + @property + def _isbc2d(self): + """2-D array indicating the i, j locations of + boundary conditions. + -1 : well + 0 : no lake + 1 : lak package lake (lakarr > 0) + 2 : high-k lake + 3 : ghb + 4 : sfr + 5 : riv + + see also the .bc_numbers attibute + """ + if self._isbc_2d is None: + self._set_isbc2d() + return self._isbc_2d + + @property + def isbc(self): + """3D array indicating which cells have a lake in each layer. + -1 : well + 0 : no lake + 1 : lak package lake (lakarr > 0) + 2 : high-k lake + 3 : ghb + 4 : sfr + 5 : riv + + see also the .bc_numbers attibute + """ + # DIS package is needed to set up the isbc array + # (to compare lake bottom elevations to layer bottoms) + if self.get_package('dis') is None: + return None + if self._isbc is None: + self._set_isbc() + return self._isbc + + @property + def lake_bathymetry(self): + """Put lake bathymetry setup logic here instead of DIS package. + """ + + if self._lake_bathymetry is None: + self._set_lake_bathymetry() + return self._lake_bathymetry + + @property + def high_k_lake_recharge(self): + """Recharge value to apply to high-K lakes, in model units. + """ + if self._high_k_lake_recharge is None and self.cfg['high_k_lakes']['simulate_high_k_lakes']: + if self.lake_info is None: + self.lake_info = setup_lake_info(self) + if self.lake_info is not None: + self.lake_fluxes = setup_lake_fluxes(self, block='high_k_lakes') + self._high_k_lake_recharge = self.lake_fluxes.groupby('per').mean()['highk_lake_rech'].sort_index() + return self._high_k_lake_recharge + + def load_array(self, filename): + if isinstance(filename, list): + arrays = [] + for f in filename: + arrays.append(load_array(f, + shape=(self.nrow, self.ncol), + nodata=self._nodata_value + ) + ) + return np.array(arrays) + return load_array(filename, shape=(self.nrow, self.ncol)) + +
+[docs]
+    def load_features(self, filename, bbox_filter=None,
+                      id_column=None, include_ids=None,
+                      cache=True):
+        """Load vector and attribute data from a shapefile;
+        cache it to the _features dictionary.
+        """
+        if isinstance(filename, str):
+            features_file = [filename]
+        else:  # allow a list of file paths
+            features_file = filename
+
+        dfs_list = []
+        for f in features_file:
+            if f not in self._features.keys():
+                if os.path.exists(f):
+                    features_crs = get_shapefile_crs(f)
+                    if bbox_filter is None:
+                        if self.bbox is not None:
+                            bbox = self.bbox
+                        elif self.parent.modelgrid is not None:
+                            bbox = self.parent.modelgrid.bbox
+                            model_crs = self.parent.modelgrid.crs
+                            assert model_crs is not None
+
+                        if features_crs != self.modelgrid.crs:
+                            bbox_filter = project(bbox, self.modelgrid.crs, features_crs).bounds
+                        else:
+                            bbox_filter = bbox.bounds
+
+                    # implement automatic reprojection in gis-utils
+                    # maintaining backwards compatibility
+                    df = gpd.read_file(f)
+                    df.to_crs(self.modelgrid.crs, inplace=True)
+                    df.columns = [c.lower() for c in df.columns]
+                    if cache:
+                        print('caching data in {}...'.format(f))
+                        self._features[f] = df
+                else:
+                    print('feature input file {} not found'.format(f))
+                    return
+            else:
+                df = self._features[f]
+            if id_column is not None:
+                id_column = id_column.lower()
+                # convert any floating point dtypes to integer
+                if df[id_column].dtype == float:
+                    df[id_column] = df[id_column].astype('int64')
+                df.index = df[id_column]
+                if include_ids is not None:
+                    df = df.loc[include_ids].copy()
+            dfs_list.append(df)
+        df = pd.concat(dfs_list)
+        if len(df) == 0:
+            warnings.warn('No features loaded from {}!'.format(filename))
+        return df
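+
+# Illustrative usage sketch (shapefile path, id column and feature id are
+# hypothetical); features are cached on first read, so repeated calls with the
+# same file hit the _features dictionary:
+from mfsetup import MF6model as _MF6model
+
+_m = _MF6model(cfg='shellmound.yml')
+_lakes = _m.load_features('shps/lakes.shp', id_column='hydroid',
+                          include_ids=[600059060])
+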
+ + +
+[docs] + def get_boundary_cells(self, exclude_inactive=False): + """Get the i, j locations of cells along the model perimeter. + + Returns + ------- + k, i, j : 1D numpy arrays of ints + zero-based layer, row, column locations of boundary cells + """ + # top row, right side, left side, bottom row + i_top = [0] * self.ncol + j_top = list(range(self.ncol)) + i_left = list(range(1, self.nrow - 1)) + j_left = [0] * (self.nrow - 2) + i_right = i_left + j_right = [self.ncol - 1] * (self.nrow - 2) + i_botm = [self.nrow - 1] * self.ncol + j_botm = j_top + i = i_top + i_left + i_right + i_botm + j = j_top + j_left + j_right + j_botm + + assert len(i) == 2 * self.nrow + 2 * self.ncol - 4 + nlaycells = len(i) + k = np.array(sorted(list(range(self.nlay)) * len(i))) + i = np.array(i * self.nlay) + j = np.array(j * self.nlay) + assert np.sum(k[nlaycells:nlaycells * 2]) == nlaycells + + if exclude_inactive: + if self.version == 'mf6': + active_cells = self.idomain[k, i, j] >= 1 + else: + active_cells = self.ibound[k, i, j] >= 1 + k = k[active_cells].copy() + i = i[active_cells].copy() + j = j[active_cells].copy() + return k, i, j
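+
+# Quick self-contained check of the perimeter bookkeeping above: one layer of
+# a hypothetical 3x4 grid should yield 2*nrow + 2*ncol - 4 = 10 boundary cells.
+_nrow, _ncol = 3, 4
+_i = [0] * _ncol + list(range(1, _nrow - 1)) * 2 + [_nrow - 1] * _ncol
+_j = (list(range(_ncol)) + [0] * (_nrow - 2) + [_ncol - 1] * (_nrow - 2)
+      + list(range(_ncol)))
+assert len(_i) == len(_j) == 2 * _nrow + 2 * _ncol - 4
+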
+ + +
+[docs] + def regrid_from_parent(self, parent_array, + mask=None, + method='linear'): + """Interpolate values in parent array onto + the pfl_nwt model grid, using model grid instances + attached to the parent and pfl_nwt models. + + Parameters + ---------- + parent_array : ndarray + Values from parent model to be interpolated to pfl_nwt grid. + 1 or 2-D numpy array of same sizes as a + layer of the parent model. + mask : ndarray (bool) + 1 or 2-D numpy array of same sizes as a + layer of the parent model. True values + indicate cells to include in interpolation, + False values indicate cells that will be + dropped. + method : str ('linear', 'nearest') + Interpolation method. + """ + if mask is not None: + return regrid(parent_array, self.parent.modelgrid, self.modelgrid, + mask1=mask, + method=method) + if method == 'linear': + #parent_values = parent_array.flatten()[self.parent_mask.flatten()] + parent_values = parent_array[self.parent_mask].flatten() + regridded = interpolate(parent_values, + *self.interp_weights) + elif method == 'nearest': + regridded = regrid(parent_array, self.parent.modelgrid, self.modelgrid, + method='nearest') + regridded = np.reshape(regridded, (self.nrow, self.ncol)) + return regridded
+ + +
+[docs]
+    def setup_external_filepaths(self, package, variable_name,
+                                 filename_format, file_numbers=None):
+        """Set up external file paths for a MODFLOW package variable. Sets paths
+        for intermediate files, which are written from the (processed) source data.
+        Intermediate files are supplied to Flopy as external files for a given package
+        variable. Flopy writes external files to a specified location when the MODFLOW
+        package file is written. This method gets the external file paths that
+        will be written by FloPy, and puts them in the configuration dictionary
+        under their respective variables.
+
+        Parameters
+        ----------
+        package : str
+            Three-letter package abbreviation (e.g. 'DIS' for discretization)
+        variable_name : str
+            FloPy name of variable represented by external files (e.g. 'top' or 'botm')
+        filename_format : str
+            File path to the external file(s). Can be a string representing a single file
+            (e.g. 'top.dat'), or for variables where a file is written for each layer or
+            stress period, a format string that will be formatted with the zero-based layer
+            number (e.g. 'botm{}.dat') for files botm0.dat, botm1.dat, ...
+        file_numbers : list of ints
+            List of numbers for the external files. Usually these represent zero-based
+            layers or stress periods.
+        relative_external_paths : bool
+            If true, external paths will be specified relative to model_ws,
+            otherwise, they will be absolute paths
+
+        Returns
+        -------
+        filepaths : list
+            List of external file paths
+
+        Adds intermediate file paths to model.cfg[<package>]['intermediate_data']
+        Adds external file paths to model.cfg[<package>][<variable_name>]
+        """
+        # for lgr models, add the model name to the external filename
+        # if lgr parent or lgr inset
+        if self.lgr or self._is_lgr:
+            filename_format = '{}_{}'.format(self.name, filename_format)
+        return setup_external_filepaths(self, package, variable_name,
+                                        filename_format, file_numbers=file_numbers,
+                                        relative_external_paths=self.relative_external_paths)
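+
+# Illustrative usage sketch per the docstring above: a layered variable pairs
+# a format string with file_numbers (model instance and file name pattern are
+# hypothetical):
+from mfsetup import MF6model as _MF6model
+
+_m = _MF6model(cfg='shellmound.yml')
+_filepaths = _m.setup_external_filepaths('dis', 'botm', 'botm{}.dat',
+                                         file_numbers=list(range(3)))
+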
+ + + def _get_model_ws(self, cfg=None): + if cfg is None: + cfg = self.cfg + if self.version == 'mf6': + abspath = os.path.abspath(cfg.get('simulation', {}).get('sim_ws', '.')) + else: + abspath = os.path.abspath(cfg.get('model', {}).get('model_ws', '.')) + if not os.path.exists(abspath): + os.makedirs(abspath) + self._abs_model_ws = os.path.normpath(abspath) + os.chdir(abspath) # within a session, modflow-setup operates in the model_ws + if self.relative_external_paths: + model_ws = os.path.relpath(abspath) + else: + model_ws = os.path.normpath(abspath) + return Path(model_ws) + + def _reset_bc_arrays(self): + """Reset the boundary condition property arrays in order. + _lakarr2d (depends on _lakarr_2d + _isbc2d (depends on _lakarr2d) + _lake_bathymetry (depends on _isbc2d) + _isbc (depends on _isbc2d) + _lakarr (depends on _isbc and _lakarr2d) + """ + self._lakarr_2d = None + self._isbc_2d = None # (depends on _lakarr2d) + self._lake_bathymetry = None # (depends on _isbc2d) + self._isbc = None # (depends on _isbc2d) + self._lakarr = None # + #self._set_lakarr2d() # calls self._set_isbc2d(), which calls self._set_lake_bathymetry() + #self._set_isbc() # calls self._set_lakarr() + + def _set_cfg(self, user_specified_cfg): + """Load configuration file; update dictionary. + """ + #self.cfg = defaultdict(dict) + self.cfg = defaultdict(dict, self.cfg) + + if isinstance(user_specified_cfg, str) or \ + isinstance(user_specified_cfg, Path): + raise ValueError("Configuration should have already been loaded") + # convert to an absolute path + #user_specified_cfg = Path(user_specified_cfg).resolve() + #assert user_specified_cfg.exists(), \ + # "config file {} not found".format(user_specified_cfg) + #updates = load(user_specified_cfg) + #updates['filename'] = user_specified_cfg + elif isinstance(user_specified_cfg, dict): + updates = user_specified_cfg.copy() + elif user_specified_cfg is None: + return + else: + raise TypeError("unrecognized input for cfg") + + # if the user specifies a complexity option for IMS or NWT, + # don't import any defaults + ims_cfg = updates.get('ims', {}) + if ims_cfg.get('options', {}).get('complexity'): + # delete the defaults + for default_block in 'nonlinear', 'linear': + if default_block in self.cfg['ims']: + del self.cfg['ims'][default_block] + nwt_cfg = updates.get('nwt', {}) + if nwt_cfg.get('options', 'specified').lower() != 'specified': + keep_args = {'headtol', 'fluxtol', 'maxiterout', + 'thickfact', 'linmeth', 'iprnwt', 'ibotav', + 'Continue', 'use_existing_file'} + self.cfg['nwt'] = {k: v for k, v in self.cfg['nwt'].items() if k in keep_args} + + update(self.cfg, updates) + # make sure empty variables get initialized as dicts + for k, v in self.cfg.items(): + if v is None: + self.cfg[k] = {} + + if 'filename' in self.cfg: + config_file_path = Path(self.cfg['filename']) + if config_file_path.is_absolute(): + self.cfg = set_cfg_paths_to_absolute(self.cfg, config_file_path.parent) + + # mf6 models: set up or load the simulation + if self.version == 'mf6': + kwargs = self.cfg['simulation'].copy() + kwargs.update(self.cfg['simulation']['options']) + if os.path.exists('{}.nam'.format(kwargs['sim_name'])) and self._load: + try: + kwargs = get_input_arguments(kwargs, mf6.MFSimulation.load, warn=False) + self._sim = mf6.MFSimulation.load(**kwargs) + except: + # create simulation + kwargs = get_input_arguments(kwargs, mf6.MFSimulation, warn=False) + self._sim = mf6.MFSimulation(**kwargs) + else: + # create simulation + kwargs = get_input_arguments(kwargs, 
mf6.MFSimulation, warn=False) + self._sim = mf6.MFSimulation(**kwargs) + + # load the parent model (skip if already attached) + if 'namefile' in self.cfg.get('parent', {}).keys(): + self._set_parent() + + output_paths = self.cfg['postprocessing']['output_folders'] + for name, folder_path in output_paths.items(): + if not os.path.exists(folder_path): + os.makedirs(folder_path) + setattr(self, '_{}_path'.format(name), folder_path) + + # absolute path to config file + self._config_path = os.path.split(os.path.abspath(str(self.cfg['filename'])))[0] + + # set package keys to default dicts + for pkg in self._package_setup_order: + self.cfg[pkg] = defaultdict(dict, self.cfg.get(pkg, {})) + + # other variables + self.cfg['external_files'] = {} + + # validate the configuration + validate_configuration(self.cfg) + + def _get_high_k_lakes(self): + """Get the i, j locations of any high-k lakes within the model grid. + """ + lakesdata = None + lakes_shapefile = self.cfg['high_k_lakes'].get('source_data', {}).get('lakes_shapefile') + if lakes_shapefile is not None: + if isinstance(lakes_shapefile, str): + lakes_shapefile = {'filename': lakes_shapefile} + kwargs = get_input_arguments(lakes_shapefile, self.load_features) + if 'include_ids' in kwargs: # load all lakes in shapefile + kwargs.pop('include_ids') + lakesdata = self.load_features(**kwargs) + if lakesdata is not None: + is_high_k_lake = rasterize(lakesdata, self.modelgrid) + return is_high_k_lake > 0 + + def _set_isbc2d(self): + """Set up the _isbc2d array, that indicates the i,j locations + of boundary conditions. + """ + isbc = np.zeros((self.nrow, self.ncol), dtype=int) + + # high-k lakes + if self.cfg['high_k_lakes']['simulate_high_k_lakes']: + is_high_k_lake = self._get_high_k_lakes() + if is_high_k_lake is not None: + isbc[is_high_k_lake] = 2 + + # lake package lakes + isbc[self._lakarr2d > 0] = 1 + + # add other bcs + for packagename, bcnumber in self.bc_numbers.items(): + if 'lak' not in packagename: + package = self.get_package(packagename) + if package is not None: + # handle multiple instances of package + # (in MODFLOW-6) + if isinstance(package, flopy.pakbase.PackageInterface): + packages = [package] + else: + packages = package + for package in packages: + k, i, j = get_bc_package_cells(package) + not_a_lake = np.where(isbc[i, j] != 1) + i = i[not_a_lake] + j = j[not_a_lake] + isbc[i, j] = bcnumber + self._isbc_2d = isbc + self._set_lake_bathymetry() + + def _set_isbc(self): + isbc = np.zeros((self.nlay, self.nrow, self.ncol), dtype=int) + isbc[0] = self._isbc2d + + # in mf6 models, the model top is set to the lake botm + # and any layers originally above the lake botm + # are also reset to the lake botm (given zero-thickness) + lake_botm_elevations = self.dis.top.array + below = self.dis.botm.array >= lake_botm_elevations + if not self.version == 'mf6': + lake_botm_elevations = self.dis.top.array - self.lake_bathymetry + layer_tops = np.concatenate([[self.dis.top.array], self.dis.botm.array[:-1]]) + # lakes must be at least 10% into a layer to get simulated in that layer + below = layer_tops > lake_botm_elevations + 0.1 + for i, ibelow in enumerate(below[1:]): + if np.any(ibelow): + isbc[i+1][ibelow] = self._isbc2d[ibelow] + # add other bcs + for packagename, bcnumber in self.bc_numbers.items(): + if 'lak' not in packagename: + package = self.get_package(packagename) + if package is not None: + # handle multiple instances of package + # (in MODFLOW-6) + if isinstance(package, flopy.pakbase.PackageInterface): + packages = [package] 
+                    else:
+                        packages = package
+                    for package in packages:
+                        k, i, j = get_bc_package_cells(package)
+                        not_a_lake = np.where(isbc[k, i, j] != 1)
+                        k = k[not_a_lake]
+                        i = i[not_a_lake]
+                        j = j[not_a_lake]
+                        isbc[k, i, j] = bcnumber
+        self._isbc = isbc
+        self._set_lakarr()
+
+    def _set_lakarr2d(self):
+        lakarr2d = np.zeros((self.nrow, self.ncol), dtype=int)
+        if 'lak' in self.package_list:
+            lakes_shapefile = self.cfg['lak'].get('source_data', {}).get('lakes_shapefile', {}).copy()
+            if lakes_shapefile:
+                kwargs = get_input_arguments(lakes_shapefile, self.load_features)
+                lakesdata = self.load_features(**kwargs)  # caches loaded features
+                lakes_shapefile['lakesdata'] = lakesdata
+                lakes_shapefile.pop('filename')
+                kwargs = get_input_arguments(lakes_shapefile, make_lakarr2d)
+                lakarr2d = make_lakarr2d(self.modelgrid, **kwargs)
+        self._lakarr_2d = lakarr2d
+        self._set_isbc2d()
+
+    def _set_lakarr(self):
+        self.setup_external_filepaths('lak', 'lakarr',
+                                      self.cfg['lak']['{}_filename_fmt'.format('lakarr')],
+                                      file_numbers=list(range(self.nlay)))
+        # assign lakarr values from 3D isbc array
+        lakarr = np.zeros((self.nlay, self.nrow, self.ncol), dtype=int)
+        for k in range(self.nlay):
+            lakarr[k][self.isbc[k] == 1] = self._lakarr2d[self.isbc[k] == 1]
+        for k, ilakarr in enumerate(lakarr):
+            save_array(self.cfg['intermediate_data']['lakarr'][0][k], ilakarr, fmt='%d')
+        self._lakarr = lakarr
+
+    def _set_lake_bathymetry(self):
+        bathymetry_file = self.cfg.get('lak', {}).get('source_data', {}).get('bathymetry_raster')
+        default_lake_depth = self.cfg['model'].get('default_lake_depth', 2)
+        if bathymetry_file is not None:
+            lmult = 1.0
+            if isinstance(bathymetry_file, dict):
+                lmult = convert_length_units(bathymetry_file.get('length_units', 0),
+                                             self.length_units)
+                bathymetry_file = bathymetry_file['filename']
+
+            # sample pre-made bathymetry at grid points
+            bathy = get_values_at_points(bathymetry_file,
+                                         x=self.modelgrid.xcellcenters.ravel(),
+                                         y=self.modelgrid.ycellcenters.ravel(),
+                                         points_crs=self.modelgrid.crs,
+                                         out_of_bounds_errors='coerce')
+            bathy = np.reshape(bathy, (self.nrow, self.ncol)) * lmult
+            bathy[(bathy < 0) | np.isnan(bathy)] = 0
+
+            # fill bathymetry grid in remaining lake cells with default lake depth
+            # also ensure that all non-lake cells have bathy=0
+            fill = (bathy == 0) & (self._isbc2d > 0) & (self._isbc2d < 3)
+            bathy[fill] = default_lake_depth
+            bathy[(self._isbc2d < 1) | (self._isbc2d > 2)] = 0
+        else:
+            bathy = np.zeros((self.nrow, self.ncol))
+        self._lake_bathymetry = bathy
+
+    def _set_parent_modelgrid(self, mg_kwargs=None):
+        """Reset the parent model grid from keyword arguments
+        or existing modelgrid, and DIS package.
+ """ + + if mg_kwargs is not None: + kwargs = mg_kwargs.copy() + else: + kwargs = {'xoff': self.parent.modelgrid.xoffset, + 'yoff': self.parent.modelgrid.yoffset, + 'angrot': self.parent.modelgrid.angrot, + 'crs': self.parent.modelgrid.crs, + 'epsg': self.parent.modelgrid.epsg, + #'proj4': self.parent.modelgrid.proj4, + } + parent_units = get_model_length_units(self.parent) + if 'lenuni' in self.cfg['parent']: + parent_units = lenuni_text[self.cfg['parent']['lenuni']] + elif 'length_units' in self.cfg['parent']: + parent_units = self.cfg['parent']['length_units'] + + if self.version == 'mf6': + self.parent.dis.length_units = parent_units + else: + self.parent.dis.lenuni = lenuni_values[parent_units] + + # make sure crs is populated, then get CRS units for the grid + from gisutils import get_authority_crs + if kwargs.get('crs') is not None: + kwargs['crs'] = get_authority_crs(kwargs['crs']) + elif kwargs.get('epsg') is not None: + kwargs['crs'] = get_authority_crs(kwargs['epsg']) + # no parent CRS info, assume the parent model is in the same CRS + elif self.cfg['setup_grid'].get('crs') is not None: + kwargs['crs'] = get_authority_crs(self.cfg['setup_grid']['crs']) + # no parent CRS info, assume the parent model is in the same CRS + elif self.cfg['setup_grid'].get('epsg') is not None: + kwargs['crs'] = get_authority_crs(self.cfg['setup_grid']['epsg']) + else: + raise ValueError('No coordinate reference input in setup_grid: or parent: ' + 'SpatialReference: blocks of configuration file. Supply ' + 'at least coordinate reference information to ' + 'setup_grid: crs: item.') + + parent_grid_units = kwargs['crs'].axis_info[0].unit_name + + if 'foot' in parent_grid_units.lower() or 'feet' in parent_grid_units.lower(): + parent_grid_units = 'feet' + elif 'metre' in parent_grid_units.lower() or 'meter' in parent_grid_units.lower(): + parent_grid_units = 'meters' + else: + raise ValueError(f'unrecognized CRS units {parent_grid_units}: CRS must be projected in feet or meters') + + # assume that model grid is in a projected CRS of meters + lmult = convert_length_units(parent_units, parent_grid_units) + kwargs['delr'] = self.parent.dis.delr.array * lmult + kwargs['delc'] = self.parent.dis.delc.array * lmult + kwargs['top'] = self.parent.dis.top.array + kwargs['botm'] = self.parent.dis.botm.array + if hasattr(self.parent.dis, 'laycbd'): + kwargs['laycbd'] = self.parent.dis.laycbd.array + # renames for parent modelgrid + renames = {'rotation': 'angrot'} + for k, v in renames.items(): + if k in kwargs: + kwargs[v] = kwargs.pop(k) + + kwargs = get_input_arguments(kwargs, MFsetupGrid, warn=False) + self._parent._mg_resync = False + self._parent._modelgrid = MFsetupGrid(**kwargs) + + def _set_parent(self): + """Set attributes related to a parent or source model + if one is specified. 
+ """ + + # if it's an LGR model (where parent is also being created) + # set up the parent DIS package + if self._is_lgr and isinstance(self.parent, MFsetupMixin): + if 'DIS' not in self.parent.get_package_list(): + dis = self.parent.setup_dis() + + kwargs = self.cfg['parent'].copy() + if kwargs is not None: + kwargs = kwargs.copy() + + # load MF6 or MF2005 parent + if self.parent is None: + print('loading parent model {}...'.format(os.path.join(kwargs['model_ws'], + kwargs['namefile']))) + t0 = time.time() + + # load only specified packages that the parent model has + packages_in_parent_namefile = get_packages(os.path.join(kwargs['model_ws'], + kwargs['namefile'])) + # load at least these packages + # so that there is complete information on model time and space dis + default_parent_packages = {'dis', 'tdis'} + specified_packages = set(self.cfg['model'].get('packages', set())) + specified_packages.update(default_parent_packages) + + # get equivalent packages to load if parent is another MODFLOW version; + # then flatten (a package may have more than one equivalent) + parent_packages = [get_package_name(p, kwargs['version']) + for p in specified_packages] + parent_packages = {item for subset in parent_packages for item in subset} + if kwargs['version'] == 'mf6': + parent_packages.add('sto') + load_only = list(set(packages_in_parent_namefile).intersection(parent_packages)) + if 'load_only' not in kwargs: + kwargs['load_only'] = load_only + if 'skip_load' in kwargs: + kwargs['skip_load'] = [s.lower() for s in kwargs['skip_load']] + kwargs['load_only'] = [pckg for pckg in kwargs['load_only'] + if pckg not in kwargs['skip_load']] + + if self.cfg['parent']['version'] == 'mf6': + sim_kwargs = kwargs.copy() + if 'sim_name' not in kwargs: + sim_kwargs['sim_name'] = kwargs.get('simulation', 'mfsim') + if 'sim_ws' not in kwargs: + sim_kwargs['sim_ws'] = sim_kwargs.get('model_ws', '.') + sim_kwargs = get_input_arguments(sim_kwargs, mf6.MFSimulation.load, warn=False) + parent_sim = mf6.MFSimulation.load(**sim_kwargs) + modelname, _ = os.path.splitext(kwargs['namefile']) + self._parent = parent_sim.get_model(modelname) + else: + kwargs['f'] = kwargs.pop('namefile') + kwargs = get_input_arguments(kwargs, fm.Modflow.load, warn=False) + self._parent = fm.Modflow.load(**kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + + # set parent model units in config if not entered + if 'length_units' not in self.cfg['parent']: + self.cfg['parent']['length_units'] = get_model_length_units(self.parent) + if 'time_units' not in self.cfg['parent']: + self.cfg['parent']['time_units'] = get_model_time_units(self.parent) + + # set the parent model grid from mg_kwargs if not None + # otherwise, convert parent model grid to MFsetupGrid + mg_kwargs = self.cfg['parent'].get('SpatialReference', + self.cfg['parent'].get('modelgrid', None)) + # check configuration file input + # for consistency with parent model DIS package input + # (configuration file input may be different if an existing model + # doesn't have a valid spatial reference in the DIS package) + mf6_names = { + 'rotation': 'angrot', + 'xoff': 'xorigin', + 'yoff': 'yorigin' + } + if mg_kwargs is not None and (self.parent.version == 'mf6') and not\ + mg_kwargs.get('override_dis_package_input', False): + for variable, mf6_name in mf6_names.items(): + if (variable in mg_kwargs) and\ + ('DIS' in self.parent.get_package_list()): + dis_value = getattr(self.parent.dis, mf6_name).array + if not np.allclose(mg_kwargs[variable], dis_value): + raise 
ValueError(
+                            "Configuration file entry parent: SpatialReference: "
+                            f"{variable}: {mg_kwargs[variable]} does not match {mf6_name}={dis_value} "
+                            "specified in the parent model DIS package file. Either make "
+                            "these consistent or specify override_dis_package_input: True "
+                            "in the parent: SpatialReference: configuration block.")
+        self._set_parent_modelgrid(mg_kwargs)
+
+        # set up the parent model perioddata table
+        if getattr(self.parent, 'perioddata', None) is None:
+            kwargs = self.cfg['parent'].copy()
+            kwargs['model_time_units'] = self.cfg['parent']['time_units']
+            if self.parent.version == 'mf6':
+                for var in ['perlen', 'nstp', 'tsmult']:
+                    kwargs[var] = getattr(self.parent.modeltime, var)
+                kwargs['steady'] = self.parent.modeltime.steady_state
+                kwargs['nper'] = self.parent.simulation.tdis.nper.array
+            else:
+                for var in ['perlen', 'steady', 'nstp', 'tsmult']:
+                    kwargs[var] = self.parent.dis.__dict__[var].array
+                kwargs['nper'] = self.parent.dis.nper
+            kwargs = get_input_arguments(kwargs, setup_perioddata_group)
+            kwargs['oc_saverecord'] = {}
+            if hasattr(self.parent, '_perioddata'):
+                self._parent._perioddata = setup_perioddata_group(**kwargs)
+            else:
+                self._parent.perioddata = setup_perioddata_group(**kwargs)
+
+        # default_source_data, where omitted configuration input is
+        # obtained from the parent model by default
+        # Set default_source_data to True by default if it isn't specified
+        if self.cfg['parent'].get('default_source_data') is None:
+            self.cfg['parent']['default_source_data'] = True
+        if self.cfg['parent'].get('default_source_data'):
+            self._parent_default_source_data = True
+
+        # set number of layers from parent if not specified
+        if self.version == 'mf6' and self.cfg['dis']['dimensions'].get('nlay') is None:
+            self.cfg['dis']['dimensions']['nlay'] = getattr(self.parent.dis.nlay, 'array',
+                                                            self.parent.dis.nlay)
+        elif self.cfg['dis'].get('nlay') is None:
+            self.cfg['dis']['nlay'] = getattr(self.parent.dis.nlay, 'array',
+                                              self.parent.dis.nlay)
+
+        # set start date/time from parent if not specified
+        if not self._is_lgr:
+            parent_start_date_time = self.cfg.get('parent', {}).get('start_date_time')
+            if self.version == 'mf6':
+                if self.cfg['tdis']['options'].get('start_date_time', '1970-01-01') == '1970-01-01' \
+                        and parent_start_date_time is not None:
+                    self.cfg['tdis']['options']['start_date_time'] = self.cfg['parent']['start_date_time']
+            else:
+                if self.cfg['dis'].get('start_date_time', '1970-01-01') == '1970-01-01' \
+                        and parent_start_date_time is not None:
+                    self.cfg['dis']['start_date_time'] = self.cfg['parent']['start_date_time']
+
+        # only get time discretization information from the parent if
+        # no perioddata groups are specified, and nper is not specified under dimensions
+        tdis_package = 'tdis' if self.version == 'mf6' else 'dis'
+        # check if any item within the perioddata block is a dictionary
+        # (groups are subblocks within the perioddata block)
+        has_perioddata_groups = any([isinstance(k, dict)
+                                     for k in self.cfg[tdis_package]['perioddata'].values()])
+        # get the number of inset model periods
+        if not has_perioddata_groups:
+            if self.version == 'mf6':
+                if self.cfg['tdis']['dimensions'].get('nper') is None:
+                    self.cfg['tdis']['dimensions']['nper'] = self.parent.modeltime.nper
+                nper = self.cfg['tdis']['dimensions']['nper']
+            else:
+                if self.cfg['dis']['nper'] is None:
+                    self.cfg['dis']['nper'] = self.parent.dis.nper
+                nper = self.cfg['dis']['nper']
+            # get the periods that are shared with the parent model
+            parent_periods = get_parent_stress_periods(self.parent, nper=nper,
+                                                       parent_stress_periods=self.cfg['parent'][
+                                                           'copy_stress_periods'])
+            # get time discretization info. from the parent model
+            if self.version == 'mf6':
+                for var in ['perlen', 'nstp', 'tsmult']:
+                    if self.cfg['tdis']['perioddata'].get(var) is None:
+                        self.cfg['tdis']['perioddata'][var] = getattr(self.parent.modeltime, var)[
+                            parent_periods]
+                # 'steady' can be specified under the sto package (as in MODFLOW-6)
+                # or within perioddata group blocks,
+                # but not in the tdis perioddata block itself
+                if self.cfg['sto'].get('steady') is None:
+                    self.cfg['sto']['steady'] = self.parent.modeltime.steady_state[parent_periods]
+            else:
+                for var in ['perlen', 'nstp', 'tsmult', 'steady']:
+                    if self.cfg['dis'].get(var) is None:
+                        self.cfg['dis'][var] = self.parent.dis.__dict__[var].array[parent_periods]
+
+    def _setup_array(self, package, var, vmin=-1e30, vmax=1e30,
+                     source_model=None, source_package=None,
+                     **kwargs):
+        return setup_array(self, package, var, vmin=vmin, vmax=vmax,
+                           source_model=source_model, source_package=source_package,
+                           **kwargs)
+
+    def _setup_basic_stress_package(self, package, flopy_package_class,
+                                    variable_columns, rivdata=None,
+                                    **kwargs):
+        print(f'\nSetting up {package.upper()} package...')
+        t0 = time.time()
+
+        # possible future support to
+        # handle filenames of multiple packages;
+        # leave this out for now because of additional complexity
+        # from multiple sets of external files
+        #existing_packages = getattr(self, package, None)
+        #filename = f"{self.name}.{package}"
+        #if existing_packages is not None:
+        #    try:
+        #        len(existing_packages)
+        #        suffix = len(existing_packages) + 1
+        #    except:
+        #        suffix = 1
+        #    filename = f"{self.name}-{suffix}.{package}"
+
+        # perimeter boundary (CHD or WEL)
+        dfs = []
+        if 'perimeter_boundary' in kwargs:
+            perimeter_cfg = kwargs['perimeter_boundary']
+            if package == 'chd':
+                perimeter_cfg['boundary_type'] = 'head'
+                boundname = 'perimeter-heads'
+            elif package == 'wel':
+                perimeter_cfg['boundary_type'] = 'flux'
+                boundname = 'perimeter-fluxes'
+            else:
+                raise ValueError(f'Unsupported package for perimeter_boundary: {package.upper()}')
+            if 'inset_parent_period_mapping' not in perimeter_cfg:
+                perimeter_cfg['inset_parent_period_mapping'] = self.parent_stress_periods
+            if 'parent_start_date_time' not in perimeter_cfg:
+                perimeter_cfg['parent_start_date_time'] = self.parent.perioddata['start_datetime'][0]
+            self.tmr = Tmr(self.parent, self, **perimeter_cfg)
+            df = self.tmr.get_inset_boundary_values()
+
+            # add boundname to allow the boundary flux to be tracked as an observation
+            df['boundname'] = boundname
+            dfs.append(df)
+
+        # RIV package converted from SFR input
+        elif rivdata is not None:
+            if 'name' in rivdata.stress_period_data.columns:
+                rivdata.stress_period_data['boundname'] = rivdata.stress_period_data['name']
+            dfs.append(rivdata.stress_period_data)
+
+        # set up package from user input
+        df_sd = None
+        if 'source_data' in kwargs:
+            if package == 'wel':
+                dropped_wells_file =\
+                    kwargs.get('output_files', {})\
+                          .get('dropped_wells_file', '{}_dropped_wells.csv').format(self.name)
+                df_sd = setup_wel_data(self,
+                                       source_data=kwargs['source_data'],
+                                       dropped_wells_file=dropped_wells_file)
+            else:
+                df_sd = setup_basic_stress_data(self, **kwargs['source_data'],
+                                                **kwargs.get('mfsetup_options', dict()))
+            if df_sd is not None and len(df_sd) > 0:
+                dfs.append(df_sd)
+        # set up package from parent model
+        elif self.cfg['parent'].get('default_source_data') and\
+                hasattr(self.parent, package):
+            if package == 'wel':
+                dropped_wells_file =\
+                    kwargs['output_files']['dropped_wells_file'].format(self.name)
+                df_sd = setup_wel_data(self,
+                                       dropped_wells_file=dropped_wells_file)
+            else:
+                print(f'Skipping setup of {package.upper()} Package from parent model -- not implemented.')
+            if df_sd is not None and len(df_sd) > 0:
+                dfs.append(df_sd)
+        if len(dfs) == 0:
+            print(f"{package.upper()} package:\n"
+                  "No input specified, or package configuration file input "
+                  "not understood. See the Configuration "
+                  "File Gallery in the online docs for example input. "
+                  "Note that direct input to basic stress period packages "
+                  "is currently not supported.")
+            return
+        else:
+            df = pd.concat(dfs, axis=0)
+
+        # option to write stress_period_data to external files
+        if self.version == 'mf6':
+            external_files = self.cfg[package]['mfsetup_options'].get('external_files', True)
+        else:
+            # external list or tabular type files are not supported for MODFLOW-NWT;
+            # adding support for this may require changes to Flopy
+            external_files = False
+        external_filename_fmt = self.cfg[package]['mfsetup_options']['external_filename_fmt']
+        spd = setup_flopy_stress_period_data(self, package, df,
+                                             flopy_package_class=flopy_package_class,
+                                             variable_columns=variable_columns,
+                                             external_files=external_files,
+                                             external_filename_fmt=external_filename_fmt)
+
+        kwargs = self.cfg[package]
+        if isinstance(self.cfg[package]['options'], dict):
+            kwargs.update(self.cfg[package]['options'])
+        #kwargs['filename'] = filename
+        # add observations for perimeter BCs
+        # and any user input with a boundname column
+        obslist = []
+        obsfile = f'{self.name}.{package}.obs.output.csv'
+        if 'perimeter_boundary' in kwargs:
+            perimeter_btype = f"perimeter-{perimeter_cfg['boundary_type']}"
+            obslist.append((perimeter_btype, package, perimeter_btype))
+        if 'boundname' in df.columns:
+            unique_boundnames = df['boundname'].unique()
+            for bname in unique_boundnames:
+                obslist.append((bname, package, bname))
+        if len(obslist) > 0:
+            kwargs['observations'] = {obsfile: obslist}
+        kwargs = get_input_arguments(kwargs, flopy_package_class)
+        if not external_files:
+            kwargs['stress_period_data'] = spd
+        pckg = flopy_package_class(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return pckg
+
+
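+    # A minimal sketch (hypothetical file and boundary names) of the
+    # observations dictionary assembled above: each perimeter boundary and each
+    # unique boundname becomes an (obsname, obstype, boundname) tuple, keyed by
+    # the observation output file that is passed to the flopy package constructor:
+    #
+    #     observations = {
+    #         'model.chd.obs.output.csv': [
+    #             ('perimeter-heads', 'chd', 'perimeter-heads'),
+    #             ('supply-wells', 'chd', 'supply-wells'),
+    #         ]
+    #     }
+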
+[docs] + def setup_grid(self): + """Set up the attached modelgrid instance from configuration input + """ + if self.cfg['grid']: + cfg = self.cfg['grid'] + cfg['rotation'] = self.cfg['grid']['angrot'] + else: + cfg = self.cfg['setup_grid'] #.copy() + # update grid configuration with any information supplied to dis package + # (so that settings specified for DIS package have priority) + self._update_grid_configuration_with_dis() + if not cfg['structured']: + raise NotImplementedError('Support for unstructured grids') + features_shapefile = cfg.get('source_data', {}).get('features_shapefile') + if features_shapefile is not None and 'features_shapefile' not in cfg: + features_shapefile['features_shapefile'] = features_shapefile['filename'] + del features_shapefile['filename'] + cfg.update(features_shapefile) + cfg['parent_model'] = self.parent + cfg['model_length_units'] = self.length_units + output_files = self.cfg['setup_grid']['output_files'] + cfg['grid_file'] = output_files['grid_file'].format(self.name) + bbox_shapefile_name = Path(output_files['bbox_shapefile'].format(self.name)).name + cfg['bbox_shapefile'] = Path(self._shapefiles_path) / bbox_shapefile_name + if 'DIS' in self.get_package_list(): + cfg['top'] = self.dis.top.array + cfg['botm'] = self.dis.botm.array + + # if model is an LGR inset with the default rotation=0 + # and the LGR parent is rotated + # assume that the inset model rotation should == parent + # (different LGR parent/inset rotations not allowed) + if self._is_lgr and (cfg['rotation'] == 0) and\ + self.parent.modelgrid.angrot != 0: + cfg['rotation'] = self.parent.modelgrid.angrot + + if os.path.exists(cfg['grid_file']) and self._load: + print('Loading model grid definition from {}'.format(cfg['grid_file'])) + cfg.update(load(cfg['grid_file'])) + self.cfg['grid'] = cfg + kwargs = get_input_arguments(self.cfg['grid'], MFsetupGrid) + self._modelgrid = MFsetupGrid(**kwargs) + self._modelgrid.cfg = self.cfg['grid'] + else: + kwargs = get_input_arguments(cfg, setup_structured_grid) + if not set(kwargs.keys()).intersection({ + 'features_shapefile', 'features', 'xoff', 'yoff', 'xul', 'yul'}): + raise ValueError( + "No features_shapefile or xoff, yoff supplied " + "to setup_grid: block. Check configuration file input, " + "including for accidental indentation of the setup_grid: block.") + self._modelgrid = setup_structured_grid(**kwargs) + self.cfg['grid'] = self._modelgrid.cfg + # update DIS package configuration + if self.version == 'mf6': + self.cfg['dis']['dimensions']['nrow'] = self.cfg['grid']['nrow'] + self.cfg['dis']['dimensions']['ncol'] = self.cfg['grid']['ncol'] + else: + self.cfg['dis']['nrow'] = self.cfg['grid']['nrow'] + self.cfg['dis']['ncol'] = self.cfg['grid']['ncol'] + + self._reset_bc_arrays() + + # set up local grid refinement + if 'lgr' in self.cfg['setup_grid'].keys(): + if self.version != 'mf6': + raise TypeError('LGR only supported for MODFLOW-6 models.') + if not self.lgr: + self.lgr = True + for key, cfg in self.cfg['setup_grid']['lgr'].items(): + existing_inset_models = set() + if isinstance(self.inset, dict): + existing_inset_models = {k for k, v in self.inset.items()} + if key not in existing_inset_models: + self.create_lgr_models()
+ + +
+[docs] + def load_grid(self, gridfile=None): + """Load model grid information from a json or yml file.""" + if gridfile is None: + if os.path.exists(self.cfg['setup_grid']['grid_file']): + gridfile = self.cfg['setup_grid']['grid_file'] + print('Loading model grid information from {}'.format(gridfile)) + self.cfg['grid'] = load(gridfile)
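+    # A usage sketch (hypothetical path; not part of the original source):
+    # reload a grid definition written by a previous setup_grid() call.
+    # >>> m.load_grid()  # defaults to cfg['setup_grid']['grid_file']
+    # >>> m.load_grid('postproc/shps/model_grid.json')  # or an explicit path
+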
+
+
+    def setup_sfr(self, **kwargs):
+        package = 'sfr'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+
+        # input
+        flowlines = self.cfg['sfr'].get('source_data', {}).get('flowlines')
+        if flowlines is not None:
+            if 'nhdplus_paths' in flowlines.keys():
+                nhdplus_paths = flowlines['nhdplus_paths']
+                # iterate over a copy, so that missing files can be removed
+                # from the original list without skipping items
+                for f in list(nhdplus_paths):
+                    if not os.path.exists(f):
+                        print('SFR setup: missing input file: {}'.format(f))
+                        nhdplus_paths.remove(f)
+                if len(nhdplus_paths) == 0:
+                    return
+
+                # create an sfrmaker.lines instance
+                bbox_filter = project(self.bbox, self.modelgrid.crs, 'epsg:4269').bounds
+                lines = Lines.from_nhdplus_v2(NHDPlus_paths=nhdplus_paths,
+                                              bbox_filter=bbox_filter)
+            else:
+                for key in ['filename', 'filenames']:
+                    if key in flowlines:
+                        kwargs = flowlines.copy()
+                        kwargs['shapefile'] = kwargs.pop(key)
+                        check_source_files(kwargs['shapefile'])
+                        if 'epsg' not in kwargs:
+                            try:
+                                from gisutils import get_shapefile_crs
+                                shapefile_crs = get_shapefile_crs(kwargs['shapefile'])
+                            except Exception as e:
+                                print(e)
+                                msg = ('Need gis-utils >= 0.2 to get the crs'
+                                       ' for shapefile: {}\nPlease pip install '
+                                       '--upgrade gis-utils'.format(kwargs['shapefile']))
+                                print(msg)
+                        else:
+                            shapefile_crs = pyproj.crs.CRS.from_epsg(kwargs['epsg'])
+                        authority = shapefile_crs.to_authority()
+                        if authority is not None:
+                            shapefile_crs = pyproj.CRS.from_user_input(shapefile_crs.to_authority())
+
+                        bbox_filter = self.bbox.bounds
+                        if shapefile_crs != self.modelgrid.crs:
+                            bbox_filter = project(self.bbox, self.modelgrid.crs, shapefile_crs).bounds
+                        kwargs['bbox_filter'] = bbox_filter
+                        # create an sfrmaker.lines instance
+                        kwargs = get_input_arguments(kwargs, Lines.from_shapefile)
+                        lines = Lines.from_shapefile(**kwargs)
+                        break
+                else:
+                    return
+
+        # output
+        output_path = self.cfg['sfr'].get('output_path')
+        if output_path is not None:
+            if not os.path.isdir(output_path):
+                os.makedirs(output_path)
+        else:
+            output_path = self.cfg['postprocessing']['output_folders']['shapefiles']
+            self.cfg['sfr']['output_path'] = output_path
+
+        # create the isfr array (where SFR cells will be populated)
+        if self.version == 'mf6':
+            active_cells = np.sum(self.idomain >= 1, axis=0) > 0
+            # For models with LGR, set the LGR area to isfr=0
+            # to prevent SFR from being generated within the LGR area;
+            # needed for LGR models that only have refinement
+            # in some layers (in other words, active parent model cells
+            # below the LGR inset)
+            if self.lgr:
+                active_cells[self._lgr_idomain2d == 0] = 0
+        else:
+            active_cells = np.sum(self.ibound >= 1, axis=0) > 0
+            #active_cells = self.ibound.sum(axis=0) > 0
+        # only include active cells that don't have another boundary condition
+        # (besides the wel package)
+        isfr = active_cells & (self._isbc2d <= 0)
+
+        # kludge to get sfrmaker to work with modelgrid
+        self.modelgrid.model_length_units = self.length_units
+
+        # create an sfrmaker.sfrdata instance from the lines instance
+        to_sfr_kwargs = self.cfg['sfr'].copy()
+        if not self.cfg['sfr'].get('sfrmaker_options'):
+            self.cfg['sfr']['sfrmaker_options'] = {}
+        to_sfr_kwargs.update(self.cfg['sfr']['sfrmaker_options'])
+        #to_sfr_kwargs = get_input_arguments(to_sfr_kwargs, Lines.to_sfr)
+        sfr = lines.to_sfr(grid=self.modelgrid,
+                           isfr=isfr,
+                           model=self,
+                           **to_sfr_kwargs)
+        if self.cfg['sfr'].get('set_streambed_top_elevations_from_dem'):
+            warnings.warn('sfr: set_streambed_top_elevations_from_dem option is now under sfr: sfrmaker_options',
+                          DeprecationWarning)
+            self.cfg['sfr']['sfrmaker_options']['set_streambed_top_elevations_from_dem'] = True
+        if self.cfg['sfr']['sfrmaker_options'].get('set_streambed_top_elevations_from_dem'):
+            dem_kwargs = self.cfg['sfr']['sfrmaker_options'].get('set_streambed_top_elevations_from_dem')
+            if not isinstance(dem_kwargs, dict):
+                dem_kwargs = {}
+            error_msg = (
+                "If set_streambed_top_elevations_from_dem=True, "
+                "need a dem block in source_data for the SFR package. "
+                "Otherwise set_streambed_top_elevations_from_dem should be "
+                "a block with arguments to "
+                "sfrmaker.SFRData.set_streambed_top_elevations_from_dem")
+            assert 'dem' in self.cfg['sfr'].get('source_data', {}), error_msg
+            dem_kwargs.update(self.cfg['sfr']['source_data']['dem'])
+            sfr.set_streambed_top_elevations_from_dem(**dem_kwargs)
+        else:
+            sfr.reach_data['strtop'] = sfr.interpolate_to_reaches('elevup', 'elevdn')
+
+        # assign layers to the sfr reaches
+        botm = self.dis.botm.array.copy()
+        if self.version == 'mf6':
+            idomain = self.dis.idomain.array
+        else:
+            idomain = self.bas6.ibound.array
+        layers, new_botm = assign_layers(sfr.reach_data,
+                                         botm_array=botm,
+                                         idomain=idomain)
+        sfr.reach_data['k'] = layers
+        if new_botm is not None:
+            # run through setup_array so that DIS input remains open/close
+            self._setup_array('dis', 'botm',
+                              data={i: arr for i, arr in enumerate(new_botm)},
+                              datatype='array3d', write_fmt='%.2f', dtype=int)
+            # reset the bottom array in flopy (and in memory)
+            # is this necessary?
+            self.dis.botm = new_botm
+            # set the bottom array to external files
+            if self.version == 'mf6':
+                self.dis.botm = self.cfg['dis']['griddata']['botm']
+            else:
+                self.dis.botm = self.cfg['dis']['botm']
+            print('\nModel cell bottom elevations adjusted after assigning '
+                  'SFR reaches to layers\n(to accommodate SFR reach bottoms '
+                  'below the previous model bottom)\n')
+
+        # option to convert reaches to the River Package
+        if self.cfg['sfr'].get('to_riv'):
+            warnings.warn('sfr: to_riv option is now under sfr: sfrmaker_options',
+                          DeprecationWarning)
+            self.cfg['sfr']['sfrmaker_options']['to_riv'] = self.cfg['sfr'].get('to_riv')
+        if self.cfg['sfr'].get('sfrmaker_options', {}).get('to_riv'):
+            rivdata = sfr.to_riv(line_ids=self.cfg['sfr']['sfrmaker_options']['to_riv'],
+                                 drop_in_sfr=True)
+            # set up the RIV package from the SFRmaker-derived rivdata
+            # and any user input
+            # (one setup call instead of two separate packages,
+            # to avoid having two sets of external files)
+            self.setup_riv(rivdata, **self.cfg['riv'], **self.cfg['riv']['mfsetup_options'])
+            rivdata_filename = self.cfg['riv']['output_files']['rivdata_file'].format(self.name)
+            rivdata.write_table(os.path.join(self._tables_path, rivdata_filename))
+            rivdata.write_shapefiles('{}/{}'.format(self._shapefiles_path, self.name))
+
+        # optional routing input
+        # (for a complete representation of a larger or more detailed
+        # stream network that may be culled in the SFR package)
+        sd = self.cfg['sfr'].get('source_data', {})
+        routing_input_key = [k for k in sd.keys() if 'routing' in k]
+        routing_input = None
+        if len(routing_input_key) > 0:
+            routing_input = sd.get(routing_input_key[0])
+            routing = pd.read_csv(routing_input['filename'])
+            routing = dict(zip(routing[routing_input['id_column']],
+                               routing[routing_input['routing_column']]))
+            # set any values (downstream lines) not in keys (upstream lines)
+            # to 0 (outlet condition)
+            routing = {k: v if v in routing.keys() else 0
+                       for k, v in routing.items()}
+        # use _original_routing attached to the Lines instance as the default
+        else:
+            routing = lines._original_routing
+
+        # add inflows
+        inflows_input = self.cfg['sfr'].get('source_data', {}).get('inflows')
+        if inflows_input is not None:
+            # resample inflows to model stress periods
+            inflows_input['id_column'] = inflows_input['line_id_column']
+            sd = TransientTabularSourceData.from_config(inflows_input,
+                                                        dest_model=self)
+            inflows_by_stress_period = sd.get_data()
+
+            missing_sites = set(inflows_by_stress_period[inflows_input['id_column']]). \
+                difference(routing.keys())
+            if any(missing_sites):
+                # cast IDs to strings for compatibility with SFRmaker > 0.11.3
+                # for now, assume IDs are numeric; future updates to SFRmaker
+                # may eventually allow for alphanumeric IDs
+                inflows_by_stress_period[inflows_input['id_column']] =\
+                    inflows_by_stress_period[inflows_input['id_column']].astype(int).astype(str)
+
+            # check if all inflow sites are included in the sfr network
+            missing_sites = set(inflows_by_stress_period[inflows_input['id_column']]). \
+                difference(routing.keys())
+            # if any sites are still missing, raise an error
+            if any(missing_sites):
+                raise KeyError(('inflow sites {} are not within the model sfr network. '
+                                'Please supply an inflows_routing source_data block '
+                                '(see shellmound example config file)'.format(missing_sites)))
+
+            # add resampled inflows to the SFR package
+            inflows_input['data'] = inflows_by_stress_period
+            inflows_input['flowline_routing'] = routing
+            if self.version == 'mf6':
+                inflows_input['variable'] = 'inflow'
+                method = sfr.add_to_perioddata
+            else:
+                inflows_input['variable'] = 'flow'
+                method = sfr.add_to_segment_data
+            kwargs = get_input_arguments(inflows_input.copy(), method)
+            method(**kwargs)
+
+        # add runoff
+        runoff_input = self.cfg['sfr'].get('source_data', {}).get('runoff')
+        if runoff_input is not None:
+            # resample runoff to model stress periods
+            runoff_input['id_column'] = runoff_input['line_id_column']
+            sd = TransientTabularSourceData.from_config(runoff_input,
+                                                        dest_model=self)
+            runoff_by_stress_period = sd.get_data()
+
+            # check if all runoff sites are included in the sfr network
+            missing_sites = set(runoff_by_stress_period[runoff_input['id_column']]). \
+                difference(routing.keys())
+            if any(missing_sites):
+                warnings.warn(('runoff sites {} are not within the model sfr network. '
+                               'Please supply an inflows_routing source_data block '
+                               '(see shellmound example config file)'.format(missing_sites)),
+                              UserWarning)
+
+            # add resampled runoff to the SFR package
+            runoff_input['data'] = runoff_by_stress_period
+            runoff_input['flowline_routing'] = routing
+            runoff_input['variable'] = 'runoff'
+            runoff_input['distribute_flows_to_reaches'] = True
+            if self.version == 'mf6':
+                method = sfr.add_to_perioddata
+            else:
+                method = sfr.add_to_segment_data
+            kwargs = get_input_arguments(runoff_input.copy(), method)
+            method(**kwargs)
+
+        # add observations
+        observations_input = self.cfg['sfr'].get('source_data', {}).get('observations')
+        if self.version != 'mf6':
+            sfr.gage_starting_unit_number = self.cfg['gag']['starting_unit_number']
+        if observations_input is not None:
+            key = 'filename' if 'filename' in observations_input else 'filenames'
+            observations_input['data'] = observations_input[key]
+            kwargs = get_input_arguments(observations_input.copy(), sfr.add_observations)
+            obsdata = sfr.add_observations(**kwargs)
+            # resample observations to model stress periods; write to table
+
+        # write reach and segment data tables
+        sfr.write_tables('{}/{}'.format(self._tables_path, self.name))
+
+        # export shapefiles of lines, routing, cell polygons, inlets and outlets
+        sfr.write_shapefiles('{}/{}'.format(self._shapefiles_path, self.name))
+
+        # create the flopy SFR package instance
+        sfr.create_modflow_sfr2(model=self, istcb2=223)
+        if self.version != 'mf6':
+            sfr_package = sfr.modflow_sfr2
+        else:
+            # pass options kwargs through to the mf6 constructor
+            kwargs = flatten({k: v for k, v in self.cfg[package].items() if k not in
+                              {'source_data', 'flowlines', 'inflows', 'observations',
+                               'inflows_routing', 'dem', 'sfrmaker_options'}})
+            kwargs = get_input_arguments(kwargs, mf6.ModflowGwfsfr)
+            sfr_package = sfr.create_mf6sfr(model=self, **kwargs)
+            # monkey patch the ModflowGwfsfr instance to behave like ModflowSfr2
+            sfr_package.reach_data = sfr.modflow_sfr2.reach_data
+
+        # attach the sfrmaker.sfrdata instance as an attribute
+        self.sfrdata = sfr
+
+        # reset dependent arrays
+        self._reset_bc_arrays()
+        if self.version == 'mf6':
+            self._set_idomain()
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return sfr_package
+
+    def setup_solver(self):
+        if self.version == 'mf6':
+            solver_package = 'ims'
+        else:
+            solver_package = 'nwt'
+        assert solver_package not in self.package_list
+        setup_method_name = 'setup_{}'.format(solver_package)
+        package_setup = getattr(self, setup_method_name, None)
+        package_setup()
+
+    def setup_packages(self, reset_existing=True):
+        package_list = self.package_list
+        if not reset_existing:
+            package_list = [p for p in package_list if p.upper() not in self.get_package_list()]
+        for pkg in package_list:
+            setup_method_name = f'setup_{pkg}'
+            package_setup = getattr(self, setup_method_name, None)
+            if package_setup is None:
+                print('{} package not supported for MODFLOW version={}'.format(pkg.upper(), self.version))
+                continue
+            if not callable(package_setup):
+                package_setup = getattr(MFsetupMixin, 'setup_{}'.format(pkg.strip('6')))
+            # avoid multiple package instances for now, except for obs
+            if self.version != 'mf6' or pkg == 'obs' or not hasattr(self, pkg):
+                package_setup(**self.cfg[pkg], **self.cfg[pkg]['mfsetup_options'])
+
+
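+    # setup_packages resolves each entry in package_list to a setup_<package>
+    # method by naming convention; a minimal sketch of the dispatch for one
+    # hypothetical package name:
+    # >>> pkg = 'sfr'
+    # >>> package_setup = getattr(m, f'setup_{pkg}')
+    # >>> package_setup(**m.cfg[pkg], **m.cfg[pkg]['mfsetup_options'])
+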
+[docs]
+    @classmethod
+    def load_cfg(cls, yamlfile, verbose=False):
+        """Loads a configuration file, with default settings
+        specific to the MFnwtModel or MF6model class.
+
+        Parameters
+        ----------
+        yamlfile : str (filepath)
+            Configuration file in YAML format with model setup information.
+        verbose : bool
+
+        Returns
+        -------
+        cfg : dict (configuration dictionary)
+        """
+        return load_cfg(yamlfile, verbose=verbose, default_file=cls.default_file)
+ + +
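+    # A usage sketch (hypothetical file name): load and inspect the assembled
+    # configuration dictionary without building any model objects.
+    # >>> cfg = MF6model.load_cfg('shellmound.yml')
+    # >>> cfg['model']['modelname']
+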
+[docs]
+    @classmethod
+    def setup_from_yaml(cls, yamlfile, verbose=False):
+        """Make a model from scratch, using information in a yamlfile.
+
+        Parameters
+        ----------
+        yamlfile : str (filepath)
+            Configuration file in YAML format with model setup information.
+        verbose : bool
+
+        Returns
+        -------
+        m : model instance
+        """
+        cfg = cls.load_cfg(yamlfile, verbose=verbose)
+        return cls.setup_from_cfg(cfg, verbose=verbose)
+ + +
+[docs] + @classmethod + def setup_from_cfg(cls, cfg, verbose=False): + """Make a model from scratch, using information in a configuration dictionary. + + Parameters + ---------- + cfg : dict + Configuration dictionary, as produced by the model.load_cfg method. + verbose : bool + + Returns + ------- + m : model instance + """ + cfg_filename = Path(cfg.get('filename', '')).name + msg = f"\nSetting up {cfg['model']['modelname']} model" + if len(cfg_filename) > 0: + msg += f" from configuration in {cfg_filename}" + print(msg) + t0 = time.time() + + m = cls(cfg=cfg) #, **kwargs) + + # make a grid if one isn't already specified + if 'grid' not in m.cfg.keys(): + m.setup_grid() + + # establish time discretization, including TDIS setup for MODFLOW-6 + m.setup_tdis() + + # set up the solver + m.setup_solver() + + # set up all of the packages specified in the config file + m.setup_packages(reset_existing=False) + + # LGR inset model(s) + if m.inset is not None: + for k, v in m.inset.items(): + if v._is_lgr: + v.setup_packages() + m.setup_lgr_exchanges() + + print('finished setting up model in {:.2f}s'.format(time.time() - t0)) + print('\n{}'.format(m)) + return m
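+    # The two classmethods above chain together (setup_from_yaml loads the
+    # configuration and passes it to setup_from_cfg), so a full build is a
+    # one-liner, or two steps if the dictionary needs editing first.
+    # A sketch with hypothetical file and package names:
+    # >>> m = MF6model.setup_from_yaml('config.yaml')
+    # or:
+    # >>> cfg = MF6model.load_cfg('config.yaml')
+    # >>> cfg['model']['packages'].append('ghb')
+    # >>> m = MF6model.setup_from_cfg(cfg)
+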
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/mfsetup/mfnwtmodel.html b/_modules/mfsetup/mfnwtmodel.html new file mode 100644 index 00000000..a6ada977 --- /dev/null +++ b/_modules/mfsetup/mfnwtmodel.html @@ -0,0 +1,1060 @@ + + + + + + + + mfsetup.mfnwtmodel — modflow-setup 0.5.0.post59+g65803fd documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for mfsetup.mfnwtmodel

+import os
+import shutil
+import time
+from pathlib import Path
+
+import flopy
+import numpy as np
+import pandas as pd
+
+fm = flopy.modflow
+from flopy.modflow import Modflow
+
+from mfsetup.bcs import remove_inactive_bcs
+from mfsetup.discretization import (
+    deactivate_idomain_above,
+    find_remove_isolated_cells,
+    make_ibound,
+)
+from mfsetup.fileio import (
+    add_version_to_fileheader,
+    flopy_mf2005_load,
+    load,
+    load_cfg,
+    save_array,
+)
+from mfsetup.ic import setup_strt
+from mfsetup.lakes import (
+    make_bdlknc2d,
+    make_bdlknc_zones,
+    setup_lake_fluxes,
+    setup_lake_info,
+    setup_lake_tablefiles,
+)
+from mfsetup.mfmodel import MFsetupMixin
+from mfsetup.obs import setup_head_observations
+from mfsetup.oc import parse_oc_period_input
+from mfsetup.tdis import (
+    get_parent_stress_periods,
+    setup_perioddata,
+    setup_perioddata_group,
+)
+from mfsetup.units import convert_length_units, itmuni_text, lenuni_text
+from mfsetup.utils import get_input_arguments, get_packages
+
+
+
+[docs]
+class MFnwtModel(MFsetupMixin, Modflow):
+    """Class representing a MODFLOW-NWT model"""
+    default_file = 'mfnwt_defaults.yml'
+
+    def __init__(self, parent=None, cfg=None,
+                 modelname='model', exe_name='mfnwt',
+                 version='mfnwt', model_ws='.',
+                 external_path='external/', **kwargs):
+        defaults = {'parent': parent,
+                    'modelname': modelname,
+                    'exe_name': exe_name,
+                    'version': version,
+                    'model_ws': model_ws,
+                    'external_path': external_path,
+                    }
+        # load configuration, if supplied
+        if cfg is not None:
+            if not isinstance(cfg, dict):
+                cfg = self.load_cfg(cfg)
+            cfg = self._parse_model_kwargs(cfg)
+            defaults.update(cfg['model'])
+            kwargs = {k: v for k, v in kwargs.items() if k not in defaults}
+        # otherwise, pass arguments on to the flopy constructor
+        args = get_input_arguments(defaults, Modflow,
+                                   exclude='packages')
+        Modflow.__init__(self, **args, **kwargs)
+        MFsetupMixin.__init__(self, parent=parent)
+
+        # default configuration
+        self._package_setup_order = ['dis', 'bas6', 'upw', 'rch', 'oc',
+                                     'chd', 'ghb', 'lak', 'sfr', 'riv', 'wel', 'mnw2',
+                                     'gag', 'hyd']
+        # set up the model configuration dictionary,
+        # starting with the class defaults
+        self.cfg = load(self.source_path / self.default_file)
+        self.relative_external_paths = self.cfg.get('model', {}).get('relative_external_paths', True)
+        # set the model workspace and change the working directory to there
+        self.model_ws = self._get_model_ws(cfg=cfg)
+        # update defaults with user-specified config. (loaded above);
+        # set up and validate the model configuration dictionary
+        self._set_cfg(cfg)
+
+        # set the list file path
+        self.lst.file_name = [self.cfg['model']['list_filename_fmt'].format(self.name)]
+
+        # the "drop thin cells" option is not available for MODFLOW-2005 models
+        self._drop_thin_cells = False
+
+        # property arrays
+        self._ibound = None
+
+        # delete the temporary 'original-files' folder
+        # if it already exists, to avoid side effects from stale files
+        shutil.rmtree(self.tmpdir, ignore_errors=True)
+
+    def __repr__(self):
+        return MFsetupMixin.__repr__(self)
+
+    @property
+    def nlay(self):
+        return self.cfg['dis'].get('nlay', 1)
+
+    @property
+    def length_units(self):
+        return lenuni_text[self.cfg['dis']['lenuni']]
+
+    @property
+    def time_units(self):
+        return itmuni_text[self.cfg['dis']['itmuni']]
+
+    @property
+    def perioddata(self):
+        """DataFrame summarizing stress period information.
+
+        Columns:
+
+        ============== =========================================
+        start_datetime Start date of each model stress period
+        end_datetime   End date of each model stress period
+        time           MODFLOW elapsed time, in days
+        per            Model stress period number
+        perlen         Stress period length (days)
+        nstp           Number of timesteps in stress period
+        tsmult         Timestep multiplier
+        steady         Steady-state or transient
+        oc             Output control setting for MODFLOW
+        parent_sp      Corresponding parent model stress period
+        ============== =========================================
+
+        TODO: the code here might still need to be adapted to
+        parallel the code in MF6model.perioddata, to work with
+        parent models that are already loaded but have no configuration.
+ """ + if self._perioddata is None: + default_start_datetime = self.cfg['dis'].get('start_date_time', '1970-01-01') + tdis_perioddata_config = self.cfg['dis'] + nper = self.cfg['dis'].get('nper') + steady = self.cfg['dis'].get('steady') + parent_stress_periods=None + if self.parent is not None: + parent_stress_periods = self.cfg['parent'].get('copy_stress_periods') + perioddata = setup_perioddata( + self, + tdis_perioddata_config=tdis_perioddata_config, + default_start_datetime=default_start_datetime, + nper=nper, steady=steady, time_units=self.time_units, + parent_model=self.parent, + parent_stress_periods=parent_stress_periods, + ) + self._perioddata = perioddata + # reset nper property so that it will reference perioddata table + self._nper = None + self._perioddata.to_csv(f'{self._tables_path}/stress_period_data.csv', index=False) + # update the model configuration + if 'parent_sp' in perioddata.columns: + self.cfg['parent']['copy_stress_periods'] = perioddata['parent_sp'].tolist() + + return self._perioddata + + @property + def ipakcb(self): + """By default write everything to one cell budget file.""" + return self.cfg['upw'].get('ipakcb', 53) + + @property + def ibound(self): + """3D array indicating which cells will be included in the simulation. + Made a property so that it can be easily updated when any packages + it depends on change. + """ + if self._ibound is None and 'BAS6' in self.get_package_list(): + self._set_ibound() + return self._ibound + + def _set_ibound(self): + """Remake the idomain array from the source data, + no data values in the top and bottom arrays, and + so that cells above SFR reaches are inactive.""" + ibound_from_layer_elevations = make_ibound(self.dis.top.array, + self.dis.botm.array, + nodata=self._nodata_value, + minimum_layer_thickness=self.cfg['dis'].get( + 'minimum_layer_thickness', 1), + #drop_thin_cells=self._drop_thin_cells, + tol=1e-4) + + # include cells that are active in the existing idomain array + # and cells inactivated on the basis of layer elevations + ibound = (self.bas6.ibound.array > 0) & (ibound_from_layer_elevations >= 1) + ibound = ibound.astype(int) + + # remove cells that conincide with lakes + ibound[self.isbc == 1] = 0. 
+ + # remove cells that are above stream cells + if self.get_package('sfr') is not None: + ibound = deactivate_idomain_above(ibound, self.sfr.reach_data) + # remove cells that are above ghb cells + if self.get_package('ghb') is not None: + ibound = deactivate_idomain_above(ibound, self.ghb.stress_period_data[0]) + + # inactivate any isolated cells that could cause problems with the solution + ibound = find_remove_isolated_cells(ibound, minimum_cluster_size=20) + + self._ibound = ibound + # re-write the input files + self._setup_array('bas6', 'ibound', resample_method='nearest', + data={i: arr for i, arr in enumerate(ibound)}, + datatype='array3d', write_fmt='%d', dtype=int) + self.bas6.ibound = self.cfg['bas6']['ibound'] + + def _set_parent(self): + """Set attributes related to a parent or source model + if one is specified.""" + + if self.cfg['parent'].get('version') == 'mf6': + raise NotImplementedError("MODFLOW-6 parent models") + + kwargs = self.cfg['parent'].copy() + if kwargs is not None: + kwargs = kwargs.copy() + kwargs['f'] = kwargs.pop('namefile') + # load only specified packages that the parent model has + packages_in_parent_namefile = get_packages(os.path.join(kwargs['model_ws'], + kwargs['f'])) + load_only = list(set(packages_in_parent_namefile).intersection( + set(self.cfg['model'].get('packages', set())))) + if 'load_only' not in kwargs: + kwargs['load_only'] = load_only + if 'skip_load' in kwargs: + kwargs['skip_load'] = [s.lower() for s in kwargs['skip_load']] + kwargs['load_only'] = [pckg for pckg in kwargs['load_only'] + if pckg not in kwargs['skip_load']] + kwargs = get_input_arguments(kwargs, fm.Modflow.load, warn=False) + + print('loading parent model {}...'.format(os.path.join(kwargs['model_ws'], + kwargs['f']))) + t0 = time.time() + self._parent = fm.Modflow.load(**kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + + # parent model units + if 'length_units' not in self.cfg['parent']: + self.cfg['parent']['length_units'] = lenuni_text[self.parent.dis.lenuni] + if 'time_units' not in self.cfg['parent']: + self.cfg['parent']['time_units'] = itmuni_text[self.parent.dis.itmuni] + + # set the parent model grid from mg_kwargs if not None + # otherwise, convert parent model grid to MFsetupGrid + mg_kwargs = self.cfg['parent'].get('SpatialReference', + self.cfg['parent'].get('modelgrid', None)) + self._set_parent_modelgrid(mg_kwargs) + + # parent model perioddata + if not hasattr(self.parent, 'perioddata'): + kwargs = {} + kwargs['start_date_time'] = self.cfg['parent'].get('start_date_time', + self.cfg['model'].get('start_date_time', + '1970-01-01')) + kwargs['nper'] = self.parent.nper + kwargs['model_time_units'] = self.cfg['parent']['time_units'] + for var in ['perlen', 'steady', 'nstp', 'tsmult']: + kwargs[var] = self.parent.dis.__dict__[var].array + kwargs = get_input_arguments(kwargs, setup_perioddata_group) + kwargs['oc_saverecord'] = {} + self._parent.perioddata = setup_perioddata_group(**kwargs) + + # default_source_data, where omitted configuration input is + # obtained from parent model by default + # Set default_source_data to True by default if it isn't specified + if self.cfg['parent'].get('default_source_data') is None: + self.cfg['parent']['default_source_data'] = True + if self.cfg['parent'].get('default_source_data'): + self._parent_default_source_data = True + if self.cfg['dis'].get('nlay') is None: + self.cfg['dis']['nlay'] = self.parent.dis.nlay + parent_start_date_time = self.cfg.get('parent', {}).get('start_date_time') + if 
self.cfg['dis'].get('start_date_time', '1970-01-01') == '1970-01-01' and parent_start_date_time is not None: + self.cfg['dis']['start_date_time'] = self.cfg['parent']['start_date_time'] + if self.cfg['dis'].get('nper') is None: + self.cfg['dis']['nper'] = self.parent.dis.nper + parent_periods = get_parent_stress_periods(self.parent, nper=self.cfg['dis']['nper'], + parent_stress_periods=self.cfg['parent']['copy_stress_periods']) + for var in ['perlen', 'nstp', 'tsmult', 'steady']: + if self.cfg['dis'].get(var) is None: + self.cfg['dis'][var] = self.parent.dis.__dict__[var].array[parent_periods] + + def _update_grid_configuration_with_dis(self): + """Update grid configuration with any information supplied to dis package + (so that settings specified for DIS package have priority). This method + is called by MFsetupMixin.setup_grid. + """ + for param in ['nrow', 'ncol', 'delr', 'delc']: + if param in self.cfg['dis']: + self.cfg['setup_grid'][param] = self.cfg['dis'][param] + + def setup_dis(self, **kwargs): + """""" + package = 'dis' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + + # resample the top from the DEM + if self.cfg['dis']['remake_top']: + self._setup_array(package, 'top', datatype='array2d', + resample_method='linear', + write_fmt='%.2f') + + # make the botm array + self._setup_array(package, 'botm', datatype='array3d', + resample_method='linear', + write_fmt='%.2f') + + # put together keyword arguments for dis package + kwargs = self.cfg['grid'].copy() # nrow, ncol, delr, delc + kwargs.update(self.cfg['dis']) # nper, nlay, etc. + kwargs = get_input_arguments(kwargs, fm.ModflowDis) + # we need flopy to read the intermediate files + # (it will write the files in cfg) + lmult = convert_length_units('meters', self.length_units) + kwargs.update({'top': self.cfg['intermediate_data']['top'][0], + 'botm': self.cfg['intermediate_data']['botm'], + 'nper': self.nper, + 'delc': self.modelgrid.delc * lmult, + 'delr': self.modelgrid.delr * lmult + }) + for arg in ['perlen', 'nstp', 'tsmult', 'steady']: + kwargs[arg] = self.perioddata[arg].values + + dis = fm.ModflowDis(model=self, **kwargs) + self._perioddata = None # reset perioddata + #if not isinstance(self._modelgrid, MFsetupGrid): + # self._modelgrid = None # override DIS package grid setup + self.setup_grid() # reset the model grid + self._reset_bc_arrays() + #self._isbc = None # reset BC property arrays + print("finished in {:.2f}s\n".format(time.time() - t0)) + return dis + +
+[docs] + def setup_tdis(self, **kwargs): + """Calls the _set_perioddata, to establish time discretization. Only purpose + is to conform to same syntax as mf6 for MFsetupMixin.setup_from_yaml() + """ + self.perioddata
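+    # setup_tdis only touches the perioddata property, which builds and caches
+    # the stress period table. A sketch of typical access, using column names
+    # from the perioddata docstring above:
+    # >>> m.setup_tdis()
+    # >>> m.perioddata[['per', 'perlen', 'nstp', 'steady']]
+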
+ + + def setup_bas6(self, **kwargs): + """""" + package = 'bas6' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + + kwargs = self.cfg[package] + kwargs['source_data_config'] = kwargs['source_data'] + kwargs['filename_fmt'] = kwargs['strt_filename_fmt'] + kwargs['write_fmt'] = kwargs['strt_write_fmt'] + + # make the starting heads array + strt = setup_strt(self, package, **kwargs) + + # initial ibound input for creating a bas6 package instance + self._setup_array(package, 'ibound', datatype='array3d', write_fmt='%d', + resample_method='nearest', + dtype=int) + + kwargs = get_input_arguments(self.cfg['bas6'], fm.ModflowBas) + kwargs['strt'] = strt + bas = fm.ModflowBas(model=self, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + self._set_ibound() + return bas + + def setup_oc(self, **kwargs): + + package = 'oc' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + #stress_period_data = {} + #for i, r in self.perioddata.iterrows(): + # stress_period_data[(r.per, r.nstp -1)] = r.oc + + # use stress_period_data if supplied + # (instead of period_input defaults) + if 'stress_period_data' in self.cfg['oc']: + del self.cfg['oc']['period_options'] + kwargs = self.cfg['oc'] + period_input = parse_oc_period_input(kwargs, nstp=self.perioddata.nstp, + output_fmt='mfnwt') + kwargs.update(period_input) + kwargs = get_input_arguments(kwargs, fm.ModflowOc) + oc = fm.ModflowOc(model=self, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return oc + + def setup_rch(self, **kwargs): + package = 'rch' + print('\nSetting up {} package...'.format(package.upper())) + t0 = time.time() + + # make the rech array + self._setup_array(package, 'rech', datatype='transient2d', + resample_method='linear', + write_fmt='%.6e', + write_nodata=0.) + + # create flopy package instance + kwargs = self.cfg['rch'] + kwargs['ipakcb'] = self.ipakcb + kwargs = get_input_arguments(kwargs, fm.ModflowRch) + rch = fm.ModflowRch(model=self, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return rch + +
+[docs]
+    def setup_upw(self, **kwargs):
+        """
+        """
+        package = 'upw'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+        hiKlakes_value = float(self.cfg['parent'].get('hiKlakes_value', 1e4))
+
+        # use hk and vka from the config file if they were included;
+        # otherwise they are obtained from the parent model arrays.
+        # hard-coded defaults for sy and ss are used if the model
+        # has no transient stress periods
+        hk = self.cfg['upw'].get('hk')
+        vka = self.cfg['upw'].get('vka')
+        default_sy = 0.1
+        default_ss = 1e-6
+
+        # Determine which hk, vka to use;
+        # load the parent upw if it's needed and not loaded
+        source_package = package
+        if None in [hk, vka] and \
+                'UPW' not in self.parent.get_package_list() and \
+                'LPF' not in self.parent.get_package_list():
+            for ext, pckgcls in {'upw': fm.ModflowUpw,
+                                 'lpf': fm.ModflowLpf,
+                                 }.items():
+                pckgfile = '{}/{}.{}'.format(self.parent.model_ws, self.parent.name, ext)
+                if os.path.exists(pckgfile):
+                    upw = pckgcls.load(pckgfile, self.parent)
+                    source_package = ext
+                    break
+
+        self._setup_array(package, 'hk', vmin=0, vmax=hiKlakes_value, resample_method='linear',
+                          source_package=source_package, datatype='array3d', write_fmt='%.6e')
+        self._setup_array(package, 'vka', vmin=0, vmax=hiKlakes_value, resample_method='linear',
+                          source_package=source_package, datatype='array3d', write_fmt='%.6e')
+        if np.any(~self.dis.steady.array):
+            self._setup_array(package, 'sy', vmin=0, vmax=1, resample_method='linear',
+                              source_package=source_package,
+                              datatype='array3d', write_fmt='%.6e')
+            self._setup_array(package, 'ss', vmin=0, vmax=1, resample_method='linear',
+                              source_package=source_package,
+                              datatype='array3d', write_fmt='%.6e')
+            sy = self.cfg['intermediate_data']['sy']
+            ss = self.cfg['intermediate_data']['ss']
+        else:
+            sy = default_sy
+            ss = default_ss
+
+        upw = fm.ModflowUpw(self, hk=self.cfg['intermediate_data']['hk'],
+                            vka=self.cfg['intermediate_data']['vka'],
+                            sy=sy,
+                            ss=ss,
+                            layvka=self.cfg['upw']['layvka'],
+                            laytyp=self.cfg['upw']['laytyp'],
+                            hdry=self.cfg['upw']['hdry'],
+                            ipakcb=self.cfg['upw']['ipakcb'])
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return upw
+ + + def setup_mnw2(self, **kwargs): + + print('setting up MNW2 package...') + t0 = time.time() + + # added wells + # todo: generalize MNW2 source data input; add auto-reprojection + added_wells = self.cfg['mnw'].get('added_wells') + if added_wells is not None: + if isinstance(added_wells, str): + aw = pd.read_csv(added_wells) + aw.rename(columns={'name': 'comments'}, inplace=True) + elif isinstance(added_wells, dict): + added_wells = {k: v for k, v in added_wells.items() if v is not None} + if len(added_wells) > 0: + aw = pd.DataFrame(added_wells).T + aw['comments'] = aw.index + else: + aw = None + elif isinstance(added_wells, pd.DataFrame): + aw = added_wells + aw['comments'] = aw.index + else: + raise IOError('unrecognized added_wells input') + + k, ztop, zbotm = 0, 0, 0 + zpump = None + + wells = aw.groupby('comments').first() + periods = aw + if 'x' in wells.columns and 'y' in wells.columns: + wells['i'], wells['j'] = self.modelgrid.intersect(wells['x'].values, + wells['y'].values) + if 'depth' in wells.columns: + wellhead_elevations = self.dis.top.array[wells.i, wells.j] + ztop = wellhead_elevations - (5*.3048) # 5 ft casing + zbotm = wellhead_elevations - wells.depth + zpump = zbotm + 1 # 1 meter off bottom + elif 'ztop' in wells.columns and 'zbotm' in wells.columns: + ztop = wells.ztop + zbotm = wells.zbotm + zpump = zbotm + 1 + if 'k' in wells.columns: + k = wells.k + + for var in ['losstype', 'pumploc', 'rw', 'rskin', 'kskin']: + if var not in wells.columns: + wells[var] = self.cfg['mnw']['defaults'][var] + + nd = fm.ModflowMnw2.get_empty_node_data(len(wells)) + nd['k'] = k + nd['i'] = wells.i + nd['j'] = wells.j + nd['ztop'] = ztop + nd['zbotm'] = zbotm + nd['wellid'] = wells.index + nd['losstype'] = wells.losstype + nd['pumploc'] = wells.pumploc + nd['rw'] = wells.rw + nd['rskin'] = wells.rskin + nd['kskin'] = wells.kskin + if zpump is not None: + nd['zpump'] = zpump + + spd = {} + for per, group in periods.groupby('per'): + spd_per = fm.ModflowMnw2.get_empty_stress_period_data(len(group)) + spd_per['wellid'] = group.comments + spd_per['qdes'] = group.flux + spd[per] = spd_per + itmp = [] + for per in range(self.nper): + if per in spd.keys(): + itmp.append(len(spd[per])) + else: + itmp.append(0) + + mnw = fm.ModflowMnw2(self, mnwmax=len(wells), ipakcb=self.ipakcb, + mnwprnt=1, + node_data=nd, stress_period_data=spd, + itmp=itmp + ) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return mnw + else: + print('No wells specified in configuration file!\n') + return None + + def setup_lak(self, **kwargs): + + print('setting up LAKE package...') + t0 = time.time() + # if shapefile of lakes was included, + # lakarr should be automatically built by property method + if self.lakarr.sum() == 0: + print("lakes_shapefile not specified, or no lakes in model area") + return + + # source data + source_data = self.cfg['lak']['source_data'] + self.lake_info = setup_lake_info(self) + nlakes = len(self.lake_info) + + # set up the tab files, if any + tab_files_argument = None + tab_units = None + start_tab_units_at = 150 # default starting number for iunittab + if 'stage_area_volume_file' in source_data: + + tab_files = setup_lake_tablefiles(self, source_data['stage_area_volume_file']) + tab_units = list(range(start_tab_units_at, start_tab_units_at + len(tab_files))) + + # tabfiles aren't rewritten by flopy on package write + self.cfg['lak']['tab_files'] = tab_files + # kludge to deal with ugliness of lake package external file handling + # (need to give path relative to model_ws, not 
folder that flopy is working in)
+            tab_files_argument = [os.path.relpath(f) for f in tab_files]
+
+        self.setup_external_filepaths('lak', 'lakzones',
+                                      self.cfg['lak']['{}_filename_fmt'.format('lakzones')])
+        self.setup_external_filepaths('lak', 'bdlknc',
+                                      self.cfg['lak']['{}_filename_fmt'.format('bdlknc')],
+                                      file_numbers=list(range(self.nlay)))
+
+        # make the arrays or load them
+        lakzones = make_bdlknc_zones(self.modelgrid, self.lake_info,
+                                     include_ids=self.lake_info['feat_id'],
+                                     littoral_zone_buffer_width=source_data['littoral_zone_buffer_width'])
+        save_array(self.cfg['intermediate_data']['lakzones'][0], lakzones, fmt='%d')
+
+        bdlknc = np.zeros((self.nlay, self.nrow, self.ncol))
+        # make the areal footprint of lakebed leakance from the zones (layer 1)
+        bdlknc[0] = make_bdlknc2d(lakzones,
+                                  self.cfg['lak']['source_data']['littoral_leakance'],
+                                  self.cfg['lak']['source_data']['profundal_leakance'])
+        for k in range(self.nlay):
+            if k > 0:
+                # for each underlying layer, assign profundal leakance to cells where isbc == 1
+                bdlknc[k][self.isbc[k] == 1] = self.cfg['lak']['source_data']['profundal_leakance']
+            save_array(self.cfg['intermediate_data']['bdlknc'][0][k], bdlknc[k], fmt='%.6e')
+
+        # get estimates of stage from the model top, for specifying ranges
+        stages = []
+        for lakid in self.lake_info['lak_id']:
+            loc = self.lakarr[0] == lakid
+            est_stage = self.dis.top.array[loc].min()
+            stages.append(est_stage)
+        stages = np.array(stages)
+
+        # set up stress period data
+        tol = 5  # specify the lake stage range as +/- this value
+        ssmn, ssmx = stages - tol, stages + tol
+        stage_range = list(zip(ssmn, ssmx))
+
+        # set up dataset 9
+        # ssmn and ssmx values are only required for steady-state periods > 0
+        self.lake_fluxes = setup_lake_fluxes(self)
+        precip = self.lake_fluxes['precipitation'].tolist()
+        evap = self.lake_fluxes['evaporation'].tolist()
+        flux_data = {}
+        for i, steady in enumerate(self.dis.steady.array):
+            if i > 0 and steady:
+                flux_data_i = []
+                for lake_ssmn, lake_ssmx in zip(ssmn, ssmx):
+                    flux_data_i.append([precip[i], evap[i], 0, 0, lake_ssmn, lake_ssmx])
+            else:
+                flux_data_i = [[precip[i], evap[i], 0, 0]] * nlakes
+            flux_data[i] = flux_data_i
+        options = ['tableinput'] if tab_files_argument is not None else None
+
+        kwargs = self.cfg['lak']
+        kwargs['nlakes'] = len(self.lake_info)
+        kwargs['stages'] = stages
+        kwargs['stage_range'] = stage_range
+        kwargs['flux_data'] = flux_data
+        kwargs['tab_files'] = tab_files_argument  # these need to be in the order of the lake IDs!
+        kwargs['tab_units'] = tab_units
+        kwargs['options'] = options
+        kwargs['ipakcb'] = self.ipakcb
+        kwargs['lwrt'] = 0
+        kwargs = get_input_arguments(kwargs, fm.mflak.ModflowLak)
+        lak = fm.ModflowLak(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return lak
+
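+    # A sketch (hypothetical values) of the dataset 9 records built above:
+    # one record per lake per stress period, with ssmn/ssmx appended only for
+    # steady-state periods after the first, e.g. for two lakes:
+    #
+    #     flux_data = {
+    #         0: [[precip, evap, 0, 0], [precip, evap, 0, 0]],
+    #         1: [[precip, evap, 0, 0, ssmn, ssmx], [precip, evap, 0, 0, ssmn, ssmx]],
+    #     }
+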
+[docs] + def setup_chd(self, **kwargs): + """Set up the CHD Package. + """ + return self._setup_basic_stress_package( + 'chd', fm.ModflowChd, ['head'], **kwargs)
+ + +
+[docs] + def setup_drn(self, **kwargs): + """Set up the Drain Package. + """ + return self._setup_basic_stress_package( + 'drn', fm.ModflowDrn, ['elev', 'cond'], **kwargs)
+ + +
+[docs] + def setup_ghb(self, **kwargs): + """Set up the General Head Boundary Package. + """ + return self._setup_basic_stress_package( + 'ghb', fm.ModflowGhb, ['bhead', 'cond'], **kwargs)
+ + + +
+[docs] + def setup_riv(self, rivdata=None, **kwargs): + """Set up the River Package. + """ + return self._setup_basic_stress_package( + 'riv', fm.ModflowRiv, ['stage', 'cond', 'rbot'], + rivdata=rivdata, **kwargs)
+ + +
+[docs] + def setup_wel(self, **kwargs): + """Set up the Well Package. + """ + return self._setup_basic_stress_package( + 'wel', fm.ModflowWel, ['flux'], **kwargs)
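+    # The five basic stress packages above all funnel through
+    # _setup_basic_stress_package with their flopy class and variable columns;
+    # a sketch of a configuration-driven call (the same pattern used by
+    # setup_packages):
+    # >>> m.setup_ghb(**m.cfg['ghb'], **m.cfg['ghb']['mfsetup_options'])
+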
+ + + def setup_nwt(self, **kwargs): + + print('setting up NWT package...') + t0 = time.time() + use_existing_file = self.cfg['nwt'].get('use_existing_file') + kwargs = self.cfg['nwt'] + if use_existing_file is not None: + #set use_existing_file relative to source path + filepath = os.path.join(self._config_path, + use_existing_file) + + assert os.path.exists(filepath), "Couldn't find {}, need a path to a NWT file".format(filepath) + nwt = fm.ModflowNwt.load(filepath, model=self) + else: + kwargs = get_input_arguments(kwargs, fm.ModflowNwt) + nwt = fm.ModflowNwt(self, **kwargs) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return nwt + +
+[docs] + def setup_hyd(self, **kwargs): + """TODO: generalize hydmod setup with specific input requirements""" + package = 'hyd' + print('setting up HYDMOD package...') + t0 = time.time() + + iobs_domain = None + if not kwargs['mfsetup_options']['allow_obs_in_bc_cells']: + # for now, discard any head observations in same (i, j) column of cells + # as a non-well boundary condition + # including lake package lakes and non lake, non well BCs + # (high-K lakes are excluded, since we may want head obs at those locations, + # to serve as pseudo lake stage observations) + iobs_domain = (self.isbc == 1) | np.any(self.isbc > 2) + + # munge the observation data + df = setup_head_observations(self, + obs_package=package, + obsname_column='hydlbl', + iobs_domain=iobs_domain, + **kwargs['source_data'], + **kwargs['mfsetup_options']) + + # create observation data recarray + obsdata = fm.ModflowHyd.get_empty(len(df)) + for c in obsdata.dtype.names: + assert c in df.columns, "Missing observation data field: {}".format(c) + obsdata[c] = df[c] + nhyd = len(df) + hyd = flopy.modflow.ModflowHyd(self, nhyd=nhyd, hydnoh=-999, obsdata=obsdata) + print("finished in {:.2f}s\n".format(time.time() - t0)) + return hyd
+
+
+    def setup_gag(self, **kwargs):
+
+        print('setting up GAGE package...')
+        t0 = time.time()
+        # set up gage package output for all included lakes
+        ngages = 0
+        nlak_gages = 0
+        starting_unit_number = self.cfg['gag']['starting_unit_number']
+        # initialize the lake gage lists, so that stream-only configurations
+        # don't reference undefined names below
+        lak_gagelocs, lak_gagerch, lak_outtype, lake_unit, lak_files = [], [], [], [], []
+        if self.get_package('lak') is not None:
+            nlak_gages = self.lak.nlakes
+        if nlak_gages > 0:
+            ngages += nlak_gages
+            lak_gagelocs = list(np.arange(1, nlak_gages + 1) * -1)
+            lak_gagerch = [0] * nlak_gages  # dummy list to maintain index position
+            lak_outtype = [self.cfg['gag']['lak_outtype']] * nlak_gages
+            # need a minus sign to tell MF to read outtype
+            lake_unit = list(-np.arange(starting_unit_number,
+                                        starting_unit_number + nlak_gages, dtype=int))
+            # TODO: make a private attribute to facilitate keeping track of lake IDs
+            lak_files = ['lak{}_{}.ggo'.format(i + 1, hydroid)
+                         for i, hydroid in enumerate(self.cfg['lak']['source_data']['lakes_shapefile']['include_ids'])]
+            # update the starting unit number to avoid collisions with other gage packages
+            starting_unit_number = np.max(np.abs(lake_unit)) + 1
+
+        # need to add streams at some point
+        nstream_gages = 0
+        stream_gageseg = []
+        stream_gagerch = []
+        stream_unit = []
+        stream_outtype = []
+        stream_files = []
+        if self.get_package('sfr') is not None:
+            #observations_input = self.cfg['sfr'].get('source_data', {}).get('observations')
+            #obs_info_files = self.cfg['gag'].get('observation_data')
+            #if obs_info_files is not None:
+            #    # get obs_info_files into dictionary format
+            #    # filename: dict of column names mappings
+            #    if isinstance(obs_info_files, str):
+            #        obs_info_files = [obs_info_files]
+            #    if isinstance(obs_info_files, list):
+            #        obs_info_files = {f: self.cfg['gag']['default_columns']
+            #                          for f in obs_info_files}
+            #    elif isinstance(obs_info_files, dict):
+            #        for k, v in obs_info_files.items():
+            #            if v is None:
+            #                obs_info_files[k] = self.cfg['gag']['default_columns']
+#
+            #    print('Reading observation files...')
+            #    check_source_files(obs_info_files.keys())
+            #    dfs = []
+            #    for f, column_info in obs_info_files.items():
+            #        print(f)
+            #        df = read_observation_data(f,
+            #                                   column_info,
+            #                                   column_mappings=self.cfg['hyd'].get('column_mappings'))
+            #        dfs.append(df)  # cull to cols that are needed
+            #    df = pd.concat(dfs, axis=0)
+            df = self.sfrdata.observations
+            nstream_gages = len(df)
+            stream_files = ['{}.ggo'.format(site_no) for site_no in df.obsname]
+            stream_gageseg = df.iseg.tolist()
+            stream_gagerch = df.ireach.tolist()
+            stream_unit = list(np.arange(starting_unit_number,
+                                         starting_unit_number + nstream_gages, dtype=int))
+            stream_outtype = [self.cfg['gag']['sfr_outtype']] * nstream_gages
+            ngages += nstream_gages
+
+        if ngages == 0:
+            print('No gage package input.')
+            return
+
+        # create the flopy gage package object
+        gage_data = fm.ModflowGage.get_empty(ncells=ngages)
+        gage_data['gageloc'] = lak_gagelocs + stream_gageseg
+        gage_data['gagerch'] = lak_gagerch + stream_gagerch
+        gage_data['unit'] = lake_unit + stream_unit
+        gage_data['outtype'] = lak_outtype + stream_outtype
+        if len(self.cfg['gag'].get('ggo_files', {})) == 0:
+            self.cfg['gag']['ggo_files'] = lak_files + stream_files
+        gag = fm.ModflowGage(self, numgage=len(gage_data),
+                             gage_data=gage_data,
+                             files=self.cfg['gag']['ggo_files'],
+                             )
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return gag
+
+[docs]
+    def write_input(self):
+        """Write the model input.
+        """
+        # prior to writing output,
+        # remove any BCs in inactive cells
+        pckgs = ['CHD']
+        for pckg in pckgs:
+            package_instance = getattr(self, pckg.lower(), None)
+            if package_instance is not None:
+                remove_inactive_bcs(package_instance)
+
+        # write the model with flopy,
+        # passing a package list that excludes the SFR package
+        # (which is written separately with SFRmaker below)
+        SelPackList = [p for p in self.get_package_list() if p != 'SFR']
+        super().write_input(SelPackList=SelPackList)
+
+        # write the sfr package with SFRmaker
+        # (the gage package was already set up and written by Flopy)
+        if 'SFR' in self.get_package_list():
+            self.sfrdata.write_package(write_observations_input=False)
+
+        # add version info to file headers
+        files = [self.namefile]
+        files += [p.file_name[0] for p in self.packagelist]
+        for f in files:
+            # either flopy or modflow
+            # doesn't allow headers for some packages
+            ext = Path(f).suffix
+            if ext in {'.hyd', '.gag', '.gage'}:
+                continue
+            add_version_to_fileheader(f, model_info=self.header)
+
+        if not self.cfg['mfsetup_options']['keep_original_arrays']:
+            tmpdir_path = self.tmpdir
+            shutil.rmtree(tmpdir_path)
+ + + @staticmethod + def _parse_model_kwargs(cfg): + return cfg + +
+[docs]
+    @classmethod
+    def load(cls, yamlfile, load_only=None, verbose=False, forgive=False, check=False):
+        """Load a model from a config file and set of MODFLOW files.
+        """
+        cfg = load_cfg(yamlfile, verbose=verbose, default_file=cls.default_file)
+        print('\nLoading {} model from data in {}\n'.format(cfg['model']['modelname'], yamlfile))
+        t0 = time.time()
+
+        m = cls(cfg=cfg, **cfg['model'])
+        if 'grid' not in m.cfg.keys():
+            m.setup_grid()
+
+        m = flopy_mf2005_load(m, load_only=load_only, forgive=forgive, check=check)
+        print('finished loading model in {:.2f}s'.format(time.time() - t0))
+        return m
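+
+A minimal usage sketch (assuming these methods belong to Modflow-setup's
+MODFLOW-NWT model class, ``MFnwtModel``; the configuration file name is an
+assumption for illustration)::
+
+    from mfsetup import MFnwtModel
+
+    m = MFnwtModel.load('pfl_nwt_inset.yml')  # load from a configuration file
+    m.write_input()  # write the input files; SFR is written via SFRmaker
+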
+ + + + \ No newline at end of file diff --git a/_modules/mfsetup/tdis.html b/_modules/mfsetup/tdis.html new file mode 100644 index 00000000..afd37c1d --- /dev/null +++ b/_modules/mfsetup/tdis.html @@ -0,0 +1,1035 @@ + + + + + + + + mfsetup.tdis — modflow-setup 0.5.0.post59+g65803fd documentation + + + + + + + + + + + + + + + + + + + + +

Source code for mfsetup.tdis

+"""
+Functions related to temporal discretization
+"""
+import calendar
+import copy
+import datetime as dt
+import os
+import shutil
+from pathlib import Path
+
+import numpy as np
+import pandas as pd
+
+import mfsetup
+from mfsetup.checks import is_valid_perioddata
+from mfsetup.utils import get_input_arguments, print_item
+
+months = {v.lower(): k for k, v in enumerate(calendar.month_name) if k > 0}
+
+
+
+[docs] +def convert_freq_to_period_start(freq): + """convert pandas frequency to period start""" + if isinstance(freq, str): + for prefix in ['M', 'Q', 'A', 'Y']: + if prefix in freq.upper() and "S" not in freq.upper(): + freq = freq.replace(prefix, "{}S".format(prefix)).upper() + return freq
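+
+Example (a sketch of the intended conversion; "period end" aliases gain an "S")::
+
+    convert_freq_to_period_start('6M')   # -> '6MS' (start of each month)
+    convert_freq_to_period_start('MS')   # -> 'MS' (already a "start" frequency)
+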
+ + + +
+[docs] +def get_parent_stress_periods(parent_model, nper=None, + parent_stress_periods='all'): + + parent_sp = copy.copy(parent_stress_periods) + parent_model_nper = parent_model.modeltime.nper + + # use all stress periods from parent model + if isinstance(parent_sp, str) and parent_sp.lower() == 'all': + if nper is None: # or nper < parent_model.nper: + nper = parent_model_nper + parent_sp = list(range(nper)) + elif nper > parent_model_nper: + parent_sp = list(range(parent_model_nper)) + for i in range(nper - parent_model_nper): + parent_sp.append(parent_sp[-1]) + else: + parent_sp = list(range(nper)) + + # use only specified stress periods from parent model + elif isinstance(parent_sp, list): + # limit parent stress periods to include + # to those in parent model and nper specified for pfl_nwt + if nper is None: + nper = len(parent_sp) + + perlen = [parent_model.modeltime.perlen[0]] + for i, p in enumerate(parent_sp): + if i == nper: + break + if p == parent_model_nper: + break + if p > 0 and p >= parent_sp[-1] and len(parent_sp) < nper: + parent_sp.append(p) + perlen.append(parent_model.modeltime.perlen[p]) + if nper < len(parent_sp): + nper = len(parent_sp) + else: + n_parent_per = len(parent_sp) + for i in range(nper - n_parent_per): + parent_sp.append(parent_sp[-1]) + + # no parent stress periods specified, + # default to just using first stress period + # (repeating if necessary; + # for example if creating transient inset model with steady bc from parent) + else: + if nper is None: + nper = 1 + parent_sp = [0] + for i in range(nper - 1): + parent_sp.append(parent_sp[-1]) + assert len(parent_sp) == nper + return parent_sp
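+
+Example (a sketch; ``parent_model`` is assumed to be a loaded flopy model
+instance with 3 stress periods)::
+
+    # map 5 inset model periods onto a 3-period parent model;
+    # the last parent period is repeated to fill out the inset periods
+    parent_sp = get_parent_stress_periods(parent_model, nper=5,
+                                          parent_stress_periods='all')
+    # parent_sp -> [0, 1, 2, 2, 2]
+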
+ + + +
+[docs] +def parse_perioddata_groups(perioddata_dict, + **kwargs): + """Reorganize input in perioddata dict into + a list of groups (dicts). + """ + perioddata_groups = [] + defaults = { + 'start_date_time': '1970-01-01' + } + defaults.update(kwargs) + group0 = defaults.copy() + + valid_txt = "if transient: perlen specified or 3 of start_date_time, " \ + "end_date_time, nper or freq;\n" \ + "if steady: nper or perlen specified. Default perlen " \ + "for steady-state periods is 1." + for k, v in perioddata_dict.items(): + if 'group' in k.lower(): + data = defaults.copy() + data.update(v) + if is_valid_perioddata(data): + data = get_input_arguments(data, setup_perioddata_group, + errors='raise') + perioddata_groups.append(data) + else: + print_item(k, data) + prefix = "perioddata input for {} must have".format(k) + raise Exception(prefix + valid_txt) + elif 'perioddata' in k.lower(): + perioddata_groups += parse_perioddata_groups(perioddata_dict[k], **defaults) + else: + group0[k] = v + if len(perioddata_groups) == 0: + if not is_valid_perioddata(group0): + print_item('perioddata:', group0) + prefix = "perioddata input must have" + raise Exception(prefix + valid_txt) + data = get_input_arguments(group0, setup_perioddata_group) + perioddata_groups = [data] + for group in perioddata_groups: + if 'steady' in group: + if np.isscalar(group['steady']) or group['steady'] is None: + group['steady'] = {0: group['steady']} + elif not isinstance(group['steady'], dict): + group['steady'] = {i: s for i, s in enumerate(group['steady'])} + return perioddata_groups
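+
+Example (a sketch of input as it might appear in a ``perioddata:``
+configuration block; assumes these minimal groups pass ``is_valid_perioddata``)::
+
+    perioddata_config = {
+        'group 1': {'nper': 1, 'steady': True},
+        'group 2': {'start_date_time': '2012-01-01',
+                    'end_date_time': '2013-01-01',
+                    'freq': 'MS', 'steady': False},
+    }
+    groups = parse_perioddata_groups(perioddata_config)
+    # -> a list of two dicts of validated setup_perioddata_group() arguments
+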
+ + + +
+[docs]
+def setup_perioddata_group(start_date_time, end_date_time=None,
+                           nper=None, perlen=None, model_time_units='days', freq=None,
+                           steady={0: True, 1: False},
+                           nstp=10, tsmult=1.5,
+                           oc_saverecord={0: ['save head last',
+                                              'save budget last']},
+                           ):
+    """Sets up time discretization for a model; outputs a DataFrame with
+    stress period dates/times and properties. Stress periods can be established
+    by explicitly specifying perlen as a list of period lengths in
+    model units. Or, stress periods can be generated via :func:`pandas.date_range`,
+    using three of the start_date_time, end_date_time, nper, and freq arguments.
+
+    Parameters
+    ----------
+    start_date_time : str or datetime-like
+        Left bound for generating stress period dates. See :func:`pandas.date_range`.
+    end_date_time : str or datetime-like, optional
+        Right bound for generating stress period dates. See :func:`pandas.date_range`.
+    nper : int, optional
+        Number of stress periods. Only used if perlen is None, or in combination with freq
+        if an end_date_time isn't specified.
+    perlen : sequence or None, optional
+        A list of stress period lengths in model time units. Or specify as None and
+        specify 3 of start_date_time, end_date_time, nper and/or freq.
+    model_time_units : str, optional
+        'days' or 'seconds'.
+        By default, 'days'.
+    freq : str or DateOffset, default None
+        For setting up uniform stress periods between a start and end date, or of length nper.
+        Same as argument to pandas.date_range. Frequency strings can have multiples,
+        e.g. '6MS' for a 6 month interval on the start of each month.
+        See the pandas documentation for a list of frequency aliases. Note: Only "start"
+        frequencies (e.g. "MS" vs "M" for "month end") are supported.
+    steady : dict
+        Dictionary with zero-based stress periods as keys and boolean values. Similar to MODFLOW-6
+        input, the information specified for a period will continue to apply until
+        information for another period is specified.
+    nstp : int or sequence
+        Number of timesteps in a stress period. Must be an integer if perlen=None.
+    tsmult : float or sequence
+        Timestep multiplier for a stress period. Must be a scalar if perlen=None.
+    oc_saverecord : dict
+        Dictionary with zero-based stress periods as keys and output control options as values.
+        Similar to MODFLOW-6 input, the information specified for a period will
+        continue to apply until information for another period is specified.
+
+    Returns
+    -------
+    perioddata : pandas.DataFrame
+        DataFrame summarizing stress period information. Data columns:
+
+        ================== ================ ==============================================
+        **start_datetime** pandas datetimes start date/time of each stress period
+        **end_datetime**   pandas datetimes end date/time of each stress period
+        **time**           float            cumulative MODFLOW time at end of period
+        **per**            int              zero-based stress period
+        **perlen**         float            stress period length in model time units
+        **nstp**           int              number of timesteps in the stress period
+        **tsmult**         float            timestep multiplier for stress period
+        **steady**         bool             True=steady-state, False=Transient
+        **oc**             dict             MODFLOW-6 output control options
+        ================== ================ ==============================================
+
+    Notes
+    -----
+    *Initial steady-state period*
+
+    If the first stress period is specified as steady-state (``steady[0] == True``),
+    the period length (perlen) in MODFLOW time is automatically set to 1. If subsequent
+    stress periods are specified, or if no end-date is specified, the end date for
+    the initial steady-state stress period is set equal to the start date. In the latter case,
+    the assumption is that the specified start date represents the start of the transient simulation,
+    and the initial steady-state (which is time-invariant anyway) is intended to produce a valid
+    starting condition. If only a single steady-state stress period is specified with an end date,
+    then that end date is retained.
+
+    *MODFLOW time vs real time*
+
+    The ``time`` column of the output DataFrame represents time in the MODFLOW simulation,
+    which cannot have a zero length for any period. Therefore, initial steady-state periods
+    are automatically assigned lengths of one (as described above), and MODFLOW time is incremented
+    accordingly. If the model has an initial steady-state period, this means that subsequent MODFLOW
+    times will be 1 time unit greater than the actual date-times.
+
+    *End-dates*
+
+    Specified ``end_date_time`` represents the right bound of the time discretization,
+    or in other words, the time increment *after* the last time increment to be
+    simulated. For example, ``end_date_time='2019-01-01'`` would mean that
+    ``'2018-12-31'`` is the last date simulated by the model
+    (which ends at ``2019-01-01 00:00:00``).
+    """
+    specified_start_datetime = None
+    if start_date_time is not None:
+        specified_start_datetime = pd.Timestamp(start_date_time)
+    elif end_date_time is None:
+        raise ValueError('If no start_datetime, must specify end_datetime')
+    specified_end_datetime = None
+    if end_date_time is not None:
+        specified_end_datetime = pd.Timestamp(end_date_time)
+
+    # if times are specified by start & end dates and freq,
+    # the number of periods is determined by pd.date_range
+    if all({specified_start_datetime, specified_end_datetime, freq}):
+        nper = None
+    freq = convert_freq_to_period_start(freq)
+    oc = oc_saverecord
+    if not isinstance(steady, dict):
+        steady = {i: v for i, v in enumerate(steady)}
+
+    # nstp and tsmult need to be lists
+    if not np.isscalar(nstp):
+        nstp = list(nstp)
+    if not np.isscalar(tsmult):
+        tsmult = list(tsmult)
+
+    txt = "Specify perlen as a list of lengths in model units, or\nspecify 3 " \
+          "of start_date_time, end_date_time, nper and/or freq."
+
+    # Explicitly specified stress period lengths
+    start_datetime = []  # datetimes at period starts
+    end_datetime = []  # datetimes at period ends
+    if perlen is not None:
+        if np.isscalar(perlen):
+            perlen = [perlen]
+        start_datetime = [specified_start_datetime]
+        if len(perlen) > 1:
+            for i, length in enumerate(perlen):
+                # initial steady-state period:
+                # set perlen to 1
+                # and the start/end dates to be equal
+                if i == 0 and steady[0]:
+                    perlen[0] = 1
+                    next_start = start_datetime[i]
+                else:
+                    next_start = start_datetime[i] + \
+                        pd.Timedelta(length, unit=model_time_units)
+                start_datetime.append(next_start)
+            end_datetime = pd.to_datetime(start_datetime[1:])
+            start_datetime = pd.to_datetime(start_datetime[:-1])
+        # single specified stress period length
+        else:
+            end_datetime = [specified_start_datetime + pd.Timedelta(perlen[0],
+                                                                    unit=model_time_units)]
+        time = np.cumsum(perlen)  # time at end of period, in MODFLOW units
+
+    # single steady-state period
+    elif nper == 1 and steady[0]:
+        perlen = [1]
+        time = [1]
+        start_datetime = pd.to_datetime([specified_start_datetime])
+        if specified_end_datetime is not None:
+            end_datetime = pd.to_datetime([specified_end_datetime])
+        else:
+            end_datetime = pd.to_datetime([specified_start_datetime])
+
+    # Set up datetimes based on 3 of start_date_time, specified_end_datetime, nper and/or freq (scalar perlen)
+    else:
+        assert np.isscalar(nstp), "nstp: {}; nstp must be a scalar if perlen " \
+                                  "is not specified explicitly as a list.\n{}".format(nstp, txt)
+        assert np.isscalar(tsmult), "tsmult: {}; tsmult must be a scalar if perlen " \
+                                    "is not specified explicitly as a list.\n{}".format(tsmult, txt)
+        periods = None
+        if specified_end_datetime is None:
+            # start_date_time, nper and freq
+            # (i.e. nper periods of length freq, starting on start_date_time)
+            if freq is not None:
+                periods = nper
+            else:
+                raise ValueError("Unrecognized input for perlen: {}.\n{}".format(perlen, txt))
+        else:
+            # specified_end_datetime, freq and nper
+            if specified_start_datetime is None:
+                periods = nper + 1
+            # start_date_time, specified_end_datetime and nper
+            # (i.e. nper periods of uniform length between start_date_time and specified_end_datetime)
+            elif freq is None:
+                periods = nper
+            # start_date_time, specified_end_datetime and frequency
+            elif freq is not None:
+                pass
+        datetimes = pd.date_range(specified_start_datetime, specified_end_datetime,
+                                  periods=periods, freq=freq)
+        # if end_datetime, periods and freq were specified
+        if specified_start_datetime is None:
+            specified_start_datetime = datetimes[0]
+            start_datetime = datetimes[:-1]
+            end_datetime = datetimes[1:]
+            time_edges = getattr((datetimes - start_datetime[0]),
+                                 model_time_units).tolist()
+            perlen = np.diff(time_edges)
+            # time is elapsed time at the end of each period
+            time = time_edges[1:]
+        else:
+            start_datetime = datetimes
+            end_datetime = pd.to_datetime(datetimes[1:].tolist() +
+                                          [specified_end_datetime])
+            # Edge case of the end date falling on the start date freq
+            # (zero-length stress period at the end)
+            if end_datetime[-1] == start_datetime[-1]:
+                start_datetime = start_datetime[:-1]
+                end_datetime = end_datetime[:-1]
+            time_edges = getattr((end_datetime - start_datetime[0]),
+                                 model_time_units).tolist()
+            time_edges = [0] + time_edges
+            perlen = np.diff(time_edges)
+            # time is elapsed time at the end of each period
+            time = time_edges[1:]
+
+        # if the first period is steady-state,
+        # insert it at the beginning of the generated range
+        # (only done for pd.date_range-based discretization)
+        if steady[0]:
+            start_datetime = [start_datetime[0]] + start_datetime.tolist()
+            end_datetime = [start_datetime[0]] + end_datetime.tolist()
+            perlen = [1] + list(perlen)
+            time = [1] + (np.array(time) + 1).tolist()
+            if isinstance(nstp, list):
+                nstp = [1] + nstp
+            if isinstance(tsmult, list):
+                tsmult = [1] + tsmult
+
+    perioddata = pd.DataFrame({
+        'start_datetime': start_datetime,
+        'end_datetime': end_datetime,
+        'time': time,
+        'per': range(len(time)),
+        'perlen': np.array(perlen).astype(float),
+        'nstp': nstp,
+        'tsmult': tsmult,
+    })
+
+    # specify steady-state or transient for each period, filling empty
+    # periods with the previous state (same logic as MF6 input)
+    issteady = [steady[0]]
+    for i in range(len(perioddata)):
+        issteady.append(steady.get(i, issteady[i]))
+    perioddata['steady'] = issteady[1:]
+    perioddata['steady'] = perioddata['steady'].astype(bool)
+
+    # set up output control, using the previous value to fill empty periods
+    # (same as MF6)
+    oclist = [None]
+    for i in range(len(perioddata)):
+        oclist.append(oc.get(i, oclist[i]))
+    perioddata['oc'] = oclist[1:]
+
+    # correct nstp and tsmult to be 1 for steady-state periods
+    perioddata.loc[perioddata.steady.values, 'nstp'] = 1
+    perioddata.loc[perioddata.steady.values, 'tsmult'] = 1
+    return perioddata
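+
+Example (a sketch): monthly transient stress periods for 2012, preceded by an
+initial steady-state period::
+
+    perioddata = setup_perioddata_group(
+        start_date_time='2012-01-01',
+        end_date_time='2013-01-01',
+        freq='MS',
+        steady={0: True, 1: False},
+    )
+    # 13 rows: an initial steady-state period of length 1,
+    # followed by 12 monthly transient periods
+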
+ + + +
+[docs] +def concat_periodata_groups(perioddata_groups, time_units='days'): + """Concatenate multiple perioddata DataFrames, but sort + result on (absolute) datetimes and increment model time and stress period + numbers accordingly.""" + + # update any missing variables in the groups with global variables + group_dfs = [] + for i, group in enumerate(perioddata_groups): + group.update({'model_time_units': time_units, + }) + df = setup_perioddata_group(**group) + group_dfs.append(df) + + df = pd.concat(group_dfs).sort_values(by=['end_datetime']) + perlen = np.ones(len(df)) + perlen[~df.steady.values] = df.loc[~df.steady.values, 'perlen'] + df['time'] = np.cumsum(perlen) + df['per'] = range(len(df)) + df.index = range(len(df)) + return df
+ + + +
+[docs]
+def setup_perioddata(model,
+                     tdis_perioddata_config,
+                     default_start_datetime=None,
+                     nper=None,
+                     steady=None, time_units='days',
+                     oc_saverecord=None, parent_model=None,
+                     parent_stress_periods=None,
+                     ):
+    """Sets up the perioddata DataFrame that is used to reference model
+    stress period start and end times to real date time.
+
+    Parameters
+    ----------
+    model : mfsetup.MF6model or mfsetup.MFnwtModel instance
+        Model for which the stress period data are being set up.
+    tdis_perioddata_config : dict
+        ``perioddata:``, ``tdis:`` (MODFLOW 6 models) or ``dis:`` (MODFLOW-2005 models)
+        block from the Modflow-setup configuration file.
+    default_start_datetime : str, optional
+        Start date for model from the tdis: options: block in the configuration file,
+        or ``model.modeltime.start_datetime`` Flopy attribute. Only used
+        where start_datetime information is missing, for example if a group
+        for an initial steady-state period in ``tdis_perioddata_config``
+        doesn't have a start_datetime: entry. By default, None, in which case
+        the default start_datetime of 1970-01-01 may be applied by
+        :py:func:`setup_perioddata_group`.
+    nper : int, optional
+        Number of stress periods. Only used if nper is specified in the
+        tdis: dimensions: block of the configuration file and
+        not in a perioddata group.
+    steady : bool, sequence or dict
+        Whether each period is steady-state or transient. Only used
+        if steady is specified in the tdis: or sto: configuration file
+        blocks (MODFLOW 6 models) or the dis: block (MODFLOW-2005 models),
+        and not in perioddata groups.
+    time_units : str, optional
+        Model time units, by default 'days'.
+    oc_saverecord : dict, optional
+        Output control settings, keyed by stress period. Only
+        used to record this information in the stress period data table.
+    parent_model : flopy model instance, optional
+        Parent model, if model is an inset.
+    parent_stress_periods : list of ints, optional
+        Parent model stress periods to apply to the inset model
+        (read from the parent: copy_stress_periods: item in the
+        configuration file).
+
+    Returns
+    -------
+    perioddata : DataFrame
+        Table of stress period information with columns:
+
+        ============== =========================================
+        start_datetime Start date of each model stress period
+        end_datetime   End date of each model stress period
+        time           MODFLOW elapsed time, in days [#f1]_
+        per            Model stress period number
+        perlen         Stress period length (days)
+        nstp           Number of timesteps in stress period
+        tsmult         Timestep multiplier
+        steady         Steady-state or transient
+        oc             Output control setting for MODFLOW
+        parent_sp      Corresponding parent model stress period
+        ============== =========================================
+
+    Notes
+    -----
+    perioddata is also saved to stress_period_data.csv in the tables folder
+    (usually `/tables`).
+
+    .. rubric:: Footnotes
+
+    .. [#f1] Modflow elapsed time includes the time lengths specified for
+       any steady-state periods (at least 1 day). Therefore if the model
+       has an initial steady-state period with a ``perlen`` of one day,
+       the elapsed time at the model start date will already be 1 day.
+    """
+    # get start_date_time from parent if available and start_date_time wasn't specified
+    # only apply to tdis_perioddata_config if it wasn't specified there
+    if tdis_perioddata_config.get('start_datetime', '1970-01-01') == '1970-01-01' and \
+            default_start_datetime != '1970-01-01':
+        tdis_perioddata_config['start_date_time'] = default_start_datetime
+
+    # option to define stress periods in a table prior to the model build
+    if 'csvfile' in tdis_perioddata_config:
+        csvfile = Path(model._config_path) / tdis_perioddata_config['csvfile']['filename']
+        perioddata = pd.read_csv(csvfile)
+        defaults = {
+            'start_datetime_column': 'start_datetime',
+            'end_datetime_column': 'end_datetime',
+            'steady_column': 'steady',
+            'nstp_column': 'nstp',
+            'tsmult_column': 'tsmult'
+        }
+        csv_config = tdis_perioddata_config['csvfile']
+        renames = {csv_config.get(k): v
+                   for k, v in defaults.items() if k in csv_config}
+        perioddata.rename(columns=renames, inplace=True)
+        required_cols = defaults.values()
+        for col in required_cols:
+            if col not in perioddata.columns:
+                raise KeyError(f"{col} column missing in supplied stress "
+                               f"period table {csvfile}.")
+        perioddata['start_datetime'] = pd.to_datetime(perioddata['start_datetime'])
+        perioddata['end_datetime'] = pd.to_datetime(perioddata['end_datetime'])
+        perioddata['per'] = np.arange(len(perioddata))
+        perlen = getattr((perioddata['end_datetime'] -
+                          perioddata['start_datetime']).dt,
+                         model.time_units).tolist()
+        # set initial steady-state stress period to at least length 1
+        if perioddata['steady'][0] and perlen[0] < 1:
+            perlen[0] = 1
+        perioddata['perlen'] = perlen
+        perioddata['time'] = np.cumsum(perlen)
+        cols = ['start_datetime', 'end_datetime', 'time',
+                'per', 'perlen', 'nstp', 'tsmult', 'steady']
+        # option to supply Output Control instructions as well
+        if 'oc' in perioddata.columns:
+            cols.append('oc')
+        perioddata = perioddata[cols]
+        # some validation
+        assert np.all(perioddata['perlen'] > 0)
+        assert np.all(np.diff(perioddata['time']) > 0)
+    # define stress periods from perioddata group blocks in the configuration file
+    else:
+        perioddata_groups = parse_perioddata_groups(tdis_perioddata_config,
+                                                    nper=nper, steady=steady,
+                                                    start_date_time=default_start_datetime)
+        # set up the perioddata table from the groups
+        perioddata = concat_periodata_groups(perioddata_groups, time_units)
+
+    # assign parent model stress periods to each inset model stress period
+    parent_sp = None
+    if parent_model is not None:
+        if parent_stress_periods is not None:
+            # parent_sp has parent model stress period corresponding
+            # to each inset model stress period (len=nper)
+            # the same parent stress period can be specified for multiple inset model periods
+            parent_sp = get_parent_stress_periods(parent_model, nper=len(perioddata),
+                                                  parent_stress_periods=parent_stress_periods)
+        elif model._is_lgr:
+            parent_sp = perioddata['per'].values
+
+    # add corresponding stress periods in parent model if there are any
+    perioddata['parent_sp'] = parent_sp
+    assert np.array_equal(perioddata['per'].values, np.arange(len(perioddata)))
+    return perioddata
+ + + +
+[docs]
+def aggregate_dataframe_to_stress_period(data, id_column, data_column, datetime_column='datetime',
+                                         end_datetime_column=None, category_column=None,
+                                         start_datetime=None, end_datetime=None, period_stat='mean',
+                                         resolve_duplicates_with='raise error'):
+    """Aggregate time-series data in a DataFrame to a single value representing
+    a period defined by a start and end date.
+
+    Parameters
+    ----------
+    data : DataFrame
+        Must have an id_column, data_column, datetime_column, and optionally,
+        an end_datetime_column.
+    id_column : str
+        Column in data with location identifier (e.g. node or well id).
+    data_column : str or list
+        Column(s) in data with values to aggregate.
+    datetime_column : str
+        Column in data with times for each value. For downsampling of multiple values in data
+        to a longer period represented by start_datetime and end_datetime, this is all that is needed.
+        Aggregated values will include values in datetime_column that are >= start_datetime and < end_datetime.
+        In other words, datetime_column represents the start of each time interval in data.
+        Values can be strings (e.g. YYYY-MM-DD) or pandas Timestamps. By default, None.
+    end_datetime_column : str
+        Column in data with end times for the period represented by each value. This is only needed
+        for upsampling, where the interval defined by start_datetime and end_datetime is smaller
+        than the time intervals in data. The row(s) in data that have a datetime_column value < end_datetime,
+        and an end_datetime_column value > start_datetime will be retained in aggregated.
+        Values can be strings (e.g. YYYY-MM-DD) or pandas Timestamps. By default, None.
+    category_column : str, optional
+        Column in data with a category for each value (e.g. 'measured' or 'estimated');
+        if provided, counts of each category are tabulated for each location.
+        By default, None.
+    start_datetime : str or pandas.Timestamp
+        Start time of the aggregation period. Only used if an aggregation start
+        and end time are not given in period_stat. If None, and no start
+        and end time are specified in period_stat, the first time in datetime_column is used.
+        By default, None.
+    end_datetime : str or pandas.Timestamp
+        End time of the aggregation period. Only used if an aggregation start
+        and end time are not given in period_stat. If None, and no start
+        and end time are specified in period_stat, the last time in datetime_column is used.
+        By default, None.
+    period_stat : str, list, or NoneType
+        Method for aggregating data. By default, 'mean'.
+
+        * Strings will be passed to DataFrame.groupby
+          as the aggregation method. For example, ``'mean'`` would result in DataFrame.groupby().mean().
+        * If period_stat is None, ``'mean'`` is used.
+        * Lists of length 2 can be used to specify a statistic for a month (e.g. ``['mean', 'august']``),
+          or for a time period that can be represented as a single string in pandas.
+          For example, ``['mean', '2014']`` would average all values in the year 2014; ``['mean', '2014-01']``
+          would average all values in January of 2014, etc. Basically, if the string
+          can be used to slice a DataFrame or Series, it can be used here.
+        * Lists of length 3 can be used to specify a statistic and a start and end date.
+          For example, ``['mean', '2014-01-01', '2014-03-31']`` would average the values for
+          the first three months of 2014.
+    resolve_duplicates_with : {'sum', 'mean', 'first', 'raise error'}
+        Method for reducing duplicates (of times, sites and measured or estimated category).
+        By default, 'raise error' will result in a ValueError if duplicates are encountered.
+        Otherwise any aggregate method in pandas can be used (e.g. DataFrame.groupby().<method>())
+
+    Returns
+    -------
+    aggregated : DataFrame
+        Aggregated values. Columns are the same as data, except the time column
+        is named 'start_datetime'. In other words, aggregated periods are represented by
+        their start dates (as opposed to midpoint dates or end dates).
+    """
+    data = data.copy()
+
+    if data.index.name == datetime_column:
+        data.sort_index(inplace=True)
+    else:
+        data.sort_values(by=datetime_column, inplace=True)
+
+    if isinstance(period_stat, str):
+        period_stat = [period_stat]
+    elif period_stat is None:
+        period_stat = ['mean']
+    else:
+        period_stat = period_stat.copy()
+    if isinstance(data_column, str):
+        data_columns = [data_column]
+    else:
+        data_columns = data_column
+
+    start, end = None, None
+    if isinstance(period_stat, list):
+        stat = period_stat.pop(0)
+
+        # stat for specified period
+        if len(period_stat) == 2:
+            start, end = period_stat
+            period_data = data.loc[start:end]
+
+        # stat specified by single item
+        elif len(period_stat) == 1:
+            period = period_stat.pop()
+            # stat for a specified month
+            if period in months.keys() or period in months.values():
+                period_data = data.loc[data.index.dt.month == months.get(period, period)]
+
+            # stat for a period specified by single string (e.g. '2014', '2014-01', etc.)
+            else:
+                period_data = data.loc[period]
+
+        # no time period in source data specified for statistic; use start/end of current model period
+        elif len(period_stat) == 0:
+            assert datetime_column in data.columns, \
+                "datetime_column needed for " \
+                "resampling irregular data to model stress periods"
+            if data[datetime_column].dtype == object:
+                data[datetime_column] = pd.to_datetime(data[datetime_column])
+            if end_datetime_column in data.columns and \
+                    data[end_datetime_column].dtype == object:
+                data[end_datetime_column] = pd.to_datetime(data[end_datetime_column])
+            if start_datetime is None:
+                start_datetime = data[datetime_column].iloc[0]
+            if end_datetime is None:
+                end_datetime = data[datetime_column].iloc[-1]
+            # >= includes the start datetime
+            # if there is no end_datetime column, select values that have start_datetimes within the period
+            # this excludes values that start before the period but don't have an end date
+            if end_datetime_column not in data.columns:
+                data_overlaps_period = (data[datetime_column] < end_datetime) & \
+                                       (data[datetime_column] >= start_datetime)
+            # if some end_datetimes are missing, assume end_datetime is the period end
+            # this assumes that missing end datetimes indicate pumping that continues to the end of the simulation
+            elif data[end_datetime_column].isna().any():
+                data.loc[data[end_datetime_column].isna(), 'end_datetime'] = end_datetime
+                data_overlaps_period = (data[datetime_column] < end_datetime) & \
+                                       (data[end_datetime_column] >= start_datetime)
+            # otherwise, select values with start datetimes that are before the period end
+            # and end datetimes that are after the period start
+            # in other words, include all values that overlap in time with the period
+            else:
+                if data[end_datetime_column].dtype == object:
+                    data[end_datetime_column] = pd.to_datetime(data[end_datetime_column])
+                data_overlaps_period = (data[datetime_column] < end_datetime) & \
+                                       (data[end_datetime_column] > start_datetime)
+            period_data = data.loc[data_overlaps_period]
+
+    else:
+        raise TypeError("period_stat must be a string, list, or None; "
+                        "got: {}".format(period_stat))
+
+    # create category column if there is none, to conform to logic below
+    categories = False
+    if category_column is None:
+        category_column = 'category'
+        period_data[category_column] = 'measured'
+    elif category_column not in period_data.columns:
+        raise KeyError('category_column: {} not in data'.format(category_column))
+    else:
+        categories = True
+
+    # compute statistic on data
+    # ensure that ids are unique in each time period
+    # by summing multiple id instances by period
+    # (only sum the data column)
+    # check for duplicates with same time, id, and category (measured vs estimated)
+    duplicated = pd.Series(list(zip(period_data[datetime_column],
+                                    period_data[id_column],
+                                    period_data[category_column]))).duplicated()
+    aggregated = period_data.groupby(id_column).first()
+    for data_column in data_columns:
+        if any(duplicated):
+            if resolve_duplicates_with == 'raise error':
+                duplicate_info = period_data.loc[duplicated.values]
+                msg = ('The following locations are duplicates '
+                       'which need to be resolved:\n{}'.format(duplicate_info))
+                raise ValueError(msg)
+            period_data.index.name = None
+            by_period = period_data.groupby([id_column, datetime_column]).first().reset_index()
+            agg_groupedby = getattr(period_data.groupby([id_column, datetime_column]),
+                                    resolve_duplicates_with)(numeric_only=True)
+            by_period[data_column] = agg_groupedby[data_column].values
+            period_data = by_period
+        agg_groupedby = getattr(period_data.groupby(id_column), stat)(numeric_only=True)
+        aggregated[data_column] = agg_groupedby[data_column].values
+    # if a category column was provided, get counts of measured vs estimated
+    # for each measurement location, for the current stress period
+    if categories:
+        counts = period_data.groupby([id_column, category_column]).size().unstack(fill_value=0)
+        for col in 'measured', 'estimated':
+            if col not in counts.columns:
+                counts[col] = 0
+            aggregated['n_{}'.format(col)] = counts[col]
+    aggregated.reset_index(inplace=True)
+
+    # add datetime back in
+    aggregated['start_datetime'] = start if start is not None else start_datetime
+    # enforce consistent datetime dtypes
+    # (otherwise pd.concat of multiple outputs from this function may fail)
+    for col in 'start_datetime', 'end_datetime':
+        if col in aggregated.columns:
+            aggregated[col] = aggregated[col].astype('datetime64[ns]')
+
+    # drop the original datetime column, which doesn't reflect dates for period averages
+    drop_cols = [datetime_column]
+    if not categories:  # drop category column if it was created
+        drop_cols.append(category_column)
+    aggregated.drop(drop_cols, axis=1, inplace=True)
+    return aggregated
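+
+Example (a sketch): downsample monthly pumping values for two wells to a
+single value per well for a stress period spanning January-February 2012::
+
+    import pandas as pd
+
+    data = pd.DataFrame({
+        'site_no': ['well1', 'well1', 'well2', 'well2'],
+        'datetime': pd.to_datetime(['2012-01-01', '2012-02-01'] * 2),
+        'flux': [-100., -200., -50., -150.],
+    })
+    aggregated = aggregate_dataframe_to_stress_period(
+        data, id_column='site_no', data_column='flux',
+        datetime_column='datetime',
+        start_datetime='2012-01-01', end_datetime='2012-03-01',
+        period_stat='mean')
+    # aggregated['flux'] -> well1: -150.0, well2: -100.0
+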
+ + + +
+[docs]
+def aggregate_xarray_to_stress_period(data, datetime_coords_name='time',
+                                      start_datetime=None, end_datetime=None,
+                                      period_stat='mean'):
+
+    period_stat = copy.copy(period_stat)
+    if isinstance(start_datetime, pd.Timestamp):
+        start_datetime = start_datetime.strftime('%Y-%m-%d')
+    if isinstance(end_datetime, pd.Timestamp):
+        end_datetime = end_datetime.strftime('%Y-%m-%d')
+    if isinstance(period_stat, str):
+        period_stat = [period_stat]
+    elif period_stat is None:
+        period_stat = ['mean']
+
+    if isinstance(period_stat, list):
+        stat = period_stat.pop(0)
+
+        # stat for specified period
+        if len(period_stat) == 2:
+            start, end = period_stat
+            arr = data.loc[start:end].values
+
+        # stat specified by single item
+        elif len(period_stat) == 1:
+            period = period_stat.pop()
+            # stat for a specified month
+            if period in months.keys() or period in months.values():
+                arr = data.loc[data[datetime_coords_name].dt.month == months.get(period, period)].values
+
+            # stat for a period specified by single string (e.g. '2014', '2014-01', etc.)
+            else:
+                arr = data.loc[period].values
+
+        # no period specified; use start/end of current period
+        elif len(period_stat) == 0:
+            assert datetime_coords_name in data.coords, \
+                "datetime_coords_name coordinate needed for " \
+                "resampling irregular data to model stress periods"
+            # not sure if this is needed for xarray
+            if data[datetime_coords_name].dtype == object:
+                data[datetime_coords_name] = pd.to_datetime(data[datetime_coords_name])
+            # default to aggregating the whole dataset
+            # if start_ and end_datetime aren't provided
+            if start_datetime is None:
+                start_datetime = data[datetime_coords_name].values[0]
+            if end_datetime is None:
+                end_datetime = data[datetime_coords_name].values[-1]
+            # >= includes the start datetime
+            # for now, in contrast to the aggregate_dataframe_to_stress_period() fn
+            # for tabular data (pandas),
+            # assume that xarray data do not have an end_datetime column
+            # (infer the end datetimes)
+            arr = data.loc[start_datetime:end_datetime].values
+
+    else:
+        raise TypeError("period_stat must be a string, list, or None; "
+                        "got: {}".format(period_stat))
+
+    # compute statistic on data
+    aggregated = getattr(arr, stat)(axis=0)
+
+    return aggregated
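+
+Example (a sketch): average a monthly (time, y, x) DataArray over the first
+four months of 2012::
+
+    import numpy as np
+    import pandas as pd
+    import xarray as xr
+
+    times = pd.date_range('2012-01-01', '2012-12-01', freq='MS')
+    data = xr.DataArray(np.random.rand(len(times), 10, 10),
+                        coords={'time': times}, dims=['time', 'y', 'x'])
+    arr2d = aggregate_xarray_to_stress_period(
+        data, start_datetime='2012-01-01', end_datetime='2012-04-01',
+        period_stat='mean')
+    # arr2d -> 2D (y, x) array of mean values
+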
+ + + +
+[docs]
+def add_date_comments_to_tdis(tdis_file, start_dates, end_dates=None):
+    """Add stress period start and end dates to a tdis file as comments;
+    add modflow-setup version info to the tdis file header.
+    """
+    tempfile = tdis_file + '.temp'
+    shutil.copy(tdis_file, tempfile)
+    with open(tempfile) as src:
+        with open(tdis_file, 'w') as dest:
+            header = ''
+            read_header = True
+            for line in src:
+                stripped = line.strip()
+                # note: startswith() is used so that '//'-style comment lines
+                # are detected (a single-character membership test would miss them)
+                if read_header and len(stripped) > 0 and \
+                        stripped.startswith(('#', '!', '//')):
+                    header += line
+                elif 'begin options' in ' '.join(line.lower().split()):
+                    if 'modflow-setup' not in header:
+                        if 'flopy' in header.lower():
+                            mfsetup_text = '# via '
+                        else:
+                            mfsetup_text = '# File created by '
+                        mfsetup_text += 'modflow-setup version {}'.format(mfsetup.__version__)
+                        mfsetup_text += ' at {:%Y-%m-%d %H:%M:%S}'.format(dt.datetime.now())
+                        header += mfsetup_text + '\n'
+                    dest.write(header)
+                    read_header = False
+                    dest.write(line)
+                elif 'begin perioddata' in ' '.join(line.lower().split()):
+                    dest.write(line)
+                    dest.write(2*' ' + '# perlen nstp tsmult\n')
+                    for i, line in enumerate(src):
+                        if 'end perioddata' in ' '.join(line.lower().split()):
+                            dest.write(line)
+                            break
+                        else:
+                            line = 2*' ' + line.strip() + f' # period {i+1}: {start_dates[i]:%Y-%m-%d}'
+                            if end_dates is not None:
+                                line += f' to {end_dates[i]:%Y-%m-%d}'
+                            line += '\n'
+                            dest.write(line)
+                else:
+                    dest.write(line)
+    os.remove(tempfile)
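+
+Example (a sketch; the TDIS file name is an assumption, and ``perioddata`` is
+a stress period table like the one produced by ``setup_perioddata_group``)::
+
+    add_date_comments_to_tdis('model.tdis',
+                              start_dates=perioddata['start_datetime'],
+                              end_dates=perioddata['end_datetime'])
+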
+ + + + \ No newline at end of file diff --git a/_modules/mfsetup/tmr.html b/_modules/mfsetup/tmr.html new file mode 100644 index 00000000..6bec98f3 --- /dev/null +++ b/_modules/mfsetup/tmr.html @@ -0,0 +1,1121 @@ + + + + + + + + mfsetup.tmr — modflow-setup 0.5.0.post59+g65803fd documentation + + + + + + + + + + + + + + + + + + + + +

Source code for mfsetup.tmr

+import time
+from pathlib import Path
+
+import flopy
+import geopandas as gp
+import numpy as np
+import pandas as pd
+from shapely.geometry import MultiLineString
+
+fm = flopy.modflow
+from flopy.discretization import StructuredGrid
+from flopy.mf6.utils.binarygrid_util import MfGrdFile
+from flopy.utils import binaryfile as bf
+
+from mfsetup.discretization import find_remove_isolated_cells
+from mfsetup.fileio import check_source_files
+from mfsetup.grid import get_cellface_midpoint, get_ij, get_intercell_connections
+from mfsetup.interpolate import Interpolator, interp_weights
+from mfsetup.lakes import get_horizontal_connections
+
+
+
+[docs]
+def get_qx_qy_qz(cell_budget_file, binary_grid_file=None,
+                 cell_connections_df=None,
+                 version='mf6',
+                 kstpkper=(0, 0),
+                 specific_discharge=False,
+                 headfile=None,
+                 modelgrid=None):
+    """Get 2 or 3D arrays of cell by cell flows across the cell faces
+    (for structured grid models).
+
+    Parameters
+    ----------
+    cell_budget_file : str, pathlike, or instance of flopy.utils.binaryfile.CellBudgetFile
+        File path or pointer to MODFLOW cell budget file.
+    binary_grid_file : str or pathlike
+        File path to MODFLOW 6 binary grid (``*.dis.grb``) file. Not needed for
+        MODFLOW-NWT models.
+    cell_connections_df : DataFrame
+        DataFrame of cell connections that can be provided as an alternative to binary_grid_file,
+        to avoid having to get the connections with each call to get_qx_qy_qz. This can
+        be produced by the :meth:`mfsetup.grid.MFsetupGrid.intercell_connections` method.
+        Must have the following columns:
+
+        === ===========================
+        n   from zero-based node number
+        kn  from zero-based layer
+        in  from zero-based row
+        jn  from zero-based column
+        m   to zero-based node number
+        km  to zero-based layer
+        im  to zero-based row
+        jm  to zero-based column
+        === ===========================
+
+    version : str
+        MODFLOW version: 'mf6' or other. If not 'mf6', the cell budget output
+        is assumed to be formatted similar to a MODFLOW 2005 style model.
+    kstpkper : tuple
+        zero-based (time step, stress period)
+    specific_discharge : bool
+        Option to return arrays of specific discharge (1D vector components)
+        instead of volumetric fluxes.
+        By default, False
+    headfile : str, pathlike, or instance of flopy.utils.binaryfile.HeadFile
+        File path or pointer to MODFLOW head file. Only required if
+        specific_discharge=True
+    modelgrid : instance of MFsetupGrid object
+        Defaults to None, only required if specific_discharge=True
+
+    Returns
+    -------
+    Qx, Qy, Qz : tuple of 2 or 3D numpy arrays
+        Volumetric or specific discharge fluxes across cell faces.
+    """
+    msg = 'Getting discharge...'
+    if specific_discharge:
+        msg = 'Getting specific discharge...'
+ print(msg) + ta = time.time() + if version == 'mf6': + # get the cell connections + if cell_connections_df is not None: + df = cell_connections_df + elif binary_grid_file is not None: + df = get_intercell_connections(binary_grid_file) + else: + raise ValueError("Must specify a binary_grid_file or cell_connections_df.") + + # get the flows + # this constitutes almost all of the execution time for this fn + t1 = time.time() + if isinstance(cell_budget_file, str) or isinstance(cell_budget_file, Path): + cbb = bf.CellBudgetFile(cell_budget_file) + else: + cbb = cell_budget_file + nlay, nrow, ncol = cbb.shape + flowja = cbb.get_data(text='FLOW-JA-FACE', kstpkper=kstpkper)[0][0, 0, :] + df['q'] = flowja[df['qidx']] + print(f"getting flows from budget file took {time.time() - t1:.2f}s\n") + + # get arrays of flow through cell faces + # Qx (right face; TODO: confirm direction) + rfdf = df.loc[(df['jn'] < df['jm'])] + nlay = rfdf['km'].max() + 1 + qx = np.zeros((nlay, nrow, ncol)) + qx[rfdf['kn'].values, rfdf['in'].values, rfdf['jn'].values] = -rfdf.q.values + + # Qy (front face; TODO: confirm direction) + ffdf = df.loc[(df['in'] < df['im'])] + qy = np.zeros((nlay, nrow, ncol)) + qy[ffdf['kn'].values, ffdf['in'].values, ffdf['jn'].values] = -ffdf.q.values + + # Qz (bottom face; TODO: confirm that this is downward positive) + bfdf = df.loc[(df['kn'] < df['km'])] + qz = np.zeros((nlay, nrow, ncol)) + qz[bfdf['kn'].values, bfdf['in'].values, bfdf['jn'].values] = -bfdf.q.values + else: + if isinstance(cell_budget_file, str) or isinstance(cell_budget_file, Path): + cbb = bf.CellBudgetFile(cell_budget_file) + else: + cbb = cell_budget_file + qx = cbb.get_data(text="flow right face", kstpkper=kstpkper)[0] + qy = cbb.get_data(text="flow front face", kstpkper=kstpkper)[0] + unique_rec_names = [bs.decode().strip().lower() for bs in cbb.get_unique_record_names()] + if "flow lower face" in unique_rec_names: + qz = cbb.get_data(text="flow lower face", kstpkper=kstpkper)[0] + else: + qz = np.zeros_like(qy) + + # optionally get specific discharge + if specific_discharge: + if modelgrid is None: + raise Exception('specific discharge calculations require a modelgrid input') + if headfile is None: + print('No headfile object provided - thickness for specific discharge calculations\n' + + 'will be based on the model top rather than the water table') + thickness = modelgrid.cell_thickness + else: + if isinstance(headfile, str) or isinstance(headfile, Path): + hds = bf.HeadFile(headfile).get_data(kstpkper=kstpkper) + else: + hds = headfile.get_data(kstpkper=kstpkper) + thickness = modelgrid.saturated_thickness(array=hds) + + delr_gridp, delc_gridp = np.meshgrid(modelgrid.delr, + modelgrid.delc) + nlay, nrow, ncol = modelgrid.shape + + # multiply average thickness by width (along rows or cols) to + # obtain cross sectional area on the faces + # https://water.usgs.gov/ogw/modflow-nwt/MODFLOW-NWT-Guide/delrdelcillustration.png + qy_face_areas = np.tile(delr_gridp[:-1,:], (nlay,1,1)) * \ + ((thickness[:,:-1,:]+thickness[:,1:,:])/2) + # the above calculation results in a missing dimension ( only internal faces are + # calculated ) so we concatenate on a repetition of the final row or column + qy_face_areas = np.concatenate([qy_face_areas, + np.expand_dims(qy_face_areas[:,-1,:], axis=1)], axis=1) + + qx_face_areas = np.tile(delc_gridp[:,:-1], (nlay,1,1)) * \ + ((thickness[:,:,:-1]+thickness[:,:,1:])/2) + qx_face_areas = np.concatenate([qx_face_areas, + np.expand_dims(qx_face_areas[:,:,-1], axis=2)], axis=2) + + # z 
direction is simply delr * delc across all layers + qz_face_areas = np.tile(delr_gridp * delc_gridp, (nlay,1,1)) + + # divide by the areas resulting in normalized, specific discharge + qx /= qx_face_areas + qy /= qy_face_areas + qz /= qz_face_areas + + print(f"{msg} took {time.time() - ta:.2f}s\n") + return qx, qy, qz
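+
+Example (a sketch; the file names are assumptions): get volumetric flows
+across the cell faces of a MODFLOW 6 model::
+
+    qx, qy, qz = get_qx_qy_qz('model.cbc',
+                              binary_grid_file='model.dis.grb',
+                              version='mf6', kstpkper=(0, 0))
+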
+
+
+class Tmr:
+    """
+    Class for general telescopic mesh refinement of a MODFLOW model. Head or
+    flux fields from the parent model are interpolated to boundary cells of
+    the inset model, which may be in any configuration (jagged, rotated, etc.).
+
+    Parameters
+    ----------
+    parent_model : flopy model instance
+        Parent model. Must have a valid, attached ``modelgrid`` attribute that is an
+        instance of :class:`mfsetup.grid.MFsetupGrid`.
+    inset_model : flopy model instance
+        Inset model. Must have a valid, attached ``modelgrid`` attribute that is an
+        instance of :class:`mfsetup.grid.MFsetupGrid`.
+    parent_head_file : filepath
+        MODFLOW binary head output
+    parent_cell_budget_file : filepath
+        MODFLOW binary cell budget output
+    parent_binary_grid_file : filepath
+        MODFLOW 6 binary grid file (``*.grb``)
+    define_connections_by : str, {'max_active_extent', 'by_layer'}
+        Method for defining perimeter cells where the TMR boundary
+        condition will be applied. If 'max_active_extent', the
+        maximum footprint of the active area (including all cell
+        locations with at least one layer that is active) will be used.
+        If 'by_layer', the perimeter of the active area in each layer will be used
+        (excluding any interior clusters of active cells). The 'by_layer'
+        option is potentially problematic if some layers have substantial
+        areas of pinched-out (idomain != 1) cells, which may result
+        in perimeter boundary condition cells getting placed too close
+        to the area of interest. By default, 'max_active_extent'.
+    """
+
+    def __init__(self, parent_model, inset_model,
+                 parent_head_file=None, parent_cell_budget_file=None,
+                 parent_binary_grid_file=None,
+                 boundary_type=None, inset_parent_period_mapping=None,
+                 parent_start_date_time=None, source_mask=None,
+                 define_connections_by='max_active_extent',
+                 shapefile=None,
+                 ):
+        self.parent = parent_model
+        self.inset = inset_model
+        self.parent_head_file = parent_head_file
+        self.parent_cell_budget_file = parent_cell_budget_file
+        self.parent_binary_grid_file = parent_binary_grid_file
+        self.define_connections_by = define_connections_by
+        self.shapefile = shapefile
+        self.boundary_type = boundary_type
+        if boundary_type is None and parent_head_file is not None:
+            self.boundary_type = 'head'
+        elif boundary_type is None and parent_cell_budget_file is not None:
+            self.boundary_type = 'flux'
+        self.parent_start_date_time = parent_start_date_time
+
+        # Path for writing auxiliary output tables
+        # (boundary_cells.shp, etc.)
+        if hasattr(self.inset, '_tables_path'):
+            self._tables_path = Path(self.inset._tables_path)
+        else:
+            self._tables_path = Path(self.inset.model_ws) / 'tables'
+        self._tables_path.mkdir(exist_ok=True, parents=True)
+
+        # properties
+        self._idomain = None
+        self._inset_boundary_cells = None
+        self._inset_parent_period_mapping = inset_parent_period_mapping
+        self._interp_weights_heads = None
+        self._interp_weights_flux = None
+        self._source_mask = source_mask
+        self._inset_zone_within_parent = None
+
+    @property
+    def idomain(self):
+        """Active area of the inset model.
+        """
+        if self._idomain is None:
+            if self.inset.version == 'mf6':
+                idomain = self.inset.dis.idomain.array
+                if idomain is None:
+                    idomain = np.ones_like(self.inset.dis.botm.array, dtype=int)
+            else:
+                idomain = self.inset.bas6.ibound.array
+            self._idomain = idomain
+        return self._idomain
+
+    @property
+    def inset_boundary_cells(self):
+        if self._inset_boundary_cells is None:
+            by_layer = self.define_connections_by == 'by_layer'
+            df = self.get_inset_boundary_cells(by_layer=by_layer)
+            x, y, z = self.inset.modelgrid.xyzcellcenters
+            df['x'] = x[df.i, df.j]
+            df['y'] = y[df.i, df.j]
+            df['z'] = z[df.k, df.i, df.j]
+            self._inset_boundary_cells = df
+            self._interp_weights = None
+        return self._inset_boundary_cells
+
+    @property
+    def inset_parent_period_mapping(self):
+        nper = self.inset.nper
+        # if mapping between source and dest model periods isn't specified
+        # assume one to one mapping of stress periods between models
+        if self._inset_parent_period_mapping is None:
+            parent_periods = list(range(self.parent.nper))
+            self._inset_parent_period_mapping = {i: parent_periods[i]
+                                                 if i < self.parent.nper
+                                                 else parent_periods[-1] for i in range(nper)}
+        return self._inset_parent_period_mapping
+
+    @inset_parent_period_mapping.setter
+    def inset_parent_period_mapping(self, inset_parent_period_mapping):
+        self._inset_parent_period_mapping = inset_parent_period_mapping
+
+    @property
+    def interp_weights_flux(self):
+        """For the two main directions of flux (i, j) and the four orientations of
+        inset faces to interpolate to (right, left, top, bottom),
+        we can precalculate the interpolation weights of the combinations to speed up
+        interpolation"""
+        if self._interp_weights_flux is None:
+            self._interp_weights_flux = dict()  # we need four flux directions for the insets
+            # x, y, z locations of parent model head values for i faces
+            ipx, ipy, ipz = self.x_iface_parent, self.y_iface_parent, self.z_iface_parent
+            # x, y, z locations of parent model head values for j faces
+            jpx, jpy, jpz = self.x_jface_parent, self.y_jface_parent, self.z_jface_parent
+
+            # these are the i-direction fluxes
+            x, y, z = self.inset_boundary_cell_faces.loc[
+                self.inset_boundary_cell_faces.cellface.isin(['top', 'bottom'])][['xface', 'yface', 'zface']].T.values
+            self._interp_weights_flux['iface'] = interp_weights((ipx, ipy, ipz), (x, y, z), d=3)
+            assert not np.any(np.isnan(self._interp_weights_flux['iface'][1]))
+
+            # these are the j-direction fluxes
+            x, y, z = self.inset_boundary_cell_faces.loc[
+                self.inset_boundary_cell_faces.cellface.isin(['left', 'right'])][['xface', 'yface', 'zface']].T.values
+            self._interp_weights_flux['jface'] = interp_weights((jpx, jpy, jpz), (x, y, z), d=3)
+            assert not np.any(np.isnan(self._interp_weights_flux['jface'][1]))
+
+        return self._interp_weights_flux
+
+    @property
+    def parent_xyzcellcenters(self):
+        """Get x, y, z locations of parent cells in a buffered area
+        (defined by the _source_grid_mask property) around the
+        inset model."""
+        px, py, pz = self.parent.modelgrid.xyzcellcenters
+
+        # add an extra layer on the top and bottom
+        # for inset model cells above or below
+        # the last cell center in the vertical direction
+        # pad top by top layer thickness
+        b1 = self.parent.modelgrid.top - self.parent.modelgrid.botm[0]
+        top = pz[0] + b1
+        # pad botm by botm layer thickness
+        if self.parent.modelgrid.shape[0] > 1:
+            b2 = -np.diff(self.parent.modelgrid.botm[-2:], axis=0)[0]
+        else:
+            b2 = b1
+        botm = pz[-1] - b2
+        pz = np.vstack([[top], pz, [botm]])
+
+        nlay, nrow, ncol = pz.shape
+        px = np.tile(px, (nlay, 1, 1))
+        py = np.tile(py, (nlay, 1, 1))
+        mask = self._source_grid_mask
+        # mask already has extra top/botm layers
+        # (_source_grid_mask property)
+        px = px[mask]
+        py = py[mask]
+        pz = pz[mask]
+        return px, py, pz
+
+    @property
+    def parent_xyzcellfacecenters(self):
+        """Get x, y, z locations of the centroids of the cell faces
+        in the row and column directions in a buffered area
+        (defined by the _source_grid_mask property) around the
+        inset model. Analogous to parent_xyzcellcenters, but for
+        interpolating parent model cell by cell fluxes that are located
+        at the cell face centers (instead of heads that are located
+        at the cell centers).
+        """
+        k, i, j = np.indices(self.parent.modelgrid.shape)
+        xyzcellfacecenters = {}
+        for cellface in 'right', 'bottom':
+            px, py, pz = get_cellface_midpoint(self.parent.modelgrid,
+                                               k, i, j,
+                                               cellface)
+            px = np.reshape(px, self.parent.modelgrid.shape)
+            py = np.reshape(py, self.parent.modelgrid.shape)
+            pz = np.reshape(pz, self.parent.modelgrid.shape)
+            # add an extra layer on the top and bottom
+            # for inset model cells above or below
+            # the last cell center in the vertical direction
+            # pad top by top layer thickness
+            b1 = self.parent.modelgrid.top - self.parent.modelgrid.botm[0]
+            top = pz[0] + b1
+            # pad botm by botm layer thickness
+            if self.parent.modelgrid.shape[0] > 1:
+                b2 = -np.diff(self.parent.modelgrid.botm[-2:], axis=0)[0]
+            else:
+                b2 = b1
+            botm = pz[-1] - b2
+            pz = np.vstack([[top], pz, [botm]])
+
+            nlay, nrow, ncol = pz.shape
+            px = np.tile(px, (nlay, 1, 1))
+            py = np.tile(py, (nlay, 1, 1))
+            mask = self._source_grid_mask
+            # mask already has extra top/botm layers
+            # (_source_grid_mask property)
+            px = px[mask]
+            py = py[mask]
+            pz = pz[mask]
+
+            xyzcellfacecenters[cellface] = px, py, pz
+        return xyzcellfacecenters
+
+    @property
+    def _inset_max_active_area(self):
+        """The maximum (2D) footprint of the active area within the inset
+        model grid, where each i, j location has at least 1 active cell
+        vertically, excluding any inactive holes that are surrounded by
+        active cells.
+        """
+        # get the max footprint of active cells
+        max_active_area = np.sum(self.idomain > 0, axis=0) > 0
+        # fill any holes within the max footprint
+        # including any LGR areas (that are inactive in this model)
+        # set min cluster size to 1 greater than number of inactive cells
+        # (to not allow any holes)
+        minimum_cluster_size = np.sum(max_active_area == 0) + 1
+        # find_remove_isolated_cells fills clusters of 1s with 0s
+        # to fill holes, we want to look for clusters of 0s and fill with 1s
+        to_fill = ~max_active_area
+        # pad the array to fill so that exterior inactive cells
+        # (outside the active area perimeter) aren't included
+        to_fill = np.pad(to_fill, pad_width=1, mode='reflect')
+        # invert the result to get True values for active cells and filled areas
+        filled = ~find_remove_isolated_cells(to_fill, minimum_cluster_size)
+        # de-pad the result
+        filled = filled[1:-1, 1:-1]
+        max_active_area = filled
+        return max_active_area
+
+    @property
+    def inset_zone_within_parent(self):
+        """The footprint of the inset model maximum active area footprint
+        (``Tmr._inset_max_active_area``) within the parent model grid.
+        In other words, all parent cells containing one or more inset
+        model cell centers within ``Tmr._inset_max_active_area`` (ones).
+        Zeros indicate parent cells with no inset cells.
+        """
+        # get the locations of the inset model cells within _inset_max_active_area
+        x, y, z = self.inset.modelgrid.xyzcellcenters
+        x = x[self._inset_max_active_area]
+        y = y[self._inset_max_active_area]
+        pi, pj = get_ij(self.parent.modelgrid, x, y)
+        inset_zone_within_parent = np.zeros((self.parent.modelgrid.nrow,
+                                             self.parent.modelgrid.ncol), dtype=bool)
+        inset_zone_within_parent[pi, pj] = True
+        return inset_zone_within_parent
+
+    @property
+    def _source_grid_mask(self):
+        """Boolean array indicating window in parent model grid (subset of cells)
+        that encompass the inset model domain. Used to speed up interpolation
+        of parent grid values onto inset grid."""
+        if self._source_mask is None:
+            mask = np.zeros((self.parent.modelgrid.nrow,
+                             self.parent.modelgrid.ncol), dtype=bool)
+            if hasattr(self.inset, 'parent_mask') and \
+                    (self.inset.parent_mask.shape == self.parent.modelgrid.xcellcenters.shape):
+                mask = self.inset.parent_mask
+            else:
+                l, r, b, t = self.inset.modelgrid.extent
+                x = np.array([r, r, l, l, r])
+                y = np.array([b, t, t, b, b])
+                pi, pj = get_ij(self.parent.modelgrid, x, y)
+                pad = 3
+                i0 = np.max([pi.min() - pad, 0])
+                i1 = np.min([pi.max() + pad + 1, self.parent.modelgrid.nrow])
+                j0 = np.max([pj.min() - pad, 0])
+                j1 = np.min([pj.max() + pad + 1, self.parent.modelgrid.ncol])
+                mask[i0:i1, j0:j1] = True
+            # make the mask 3D
+            # include extra layer for top and bottom edges of model
+            mask3d = np.tile(mask, (self.parent.modelgrid.nlay + 2, 1, 1))
+            self._source_mask = mask3d
+        elif len(self._source_mask.shape) == 2:
+            mask3d = np.tile(self._source_mask, (self.parent.modelgrid.nlay + 2, 1, 1))
+            self._source_mask = mask3d
+        return self._source_mask
+
+    def get_inset_boundary_cells(self, by_layer=False, shapefile=None):
+        """Get a dataframe of connection information for
+        horizontal boundary cells.
+
+        Parameters
+        ----------
+        by_layer : bool
+            Controls how boundary cells will be defined. If True,
+            the perimeter of the active area in each layer will be used
+            (excluding any interior clusters of active cells). If
+            False, the maximum footprint of the active area
+            (including all cell locations with at least one layer that
+            is active) will be used.
+        """
+        print('\ngetting perimeter cells...')
+        t0 = time.time()
+        if shapefile is None:
+            shapefile = self.shapefile
+        if shapefile:
+            perimeter = gp.read_file(shapefile)
+            perimeter = perimeter[['geometry']]
+            # reproject the perimeter shapefile to the model CRS if needed
+            if perimeter.crs != self.inset.modelgrid.crs:
+                perimeter.to_crs(self.inset.modelgrid.crs, inplace=True)
+            # convert polygons to linear rings
+            # (so just the cells along the polygon exterior are selected)
+            geoms = []
+            for g in perimeter.geometry:
+                if g.type == 'MultiPolygon':
+                    g = MultiLineString([p.exterior for p in g.geoms])
+                elif g.type == 'Polygon':
+                    g = g.exterior
+                geoms.append(g)
+            # add a buffer of 1 cell width so that cells aren't missed
+            # extra cells will get culled later
+            # when only cells along the outer perimeter (max idomain extent)
+            # are selected
+            buffer_dist = np.mean([self.inset.modelgrid.delr.mean(),
+                                   self.inset.modelgrid.delc.mean()])
+            perimeter['geometry'] = [g.buffer(buffer_dist * 0.5) for g in geoms]
+            grid_df = self.inset.modelgrid.get_dataframe(layers=False)
+            df = gp.sjoin(grid_df, perimeter, predicate='intersects', how='inner')
+            # add layers
+            dfs = []
+            for k in range(self.inset.modelgrid.nlay):
+                kdf = df.copy()
+                kdf['k'] = k
+                dfs.append(kdf)
+            specified_bcells = pd.concat(dfs)
+            # get the active extent in each layer
+            # and the cell faces along the edge
+            # apply those cell faces to specified_bcells
+            by_layer = True
+        else:
+            specified_bcells = None
+        if not by_layer:
+            # get the filled maximum active footprint
+            max_active_area = self._inset_max_active_area
+
+            # pad filled idomain array with zeros around the edge
+            # so that perimeter connections are identified
+            filled = np.pad(max_active_area, 1, constant_values=0)
+            filled3d = np.tile(filled, (self.idomain.shape[0], 1, 1))
+            df = get_horizontal_connections(filled3d, connection_info=False)
+            # decrement rows and columns
+            # so that they reflect positions in the non-padded array
+            df['i'] -= 1
+            df['j'] -= 1
+        else:
+            dfs = []
+            for k, layer_idomain in enumerate(self.idomain):
+                # just get the perimeter of inactive cells
+                # (exclude any interior active cells)
+                # start by filling any interior active cells
+                from scipy.ndimage import binary_fill_holes
+                binary_idm = layer_idomain > 0
+                filled = binary_fill_holes(binary_idm)
+                # pad filled idomain array with zeros around the edge
+                # so that perimeter connections are identified
+                filled = np.pad(filled, 1, constant_values=0)
+                # get the cells along the inside edge
+                # of the model active area perimeter,
+                # via a sobel filter
+                df = get_horizontal_connections(filled, connection_info=False)
+                df['k'] = k
+                # decrement rows and columns
+                # so that they reflect positions in the non-padded array
+                df['i'] -= 1
+                df['j'] -= 1
+                dfs.append(df)
+            df = pd.concat(dfs)
+
+        # cull the boundary cells identified above
+        # with the sobel filter on the outer perimeter
+        # to just the cells specified in the shapefile
+        if specified_bcells is not None:
+            df['cellid'] = list(zip(df.k, df.i, df.j))
+            specified_bcells['cellid'] = list(zip(specified_bcells.k, specified_bcells.i, specified_bcells.j))
+            df = df.loc[df.cellid.isin(specified_bcells.cellid)]
+
+        # add layer top and bottom and idomain information
+        layer_tops = np.stack([self.inset.dis.top.array] +
+                              [l for l in self.inset.dis.botm.array])[:-1]
+        df['top'] = layer_tops[df.k, df.i, df.j]
+
df['botm'] = self.inset.dis.botm.array[df.k, df.i, df.j] + df['idomain'] = 1 + if self.inset.version == 'mf6': + df['idomain'] = self.idomain[df.k, df.i, df.j] + elif 'BAS6' in self.inset.get_package_list(): + df['idomain'] = self.inset.bas6.ibound.array[df.k, df.i, df.j] + df = df[['k', 'i', 'j', 'cellface', 'top', 'botm', 'idomain']] + # drop inactive cells + df = df.loc[df['idomain'] > 0] + + # get cell polygons from modelgrid + # write shapefile of boundary cells with face information + grid_df = self.inset.modelgrid.dataframe.copy() + grid_df['cellid'] = list(zip(grid_df.k, grid_df.i, grid_df.j)) + geoms = dict(zip(grid_df['cellid'], grid_df['geometry'])) + if 'cellid' not in df.columns: + df['cellid'] = list(zip(df.k, df.i, df.j)) + df['geometry'] = [geoms[cellid] for cellid in df.cellid] + df = gp.GeoDataFrame(df, crs=self.inset.modelgrid.crs) + outshp = Path(self._tables_path, 'boundary_cells.shp') + df.drop('cellid', axis=1).to_file(outshp) + print(f"wrote {outshp}") + print("perimeter cells took {:.2f}s\n".format(time.time() - t0)) + return df + + def get_inset_boundary_values(self, for_external_files=False): + + if self.boundary_type == 'head': + check_source_files([self.parent_head_file]) + hdsobj = bf.HeadFile(self.parent_head_file) # , precision='single') + all_kstpkper = hdsobj.get_kstpkper() + + last_steps = {kper: kstp for kstp, kper in all_kstpkper} + + # create an interpolator instance + cell_centers_interp = Interpolator(self.parent_xyzcellcenters, + self.inset_boundary_cells[['x', 'y', 'z']].T.values, + d=3, + source_values_mask=self._source_grid_mask) + # compute the weights + _ = cell_centers_interp.interp_weights + + print('\ngetting perimeter heads...') + t0 = time.time() + dfs = [] + parent_periods = [] + for inset_per, parent_per in self.inset_parent_period_mapping.items(): + print(f'for stress period {inset_per}', end=', ') + t1 = time.time() + # skip getting data if parent period is already represented + # (heads will be reused) + if parent_per in parent_periods: + continue + else: + parent_periods.append(parent_per) + parent_kstpkper = last_steps[parent_per], parent_per + parent_heads = hdsobj.get_data(kstpkper=parent_kstpkper) + # pad the parent heads on the top and bottom + # so that inset cells above and below the top/bottom cell centers + # will be within the interpolation space + # (parent x, y, z locations already contain this pad; parent_xyzcellcenters) + parent_heads = np.pad(parent_heads, pad_width=1, mode='edge')[:, 1:-1, 1:-1] + + # interpolate inset boundary heads from 3D parent head solution + heads = cell_centers_interp.interpolate(parent_heads, method='linear') + #heads = griddata((px, py, pz), parent_heads.ravel(), + # (x, y, z), method='linear') + + # make a DataFrame of interpolated heads at perimeter cell locations + df = self.inset_boundary_cells.copy() + df['per'] = inset_per + df['head'] = heads + + # boundary heads must be greater than the cell bottom + # and idomain > 0 + loc = (df['head'] > df['botm']) & (df['idomain'] > 0) + df = df.loc[loc] + # drop invalid heads (most likely due to dry cells) + valid = (df['head'] < 1e10) & (df['head'] > -1e10) + df = df.loc[valid] + dfs.append(df) + print("took {:.2f}s".format(time.time() - t1)) + + df = pd.concat(dfs) + # drop duplicate cells (accounting for stress periods) + # (that may have connections in the x and y directions, + # and therefore would be listed twice) + df['cellid'] = list(zip(df.per, df.k, df.i, df.j)) + duplicates = df.duplicated(subset=['cellid']) + df = df.loc[~duplicates, 
['k', 'i', 'j', 'per', 'head']] + print("getting perimeter heads took {:.2f}s\n".format(time.time() - t0)) + + + elif self.boundary_type == 'flux': + check_source_files([self.parent_cell_budget_file]) + if self.parent.version == 'mf6': + if self.parent_binary_grid_file is None: + raise ValueError('Specified flux perimeter boundary requires a parent_binary_grid_file if parent is MF6') + else: + check_source_files([self.parent_binary_grid_file]) + fileobj = bf.CellBudgetFile(self.parent_cell_budget_file) # , precision='single') + all_kstpkper = fileobj.get_kstpkper() + + last_steps = {kper: kstp for kstp, kper in all_kstpkper} + + print('\ngetting perimeter fluxes...') + t0 = time.time() + dfs = [] + parent_periods = [] + + # TODO: consider refactoring to move this into its own function + # * handle vertical fluxes + # * possibly handle rotated inset with different angle than parent - now assuming collinear + # * Handle the geometry issues for the inset + # * need to locate edge faces (x,y,z) based on which face is out (e.g. left, right, up, down) + + # TODO: refactor self.inset_boundary_cells + # it's probably not ideal to have self.inset_boundary_cells + # be a 'public' attribute that gets modified every stress period + # but without any information tying the current state of it + # to a specific stress period. It should either have all stress periods + # or the stress period-specific information + # (the fluxes and cell thickness if we are considering sat. thickness) + # pulled out into a separate container + + # make a dataframe to store these + self.inset_boundary_cell_faces = self.inset_boundary_cells.copy() + # get the locations of the boundary face midpoints + x, y, z = get_cellface_midpoint(self.inset.modelgrid, + *self.inset_boundary_cells[['k', 'i', 'j', 'cellface']].T.values) + self.inset_boundary_cell_faces['x'] = x + self.inset_boundary_cell_faces['y'] = y + self.inset_boundary_cell_faces['z'] = z + # renaming columns to clarify that x, y, z are for the outer cell face + #self.inset_boundary_cell_faces.rename(columns={'x':'xface','y':'yface','z':'zface'}, inplace=True) + # convert x,y coordinates to model coords from world coords + #self.inset_boundary_cell_faces.xface, self.inset_boundary_cell_faces.yface = \ + # self.inset.modelgrid.get_local_coords(self.inset_boundary_cell_faces.xface, self.inset_boundary_cell_faces.yface) + # calculate the thickness to later get the area + # TODO: consider saturated thickness instead, but this would require interpolating parent heads to inset cell locations + + self.inset_boundary_cell_faces['thickness'] = self.inset_boundary_cell_faces.top - self.inset_boundary_cell_faces.botm + # populate cell face widths + self.inset_boundary_cell_faces['width'] = np.nan + left_right_faces = self.inset_boundary_cell_faces['cellface'].isin({'left', 'right'}) + # left and right faces are along columns + rows = self.inset_boundary_cell_faces.loc[left_right_faces, 'i'] + self.inset_boundary_cell_faces.loc[left_right_faces, 'width'] = self.inset.modelgrid.delc[rows] + # top and bottom faces are along rows + top_bottom_faces = self.inset_boundary_cell_faces['cellface'].isin({'top', 'bottom'}) + columns = self.inset_boundary_cell_faces.loc[top_bottom_faces, 'j'] + self.inset_boundary_cell_faces.loc[top_bottom_faces, 'width'] = self.inset.modelgrid.delr[columns] + assert not self.inset_boundary_cell_faces['width'].isna().any() + + self.inset_boundary_cell_faces['face_area'] = self.inset_boundary_cell_faces['width'] *\ + self.inset_boundary_cell_faces['thickness'] + 
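+ # note: face_area (width x thickness) is the flow area used further below
+ # to convert the interpolated specific discharge at each boundary face (L/T)
+ # to a volumetric flux Q (L3/T) for the Well Package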
# pre-seed the area as thickness to later mult by width + #self.inset_boundary_cell_faces['face_area'] = self.inset_boundary_cell_faces['thickness'].values + # placeholder for interpolated values + self.inset_boundary_cell_faces['q_interp'] = np.nan + # placeholder for flux to well package + # self.inset_boundary_cell_faces['Q'] = np.nan + + # make a grid of the spacings + #delr_gridi, delc_gridi = np.meshgrid(self.inset.modelgrid.delr, self.inset.modelgrid.delc) + # + #for cn in self.inset_boundary_cell_faces.cellface.unique(): + # curri = self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn].i + # currj = self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn].j + # curr_delc = delc_gridi[curri, currj] + # curr_delr = delr_gridi[curri, currj] + # if cn == 'top': + # #self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn, 'yface'] += curr_delc/2 + # self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn, 'face_area'] *= curr_delr + # elif cn == 'bottom': + # #self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn, 'yface'] -= curr_delc/2 + # self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn, 'face_area'] *= curr_delr + # if cn == 'right': + # #self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn, 'xface'] += curr_delr/2 + # self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn, 'face_area'] *= curr_delc + # elif cn == 'left': + # #self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn, 'xface'] -= curr_delr/2 + # self.inset_boundary_cell_faces.loc[self.inset_boundary_cell_faces.cellface==cn, 'face_area'] *= curr_delc + + # + # Now handle the geometry issues for the parent + # first thicknesses (at cell centers) + + parent_thick = self.parent.modelgrid.cell_thickness + + # make matrices of the row and column spacings + # NB --> trying to preserve the always seemingly + # backwards delr/delc definitions + # also note - for now, taking average thickness at a connected face + + # need XYZ locations of the center of each face for + # iface and jface edges (faces) + # NB edges are returned in model coordinates + #xloc_edge, yloc_edge = self.parent.modelgrid.xyedges + #nlay = self.parent.modelgrid.nlay + #nrow = self.parent.modelgrid.nrow + #ncol = self.parent.modelgrid.ncol + ## throw out the left and top edges, respectively + #xloc_edge=xloc_edge[1:] + #yloc_edge=yloc_edge[1:] + ## tile out to full dimensions of the grid + #xloc_edge = np.tile(np.atleast_2d(xloc_edge),(nlay+2,nrow,1)) + #yloc_edge = np.tile(np.atleast_2d(yloc_edge).T,(nlay+2,1,ncol)) +# + ## TODO: implement vertical fluxes + #''' parent_vface_areas = np.tile(delc_grid, (nlay,1,1)) * \ + # np.tile(delr_grid, (nlay,1,1)) + #''' + #xloc_center, yloc_center = self.parent.modelgrid.xycenters +# + ## tile out to full dimensions of the grid +# + #xloc_center = np.tile(np.atleast_2d(xloc_center),(nlay+2,nrow,1)) + #yloc_center = np.tile(np.atleast_2d(yloc_center).T,(nlay+2,1,ncol)) +# + ## get the vertical centroids initially at cell centroids + #zloc = (self.parent.modelgrid.top_botm[:-1,:,:] + + # self.parent.modelgrid.top_botm[1:,:,:] ) / 2 +# + ## pad in the vertical above and below the model + #zpadtop = np.expand_dims(self.parent.modelgrid.top_botm[0,:,:] + parent_thick[0], axis=0) + #zpadbotm = np.expand_dims(self.parent.modelgrid.top_botm[-1,:,:] - parent_thick[-1], axis=0) + 
#zloc=np.vstack([zpadtop,zloc,zpadbotm]) +# + ## for iface, all cols, nrow-1 rows + #self.x_iface_parent = xloc_center[:,:-1,:].ravel() + #self.y_iface_parent = yloc_edge[:,:,:-1].ravel() + ## need to calculate the average z location along rows + #self.z_iface_parent = ((zloc[:,:-1,:]+zloc[:,1:,:]) / 2).ravel() + ## for jface, all rows, ncol-1 cols + #self.x_jface_parent = xloc_edge[:,:-1,:].ravel() + #self.y_jface_parent = yloc_center[:,:,:-1].ravel() + ## need to calculate the average z location along columns + #self.z_jface_parent = ((zloc[:,:,:-1]+zloc[:,:,1:]) / 2).ravel() + ## for kface, all cols, all rows + #self.x_kface_parent = xloc_center.ravel() + #self.y_kface_parent = yloc_center.ravel() + ## for zlocations, -1 layers + #self.z_kface_parent = zloc.ravel() +# + #''' + ## get the perimeter cells and calculate the weights + #_ = self.interp_weights_flux + #''' + # interpolate parent face centers + # (where the cell by cell flows and specific discharge values are located) + # to inset face centers along the exterior sides of the boundary cells + # (the edge of the inset model, where the boundary fluxes will be located) + + # interpolate parent y fluxes (column parallel) + # to inset boundary cell face centers + #px = self.x_iface_parent + #py = self.y_iface_parent + #pz = self.z_iface_parent + px, py, pz = self.parent_xyzcellcenters + #px, py, pz = self.parent_xyzcellfacecenters['bottom'] + iface_interp = Interpolator((px, py, pz), + #self.inset_boundary_cell_faces[['x', 'y', 'z']].T.values, + self.inset_boundary_cells[['x', 'y', 'z']].T.values, + d=3, source_values_mask=self._source_grid_mask + ) + _ = iface_interp.interp_weights + # interpolate parent x fluxes (row parallel) + # to inset boundary cell face centers + #px = self.x_jface_parent + #py = self.y_jface_parent + #pz = self.z_jface_parent + #px, py, pz = self.parent_xyzcellfacecenters['right'] + #jface_interp = Interpolator((px, py, pz), + # #self.inset_boundary_cell_faces[['x', 'y', 'z']].T.values, + # self.inset_boundary_cells[['x', 'y', 'z']].T.values, + # d=3, source_values_mask=self._source_grid_mask + # ) + #_ = jface_interp.interp_weights + + #kface_interp = Interpolator((self.x_kface_parent, self.y_kface_parent, self.z_kface_parent), + # self.inset_boundary_cells[['x', 'y', 'z']].T.values, + # d=3) + #_ = kface_interp.interp_weights + + # get a dataframe of cell connections + # (that can be reused with subsequent stress periods) + cell_connections_df = None + if self.parent.version == 'mf6': + cell_connections_df = get_intercell_connections(self.parent_binary_grid_file) + + for inset_per, parent_per in self.inset_parent_period_mapping.items(): + print(f'for stress period {inset_per}', end=', ') + t1 = time.time() + # skip getting data if parent period is already represented + # (heads will be reused) + if parent_per in parent_periods: + continue + else: + parent_periods.append(parent_per) + parent_kstpkper = last_steps[parent_per], parent_per + + # get parent specific discharge for inset area + qx, qy, qz = get_qx_qy_qz(self.parent_cell_budget_file, + cell_connections_df=cell_connections_df, + version=self.parent.version, + kstpkper=parent_kstpkper, + specific_discharge=True, + modelgrid=self.parent.modelgrid, + headfile=self.parent_head_file) + + # pad the two parent flux arrays on the top and bottom + # so that inset cells above and below the top/bottom cell centers + # will be within the interpolation space + qx = np.pad(qx, pad_width=1, mode='edge')[:, 1:-1, 1:-1] + qy = np.pad(qy, pad_width=1, 
mode='edge')[:, 1:-1, 1:-1] + qz = np.pad(qz, pad_width=1, mode='edge')[:, 1:-1, 1:-1] + + + # TODO: consider padding or not on top, left, and "top (row-wise)" + # (parent x, y, z locations already contain this pad - see zloc above) + #q_iface = np.pad(q_iface, pad_width=1, mode='edge')[:, 1:-1, 1:-1].ravel() + #q_jface = np.pad(q_jface, pad_width=1, mode='edge')[:, 1:-1, 1:-1].ravel() + + + # TODO: refactor interpolation to use the new interpolator object - DONE: see above + # interpolate q at the four different face orientations (e.g. fluxdir) + + # interpolate inset boundary fluxes from the 3D parent specific discharge solution + t2 = time.time() + y_flux = iface_interp.interpolate(qy, method='linear') + x_flux = iface_interp.interpolate(qx, method='linear') + # v_flux = kface_interp.interpolate(qz, method='linear') + print(f"interpolation took {time.time() - t2:.2f}s") + + t2 = time.time() + self.inset_boundary_cell_faces = self.inset_boundary_cell_faces.assign( + qx_interp=x_flux, + qy_interp=y_flux)#, + #qz_interp=v_flux) + + # assign q values, flipping the sign where flow is counter to the CBB convention directions of right and bottom + top_faces = self.inset_boundary_cell_faces.cellface == 'top' + self.inset_boundary_cell_faces.loc[top_faces, 'q_interp'] = self.inset_boundary_cell_faces.loc[top_faces, 'qy_interp'] + bottom_faces = self.inset_boundary_cell_faces.cellface == 'bottom' + self.inset_boundary_cell_faces.loc[bottom_faces, 'q_interp'] = -self.inset_boundary_cell_faces.loc[bottom_faces, 'qy_interp'] + left_faces = self.inset_boundary_cell_faces.cellface == 'left' + self.inset_boundary_cell_faces.loc[left_faces, 'q_interp'] = self.inset_boundary_cell_faces.loc[left_faces, 'qx_interp'] + right_faces = self.inset_boundary_cell_faces.cellface == 'right' + self.inset_boundary_cell_faces.loc[right_faces, 'q_interp'] = -self.inset_boundary_cell_faces.loc[right_faces, 'qx_interp'] + + # convert specific discharge in inset cells to Q -- flux for well package + self.inset_boundary_cell_faces['q'] = \ + self.inset_boundary_cell_faces['q_interp'] * self.inset_boundary_cell_faces['face_area'] + + + # make a DataFrame of boundary fluxes at perimeter cell locations + df = self.inset_boundary_cell_faces[['k','i','j','idomain','q']].copy() + # aggregate fluxes by cell + # so that we can accurately compare to the WELL package budget in the listing file + #by_cell = df.groupby('cellid').first() + #by_cell['q'] = df.groupby('cellid').sum()['q'] + ## drop the cellid index + #by_cell.reset_index(drop=True, inplace=True) + df['per'] = inset_per + + # boundary fluxes must be in active cells + # corresponding parent cells must be active too, + # otherwise a nan flux will be produced + # drop nan fluxes, which will revert these boundary cells to the + # default no-flow condition in MODFLOW + # (consistent with parent model cell being inactive) + keep = (df['idomain'] > 0) & ~df['q'].isna() + dfs.append(df.loc[keep].copy()) + print(f"assigning face fluxes took {time.time() - t2:.2f}s") + print(f"took {time.time() - t1:.2f}s total") + + df = pd.concat(dfs) + # drop duplicate cells (accounting for stress periods) + # (that may have connections in the x and y directions, + # and therefore would be listed twice) + #df['cellid'] = list(zip(df.per, df.k, df.i, df.j)) + #duplicates = df.duplicated(subset=['cellid']) + #df = df.loc[~duplicates, ['k', 'i', 'j', 'per', 'q']] + print("getting perimeter fluxes took {:.2f}s\n".format(time.time() - t0)) + + # convert to one-based and comment out header if df will be written straight to external file + 
if for_external_files: + df.rename(columns={'k': '#k'}, inplace=True) + df['#k'] += 1 + df['i'] += 1 + df['j'] += 1 + return df +
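+
+# Example usage (a sketch; the ``Tmr`` constructor arguments shown here are
+# assumptions based on the attributes used above, not a documented interface):
+#
+#   from mfsetup.tmr import Tmr
+#   tmr = Tmr(parent_model, inset_model, boundary_type='head',
+#             parent_head_file='parent/pleasant.hds')
+#   # dataframe of perimeter cells and the faces they connect through
+#   cells = tmr.get_inset_boundary_cells()
+#   # interpolated boundary values, by stress period
+#   perimeter_heads = tmr.get_inset_boundary_values()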
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_sources/10min.rst.txt b/_sources/10min.rst.txt new file mode 100644 index 00000000..2448a50e --- /dev/null +++ b/_sources/10min.rst.txt @@ -0,0 +1,115 @@ +10 Minutes to Modflow-setup +============================ +This is a short introduction to help get you up and running with Modflow-setup. A complete workflow can be found in the :ref:`Pleasant Lake Example`; additional examples of working configuration files can be found in the :ref:`Configuration File Gallery`. + +1) Define the model active area and coordinate reference system +----------------------------------------------------------------- +Depending on the problem, the model area might simply be a box enclosing features of interest and any relevant hydrologic boundaries, or an irregular shape surrounding a watershed or other feature. In either case, it may be helpful to :ref:`download hydrography first <3) Develop flowlines to represent streams>`, to ensure that the model area includes all important features. The model should be referenced to a `projected coordinate reference system (CRS) `_, ideally with length units of meters and an authority code (such as an `EPSG code `_) that unambiguously defines it. + +Modflow-setup provides two ways to define a model grid: + + * x and y coordinates of the model origin (lower left or upper left corner), grid spacing, number of rows and columns, rotation, and CRS + * As a rectangular area of specified discretization surrounding a polygon shapefile of the model active area (traced by hand or developed by some other means) or a feature of interest buffered by a specified distance. + +The active model area is defined subsequently in the DIS package. + + .. Note:: + + Don't forget about the farfield! Usually it is advised to include important competing sinks outside of the immediate area of interest (the nearfield), so that the solution is not over-specified by the perimeter boundary condition, and recognizing that the surface watershed boundary doesn't always coincide exactly with the groundwatershed boundary. See Haitjema (1995) and Anderson and others (2015) for more info. + + .. Note:: + Need a polygon defining a watershed? In the United States, the `Watershed Boundary Dataset `_ provides watershed delineations at various scales. + + +2) Create a setup script and configuration file +------------------------------------------------ +Usually creating the desired grid requires some iteration. We can get started on this by making a model setup script and corresponding configuration file. + +An initial model setup script for making the model grid: + + .. literalinclude:: ../../examples/initial_grid_setup.py + :language: python + :linenos: + + Download the file: + :download:`initial_grid_setup.py <../../examples/initial_grid_setup.py>` + +An initial configuration file for developing a model grid around a pre-defined active area: + + .. literalinclude:: ../../examples/initial_config_poly.yaml + :language: yaml + :linenos: + + Download the file: + :download:`initial_config_poly.yaml <../../examples/initial_config_poly.yaml>` + +To define a model grid using an origin, grid spacing and dimensions, a ``setup_grid:`` block like this one could be substituted above: + + .. literalinclude:: ../../examples/initial_config_box.yaml + :language: yaml + :start-at: setup_grid: + + Download the file: + :download:`initial_config_box.yaml <../../examples/initial_config_box.yaml>` + +Now ``initial_grid_setup.py`` can be run repeatedly to explore different grids. 
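+
+If many grid variants are needed, the iteration can also be scripted. A minimal sketch (assuming the ``setup_grid`` function from ``initial_grid_setup.py`` above, and a uniform grid-spacing item ``dxy:`` in the ``setup_grid:`` block; the items in your ``setup_grid:`` block may differ):
+
+    .. code-block:: python
+
+        import yaml
+
+        from initial_grid_setup import setup_grid
+
+        with open('initial_config_poly.yaml') as src:
+            cfg = yaml.safe_load(src)
+
+        for dxy in [500, 250, 100]:  # trial grid spacings, in meters
+            cfg['setup_grid']['dxy'] = dxy
+            with open('trial_config.yaml', 'w') as dest:
+                yaml.dump(cfg, dest)
+            # each run overwrites postproc/shps/grid.shp,
+            # which can be inspected in a GIS environment
+            setup_grid('trial_config.yaml')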
+ + +3) Develop flowlines to represent streams +------------------------------------------ +Next, let's get some data for setting up boundary conditions. For streams, Modflow-setup can accept any linestring shapefile that has a routing column indicating how the lines connect to one another. This can be created by hand, or in the United States, obtained from the National Hydrography Dataset Plus (NHDPlus). There are two types of NHDPlus: + + - `NHDPlus version 2 `_ is mapped at the 1:100,000 scale, and is therefore suitable for larger regional models with cell sizes of ~100s of meters to ~1km. NHDPlus version 2 can be the best choice for larger model areas (greater than approx 1,000 km\ :sup:`2`), where NHDPlus HR might have too many lines. NHDPlus version 2 can be obtained from the `EPA `_. + - `NHDPlus High Resolution (HR) `_ is mapped at the finer 1:24,000 scale, and may therefore work better for smaller problems (discretizations of ~100 meters or less) where better alignment between the mapped lines and stream channel in the DEM is desired, and where the number of linestring features to manage won't be prohibitive. NHDPlus HR can be accessed via the `National Map Downloader `_. + +Preprocessing NHDPlus HR +^^^^^^^^^^^^^^^^^^^^^^^^^^ +Currently, NHDPlus HR data, which comes in a file geodatabase (GDB), must be preprocessed into a shapefile for input to Modflow-setup and `SFRmaker `_ (which Modflow-setup uses to build the stream network). In many cases, multiple GDBs may need to be combined and undesired line features such as storm sewers culled. The `SFRmaker documentation `_ has examples of how to read and preprocess NHDPlus HR. + +Preprocessing NHDPlus version 2 +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Depending on the application, NHDPlus version 2 may not need to be preprocessed. Reasons to preprocess include: + +* the model area is large, and + + * read times for one or more NHDPlus drainage basins are slowing the model build + * the DEM being used for the model top is relatively coarse, and sampling a fine DEM during the model build is prohibitive for time or space reasons. + +* the stream network is too dense, with too many model cells containing SFR reaches (especially a problem in the eastern US at the 1 km resolution); or there are too many ephemeral streams represented. +* the stream network has divergences where one or more distributary lines are downstream of a confluence. + +The `preprocessing module in SFRmaker `_ can resolve these issues, producing a single set of culled flowlines with width and elevation information and divergences removed. The elevation functionality in the preprocessing module requires a DEM. + + +4) Get a DEM +------------- +The `National Map Downloader `_ has 10 meter DEMs for the United States, with finer resolutions available in many areas. Typically, these come in 1 degree x 1 degree tiles. If many tiles are needed, the uGet Download Manager linked to on the National Map site can automate the downloads. Alternatively, links to the files follow a consistent format, and are therefore amenable to scripted or manual downloads. For example, the tile located between -88 and -87 west and 43 and 44 north is available at: + +https://prd-tnm.s3.amazonaws.com/StagedProducts/Elevation/13/TIFF/current/n44w088/USGS_13_n44w088.tif + +Making a virtual raster +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Once all of the tiles are downloaded, a virtual raster can be made that allows them to be treated as a single file, without any modifications to the original data. 
This is required for input to SFRmaker and Modflow-setup. For example, in `QGIS `_: + + a) Load all of the tiles to verify that they are correct and cover the whole model active area. + b) From the ``Raster`` menu, select ``Miscellaneous > Build Virtual Raster``. This will make a virtual raster file with a ``.vrt`` extension that points to the original set of GeoTIFFs, but allows them to be treated as a single continuous raster. + +5) Make a minimum working configuration file and model build script +-------------------------------------------------------------------- +Now that we have a set of flowlines and a DEM (and perhaps shapefiles for other surface water boundaries), we can fill out the rest of the configuration file to get an initial working model. Later, additional details such as more layers, a well package, observations, or other features can be added in a stepwise approach (Haitjema, 1995). + + .. literalinclude:: ../../examples/initial_config_full.yaml + :language: yaml + :linenos: + + Download the file: + :download:`initial_config_full.yaml <../../examples/initial_config_full.yaml>` + +A setup script for making a minimum working model. Additional functions can be added later to further customize the model outside of the Modflow-setup build step. + + .. literalinclude:: ../../examples/initial_model_setup.py + :language: python + :linenos: + + Download the file: + :download:`initial_model_setup.py <../../examples/initial_model_setup.py>` diff --git a/_sources/api/index.rst.txt b/_sources/api/index.rst.txt new file mode 100644 index 00000000..35ebaf46 --- /dev/null +++ b/_sources/api/index.rst.txt @@ -0,0 +1,25 @@ +============== +Code Reference +============== + +Model classes +-------------- + +.. toctree:: + + mfsetup.mf6model + mfsetup.mfnwtmodel + mfsetup.mfmodel + + +Supporting modules +------------------- + +.. toctree:: + + mfsetup.discretization + mfsetup.fileio + mfsetup.grid + mfsetup.interpolate + mfsetup.tdis + mfsetup.tmr diff --git a/_sources/api/mfsetup.discretization.rst.txt b/_sources/api/mfsetup.discretization.rst.txt new file mode 100644 index 00000000..b2aa93c8 --- /dev/null +++ b/_sources/api/mfsetup.discretization.rst.txt @@ -0,0 +1,7 @@ +mfsetup.discretization module +============================= + +.. automodule:: mfsetup.discretization + :members: + :undoc-members: + :show-inheritance: diff --git a/_sources/api/mfsetup.fileio.rst.txt b/_sources/api/mfsetup.fileio.rst.txt new file mode 100644 index 00000000..66519b04 --- /dev/null +++ b/_sources/api/mfsetup.fileio.rst.txt @@ -0,0 +1,7 @@ +mfsetup.fileio module +============================= + +.. automodule:: mfsetup.fileio + :members: + :undoc-members: + :show-inheritance: diff --git a/_sources/api/mfsetup.grid.rst.txt b/_sources/api/mfsetup.grid.rst.txt new file mode 100644 index 00000000..aa9ec529 --- /dev/null +++ b/_sources/api/mfsetup.grid.rst.txt @@ -0,0 +1,7 @@ +mfsetup.grid module +============================= + +.. automodule:: mfsetup.grid + :members: + :undoc-members: + :show-inheritance: diff --git a/_sources/api/mfsetup.interpolate.rst.txt b/_sources/api/mfsetup.interpolate.rst.txt new file mode 100644 index 00000000..1c9ef427 --- /dev/null +++ b/_sources/api/mfsetup.interpolate.rst.txt @@ -0,0 +1,7 @@ +mfsetup.interpolate module +============================= + +.. 
automodule:: mfsetup.interpolate + :members: + :undoc-members: + :show-inheritance: diff --git a/_sources/api/mfsetup.mf6model.rst.txt b/_sources/api/mfsetup.mf6model.rst.txt new file mode 100644 index 00000000..a8d0164a --- /dev/null +++ b/_sources/api/mfsetup.mf6model.rst.txt @@ -0,0 +1,6 @@ +MF6model class +================================ + +.. automodule:: mfsetup.mf6model + :members: + :show-inheritance: diff --git a/_sources/api/mfsetup.mfmodel.rst.txt b/_sources/api/mfsetup.mfmodel.rst.txt new file mode 100644 index 00000000..ede29c95 --- /dev/null +++ b/_sources/api/mfsetup.mfmodel.rst.txt @@ -0,0 +1,6 @@ +MFsetupMixin class +============================= + +.. automodule:: mfsetup.mfmodel + :members: + :show-inheritance: diff --git a/_sources/api/mfsetup.mfnwtmodel.rst.txt b/_sources/api/mfsetup.mfnwtmodel.rst.txt new file mode 100644 index 00000000..04bf7202 --- /dev/null +++ b/_sources/api/mfsetup.mfnwtmodel.rst.txt @@ -0,0 +1,6 @@ +MFnwtModel class +================================ + +.. automodule:: mfsetup.mfnwtmodel + :members: + :show-inheritance: diff --git a/_sources/api/mfsetup.tdis.rst.txt b/_sources/api/mfsetup.tdis.rst.txt new file mode 100644 index 00000000..17ef5e37 --- /dev/null +++ b/_sources/api/mfsetup.tdis.rst.txt @@ -0,0 +1,7 @@ +mfsetup.tdis module +============================= + +.. automodule:: mfsetup.tdis + :members: + :undoc-members: + :show-inheritance: diff --git a/_sources/api/mfsetup.tmr.rst.txt b/_sources/api/mfsetup.tmr.rst.txt new file mode 100644 index 00000000..0d8a73da --- /dev/null +++ b/_sources/api/mfsetup.tmr.rst.txt @@ -0,0 +1,8 @@ +mfsetup.tmr module +============================= + +.. automodule:: mfsetup.tmr + :members: + :undoc-members: + :show-inheritance: + :exclude-members: Tmr diff --git a/_sources/concepts/index.rst.txt b/_sources/concepts/index.rst.txt new file mode 100644 index 00000000..a7c63d76 --- /dev/null +++ b/_sources/concepts/index.rst.txt @@ -0,0 +1,10 @@ +============================================== +Modflow-setup concepts and methods +============================================== + +.. toctree:: + :maxdepth: 1 + + Interpolation + Local grid refinement + Specifying perimeter boundary conditions diff --git a/_sources/concepts/interp.rst.txt b/_sources/concepts/interp.rst.txt new file mode 100644 index 00000000..9a821e4a --- /dev/null +++ b/_sources/concepts/interp.rst.txt @@ -0,0 +1,23 @@ +=========================================================== +Interpolating data to the model grid +=========================================================== + +For most interpolation operations where geo-located data are sampled to the model grid, Modflow-setup uses a barycentric (triangular) interpolation scheme similar to :py:func:`scipy.interpolate.griddata`. This n-dimensional unstructured method allows for interpolation between grids that are aligned with different coordinate reference systems, as well as interpolation between unstructured grids. As described `here `_, setup of the barycentric interpolation involves: + + 1) Construction of a triangular mesh linking the source points + 2) Searching the mesh to find the containing simplex for each destination point + 3) Computation of barycentric coordinates (weights) that describe where each destination point is in terms of the n nearest source points (where n-1 is the number of dimensions) + 4) Computing the interpolated values at the destination points from the source values and the weights + +Steps 1-3 are time-consuming. 
Therefore, for each interpolation problem, Modflow-setup performs these steps once and caches the results, so that step 4 can be repeated quickly on subsequent calls. This can greatly speed up, for example, the computation of hydraulic conductivity or bottom elevation values for models with many layers, or interpolation of boundary conditions for models with many stress periods. + +A few more notes: + * Linear interpolation is the default method in most instances, except for recharge, which is often based on categorical data such as land cover and soil types, and therefore has nearest-neighbor as the default method. + * The interpolation method can generally be specified explicitly for a given dataset by including a ``resample_method`` argument. Available methods are listed in the documentation for :py:func:`scipy.interpolate.griddata`. For example, if we wanted to override the ``'nearest'`` default for the Recharge Package: + + .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml + :language: yaml + :start-after: # Recharge Package + :end-before: period_stats + + * More details are available in the documentation for the :py:mod:`mfsetup.interpolate` module. diff --git a/_sources/concepts/lgr.rst.txt b/_sources/concepts/lgr.rst.txt new file mode 100644 index 00000000..bee938cc --- /dev/null +++ b/_sources/concepts/lgr.rst.txt @@ -0,0 +1,89 @@ +=========================================================== +Local grid refinement +=========================================================== + +In MODFLOW 6, two groundwater models can be tightly coupled in the same simulation, which allows for efficient "local grid refinement" (LGR; Mehl and others, 2006) and "semistructured" (Feinstein and others, 2016) configurations that combine multiple structured model layers at different resolutions (Langevin and others, 2017). Locally refined areas are conceptualized as separate "child" models that are linked to the surrounding (usually coarser) "parent" model through the GWF Model Exchange (GWFGWF) Package. Similarly, "semistructured" configurations are represented by multiple linked models for each layer or group of layers with the same resolution. + +Modflow-setup supports local grid refinement via an ``lgr:`` subblock within the ``setup_grid:`` block of the configuration file. The ``lgr:`` subblock consists of one or more subblocks, each keyed by a linked model name and containing configuration input for that linked model. Vertical layer refinement relative to the "parent" model is also specified for each layer of the parent model. + +For example, the following "parent" configuration for the :ref:`Pleasant Lake Example ` creates a local refinement model named "``pleasant_lgr_inset``" that spans all layers of the parent model, at the same vertical resolution (1 inset model layer per parent model layer). + +.. literalinclude:: ../../../examples/pleasant_lgr_parent.yml + :language: yaml + :start-at: lgr: + :end-before: # Structured Discretization Package + +The horizontal location and resolution of the inset model are specified in the :ref:`inset model configuration file `, in the same way that they are specified for any model. In this example, the parent model has a uniform horizontal resolution of 200 meters, and the inset a uniform resolution of 40 meters (a horizontal refinement of 5 inset model cells per parent model cell). The inset model resolution must be a factor of the parent model resolution. + +.. 
image:: ../_static/pleasant_lgr.png + :width: 1200 + :alt: Example of LGR refinement in all layers. + +.. image:: ../_static/pleasant_lgr_xsection.png + :width: 1200 + :alt: Cross section of LGR refinement in all layers. + +Input from the ``lgr:`` subblock and the inset model configuration file(s) is passed to the :py:class:`Flopy Lgr Utility `, which helps create input for the GWF Model Exchange Package. + +Within the context of a Python session, inset model information is stored in a dictionary under an ``inset`` attribute attached to the parent model. For example, to access a Flopy model object for the above inset model from a parent model named ``model``: + +.. code-block:: python + + inset_model = model.inset['pleasant_lgr_inset'] + + + + +Specification of vertical refinement +----------------------------------------- +Vertical refinement in the LGR child grid is specified in the ``layer_refinement:`` item, as the number of child layers in each parent model layer. Currently vertical refinement is limited to even subdivision of parent model layers. Vertical refinement can be specified as an integer for uniform refinement across all parent model layers: + +.. code-block:: yaml + + layer_refinement: 1 + +a list with an entry for each parent layer: + +.. code-block:: yaml + + layer_refinement: [1, 1, 1, 0, 0] + +or a dictionary with entries for each parent layer that is refined: + +.. code-block:: yaml + + layer_refinement: + 0: 1 + 1: 1 + 2: 1 + +Parent model layers with 0 specified refinement are excluded from the child model. The list and dictionary inputs above are equivalent, as unlisted layers in the dictionary are assigned default refinement values of 0. Refinement values > 1 result in even subdivision of the parent layers. Similar to one-way coupled inset models, LGR child model layer surfaces can be discretized at the finer child resolution from the original source data. In the example below, a 9-layer child model is set within the top 4 layers of a 5-layer parent model. The parent model ``lgr:`` configuration block is specified as: + +.. literalinclude:: ../../../mfsetup/tests/data/pleasant_vertical_lgr_parent.yml + :language: yaml + :start-at: lgr: + :end-before: # Structured Discretization Package + +In the child model ``dis:`` configuration block, raster surfaces that were used to define the parent model layer bottoms are specified at their respective desired locations within the child model grid; Modflow-setup then subdivides these to create the desired layer configuration. The layer specification in the child model ``dis:`` block must be consistent with the ``layer_refinement:`` input in the parent model configuration (see below). The child model ``dis:`` configuration block for this example is: + +.. literalinclude:: ../../../mfsetup/tests/data/pleasant_vertical_lgr_inset.yml + :language: yaml + :start-at: dis: + :end-before: # Recharge and Well packages are inherited + +The figure below shows a cross section through the model grid resulting from this configuration: + +.. image:: ../_static/pleasant_vlgr_xsection.png + :width: 1500 + :alt: Example of partial vertical LGR refinement with layer subdivision. + + +**A few notes about LGR functionality in Modflow-setup** + +* **Locally refined "inset" models must be aligned with the parent model grid**, which also means that their horizontal resolution must be a factor of the "parent" model resolution. 
Modflow-setup handles the alignment automatically by "snapping" inset model grids to the lower left corner of the parent cell containing the lower left corner of the inset model grid (the inset model origin in real-world coordinates). +* Similarly, inset models need to align vertically with the parent model layers. Parent layers can be subdivided using values > 1 in the ``layer_refinement:`` input option. +* Specifically, **the layer specification in the child model** ``dis:`` **block must be consistent with the** ``layer_refinement:`` **input in the parent model configuration**. For example, if a ``layer_refinement`` of ``3`` is specified for the last parent layer included in the child model domain, then the last two raster surfaces specified in the child model ``dis:`` block must be specified with two layer bottoms in between. Similarly, the values in ``layer_refinement:`` in the parent model configuration must sum to ``nlay:`` specified in the child model ``dis:`` configuration block. +* Regardless of the supplied input, the child model bottom and parent model top are aligned to remove any vertical gaps or overlap in the numerical model grid. If a raster surface is supplied for the child model bottom, the child bottom/parent top surface is based on the mean bottom elevations sampled to the child cells within each parent cell area. +* Child model ``layer_refinement:`` must start at the top of the parent model, and include a contiguous sequence of parent model layers. +* Multiple inset models at different horizontal locations, and even inset models within inset models should be achievable, but have not been tested. +* **Multi-model configurations come with costs.** Each model within a MODFLOW 6 simulation carries its own set of files, including external text array and list input files to packages. As with a single model, when using the automated parameterization functionality in `pyEMU `_, the number of files is multiplied. At some point, it may be more efficient to work with individual models, and design the grids in such a way that boundary conditions along the model perimeters have a minimal impact on the predictions of interest. diff --git a/_sources/concepts/perimeter-bcs.rst.txt b/_sources/concepts/perimeter-bcs.rst.txt new file mode 100644 index 00000000..55c44183 --- /dev/null +++ b/_sources/concepts/perimeter-bcs.rst.txt @@ -0,0 +1,111 @@ +=========================================================== +Specifying perimeter boundary conditions from another model +=========================================================== + +Often the area we are trying to model is part of a larger flow system, and we must account for groundwater flow across the model boundaries. Modflow-setup allows for perimeter boundary conditions to be specified from the groundwater flow solution of another Modflow model. + + +Features and Limitations +------------------------- +* Currently, specified head perimeter boundaries are supported via the MODFLOW Constant Head (CHD) Package; specified flux boundaries are supported via the MODFLOW Well (WEL) Package. +* The parent model solution (providing the values for the boundaries) is assumed to align with the inset model time discretization. +* The parent model may have different length units. +* The parent model may be of a different MODFLOW version (e.g. 
MODFLOW 6 inset with a MODFLOW-NWT parent) +* For specified head perimeter boundaries, the inset model grid need not align with the parent model grid; values from the parent model solution are interpolated linearly to the cell centers along the inset model perimeter in the x, y and z directions (using a barycentric triangular method similar to :py:func:`scipy.interpolate.griddata`). However, this means that there may be some mismatch between the parent and inset model solutions along the inset model perimeter, in places where there are abrupt or non-linear head gradients. Boundaries for inset models should always be set sufficiently far away that they do not appreciably impact the model solution in the area(s) of interest. The :ref:`LGR capability ` of Modflow-setup can help with this. +* Specified flux boundaries are currently limited to the parent and inset models being collinear. +* The perimeter may be irregular. For example, the edge of the model active area may follow a major surface water feature along the opposite side. +* Specified perimeter heads in MODFLOW-NWT models will have ending heads for each stress period assigned from the starting head of the next stress period (with the last period having the same starting and ending heads). The MODFLOW 6 Constant Head Package only supports assignment of a single head per stress period. This distinction only matters for models where stress periods are subdivided by multiple timesteps. + + +Configuration input +------------------- +Input to set up perimeter boundaries is specified in two places: + +1) The ``parent:`` model block, in which a parent or source model can be specified. Currently only a single parent or source model is supported. The parent or source model can be used for other properties (e.g. hydraulic conductivity) and stresses (e.g. recharge) in addition to the perimeter boundary. + + Input example: + + .. code-block:: yaml + + parent: + namefile: 'pleasant.nam' + model_ws: 'data/pleasant/' + version: 'mfnwt' + copy_stress_periods: 'all' + start_date_time: '2012-01-01' + length_units: 'meters' + time_units: 'days' + +2) In a ``perimeter_boundary:`` sub-block for the relevant package (CHD for specified heads, or WEL for specified fluxes). + + Input example (specified head): + + .. code-block:: yaml + + chd: + perimeter_boundary: + parent_head_file: 'data/pleasant/pleasant.hds' + + Input example (specified flux, with optional shapefile defining an irregular perimeter boundary, + and the MODFLOW 6 binary grid file, which is required for reading the cell budget output from MODFLOW 6 parent models): + + .. code-block:: yaml + + wel: + perimeter_boundary: + shapefile: 'shellmound/tmr_parent/gis/irregular_boundary.shp' + parent_cell_budget_file: 'shellmound/tmr_parent/shellmound.cbc' + parent_binary_grid_file: 'shellmound/tmr_parent/shellmound.dis.grb' + + +Specifying the time discretization +------------------------------------ +By default, inset model stress period 0 is assumed to align with parent model stress period 0 (``copy_stress_periods: 'all'`` in the :ref:`configuration file ` parent block, which is the default). Alternatively, stress periods can be mapped explicitly using a dictionary. For example: + +.. code-block:: yaml + + copy_stress_periods: + 0: 1 + 1: 2 + 2: 3 + +where ``0: 1`` indicates that the first stress period in the inset model aligns with the second stress period in the parent model (stress period 1), etc. 
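+
+Once the ``parent:`` block and a ``perimeter_boundary:`` sub-block are in place, the perimeter cells and boundary values are built along with the rest of the model during the normal build step. A minimal sketch (``pleasant_tmr_inset.yml`` is a hypothetical configuration file containing blocks like those shown above):
+
+.. code-block:: python
+
+    from mfsetup import MF6model
+
+    # builds the model, including the perimeter boundary package
+    m = MF6model.setup_from_yaml('pleasant_tmr_inset.yml')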
+ + +Specifying the locations of perimeter boundary cells +---------------------------------------------------- +Modflow-setup provides three primary options for specifying the locations of perimeter cells. In all cases, boundary cells are produced by the :meth:`mfsetup.tmr.TmrNew.get_inset_boundary_cells` method, and the resulting cells (including the boundary faces) can be visualized in a GIS environment with the ``boundary_cells.shp`` shapefile that gets written to the ``tables/`` folder by default. + +**1) No specification of where the perimeter boundary should be applied** (e.g. a shapefile) and ``by_layer: False`` (the default). Perimeter BC cells are applied to active cells that coincide with the edge of the maximum areal footprint of the active model area. In places where the edge of the active area is inside of the max active footprint, no perimeter cells are applied. + + Input example: + + .. code-block:: yaml + + chd: + perimeter_boundary: + parent_head_file: 'data/pleasant/pleasant.hds' + + +**2) No specification of where the perimeter boundary should be applied** and ``by_layer: True``. This is the same as option 1), but the active footprint is defined by layer from the idomain array. This option is generally not recommended, as it may often lead to boundary cells being included in the model interior (along layer pinch-outs, for example). Users of this option should check the results carefully by inspecting the ``boundary_cells.shp`` output described above. + + Input example: + + .. code-block:: yaml + + chd: + perimeter_boundary: + parent_head_file: 'data/pleasant/pleasant.hds' + by_layer: True + +**3) Specification of perimeter boundary cells with a shapefile**. The locations of perimeter cells can be explicitly specified this way, but they still must coincide with the edge of the active extent in each layer (Modflow-setup will not put perimeter cells in the model interior). (Open) Polyline or Polygon shapefiles can be used; in either case a buffer is used to align the supplied features with the active area edge, which is determined using the :py:func:`Sobel edge detection filter in Scipy `. + + + Input example: + + .. code-block:: yaml + + chd: + perimeter_boundary: + shapefile: 'shellmound/tmr_parent/gis/irregular_boundary.shp' + parent_head_file: 'shellmound/tmr_parent/shellmound.hds' diff --git a/_sources/config-file-defaults.rst.txt b/_sources/config-file-defaults.rst.txt new file mode 100644 index 00000000..b5889ec5 --- /dev/null +++ b/_sources/config-file-defaults.rst.txt @@ -0,0 +1,18 @@ +Configuration defaults +---------------------- +The following two yaml files contain default settings for MODFLOW-6 and MODFLOW-NWT. Settings not specified by the user in their configuration file are populated from these files when they are loaded into the ``MF6model`` or ``MFnwtModel`` model instances. + +MODFLOW-6 configuration defaults +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + + +.. literalinclude:: ../../mfsetup/mf6_defaults.yml + :language: yaml + :linenos: + +MODFLOW-NWT configuration defaults +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. literalinclude:: ../../mfsetup/mfnwt_defaults.yml + :language: yaml + :linenos: diff --git a/_sources/config-file-gallery.rst.txt b/_sources/config-file-gallery.rst.txt new file mode 100644 index 00000000..f6e65ce8 --- /dev/null +++ b/_sources/config-file-gallery.rst.txt @@ -0,0 +1,110 @@ +========================== +Configuration File Gallery +========================== + +Below are example (valid) configuration files from the modflow-setup test suite. 
The yaml files and the datasets they reference can be found under ``modflow-setup/mfsetup/tests/data/``. + +Shellmound test case +^^^^^^^^^^^^^^^^^^^^ +* 13 layer MODFLOW-6 model with no parent model +* 9 layers specified with raster surfaces; with the remaining 4 layers subdividing the raster surfaces +* `vertical pass-through cells`_ at locations of layer pinch-outs (``drop_thin_cells: True`` option) +* variable time discretization +* model grid aligned with the `National Hydrologic Grid`_ +* recharge read from NetCDF source data +* SFR network created from custom hydrography +* WEL package created from CSV input + + +.. literalinclude:: ../../mfsetup/tests/data/shellmound.yml + :language: yaml + :linenos: + + +Shellmound TMR inset test case +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +* 13 layer MODFLOW-6 Telescopic Mesh Refinement (TMR) model with a MODFLOW-6 parent model +* 1:1 layer mapping between parent and TMR inset (default) +* parent model grid defined with a SpatialReference subblock (which overrides information in MODFLOW Namefile) +* DIS package top and bottom elevations copied from parent model +* IC, NPF, STO, RCH, and WEL packages copied from parent model (default if not specified in config file) +* :ref:`default OC configuration ` +* variable time discretization +* model grid aligned with the `National Hydrologic Grid`_ +* SFR network created from custom hydrography + + +.. literalinclude:: ../../mfsetup/tests/data/shellmound_tmr_inset.yml + :language: yaml + :linenos: + + +Pleasant Lake test case +^^^^^^^^^^^^^^^^^^^^^^^ +* MODFLOW-6 model with local grid refinement (LGR) +* LGR parent model is itself a Telescopic Mesh Refinement (TMR) inset from a MODFLOW-NWT model +* Layer 1 in TMR parent model is subdivided evenly into two layers in LGR model (``botm: from_parent: 0: -0.5``). Other layers mapped explicitly between TMR parent and LGR model. +* starting heads from LGR parent model resampled from binary output from the TMR parent +* rch, npf, sto, and wel input copied from parent model +* SFR package constructed from an NHDPlus v2 dataset (path to NHDPlus files in the same structure as the `downloads from the NHDPlus website`_) +* head observations from csv files with different column names +* LGR inset extent based on a buffer distance around a feature of interest +* LGR inset dis, ic, npf, sto and rch packages copied from LGR parent +* WEL package created from custom format +* Lake package created from polygon features, bathymetry raster, stage-area-volume file and climate data from `PRISM`_. +* Lake package observations set up automatically (output file for each lake) + +LGR parent model configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +.. literalinclude:: ../../examples/pleasant_lgr_parent.yml + :language: yaml + :linenos: + +pleasant_lgr_inset.yml +~~~~~~~~~~~~~~~~~~~~~~ + +.. literalinclude:: ../../examples/pleasant_lgr_inset.yml + :language: yaml + :linenos: + +Pleasant Lake MODFLOW-NWT test case +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +* MODFLOW-NWT TMR inset from a MODFLOW-NWT model +* Layer 1 in parent model is subdivided evenly into two layers in the inset model (``botm: from_parent: 0: -0.5``). Other layers mapped explicitly between TMR parent and inset model. 
+* starting heads resampled from binary output from the TMR parent +* RCH, UPW and WEL input copied from parent model +* SFR package constructed from an NHDPlus v2 dataset (path to NHDPlus files in the same structure as the `downloads from the NHDPlus website`_) +* HYDMOD package for head observations from csv files with different column names +* WEL package created from custom format +* Lake package created from polygon features, bathymetry raster, stage-area-volume file and climate data from `PRISM`_. +* Lake package observations set up automatically (output file for each lake) +* GHB package created from polygon feature and DEM raster + +.. literalinclude:: ../../mfsetup/tests/data/pleasant_nwt_test.yml + :language: yaml + :linenos: + +Plainfield Lakes MODFLOW-NWT test case +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +* MODFLOW-NWT TMR inset from a MODFLOW-NWT model +* Layer 1 in parent model is subdivided evenly into two layers in the inset model (``botm: from_parent: 0: -0.5``). Other layers mapped explicitly between TMR parent and inset model. +* starting heads resampled from binary output from the TMR parent +* Temporally constant recharge specified from raster file, with multiplier +* WEL package created from custom format +* MNW2 package with dictionary input +* UPW input copied from parent model +* HYDMOD package for head observations from csv files with different column names +* WEL package created from custom format and dictionary input +* WEL package configured to use average for a specified period (period 0) and specified month (period 1 on) +* Lake package created from polygon features, bathymetry raster, stage-area-volume file +* Lake package precipitation and evaporation specified directly +* Lake package observations set up automatically (output file for each lake) + +.. literalinclude:: ../../mfsetup/tests/data/pfl_nwt_test.yml + :language: yaml + :linenos: + +.. _downloads from the NHDPlus website: https://nhdplus.com/NHDPlus/NHDPlusV2_data.php +.. _vertical pass-through cells: https://water.usgs.gov/water-resources/software/MODFLOW-6/mf6io_6.1.0.pdf +.. _PRISM: http://www.prism.oregonstate.edu/ +.. _National Hydrologic Grid: https://www.sciencebase.gov/catalog/item/5a95dd5de4b06990606a805e diff --git a/_sources/config-file.rst.txt b/_sources/config-file.rst.txt new file mode 100644 index 00000000..53460f05 --- /dev/null +++ b/_sources/config-file.rst.txt @@ -0,0 +1,178 @@ +The configuration file +======================= + + +The YAML format +--------------- +The configuration file is the primary mode of user input to the ``MF6model`` and ``MFnwtModel`` classes. Input is specified in the `yaml format`_, which can be thought of as a serialized python dictionary with some additional features, including the ability to include comments. Instead of curly brackets (as in `JSON`_), white space indentation is used to denote different levels of the dictionary. Values can generally be entered more or less as they are in python, except that dictionary keys (strings) don't need to be quoted. Numbers are parsed as integers or floating point types depending on whether they contain a decimal point. Values in square brackets are cast into python lists; curly brackets can also be used to denote dictionaries instead of white space. Comments are indicated with the `#` symbol, and can be placed on the same line as data, as in python. + +Modflow-setup uses the `pyyaml`_ package to parse the configuration file into the ``cfg`` dictionary attached to a model instance. 
The methods attached to ``MF6model``, ``MFnwtModel`` and ``MFsetupMixin`` then use the information in the ``cfg`` dictionary to set up various aspects of the model. + + +Configuration file structure +---------------------------- +In general, the configuration file structure is patterned after the MODFLOW input structure, especially the `input structure to MODFLOW-6`_. Larger blocks represent input to MODFLOW packages or modflow-setup features, with sub-blocks representing MODFLOW-6 input blocks (within individual packages) or individual features in modflow-setup. Naming of blocks and the variables within is intended to follow MODFLOW and Flopy naming as closely as possible; where these conflict, the MODFLOW naming conventions are used (see also the `MODFLOW-NWT Online Guide`_). + + +Package blocks +^^^^^^^^^^^^^^ +The modflow-setup configuration file is divided into blocks, which represent sub-dictionaries within the ``cfg`` dictionary that represents the whole configuration file. The blocks are generally organized as input to individual object classes in Flopy, or features specific to MODFLOW-setup. For example, this block would represent input to the `Simulation class`_ for MODFLOW-6: + +.. code-block:: yaml + + simulation: + sim_name: 'mfsim' + version: 'mf6' + sim_ws: '../tmp/shellmound' + +and would be loaded into the configuration dictionary as: + +.. code-block:: python + + cfg['simulation'] = {'sim_name': 'mfsim', + 'version': 'mf6', + 'sim_ws': '../tmp/shellmound' + } + +The above dictionary would then be fed to the Flopy `Simulation class`_ constructor as `keyword arguments (**kwargs)`_. + +Sub-blocks +^^^^^^^^^^ +Sub-blocks (nested dictionaries) within blocks are used to denote either input to MODFLOW-6 package blocks or input to modflow-setup features. For example, the options block below represents input to the options block for the MODFLOW-6 name file: + +.. code-block:: yaml + + model: + simulation: 'shellmound' + modelname: 'shellmound' + options: + print_input: True + save_flows: True + newton: True + newton_under_relaxation: False + packages: ['dis', + 'ic', + 'npf', + 'oc', + 'sto', + 'rch', + 'sfr', + 'obs', + 'wel', + 'ims' + ] + external_path: 'external/' + relative_external_filepaths: True + +Note that some items in the model block above do not represent flopy +input. The ``relative_external_filepaths`` item is a flag for modflow-setup that instructs it to reference external files relative to the model workspace, to avoid broken paths when the model is copied to a different location. + +Directly specifying MODFLOW input +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +MODFLOW input can be specified directly in the configuration file using the appropriate variables described in the `MODFLOW-6 input instructions`_ and `MODFLOW-NWT Online Guide`_. For example, in the block below, the dimensions and griddata sub-blocks would be fed directly to the `ModflowGwfdis`_ constructor in Flopy: + +.. code-block:: yaml + + dis: + remake_top: True + options: + length_units: 'meters' + dimensions: + nlay: 2 + nrow: 30 + ncol: 35 + griddata: + delr: 1000. + delc: 1000. + top: 2. + botm: [1, 0] + + +Source_data sub-blocks +^^^^^^^^^^^^^^^^^^^^^^ +Alternatively, ``source_data`` subblocks indicate input from general file formats (shapefiles, csvs, rasters, etc.) that needs to be mapped to the model space and time discretization. The ``source_data`` blocks are intended to be general across input types. 
+For example, ``filename`` indicates a file path (string), regardless of the type of file, and ``filenames`` indicates a list or dictionary of files that map to model layers or stress periods. Items with the ``_units`` suffix indicate the units of the source data, allowing modflow-setup to convert the values to model units accordingly. In the example below, the model top would be read from the specified `GeoTIFF`_ and mapped onto the model grid via linear interpolation (the default method for model layer elevations) using the `scipy.interpolate.griddata`_ function. The model botm elevations would be read similarly, with missing layers sub-divided evenly between the specified layers. For example, the layer 7 bottom elevations would be set halfway between the layer 6 and 8 bottoms. Finally, supplying a shapefile as input to idomain instructs modflow-setup to intersect the shapefile with the model grid (using :func:`rasterio.features.rasterize`), and limit the active cells to the intersected area.
+
+.. code-block:: yaml
+
+    dis:
+      remake_top: True
+      options:
+        length_units: 'meters'
+      dimensions:
+        nlay: 13
+        nrow: 30
+        ncol: 35
+      griddata:
+        delr: 1000.
+        delc: 1000.
+      source_data:
+        top:
+          filename: 'shellmound/rasters/meras_100m_dem.tif'  # DEM file; path relative to setup script
+          elevation_units: 'feet'
+        botm:
+          filenames:
+            0: 'shellmound/rasters/vkbg_surf.tif'  # Vicksburg-Jackson Group (top)
+            1: 'shellmound/rasters/ucaq_surf.tif'  # Upper Claiborne aquifer (top)
+            2: 'shellmound/rasters/mccu_surf.tif'  # Middle Claiborne confining unit (top)
+            3: 'shellmound/rasters/mcaq_surf.tif'  # Middle Claiborne aquifer (top)
+            6: 'shellmound/rasters/lccu_surf.tif'  # Lower Claiborne confining unit (top)
+            8: 'shellmound/rasters/lcaq_surf.tif'  # Lower Claiborne aquifer (top)
+            9: 'shellmound/rasters/mwaq_surf.tif'  # Middle Wilcox aquifer (top)
+            10: 'shellmound/rasters/lwaq_surf.tif'  # Lower Wilcox aquifer (top)
+            12: 'shellmound/rasters/mdwy_surf.tif'  # Midway confining unit (top)
+          elevation_units: 'feet'
+        idomain:
+          filename: 'shellmound/shps/active_area.shp'
+
+
+Some additional notes on YAML
+---------------------------------------
+* quotes are optional for strings without special meanings. See `this reference`_ for more details.
+* ``None`` and ``none`` are parsed as strings (``'None'`` and ``'none'``)
+* ``null`` is parsed to a ``NoneType`` instance (``None``)
+* numbers in exponential format need a decimal place and a sign for the exponent to be parsed as floats.
+  For example, as of pyyaml 5.3.1:
+
+  * ``1e5`` parses to ``'1e5'``
+  * ``1.e5`` parses to ``'1.e5'``
+  * ``1.e+5`` parses to ``100000.0`` (a float)
+* sequences must be explicitly enclosed in brackets to be parsed as lists.
+  For example:
+
+  * ``12,1.2`` parses to ``'12,1.2'``
+  * ``[12,1.2]`` parses to ``[12,1.2]``
+  * ``(12,1.2)`` parses to ``"(12,1.2)"``
+  * ``{12,1.2}`` parses to ``{12: None, 1.2: None}``
+* dictionaries can be represented with indentation, but spaces are needed after the colon(s):
+
+  .. code-block:: yaml
+
+      items:
+        0:1
+        1:2
+
+  parses to ``'0:1 1:2'``
+
+  .. code-block:: yaml
+
+      items:
+        0: 1
+        1: 2
+
+  parses to ``{0: 1, 1: 2}``
+
+Using a YAML-aware text editor such as VS Code can help with these issues, for example by changing the highlighting color to indicate a string in the first dictionary example above and an interpreted python data structure in the second dictionary example.
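+As a quick illustration of how these parsing rules play out in practice, a hypothetical ``griddata:`` fragment (the values here are arbitrary) might be written as:
+
+.. code-block:: yaml
+
+    griddata:
+      delr: 1000.  # parsed as a float (decimal point)
+      botm: [30., 0.]  # brackets are required for this to parse as a list
+      top: 1.e+2  # parsed as the float 100.0; 1e2 or 1.e2 would parse as strings
+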
+
+.. _JSON: https://www.json.org/json-en.html
+.. _pyyaml: https://pyyaml.org/
+.. _this reference: http://blogs.perl.org/users/tinita/2018/03/strings-in-yaml---to-quote-or-not-to-quote.html
+.. _yaml format: https://yaml.org/
+.. _GeoTIFF: https://en.wikipedia.org/wiki/GeoTIFF
+.. _input structure to MODFLOW-6: https://water.usgs.gov/water-resources/software/MODFLOW-6/mf6io_6.1.0.pdf
+.. _keyword arguments (**kwargs): https://stackoverflow.com/questions/1769403/what-is-the-purpose-and-use-of-kwargs
+.. _MODFLOW-6 input instructions: https://water.usgs.gov/water-resources/software/MODFLOW-6/mf6io_6.1.0.pdf
+.. _MODFLOW-NWT Online Guide: https://water.usgs.gov/ogw/modflow-nwt/MODFLOW-NWT-Guide/
+.. _ModflowGwf class: https://github.com/modflowpy/flopy/blob/develop/flopy/mf6/modflow/mfgwf.py
+.. _ModflowGwfdis: https://github.com/modflowpy/flopy/blob/develop/flopy/mf6/modflow/mfgwfdis.py
+.. _scipy.interpolate.griddata: https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
+.. _Simulation class: https://github.com/modflowpy/flopy/blob/develop/flopy/mf6/modflow/mfsimulation.py
diff --git a/_sources/contributing.rst.txt b/_sources/contributing.rst.txt
new file mode 100644
index 00000000..5f44e295
--- /dev/null
+++ b/_sources/contributing.rst.txt
@@ -0,0 +1,337 @@
+Contributing to modflow-setup
+=============================
+
+(Note: much of this page was cribbed from the `geopandas `_ project,
+which has similar guidelines to `pandas `_
+and `xarray `_.)
+
+Getting started
+----------------
+All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome. If an issue that interests you isn't already listed in the `Issues tab`_, consider `filing an issue`_.
+
+Bug reports and enhancement requests
+------------------------------------------------
+Bug reports are an important part of improving Modflow-setup. Having a complete bug report
+will allow others to reproduce the bug and provide insight into fixing it. See
+`this stackoverflow article `_ and
+`this blogpost `_
+for tips on writing a good bug report.
+
+Trying the bug-producing code out on the *develop* branch is often a worthwhile exercise
+to confirm that the bug still exists. It is also worth searching existing bug reports and pull requests
+to see if the issue has already been reported and/or fixed.
+
+To file a bug report or enhancement request, from the issues tab on the `Modflow-setup GitHub page `_, select "New Issue".
+
+Bug reports must:
+
+#. Include a short, self-contained Python snippet reproducing the problem, along with the contents of your configuration file and the full error traceback.
+   You can format the code nicely by using `GitHub Flavored Markdown `_::
+
+      ```python
+      >>> from mfsetup import MF6model
+      >>> m = MF6model.setup_from_yaml('pleasant_lgr_parent.yml')
+      ```
+
+   e.g.::
+
+      ```yaml
+      <contents of your configuration file>
+      ```
+
+      ```python
+      <the code that produced the error>
+      ```
+
+#. Include the version of Modflow-setup that you are running, which can be obtained with:
+
+   .. code-block:: python
+
+       import mfsetup
+       mfsetup.__version__
+
+   Depending on the issue, it may also be helpful to include information about the version(s)
+   of python, key dependencies (e.g. numpy, pandas, etc.) and operating system. You can get the versions of packages in a conda python environment with::
+
+       conda list
+
+#. Explain why the current behavior is wrong/not desired and what you expect instead.
+
+The issue will then be visible on the `Issues tab`_ and open to comments/ideas from others.
+
+
+Code contributions
+------------------------------
+Code contributions to Modflow-setup to fix bugs, implement new features or improve existing code are encouraged. Regardless of the context, consider `filing an issue`_ first to make others aware of the problem and allow for discussion on potential approaches to addressing it.
+
+In general, Modflow-setup tries to follow the conventions of the pandas project where applicable. Contributions to Modflow-setup are likely to be accepted more quickly if they follow these guidelines.
+
+In particular, when submitting a pull request:
+
+- All existing tests should pass. Please make sure that the test
+  suite passes, both locally and on
+  `GitHub Actions `_. Status with GitHub Actions will be visible on a pull request.
+
+- New functionality should include tests. Please write reasonable
+  tests for your code and make sure that they pass on your pull request.
+
+- Classes, methods, functions, etc. should have docstrings. The first
+  line of a docstring should be a standalone summary. Parameters and
+  return values should be documented explicitly. (Note: there are admittedly more than a few places in the existing code where docstrings are missing. Docstring contributions are especially welcome!)
+
+- Follow PEP 8 when possible. For more details see
+  :ref:`below <contributing_style>`.
+
+- Following the `FloPy Commit Message Guidelines `_ (which are similar to the `Conventional Commits `_ specification) is encouraged. Structured commit messages like these can result in more explicit commit messages that are more informative, and also facilitate automation of project maintenance tasks.
+
+- Imports should be grouped with standard library imports first,
+  3rd-party libraries next, and modflow-setup imports third. Within each
+  grouping, imports should be alphabetized. Always use absolute
+  imports when possible, and explicit relative imports for local
+  imports when necessary in tests. Imports can be sorted automatically using the isort package with a pre-commit hook. For more details see :ref:`below <contributing_style>`.
+
+- modflow-setup supports Python 3.7+ only.
+
+
+Seven Steps for Contributing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are seven basic steps to contributing to *modflow-setup*:
+
+1) Fork the *modflow-setup* git repository
+2) Create a development environment
+3) Install *modflow-setup* dependencies
+4) Install the modflow-setup source code
+5) Make changes to code and add tests
+6) Update the documentation
+7) Submit a Pull Request
+
+Each of these 7 steps is detailed below.
+
+
+1) Forking the *modflow-setup* repository using Git
+------------------------------------------------------
+
+To the new user, working with Git is one of the more daunting aspects of contributing to *modflow-setup*.
+It can very quickly become overwhelming, but sticking to the guidelines below will help keep the process
+straightforward and mostly trouble free. As always, if you are having difficulties please
+feel free to ask for help.
+
+The code is hosted on `GitHub `_. To
+contribute you will need to sign up for a `free GitHub account
+`_. We use `Git `_ for
+version control to allow many people to work together on the project.
+
+Some great resources for learning Git:
+
+* Software Carpentry's `Git Tutorial `_
+* `Atlassian `_
+* the `GitHub help pages `_.
+* Matthew Brett's `Pydagogue `_.
+
+Getting started with Git
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+`GitHub has instructions `__ for installing git,
+setting up your SSH key, and configuring git.
+All these steps need to be completed before
+you can work seamlessly between your local repository and GitHub.
+
+.. _contributing.forking:
+
+Forking
+~~~~~~~
+
+You will need your own fork to work on the code. Go to the `modflow-setup project
+page `_ and hit the ``Fork`` button. You will
+want to clone your fork to your machine::
+
+    git clone git@github.com:your-user-name/modflow-setup.git modflow-setup-yourname
+    cd modflow-setup-yourname
+    git remote add upstream https://github.com/modflow-setup/modflow-setup.git
+
+This creates the directory `modflow-setup-yourname` and connects your repository to
+the upstream (main project) *modflow-setup* repository.
+
+The test suite will run automatically on GitHub Actions once your pull request is
+submitted; if you wish to run the test suite on a branch prior to submitting the
+pull request, you can enable GitHub Actions for your fork from the repository's
+"Actions" tab.
+
+Creating a branch
+~~~~~~~~~~~~~~~~~~
+
+You want your master branch to reflect only production-ready code, so create a
+feature branch for making your changes. For example::
+
+    git branch shiny-new-feature
+    git checkout shiny-new-feature
+
+The above can be simplified to::
+
+    git checkout -b shiny-new-feature
+
+This changes your working directory to the shiny-new-feature branch. Keep any
+changes in this branch specific to one bug or feature so it is clear
+what the branch brings to *modflow-setup*. You can have many shiny-new-features
+and switch in between them using the git checkout command.
+
+To update this branch, you need to retrieve the changes from the develop branch::
+
+    git fetch upstream
+    git rebase upstream/develop
+
+This will replay your commits on top of the latest modflow-setup git develop. If this
+leads to merge conflicts, you must resolve these before submitting your pull
+request. **It's a good idea to move slowly while doing this and pay attention to the messages from git.** The wrong command at the wrong time can quickly spiral into a confusing mess.
+
+If you have uncommitted changes, you will need to ``stash`` them prior
+to updating. This will effectively store your changes and they can be reapplied
+after updating.
+
+.. _contributing.dev_env:
+
+2 & 3) Creating a development environment with the required dependencies
+---------------------------------------------------------------------------
+A development environment is a virtual space where you can keep an independent installation of *modflow-setup*.
+This makes it easy to keep both a stable version of python in one place you use for work, and a development
+version (which you may break while playing with code) in another.
+
+An easy way to create a *modflow-setup* development environment is as follows:
+
+- Install either `Anaconda `_ or
+  `miniconda `_
+- Make sure that you have :ref:`cloned the repository <contributing.forking>`
+- ``cd`` to the *modflow-setup* source directory
+
+Tell conda to create a new environment, named ``modflow-setup_dev``, that has all of the python packages needed to contribute to modflow-setup. Note that in the `geopandas instructions `_, this step is broken into two parts: 2) creating the environment, and 3) installing the dependencies. By using a yaml file that includes the environment name and package requirements, these two steps can be combined::
+
+    conda env create -f requirements-dev.yml
+
+This will create the new environment, and not touch any of your existing environments,
+nor any existing python installation.
+
+To work in this environment, you need to ``activate`` it. The instructions below
+should work on Windows, Mac and Linux::
+
+    conda activate modflow-setup_dev
+
+Once your environment is activated, you will see a confirmation message to
+indicate you are in the new development environment.
+
+To view your environments::
+
+    conda info -e
+
+To return to your base (root) environment::
+
+    conda deactivate
+
+See the full conda docs `here `__.
+
+At this point you can easily do a *development* install, as detailed in the next sections.
+
+
+4) Installing the modflow-setup source code
+------------------------------------------------------
+
+Once dependencies are in place, install the modflow-setup source code by navigating to the git clone of the *modflow-setup* repository and (with the ``modflow-setup_dev`` environment activated) running::
+
+    pip install -e .
+
+.. note::
+   Don't forget the ``.`` after ``pip install -e``!
+
+5) Making changes and writing tests
+-------------------------------------
+
+*modflow-setup* is serious about testing and strongly encourages contributors to embrace
+`test-driven development (TDD) `_.
+This development process "relies on the repetition of a very short development cycle:
+first the developer writes an (initially failing) automated test case that defines a desired
+improvement or new function, then produces the minimum amount of code to pass that test."
+So, before actually writing any code, you should write your tests. Often the test can be
+taken from the original GitHub issue. However, it is always worth considering additional
+use cases and writing corresponding tests.
+
+In general, tests are required for code pushed to *modflow-setup*. Therefore,
+it is worth getting in the habit of writing tests ahead of time so this is never an issue.
+
+*modflow-setup* uses the `pytest testing system
+`_ and the convenient
+extensions in `numpy.testing
+`_ and `pandas.testing `_.
+
+Writing tests
+~~~~~~~~~~~~~
+
+All tests should go into the ``tests`` directory. This folder contains many
+current examples of tests, and we suggest looking to these for inspiration. In general,
+the tests in this folder aim to be organized by module (e.g. ``test_lakes.py`` for the functions in ``lakes.py``) or test case (e.g. ``test_mf6_shellmound.py`` for the :ref:`Shellmound test case`).
+
+The ``.testing`` module has some special functions to facilitate writing tests. The easiest way to verify that your code is correct is to explicitly construct the result you expect, then compare the actual result to the expected correct result.
+
+Running the test suite
+~~~~~~~~~~~~~~~~~~~~~~
+
+The tests can then be run directly inside your Git clone (without having to
+reinstall *modflow-setup* after each change) by typing::
+
+    pytest
+
+6) Updating the Documentation
+-----------------------------
+
+The *modflow-setup* documentation resides in the `docs` folder. Changes to the docs are
+made by modifying the appropriate file in the `source` folder within `docs`.
+The *modflow-setup* docs use reStructuredText syntax, `which is explained here `_
+and the docstrings follow the `Numpy Docstring standard `_.
+
+Once you have made your changes, you can try building the docs using sphinx. To do so, run the following from the root of the repository::
+
+    make -C docs html
+
+The resulting html pages will be located in `docs/build/html`. It's a good practice to rebuild the docs often while writing to stay on top of any mistakes.
+The `reStructuredText extension in VS Code `_ is another way to continuously preview a rendered documentation page while writing.
+
+
+7) Submitting a Pull Request
+------------------------------
+
+Once you've made changes and pushed them to your forked repository, you then
+submit a pull request to have them integrated into the *modflow-setup* code base.
+
+You can find a pull request (or PR) tutorial in `GitHub's Help Docs `_.
+
+.. _contributing_style:
+
+Style Guide & Linting
+---------------------
+
+modflow-setup tries to follow the `PEP8 `_ standard. At this point, there's no enforcement of this, but I am considering implementing `Black `_, which automates a code style that is PEP8-compliant. Many editors perform automatic linting that makes following PEP8 easy.
+
+modflow-setup does use the `isort `_ package to automatically organize import statements. isort can be installed via pip::
+
+    $ pip install isort
+
+And then run with::
+
+    $ isort .
+
+from the root level of the project.
+
+Optionally (but recommended), you can set up `pre-commit hooks `_
+to automatically run ``isort`` when you make a git commit. This
+can be done by installing ``pre-commit``::
+
+    $ python -m pip install pre-commit
+
+From the root of the modflow-setup repository, you should then install the
+``pre-commit`` hooks included in *modflow-setup*::
+
+    $ pre-commit install
+
+Then ``isort`` will be run automatically each time you commit changes. You can skip these checks with ``git commit --no-verify``.
+
+.. _filing an issue: https://docs.github.com/en/free-pro-team@latest/github/managing-your-work-on-github/creating-an-issue
+.. _Issues tab: https://github.com/aleaf/modflow-setup/issues
diff --git a/_sources/examples.rst.txt b/_sources/examples.rst.txt
new file mode 100644
index 00000000..6a525d88
--- /dev/null
+++ b/_sources/examples.rst.txt
@@ -0,0 +1,11 @@
+========
+Examples
+========
+
+
+
+.. toctree::
+   :maxdepth: 1
+   :caption: Example problems
+
+   Pleasant Lake Example
diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt
new file mode 100644
index 00000000..e42e7f24
--- /dev/null
+++ b/_sources/index.rst.txt
@@ -0,0 +1,49 @@
+.. Packaging Scientific Python documentation master file, created by
+   sphinx-quickstart on Thu Jun 28 12:35:56 2018.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+=======================
+modflow-setup |version|
+=======================
+
+
+Modflow-setup is a Python package for automating the setup of MODFLOW groundwater models from grid-independent source data, including shapefiles, rasters, and other MODFLOW models that are geo-located. Input data and model construction options are summarized in a single configuration file. Source data are read from their native formats and mapped to a regular finite difference grid specified in the configuration file. An external array-based `Flopy `_ model instance with the desired packages is created from the sampled source data and configuration settings. MODFLOW input can then be written from the flopy model instance.
+
+
+.. toctree::
+   :maxdepth: 2
+   :caption: Getting Started
+
+   Philosophy
+   Installation
+   10 Minutes to Modflow-setup <10min>
+   Examples
+   Configuration File Gallery
+
+
+.. toctree::
+   :maxdepth: 2
+   :caption: User Guide
+
+   Basic program structure and usage
+   The configuration file
+   Concepts and methods
+   Input instructions by package
+   Troubleshooting
+
+.. toctree::
+   :maxdepth: 1
+   :caption: Reference
+
+   Code reference
+   Configuration file defaults
+   Release History
+   Contributing to modflow-setup
+
+.. toctree::
+   :maxdepth: 1
+   :caption: Bibliography
+
+   References cited
diff --git a/_sources/input/basic-stress.rst.txt b/_sources/input/basic-stress.rst.txt
new file mode 100644
index 00000000..6b4fe7db
--- /dev/null
+++ b/_sources/input/basic-stress.rst.txt
@@ -0,0 +1,351 @@
+=======================================================================================
+Specifying boundary conditions with the 'basic' MODFLOW stress packages
+=======================================================================================
+
+This page describes configuration file input for the basic MODFLOW stress packages, including
+the CHD, DRN, GHB, RCH, RIV and WEL packages. The EVT package is not currently supported by Modflow-setup. The supported packages can be broadly placed into two categories. Feature or list-based packages such as CHD, DRN, GHB, RIV and WEL often represent discrete phenomena such as surface water features, pumping wells, or even lines that denote a perimeter boundary. Input to these packages in MODFLOW is tabular, consisting of a table for each stress period, with rows specifying stresses at individual grid cells representing the boundary features. In contrast, continuous or grid-based packages represent a stress field that applies to a large area, such as areal recharge. In past versions of MODFLOW, input to these packages was array-based, with values specified for all model cells, at each stress period. In MODFLOW 6, input to these packages can be array or list-based. The Recharge (RCH) Package is currently the only grid-based stress package supported by Modflow-setup. In keeping with the current structured grid-based paradigm of Modflow-setup, MODFLOW 6 recharge input is generated for the array-based recharge package (Langevin and others, 2017).
+
+
+
+List-based basic stress packages
+-------------------------------------
+
+Input for list-based basic stress packages follows a similar pattern to other packages.
+
+* Package blocks are named using the 3-letter MODFLOW abbreviation for the package in lower case (e.g. ``chd:``, ``ghb:``, etc.).
+* Sub-blocks within the package block include:
+
+  * ``options:`` for specifying MODFLOW 6 options, exactly as they are described in the input instructions (Langevin and others, 2017).
+  * ``source_data:`` for specifying grid-independent source data to be mapped to the model discretization, in addition to other package input. ``source_data:`` in turn can have the following sub-blocks and items:
+
+    * A ``shapefile:`` block for specifying shapefile input that maps the boundary condition features in space. Items in the shapefile block include:
+
+      * ``filename:`` path to the shapefile
+      * ``boundname_col:`` column in the shapefile with feature names to be applied as `boundnames` in MODFLOW 6 input
+      * ``all_touched:`` argument to :func:`rasterio.features.rasterize` that specifies whether all intersected grid cells should be included, or just the grid cells with centers inside the feature.
+      * One or more variable columns: Optionally the shapefile can also supply steady-state variable values by feature in attribute columns named for the variables (e.g. ``'head'``, ``'bhead'``, etc.)
+
+      Example:
+
+      .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+         :language: yaml
+         :start-at: shapefile:
+         :end-before: csvfile:
+
+    * A ``csvfile:`` block for specifying point feature locations or time-varying values for the variables. Items in the ``csvfile:`` block include:
+
+      * ``filename:`` or ``filenames:`` path(s) to the csv file(s)
+      * ``id_column:`` unique identifier associated with each feature
+      * ``datetime_column:`` date-time associated with each stress value
+      * ``end_datetime_column:`` date-time associated with the end of each stress value (optional; for rates that extend across more than one model stress period. If this is specified, ``datetime_column:`` is assumed to indicate the date-time associated with the start of each stress value.)
+      * ``x_col:`` feature x-coordinate (WEL package only; default ``'x'``)
+      * ``y_col:`` feature y-coordinate (WEL package only; default ``'y'``)
+      * ``length_units:`` length units associated with the stress value (optional; if omitted no conversion is performed)
+      * ``time_units:`` time units associated with the stress value (WEL package only; optional; if omitted no conversion is performed)
+      * ``volume_units:`` volume units associated with the stress value (e.g. `gallons`) in lieu of length-based volume units (e.g. `cubic feet`) (WEL package only; optional; if omitted volumes are assumed to be in model units of L\ :sup:`3` and no conversion is performed)
+      * ``boundname_col:`` column in the csv file with feature names to be applied as `boundnames` in MODFLOW 6 input
+      * one or more columns for the package variables, specified in the format ``<variable>_col``, where ``<variable>`` is an input variable for the package; for example ``head_col`` for the Constant Head Package, or ``cond_col`` for the Drain or GHB packages.
+      * ``period_stats:`` a sub-block that is used to specify mapping of the input data to the model temporal discretization. Items within ``period_stats:`` are numbered by stress period, with the entry for each item specifying the temporal aggregation. Currently, two options are supported:
+
+        * aggregation of measurements falling within a stress period. For example, assigning the mean value of all input data points within the stress period. In this case, the aggregation method is simply specified as a string. While ``mean`` is typical, any of the standard numpy aggregators can be used (``min``, ``max``, etc.)
+        * aggregation of measurements from an arbitrary time window. For example, applying a long-term mean to a steady-state stress period, or a transient period representing a different time window. In this case three items are specified: the aggregation method, the start date, and the end date (e.g. ``[mean, 2000-01-01, 2017-12-31]``)
+
+      Example:
+
+      .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+         :language: yaml
+         :start-at: csvfile:
+         :end-before: # Drain Package
+
+    * Additional sub-blocks or items for specifying values for each variable:
+
+      * In general, these sub-blocks are named for the variable (e.g. ``bhead:``).
+      * Scalar values (items) can be specified in model units, and are applied globally to the variable.
+
+        Example:
+
+        .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+           :language: yaml
+           :start-at: cond:
+           :end-at: cond:
+
+      * Rasters can be used to specify steady-state values that vary in space; values supplied with a raster are mapped to the model grid using zonal statistics.
+        If the raster contains projection information (GeoTIFFs are preferred in part because of this), reprojection to the model coordinate reference system (CRS) will be performed automatically as needed. Otherwise, the raster is assumed to be in the model projection. Units can optionally be specified and automatically converted; otherwise, the raster values are assumed to be in the model units. Items in the raster block include:
+
+        * ``filename:`` or ``filenames:`` path(s) to the raster
+        * ``length_units:`` (or ``elevation_units``; optional): length units of the raster values
+        * ``time_units:`` (optional): time units of the raster values (``cond`` variable only)
+        * ``stat:`` (optional): zonal statistic to use in sampling the raster (defaults are listed for each variable in the :ref:`Configuration defaults`)
+
+        Example:
+
+        .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+           :language: yaml
+           :start-at: stage:
+           :end-before: mfsetup_options:
+
+      * **Not implemented yet:** NetCDF input for gridded values that vary in time and space. Due to the lack of standardization in NetCDF coordinate reference information, automatic reprojection is currently not supported for NetCDF files; the data are assumed to be in the model CRS.
+
+  * ``mfsetup_options:`` Configuration options for Modflow-setup. General options that apply to all basic stress packages include:
+
+    * ``external_files:`` Whether to write the package input as external text arrays or tables (i.e., with ``open/close`` statements). By default ``True``, except in the case of list-based or tabular files for MODFLOW-NWT models, which are not supported. Adding support for this may require changes to Flopy, which handles external list-based files differently for MODFLOW-2005 style models.
+    * ``external_filename_fmt:`` Python string format for external file names. By default, ``"_{:03d}.dat"``, which results in filenames such as ``wel_000.dat``, ``wel_001.dat``, ``wel_002.dat``... for stress periods 0, 1, and 2, for example.
+
+  Other Modflow-setup options specific to individual packages are described below.
+
+Constant Head (CHD) Package
+++++++++++++++++++++++++++++++
+Input consists of specified head values that may vary in time or space.
+
+  **Required input**
+
+  * parent model head solution --or--
+  * shapefile of features --or--
+  * parent model package (not implemented yet)
+  * at least steady-state head values through one of the methods below
+
+  **Optional input**
+
+  * raster to specify steady-state elevations by cell (for supplied shapefile)
+  * shapefile or csv to specify steady elevations by feature
+  * csv to specify transient elevations by feature (needs to be referenced to features in shapefile)
+
+  **Examples**
+  (also see the :ref:`Configuration File Gallery`)
+
+  Setting up a Constant Head package with perimeter heads from a parent model (Note: an additional ``source_data`` block can be added to represent other features inside of the model perimeter, as in the sketch below):
+
+  .. literalinclude:: ../../../mfsetup/tests/data/pfl_nwt_test.yml
+     :language: yaml
+     :start-at: chd:
+
+  Setting up a Constant Head package from features specified in a shapefile,
+  and time-varying heads specified in a csvfile:
+
+  .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+     :language: yaml
+     :start-after: # Constant Head Package
+     :end-before: # Drain Package
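+
+  For reference, a hypothetical block combining both approaches might look like the sketch below. The file paths and the ``head:`` raster sub-block are illustrative assumptions patterned on the input structure described above, not one of the test configurations:
+
+  .. code-block:: yaml
+
+      chd:
+        perimeter_boundary:
+          parent_head_file: 'parent/model.hds'  # hypothetical binary head output from the parent model
+        source_data:
+          shapefile:
+            filename: 'shps/reservoir.shp'  # hypothetical polygon of a feature inside the perimeter
+            all_touched: True
+          head:
+            filename: 'rasters/dem.tif'  # hypothetical raster of steady-state head elevations
+            elevation_units: 'feet'
+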
+Drain (DRN) Package
+++++++++++++++++++++
+Input consists of elevations and conductances that may vary in time or space.
+
+  **Required input**
+
+  * shapefile of features --or--
+  * parent model package (not implemented yet)
+  * at least steady-state head and conductance values through one of the methods below
+
+  **Optional input**
+
+  * global conductance value specified directly
+  * raster to specify steady-state elevations by cell (for supplied shapefile)
+  * shapefile or csv to specify steady elevations by feature
+  * csv to specify transient elevations by feature (needs to be referenced to features in shapefile)
+
+  **Examples**
+  (also see the :ref:`Configuration File Gallery`)
+
+  .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+     :language: yaml
+     :start-after: # Drain Package
+     :end-before: # General Head Boundary Package
+
+General Head Boundary (GHB) Package
++++++++++++++++++++++++++++++++++++++
+Input consists of head elevations and conductances that may vary in time or space.
+
+  **Required input**
+
+  * shapefile of features --or--
+  * parent model package (not implemented yet)
+  * at least steady-state head and conductance values through one of the methods below
+
+  **Optional input**
+
+  * global conductance value specified directly
+  * shapefile or csv to specify steady elevations and conductances by feature --or--
+  * rasters to specify steady-state elevations or conductances by cell (for supplied shapefile)
+  * csv to specify transient elevations or conductances by feature (needs to be referenced to features in shapefile)
+
+  **Examples**
+  (also see the :ref:`Configuration File Gallery`)
+
+  .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+     :language: yaml
+     :start-after: # General Head Boundary Package
+     :end-before: # River Package
+
+River (RIV) Package
+++++++++++++++++++++
+Input consists of stages, river bottom elevations and conductances that may vary in time or space.
+
+  **Required input**
+
+  * shapefile of features --or--
+  * ``to_riv:`` block under ``sfrmaker_options:`` with an ``sfr:`` block (see configuration gallery)
+  * parent model package (not implemented yet)
+
+  **Optional input**
+
+  * global conductance value specified directly
+  * ``default_rbot_thick`` argument to set a uniform riverbed thickness (``rbot = stage - uniform thickness``)
+  * shapefile or csv to specify steady heads, conductances and rbots by feature --or--
+  * rasters to specify steady heads, conductances and rbots by cell (for supplied shapefile)
+  * csv to specify transient heads, conductances and rbots by feature (needs to be referenced to features in shapefile)
+
+  **Examples**
+  (also see the :ref:`Configuration File Gallery`)
+
+  .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+     :language: yaml
+     :start-after: # River Package
+     :end-before: # Well Package
+
+  Example of setting up the RIV package using SFRmaker (via the ``sfr:`` block):
+
+  .. literalinclude:: ../../../mfsetup/tests/data/shellmound_tmr_inset.yml
+     :language: yaml
+     :start-at: sfr:
+     :end-at: to_riv:
+
+
+Well (WEL) Package
+++++++++++++++++++++
+Input consists of flux rates that may vary in time or space.
+
+  **Required input**
+
+  * parent model cell-by-cell flow solution (not implemented yet) --or--
+  * parent model WEL package
+  * steady-state or transient flux values through one of the methods below
+
+  **Optional input**
+
+  * temporal discretization (default is to use the average rate(s) for each stress period)
+  * vertical discretization (default is to distribute fluxes vertically by the individual transmissivities of the intersection(s) of the well open interval with the model layers.)
+
+  **Flux input options with examples**
+  (also see the :ref:`Configuration File Gallery`)
+
+  * Fluxes translated from a parent model WEL package
+
+    * This input option is very simple. A parent model with a well package is needed, and ``default_source_data: True`` must be specified in the ``parent:`` block. Then, fluxes from the parent model are simply mapped to the inset model grid, based on the parent model cell centers, and the stress period mappings specified in the ``parent:`` block. Well package options can still be specified in a ``wel:`` block.
+    * Examples:
+
+      .. literalinclude:: ../../../mfsetup/tests/data/pleasant_mf6_test.yml
+         :language: yaml
+         :lines: 119-123
+
+  * CSV input from one or more files (``csvfiles:`` block)
+
+    * multiple files can be specified using a list, but column names and units must be consistent
+    * input for column names and units is the same as for the general ``csvfile:`` block described above
+    * temporal discretization is specified using a ``period_stats:`` sub-block
+    * spatial discretization for open intervals spanning multiple layers is specified using a ``vertical_flux_distribution:`` sub-block
+    * Examples:
+
+      .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+         :language: yaml
+         :start-after: # Well Package
+         :end-before: # Output Control Package
+
+  * Perimeter boundary fluxes from a parent model solution:
+
+    .. literalinclude:: ../../../mfsetup/tests/data/shellmound_tmr_inset.yml
+       :language: yaml
+       :start-at: wel:
+
+    Similar to the Constant Head Package, a ``perimeter_boundary`` block can be combined with the other input blocks described here, which can then represent pumping or injection inside of the model perimeter.
+
+  * ``wdnr_dataset`` block
+
+    .. note::
+       This is a custom option from early versions of Modflow-setup, and is likely to be generalized into a combined shapefile (or CSV site information file) and CSV timeseries input option similar to the other basic stress packages.
+
+    * site information is specified in a shapefile formatted like ``csls_sources_wu_pts.shp`` below
+    * pumping rates are specified by month in a CSV file formatted like ``master_wu.csv`` below
+    * temporal discretization is specified with a ``period_stats:`` block similar to the ``csvfiles:`` option
+    * vertical discretization is specified with a ``vertical_flux_distribution:`` block similar to the ``csvfiles:`` option
+    * Example:
+
+      .. literalinclude:: ../../../mfsetup/tests/data/pfl_nwt_test.yml
+         :language: yaml
+         :lines: 113-118
+
+  **The** ``vertical_flux_distribution:`` **sub-block**
+
+  * This sub-block specifies how Well Package fluxes should be distributed vertically.
+  * Items/options include:
+
+    * ``across_layers:`` If ``True``, fluxes for a well will be distributed to the layers intersecting the well open interval; if ``False``, fluxes will be placed in the layer containing the open interval midpoint.
+    * ``distribute_by:`` ``'transmissivity'`` (default) to distribute fluxes based on the transmissivities of open interval/layer intersections; ``'thickness'`` to distribute fluxes based on intersection thicknesses. Only relevant with ``across_layers: True``.
+    * ``minimum_layer_thickness:`` Minimum layer thickness for placing a well (by default, 2 model length units). Wells in layers thinner than this will be relocated to the thickest layer at their row, column location. If no thicker layers exist at the row, column location, the wells are dropped, and reported in *_dropped_wells.csv*.
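+
+  Putting these items together, a hypothetical ``vertical_flux_distribution:`` sub-block within a ``csvfiles:`` block might look like the sketch below (the file name and values are illustrative assumptions, not one of the test configurations):
+
+  .. code-block:: yaml
+
+      wel:
+        source_data:
+          csvfiles:
+            filenames: ['data/pumping.csv']  # hypothetical input file
+            period_stats:
+              0: [mean, 2010-01-01, 2017-12-31]  # long-term mean for an initial steady-state period
+              1: mean  # mean of the values within each subsequent period
+            vertical_flux_distribution:
+              across_layers: True  # distribute fluxes to all intersected layers
+              distribute_by: 'transmissivity'
+              minimum_layer_thickness: 2.
+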
+
+
+Grid-based basic stress packages
+-------------------------------------
+The Recharge (RCH) Package is currently the only grid-based stress package supported by Modflow-setup.
+
+
+Recharge (RCH) Package
+++++++++++++++++++++++++
+
+Direct input
+@@@@@@@@@@@@@@@@
+As with other grid-based input such as aquifer properties, input to the recharge package can be specified directly as it would be in Flopy. This may be useful for setting up a test model quickly. For example, a single scalar value could be entered to apply to all locations across all periods:
+
+.. code-block:: yaml
+
+    rch:
+      recharge: 0.001
+
+Or global scalar values could be entered by stress period:
+
+.. code-block:: yaml
+
+    rch:
+      recharge:
+        0: 0.001
+        1: 0.01
+
+In the above example, ``0.01`` would also be applied to all subsequent stress periods.
+
+Grid-independent input
+@@@@@@@@@@@@@@@@@@@@@@@@@@@
+Modflow-setup currently supports three methods for entering spatially referenced recharge input that is not mapped to the model grid.
+
+* Recharge translated from a parent model RCH package
+
+  * This input option is very simple. A parent model with a recharge package is needed, and ``default_source_data: True`` must be specified in the ``parent:`` block. Then, fluxes from the parent model are simply mapped to the inset model grid, based on the parent model cell centers, and the stress period mappings specified in the ``parent:`` block. Recharge package options can still be specified in a ``rch:`` block.
+
+* Raster input by stress period
+
+  * A raster of spatially varying recharge values can be supplied for one or more model stress periods. Similar to the direct input, specified recharge will be applied to subsequent periods where recharge is not specified.
+  * If the raster contains projection information (GeoTIFFs are preferred in part because of this), any reprojection to the model coordinate reference system (CRS) will be performed automatically as needed. Otherwise, the raster is assumed to be in the model projection.
+  * Input items include:
+
+    * ``length_units:`` input recharge length units (optional; if omitted no conversion is performed)
+    * ``time_units:`` input recharge time units (optional; if omitted no conversion is performed)
+    * ``mult:`` optional multiplier value that applies to all stress periods.
+    * ``resample_method:`` method for resampling the data from the source grid to the model grid (optional; by default, ``'nearest'``)
+
+  * Examples:
+
+    .. literalinclude:: ../../../mfsetup/tests/data/pfl_nwt_test.yml
+       :language: yaml
+       :lines: 99-106
+
+* NetCDF input
+
+  * NetCDF input can be supplied for gridded values that vary in time and space.
+  * Automatic reprojection is supported for Climate Forecast (CF) 1.8-compliant netcdf files (that work with the :py:meth:`pyproj.crs.CRS.from_cf` constructor), or files that have a `'crs_wkt'` or `'proj4_string'` grid mapping variable (the latter includes many or most Soil Water Balance Code models).
+  * Otherwise, coordinate reference information can be supplied via the ``crs:`` item (using any valid input to :py:class:`pyproj.crs.CRS`), and the data will be reprojected to the model coordinate reference system.
+  * Input items include:
+
+    * ``variable:`` name of the variable in the NetCDF file containing the recharge values.
+    * ``length_units:`` input recharge length units (optional; if omitted no conversion is performed)
+    * ``time_units:`` input recharge time units (optional; if omitted no conversion is performed)
+    * ``crs:`` coordinate reference system (CRS) of the netcdf file (optional; only needed if the NetCDF file is in a different CRS than the model *and* automatic reprojection from the internal `grid mapping `_ isn't working.)
+    * ``resample_method:`` method for resampling the data from the source grid to the model grid (optional; by default, ``'nearest'``)
+    * ``period_stats:`` a sub-block that is used to specify mapping of the input data to the model temporal discretization. Items within ``period_stats:`` are numbered by stress period, with the entry for each item specifying the temporal aggregation. Currently, two options are supported:
+
+      * aggregation of measurements falling within a stress period. For example, assigning the mean value of all input data points within the stress period. In this case, the aggregation method is simply specified as a string. While ``mean`` is typical, any of the standard numpy aggregators can be used (``min``, ``max``, etc.)
+      * aggregation of measurements from an arbitrary time window. For example, applying a long-term mean to a steady-state stress period, or a transient period representing a different time window. In this case three items are specified: the aggregation method, the start date, and the end date (e.g. ``[mean, 2000-01-01, 2017-12-31]``; see below for an example)
+
+  * Examples:
+
+    .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+       :language: yaml
+       :start-after: # Recharge Package
+       :end-before: # Streamflow Routing Package
diff --git a/_sources/input/dis.rst.txt b/_sources/input/dis.rst.txt
new file mode 100644
index 00000000..73c85c20
--- /dev/null
+++ b/_sources/input/dis.rst.txt
@@ -0,0 +1,162 @@
+=======================================================================================
+Time and space discretization
+=======================================================================================
+
+This page describes spatial and temporal discretization input options to the Discretization (DIS) and Time Discretization (TDIS) Packages. Specification of the model active area in the DIS Package (MODFLOW 6) and BAS6 Package (MODFLOW-2005/NWT) is also covered. As always, additional input examples can be found in the :ref:`Configuration File Gallery` and :ref:`Configuration defaults` pages.
+
+As stated previously, a key paradigm of Modflow-setup is the setup of space and time discretization during the automated model build, from grid-independent inputs. This allows different discretization schemes to be readily tested without extensive modifications to the inputs.
+
+Spatial Discretization
+----------------------
+Similar to other packages, input to the Discretization Package follows the structure of MODFLOW and Flopy.
+For MODFLOW 6 models, the "Options", "Dimensions" and "Griddata" input blocks are represented as sub-blocks within the ``dis:`` block. Within these blocks, model inputs can be specified directly, as long as they are consistent with the definition of the model grid. For example, if ``nlay: 2`` is specified, then the model bottom must be specified as two scalar values, or two ``nrow`` x ``ncol`` arrays:
+
+.. code-block:: yaml
+
+    dis:
+      options:
+        length_units: 'meters'
+      dimensions:
+        nlay: 2
+        nrow: 30
+        ncol: 35
+      griddata:
+        delr: 1000.
+        delc: 1000.
+        top: 2.
+        botm: [1, 0]
+
+More commonly, only ``delr`` and ``delc`` are specified in the ``griddata:`` block, and geolocated, grid-independent raster surfaces are supplied in a ``source_data`` sub-block. Modflow-setup then interpolates values from these surfaces to the grid cell centers.
+
+.. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+   :language: yaml
+   :start-after: Discretization Package
+   :end-before: # Temporal Discretization Package
+
+**A few notes:**
+
+* by default, linear interpolation is used, as described :ref:`here `.
+* If a more sophisticated sampling strategy is desired, for example computing mean elevations with zonal statistics for the model top, the respective layers should be pre-processed prior to input to Modflow-setup. This is by design, as it avoids adding additional complexity to the Modflow-setup codebase; expensive operations like zonal statistics can greatly slow a model build and often only need to be done infrequently (in contrast to other changes where rapid iteration may be helpful).
+* GeoTIFFs are generally best, because they include complete projection information (including the coordinate reference system) and generally use less disk space than other raster types.
+* if an ``elevation_units:`` item is included, elevation values in the rasters will be converted to the model units
+* the most straightforward way to input layer elevations is to simply assign a raster surface to each layer:
+
+  .. code-block:: yaml
+
+      botm:
+        filenames:
+          0: bottom_of_layer_0.tif
+          1: bottom_of_layer_1.tif
+          ...
+
+* Alternatively, multiple model layers can be inserted between key layer surfaces by simply skipping those numbers. In this example, Modflow-setup creates three layers of equal thickness between the two specified surfaces:
+
+  .. code-block:: yaml
+
+      botm:
+        filenames:
+          0: bottom_of_layer_0.tif
+          # layer 1 bottom is created by Modflow-setup
+          # layer 2 bottom is created by Modflow-setup
+          3: bottom_of_layer_3.tif
+          ...
+
+Adopting layering from a parent model
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Similar to other input, layer bottoms can be resampled from a parent model. This ``source_data:`` block would simply adopt the same layering scheme as the parent model:
+
+.. code-block:: yaml
+
+    source_data:
+      top: from_parent
+      botm: from_parent
+
+The parent model layering can also be subdivided by mapping pairs of ``inset: parent`` model layers using a dictionary (YAML sub-block):
+
+.. code-block:: yaml
+
+    source_data:
+      top: from_parent
+      botm:
+        from_parent:
+          0: -0.5  # bottom of inset layer 0 is positioned at half the thickness of parent layer 0
+          1: 0  # bottom of inset layer 1 corresponds to the bottom of parent layer 0
+          2: 1
+          3: 2
+          4: 3
+
+In this case, the top layer of the parent model is subdivided into two layers in the inset model.
+A negative number is used on the parent model side because layer 0 (the first layer bottom) of the parent model coincides with the second layer bottom of the inset model (layer 1). A value of ``-0.5`` places the first inset model layer bottom at half the thickness of the parent model layer; different values between ``-1.`` and ``0.`` could be used to move this surface up or down within the parent model layer, or multiple inset model layers could be specified within the first parent model layer:
+
+.. code-block:: yaml
+
+    source_data:
+      top: from_parent
+      botm:
+        from_parent:
+          0: -0.9  # bottom of inset layer 0 set at 10% of the depth of parent layer 0
+          1: -0.3  # bottom of inset layer 1 set at 70% of the depth of parent layer 0
+          2: 0
+          3: 1
+          4: 2
+
+MODFLOW-2005/NWT input
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Specification of ``source_data:`` blocks is the same for MODFLOW 6 and MODFLOW-2005 style models, except the latter wouldn't contain an ``idomain:`` sub-block. Specification of other inputs generally follows Flopy (for example, :py:class:`~flopy.modflow.mfdis.ModflowDis`). A ``dis:`` block equivalent to the first example given above would look like:
+
+.. code-block:: yaml
+
+    dis:
+      length_units: 'meters'
+      nlay: 2
+      nrow: 30
+      ncol: 35
+      delr: 1000.
+      delc: 1000.
+      top: 2.
+      botm: [1, 0]
+
+.. note::
+   The ``length_units:`` item is specific to Modflow-setup; in a MODFLOW-2005 context, Modflow-setup takes this input and enters the appropriate value of ``lenuni`` to Flopy (which writes the MODFLOW input).
+
+Modflow-setup specific input
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+* ``drop_thin_cells:`` Option in MODFLOW 6 models to remove cells less than a minimum layer thickness from the model solution.
+* ``minimum_layer_thickness:`` Minimum layer thickness to allow in the model. In MODFLOW 6 models, if ``drop_thin_cells: True``, layers thinner than this will be collapsed to zero thickness, and their cells either made inactive (``idomain=0``) or, if they are between two layers greater than the minimum thickness, converted to vertical pass-through cells (``idomain=-1``). In MODFLOW-2005 models, or if ``drop_thin_cells: False``, thin layers will be expanded downward to the minimum thickness.
+
+Time Discretization
+----------------------
+In MODFLOW 6, time discretization is specified at the simulation level, in its own Time Discretization (TDIS) Package. In MODFLOW-2005/NWT, time discretization is specified in the Discretization Package. Accordingly, in Modflow-setup, time discretization is specified in the appropriate package block for the model version.
+
+Specifying stress period information directly
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Input to the DIS and TDIS packages follows the MODFLOW structure. For simple steady-state models, time discretization could be specified directly to the DIS or TDIS packages using their respective Flopy inputs (:py:class:`~flopy.modflow.mfdis.ModflowDis`; :py:class:`~flopy.mf6.modflow.mftdis.ModflowTdis`). This example from the :ref:`Configuration File Gallery` shows direct specification of stress period information to the Discretization Package:
+
+.. literalinclude:: ../../../mfsetup/tests/data/pfl_nwt_test.yml
+   :language: yaml
+   :start-after: arguments to flopy.modflow.ModflowDis
+   :end-before: bas6:
+
+Specifying uniform stress period frequencies by group
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For transient models, we often want to combine an initial steady-state period with subsequent transient periods, which may be of variable lengths. To facilitate this, Modflow-setup has a ``perioddata:`` sub-block that can in turn contain multiple sub-blocks representing stress period "groups". Each group in the ``perioddata:`` sub-block contains information to generate one or more stress periods at a specified frequency and time datum (for example, months, days, every 7 days, etc.). Input to transient groups is based on the :py:func:`pandas.date_range` function, where three of the four ``start_date_time``, ``end_date_time``, ``freq`` and ``nper`` parameters must be defined. For example, this sequence of blocks from the :ref:`Configuration File Gallery` generates an initial steady-state period, followed by a 9-year "spin-up" period between two dates, and then biannual stress periods spanning another specified set of dates. Time-step information is also specified, using the MODFLOW variable names.
+
+.. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+   :language: yaml
+   :start-after: # Temporal Discretization Package
+   :end-before: # Initial Conditions Package
+
+The ``perioddata:`` sub-block can be used within a ``tdis:`` block for MODFLOW 6 models, or a ``dis:`` block for MODFLOW-2005 style models.
+
+Specifying pre-defined stress periods from a CSV file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In some model applications, irregular stress periods may be needed that would require many groups to be specified using the above ``perioddata:`` sub-block. In these cases, a stress period data table can be pre-defined and input as a CSV file:
+
+.. literalinclude:: ../../../mfsetup/tests/data/shellmound_tmr_inset.yml
+   :language: yaml
+   :start-after: drop_thin_cells: True
+   :end-before: sfr:
+
+An example of a valid table is shown below. Note that only the columns listed in the above ``csvfile:`` block are actually needed. ``perlen`` and ``time`` are calculated internally by Modflow-setup; output control (``oc``) can be specified here or in the ``oc:`` package block.
+
+.. csv-table:: Example stress period data
+   :file: ../../../mfsetup/tests/data/shellmound/tmr_parent/tables/stress_period_data.csv
+   :header-rows: 1
diff --git a/_sources/input/ic.rst.txt b/_sources/input/ic.rst.txt
new file mode 100644
index 00000000..35b50196
--- /dev/null
+++ b/_sources/input/ic.rst.txt
@@ -0,0 +1,88 @@
+=======================================================================================
+Initial Conditions
+=======================================================================================
+
+Similar to other packages, input of initial conditions follows the structure of MODFLOW and Flopy. Setting the starting heads from the model top is often a good way to go initially. After the model has been run, starting heads can then be :ref:`updated from the initial model head output ` to improve convergence on subsequent runs.
+
+   .. Note::
+
+      With any transient model, an :ref:`initial steady-state stress period