diff --git a/.buildinfo b/.buildinfo
new file mode 100644
index 00000000..bb6cdf85
--- /dev/null
+++ b/.buildinfo
@@ -0,0 +1,4 @@
+# Sphinx build info version 1
+# This file records the configuration used when building these files. When it is not found, a full rebuild will be done.
+config: 4724ebffb4b01f919207576d4d776997
+tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/10min.html b/10min.html
new file mode 100644
index 00000000..e671e271
--- /dev/null
+++ b/10min.html
@@ -0,0 +1,465 @@
This is a short introduction to help get you up and running with Modflow-setup. A complete workflow can be found in the Pleasant Lake Example; additional examples of working configuration files can be found in the Configuration File Gallery.
+
+
1) Define the model active area and coordinate reference system
+
Depending on the problem, the model area might simply be a box enclosing features of interest and any relevant hydrologic boundaries, or an irregular shape surrounding a watershed or other feature. In either case, it may be helpful to download hydrography first, to ensure that the model area includes all important features. The model should be referenced to a projected coordinate reference system (CRS), ideally with length units of meters and an authority code (such as an EPSG code) that unambiguously defines it.
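If there is any doubt about a candidate CRS, it can be checked programmatically. A minimal sketch using pyproj (not required by Modflow-setup for this step; EPSG 5070 is just one example of a projected CRS with length units of meters):

from pyproj import CRS

crs = CRS.from_epsg(5070)  # example: CONUS Albers Equal Area
assert crs.is_projected
print(crs.axis_info[0].unit_name)  # 'metre'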
+
Modflow-setup provides two ways to define a model grid:
+
+
+
From the x and y coordinates of the model origin (lower left or upper left corner), the grid spacing, the number of rows and columns, rotation, and CRS
+
As a rectangular area of specified discretization surrounding a polygon shapefile of the model active area (traced by hand or developed by some other means) or a feature of interest buffered by a specified distance.
+
+
+
The active model area is defined subsequently in the DIS package.
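For the second approach, the active-area polygon can be made with ordinary GIS tools. For example, a short geopandas sketch (the file names and 5 km buffer distance are hypothetical):

import geopandas as gpd

feature = gpd.read_file('source_data/lake.shp').to_crs(epsg=5070)  # project to meters
active_area = feature.buffer(5000)  # 5 km buffer around the feature of interest
active_area.to_file('source_data/model_active_area.shp')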
+
+
+
Note
+
Don’t forget about the farfield! It is usually advisable to include important competing sinks outside of the immediate area of interest (the nearfield), so that the solution is not over-specified by the perimeter boundary condition, and because the surface watershed boundary doesn’t always coincide exactly with the groundwatershed boundary. See Haitjema (1995) and Anderson and others (2015) for more information.
+
+
+
Note
+
Need a polygon defining a watershed? In the United States, the Watershed Boundary Dataset provides watershed delineations at various scales.

Usually, creating the desired grid requires some iteration. We can get started by making a model setup script and a corresponding configuration file.
+
An initial model setup script for making the model grid:
+
+
from mfsetup import MF6model


def setup_grid(cfg_file):
    """Just set up (a shapefile of) the model grid.
    For trying different grid configurations."""
    m = MF6model(cfg=cfg_file)
    m.setup_grid()
    m.modelgrid.write_shapefile('postproc/shps/grid.shp')


if __name__ == '__main__':

    setup_grid('initial_config_poly.yaml')
+
Next, let’s get some data for setting up boundary conditions. For streams, Modflow-setup can accept any linestring shapefile that has a routing column indicating how the lines connect to one another. This can be created by hand, or in the United States, obtained from the National Hydrography Dataset Plus (NHDPlus). There are two types of NHDPlus:
+
+
+
NHDPlus version 2 is mapped at the 1:100,000 scale, and is therefore suitable for larger regional models with cell sizes of ~100s of meters to ~1 km. NHDPlus version 2 can be the best choice for larger model areas (greater than approximately 1,000 km²), where NHDPlus HR might have too many lines. NHDPlus version 2 can be obtained from the EPA.
+
NHDPlus High Resolution (HR) is mapped at the finer 1:24,000 scale, and may therefore work better for smaller problems (discretizations of ~100 meters or less) where better alignment between the mapped lines and stream channel in the DEM is desired, and where the number of linestring features to manage won’t be prohibitive. NHDPlus HR can be accessed via the National Map Downloader.
Currently, NHDPlus HR data, which come in a file geodatabase (GDB), must be preprocessed into a shapefile for input to Modflow-setup and SFRmaker (which Modflow-setup uses to build the stream network). In many cases, multiple GDBs may need to be combined, and undesired line features such as storm sewers culled. The SFRmaker documentation has examples of how to read and preprocess NHDPlus HR.
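A rough sketch of that kind of preprocessing with geopandas (the GDB names here are hypothetical; 'NHDFlowline' is assumed to be the flowline layer name, and FType 428 is assumed to mark pipelines/storm sewers — see the SFRmaker documentation for complete, authoritative examples):

import geopandas as gpd
import pandas as pd

gdbs = ['NHDPLUS_HR_1.gdb', 'NHDPLUS_HR_2.gdb']  # hypothetical downloads
flowlines = pd.concat([gpd.read_file(gdb, layer='NHDFlowline') for gdb in gdbs])
# cull undesired line features, e.g. pipelines/storm sewers (FType 428)
flowlines = flowlines.loc[flowlines['FType'] != 428]
flowlines.to_file('flowlines.shp')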
Depending on the application, NHDPlus version 2 may not need to be preprocessed. Reasons to preprocess include:
+
+
the model area is large, and

    read times for one or more NHDPlus drainage basins are slowing the model build, or

    the DEM being used for the model top is relatively coarse, and sampling a fine DEM during the model build is prohibitive for time or space reasons;

the stream network is too dense, with too many model cells containing SFR reaches (especially a problem in the eastern US at the 1 km resolution), or there are too many ephemeral streams represented;

the stream network has divergences, where one or more distributary lines are downstream of a confluence.
+
+
The preprocessing module in SFRmaker can resolve these issues, producing a single set of culled flowlines with width and elevation information and divergences removed. The elevation functionality in the preprocessing module requires a DEM.
The National Map Downloader has 10 meter DEMs for the United States, with finer resolutions available in many areas. Typically, these come in 1 degree x 1 degree tiles. If many tiles are needed, the uGet Download Manager linked from the National Map site can automate the downloads. Alternatively, the file links follow a consistent format and are therefore amenable to scripted or manual downloading. For example, the tile located between -88 and -87 west and 43 and 44 north is available at:
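Because the links follow a consistent format, a scripted download might look something like this (standard library only; the urls list is left to be filled in):

import urllib.request
from pathlib import Path

urls = []  # fill in with tile URLs following the consistent format noted above
outdir = Path('dem_tiles')
outdir.mkdir(exist_ok=True)
for url in urls:
    outfile = outdir / url.rpartition('/')[-1]
    if not outfile.exists():
        urllib.request.urlretrieve(url, outfile)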
Once all of the tiles are downloaded, a virtual raster can be made that allows them to be treated as a single file, without any modifications to the original data. This is required for input to SFRmaker and Modflow-setup. For example, in QGIS:
+
+
+
Load all of the tiles to verify that they are correct and cover the whole model active area.
+
From the Raster menu, select Miscellaneous > Build Virtual Raster. This will make a virtual raster file with a .vrt extension that points to the original set of GeoTIFFs, but allows them to be treated as a single continuous raster.
+
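The same virtual raster can also be built programmatically; a sketch using GDAL's Python bindings (assumes the osgeo package is installed; paths are hypothetical):

import glob
from osgeo import gdal

tiles = glob.glob('dem_tiles/*.tif')
vrt = gdal.BuildVRT('dem.vrt', tiles)
vrt = None  # dereference to flush the .vrt file to disk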
+
+
+
+
+
5) Make a minimum working configuration file and model build script
+
Now that we have a set of flowlines and a DEM (and perhaps shapefiles for other surface water boundaries), we can fill out the rest of the configuration file to get an initial working model. Later, additional details such as more layers, a well package, observations, or other features can be added in a stepwise approach (Haitjema, 1995).
A setup script for making a minimum working model is shown below; additional functions can be added later to further customize the model outside of the Modflow-setup build step.
+
+
import os

from mfsetup import MF6model


def setup_grid(cfg_file):
    """Just set up (a shapefile of) the model grid.
    For trying different grid configurations."""
    cwd = os.getcwd()
    m = MF6model(cfg=cfg_file)
    m.setup_grid()
    m.modelgrid.write_shapefile('postproc/shps/grid.shp')
    # Modflow-setup changes the working directory
    # to the model workspace; change it back
    os.chdir(cwd)


def setup_model(cfg_file):
    """Set up the whole model."""
    cwd = os.getcwd()
    m = MF6model.setup_from_yaml(cfg_file)
    m.write_input()
    os.chdir(cwd)
    return m


if __name__ == '__main__':

    #setup_grid('initial_config_poly.yaml')
    setup_model('initial_config_full.yaml')
+
# A partial set of imports assumed by the function excerpts below:
import inspect
import json
import os
import sys
import time
from pathlib import Path

import numpy as np
import pandas as pd
from flopy.mf6.data.mfdatalist import MFList
from scipy import ndimage
from scipy.signal import convolve2d


def deactivate_idomain_above(idomain, packagedata):
    """Sets ibound to 0 for all cells above active SFR cells.

    Parameters
    ----------
    packagedata : MFList, recarray or DataFrame
        SFR package reach data

    Notes
    -----
    This routine updates the ibound array of the flopy.model.ModflowBas6 instance. To produce a
    new BAS6 package file, model.write() or flopy.model.ModflowBas6.write()
    must be run.
    """
    if isinstance(packagedata, MFList):
        packagedata = packagedata.array
    idomain = idomain.copy()
    if isinstance(packagedata, np.recarray):
        packagedata.columns = packagedata.dtype.names
    if 'cellid' in packagedata.columns:
        k, i, j = cellids_to_kij(packagedata['cellid'])
    else:
        k, i, j = packagedata['k'], packagedata['i'], packagedata['j']
    deact_lays = [list(range(ki)) for ki in k]
    for ks, ci, cj in zip(deact_lays, i, j):
        for ck in ks:
            idomain[ck, ci, cj] = 0
    return idomain
+
+
+
+
def find_remove_isolated_cells(array, minimum_cluster_size=10):
    """Identify clusters of isolated cells in a binary array.
    Remove clusters less than a specified minimum cluster size.
    """
    if len(array.shape) == 2:
        arraylist = [array]
    else:
        arraylist = array

    # exclude diagonal connections
    structure = np.zeros((3, 3))
    structure[1, :] = 1
    structure[:, 1] = 1

    retained_arraylist = []
    for arr in arraylist:

        # for each cell in the binary array arr (i.e. representing active cells),
        # take the sum of the cell and 4 immediate neighbors (excluding diagonal connections);
        # values > 2 in the output array indicate cells with at least two connections
        convolved = convolve2d(arr, structure, mode='same')
        # taking union with (arr == 1) prevents inactive cells from being activated
        atleast_2_connections = (arr == 1) & (convolved > 2)

        # then apply connected component analysis
        # to identify small clusters of isolated cells to exclude
        labeled, ncomponents = ndimage.measurements.label(atleast_2_connections,
                                                          structure=structure)
        retain_areas = [c for c in range(1, ncomponents + 1)
                        if (labeled == c).sum() >= minimum_cluster_size]
        retain = np.in1d(labeled.ravel(), retain_areas)
        retained = np.reshape(retain, arr.shape).astype(array.dtype)
        retained_arraylist.append(retained)
    if len(array.shape) == 3:
        return np.array(retained_arraylist, dtype=array.dtype)
    return retained_arraylist[0]
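A quick usage sketch (values are illustrative): a contiguous block of active cells is retained, while a lone active cell is removed:

import numpy as np

active = np.zeros((20, 20), dtype=int)
active[5:15, 5:15] = 1  # contiguous active block (cluster size 100)
active[0, 0] = 1        # isolated active cell
cleaned = find_remove_isolated_cells(active, minimum_cluster_size=10)
assert cleaned[0, 0] == 0 and cleaned[10, 10] == 1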
+
+
+
+
def cellids_to_kij(cellids, drop_inactive=True):
    """Unpack tuples of MODFLOW-6 cellids (k, i, j) to
    lists of k, i, j values; ignoring instances
    where cellid is None (unconnected cells).

    Parameters
    ----------
    cellids : sequence of (k, i, j) tuples
    drop_inactive : bool
        If True, drop cellids == 'none'. If False,
        distribute these to k, i, j.

    Returns
    -------
    k, i, j : 1D numpy arrays of integers
    """
    active = np.array(cellids) != 'none'
    if drop_inactive:
        k, i, j = map(np.array, zip(*cellids[active]))
    else:
        k = np.array([cid[0] if cid != 'none' else None for cid in cellids])
        i = np.array([cid[1] if cid != 'none' else None for cid in cellids])
        j = np.array([cid[2] if cid != 'none' else None for cid in cellids])
    return k, i, j
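For example (with a hand-made object array standing in for packagedata['cellid']):

import numpy as np

cellids = np.empty(3, dtype=object)
cellids[:] = [(0, 4, 7), (0, 4, 8), 'none']
k, i, j = cellids_to_kij(cellids)  # the 'none' entry is dropped by default
# k -> array([0, 0]); i -> array([4, 4]); j -> array([7, 8])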
+
+
+
+
def create_vertical_pass_through_cells(idomain):
    """Replaces inactive cells with vertical pass-through cells at locations that have an active cell
    above and below by setting these cells to -1.

    Parameters
    ----------
    idomain : np.ndarray with 2 or 3 dimensions. 2D arrays are returned as-is.

    Returns
    -------
    revised : np.ndarray
        idomain with -1s added at locations that were previously <= 0
        that have an active cell (idomain=1) above and below.
    """
    if len(idomain.shape) == 2:
        return idomain
    revised = idomain.copy()
    for i in range(1, idomain.shape[0] - 1):
        has_active_above = np.any(idomain[:i] > 0, axis=0)
        has_active_below = np.any(idomain[i + 1:] > 0, axis=0)
        bounded = has_active_above & has_active_below
        pass_through = (idomain[i] <= 0) & bounded
        assert not np.any(revised[i][pass_through] > 0)
        revised[i][pass_through] = -1

        # scrub any pass-through cells that aren't bounded by active cells
        revised[i][(idomain[i] <= 0) & ~bounded] = 0
    for i in (0, -1):
        revised[i][revised[i] < 0] = 0
    return revised
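For example, in this single-column idomain, the middle cell becomes a pass-through cell because active cells lie above and below it:

import numpy as np

idomain = np.array([[[1]],
                    [[0]],
                    [[1]]])
create_vertical_pass_through_cells(idomain)
# -> array([[[ 1]], [[-1]], [[ 1]]])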
+
+
+
+
def fill_empty_layers(array):
    """Fill empty layers in a 3D array by linearly interpolating
    between the values above and below. Layers are defined
    as empty if they contain all nan values. In the example of
    model layer elevations, this would create equal layer thicknesses
    between layer surfaces with values.

    Parameters
    ----------
    array : 3D numpy.ndarray

    Returns
    -------
    filled : ndarray of same shape as array
    """
    def get_next_below(seq, value):
        for item in sorted(seq):
            if item > value:
                return item

    def get_next_above(seq, value):
        for item in sorted(seq)[::-1]:
            if item < value:
                return item

    array = array.copy()
    nlay = array.shape[0]
    layers_with_values = [k for k in range(nlay) if not np.all(np.isnan(array[k]), axis=(0, 1))]
    empty_layers = [k for k in range(nlay) if k not in layers_with_values]

    for k in empty_layers:
        nextabove = get_next_above(layers_with_values, k)
        nextbelow = get_next_below(layers_with_values, k)

        # linearly interpolate layer values between the next layers
        # above and below that have values
        # (in terms of elevation)
        n = nextbelow - nextabove
        diff = (array[nextbelow] - array[nextabove]) / n
        for i in range(k, nextbelow):
            array[i] = array[i - 1] + diff
            k = i
    return array
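For example, an all-nan middle layer is interpolated halfway between its neighbors:

import numpy as np

arr = np.array([[[10.]],
                [[np.nan]],
                [[4.]]])
fill_empty_layers(arr)
# -> array([[[10.]], [[7.]], [[4.]]])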
+
+
+
+
def fill_cells_vertically(top, botm):
    """In MODFLOW 6, cells where idomain < 1 are excluded from the solution.
    However, in the botm array, values are needed in overlying cells to
    compute layer thickness (cells with idomain != 1 overlying cells with idomain >= 1 need
    values in botm). Given a 3D numpy array with nan values indicating excluded cells,
    fill in the nans with the overlying values. For example, given the column of cells
    [10, nan, 8, nan, nan, 5, nan, nan, nan, 1], fill the nan values to make
    [10, 10, 8, 8, 8, 5, 5, 5, 5], so that layers 2, 5, and 9 (zero-based)
    all have valid thicknesses (and all other layers have zero thicknesses).

    algorithm:
    * given a top and botm array (top of the model and layer bottom elevations),
      get the layer thicknesses (accounting for any nodata values); idomain != 1 cells in the
      thickness array must be set to np.nan
    * set thickness to zero in nan cells; take the cumulative sum of the thickness array
      along the 0th (depth) axis, from the bottom of the array to the top
      (going backwards in a depth-positive sense)
    * add the cumulative sum to the array bottom elevations. The backward difference in
      bottom elevations should be zero in inactive cells, and representative of the
      desired thickness in the active cells.
    * append the model bottom elevations (excluded in the bottom-up difference)

    Parameters
    ----------
    top : 2D numpy array; model top elevations
    botm : 3D (nlay, nrow, ncol) array; model bottom elevations

    Returns
    -------
    top, botm : filled top and botm arrays
    """
    thickness = get_layer_thicknesses(top, botm)
    assert np.all(np.isnan(thickness[np.isnan(thickness)]))
    thickness[np.isnan(thickness)] = 0
    # cumulative sum from bottom to top
    filled = np.cumsum(thickness[::-1], axis=0)[::-1]
    # add in the model bottom elevations
    # use the minimum values instead of the bottom layer,
    # in case there are nans in the bottom layer;
    # include the top, in case there are nans in all botms
    # (introducing nans into the top can cause issues
    # with partial vertical LGR)
    all_surfaces = np.stack([top] + [arr2d for arr2d in botm])
    filled += np.nanmin(all_surfaces, axis=0)  # botm[-1]
    # append the model bottom elevations
    filled = np.append(filled, [np.nanmin(all_surfaces, axis=0)], axis=0)
    return filled[1:].copy()
+
+
+
+
def fix_model_layer_conflicts(top_array, botm_array,
                              ibound_array=None,
                              minimum_thickness=3):
    """Compare model layer elevations; adjust layer bottoms downward
    as necessary to maintain a minimum thickness.

    Parameters
    ----------
    top_array : 2D numpy array (nrow * ncol)
        Model top elevations
    botm_array : 3D numpy array (nlay * nrow * ncol)
        Model bottom elevations
    minimum_thickness : scalar
        Minimum layer thickness to enforce

    Returns
    -------
    new_botm_array : 3D numpy array of new layer bottom elevations
    """
    top = top_array.copy()
    botm = botm_array.copy()
    nlay, nrow, ncol = botm.shape
    if ibound_array is None:
        ibound_array = np.ones(botm.shape, dtype=int)
    # fix thin layers in the DIS package
    new_layer_elevs = np.empty((nlay + 1, nrow, ncol))
    new_layer_elevs[1:, :, :] = botm
    new_layer_elevs[0] = top
    for i in np.arange(1, nlay + 1):
        active = ibound_array[i - 1] > 0.
        thicknesses = new_layer_elevs[i - 1] - new_layer_elevs[i]
        with np.errstate(invalid='ignore'):
            too_thin = active & (thicknesses < minimum_thickness)
        new_layer_elevs[i, too_thin] = new_layer_elevs[i - 1, too_thin] - minimum_thickness * 1.001
    # verify that the fix worked
    assert np.nanmax(np.diff(new_layer_elevs, axis=0)[ibound_array > 0]) * -1 >= minimum_thickness
    return new_layer_elevs[1:]
+
+
+
+
def get_layer(botm_array, i, j, elev):
    """Return the layers for elevations at i, j locations.

    Parameters
    ----------
    botm_array : 3D numpy array of layer bottom elevations
    i : scalar or sequence
        row index (zero-based)
    j : scalar or sequence
        column index
    elev : scalar or sequence
        elevation (in same units as model)

    Returns
    -------
    k : np.ndarray (1-D) or scalar
        zero-based layer index
    """
    def to_array(arg):
        if np.isscalar(arg):
            return np.array([arg])
        else:
            return np.array(arg)

    i = to_array(i)
    j = to_array(j)
    nlay = botm_array.shape[0]
    elev = to_array(elev)
    botms = botm_array[:, i, j].tolist()
    layers = np.sum(((botms - elev) > 0), axis=0)
    # force elevations below model bottom into bottom layer
    layers[layers > nlay - 1] = nlay - 1
    layers = np.atleast_1d(np.squeeze(layers))
    if len(layers) == 1:
        layers = layers[0]
    return layers
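For example, with three layer bottoms at 10, 5, and 0, an elevation of 7 falls in layer 1 (zero-based):

import numpy as np

botm = np.array([[[10.]],
                 [[5.]],
                 [[0.]]])
get_layer(botm, 0, 0, 7.)   # -> 1
get_layer(botm, 0, 0, -5.)  # -> 2 (below the model bottom; forced into the bottom layer)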
+
+
+
+
def verify_minimum_layer_thickness(top, botm, isactive, minimum_layer_thickness):
    """Verify that model layer thickness is equal to or
    greater than a minimum thickness."""
    top = top.copy()
    botm = botm.copy()
    isactive = isactive.copy().astype(bool)
    nlay, nrow, ncol = botm.shape
    all_layers = np.zeros((nlay + 1, nrow, ncol))
    all_layers[0] = top
    all_layers[1:] = botm
    isvalid = np.nanmax(np.diff(all_layers, axis=0)[isactive]) * -1 + 1e-4 >= \
              minimum_layer_thickness
    return isvalid
+
+
+
+
def make_ibound(top, botm, nodata=-9999,
                minimum_layer_thickness=1,
                drop_thin_cells=True, tol=1e-4):
    """Make the ibound array that specifies
    cells that will be excluded from the simulation. Cells are
    excluded based on:
    1) np.nans or nodata values in the botm array
    2) np.nans or nodata values in the top array (applies to the highest cells with valid botm elevations;
       in other words, these cells have no thicknesses)
    3) if drop_thin_cells=True, i, j locations where all layers are thinner than the
       specified minimum thickness plus a tolerance (tol)

    Parameters
    ----------
    top : 2D numpy array; model top elevations
    botm : 3D (nlay, nrow, ncol) array; model bottom elevations
    nodata : scalar
        Value indicating no data in top and botm, by default -9999
    minimum_layer_thickness : scalar
        Minimum layer thickness to enforce, by default 1
    drop_thin_cells : bool
        Exclude i, j locations where all layers are too thin, by default True
    tol : float
        Tolerance for the minimum thickness comparison, by default 1e-4

    Returns
    -------
    idomain : np.ndarray (int)
    """
    top = top.copy()
    botm = botm.copy()
    top[top == nodata] = np.nan
    botm[botm == nodata] = np.nan
    criteria = np.isnan(botm)

    # compute layer thicknesses, considering pinched cells (nans)
    b = get_layer_thicknesses(top, botm)
    all_cells_thin = np.all(b < minimum_layer_thickness + tol, axis=0)
    criteria = criteria | np.isnan(b)  # cells without thickness values

    if drop_thin_cells:
        criteria = criteria | all_cells_thin
    idomain = np.abs(~criteria).astype(int)
    return idomain
+
+
+
+
def make_lgr_idomain(parent_modelgrid, inset_modelgrid,
                     ncppl):
    """Inactivate cells in parent_modelgrid that coincide
    with the area of inset_modelgrid."""
    if parent_modelgrid.rotation != inset_modelgrid.rotation:
        raise ValueError('LGR parent and inset models must have same rotation.'
                         f'\nParent rotation: {parent_modelgrid.rotation}'
                         f'\nInset rotation: {inset_modelgrid.rotation}'
                         )
    # upper left corner of inset model in parent model
    # use the cell centers, to avoid the edge situation
    # where a neighboring parent cell is accidentally selected
    x0 = inset_modelgrid.xcellcenters[0, 0]
    y0 = inset_modelgrid.ycellcenters[0, 0]
    pi0, pj0 = parent_modelgrid.intersect(x0, y0, forgive=True)
    # lower right corner of inset model
    x1 = inset_modelgrid.xcellcenters[-1, -1]
    y1 = inset_modelgrid.ycellcenters[-1, -1]
    pi1, pj1 = parent_modelgrid.intersect(x1, y1, forgive=True)
    idomain = np.ones(parent_modelgrid.shape, dtype=int)
    if any(np.isnan([pi0, pj0])):
        raise ValueError(f"LGR model upper left corner {pi0}, {pj0} "
                         "is outside of the parent model domain! "
                         "Check the grid offset and dimensions."
                         )
    if any(np.isnan([pi1, pj1])):
        raise ValueError(f"LGR model lower right corner {pi1}, {pj1} "
                         "is outside of the parent model domain! "
                         "Check the grid offset and dimensions."
                         )
    idomain[0:(np.array(ncppl) > 0).sum(),
            pi0:pi1 + 1, pj0:pj1 + 1] = 0
    return idomain
+
+
+
+
def make_idomain(top, botm, nodata=-9999,
                 minimum_layer_thickness=1,
                 drop_thin_cells=True, tol=1e-4):
    """Make the idomain array for MODFLOW 6 that specifies
    cells that will be excluded from the simulation. Cells are
    excluded based on:
    1) np.nans or nodata values in the botm array
    2) np.nans or nodata values in the top array (applies to the highest cells with valid botm elevations;
       in other words, these cells have no thicknesses)
    3) layer thicknesses less than the specified minimum thickness plus a tolerance (tol)

    Parameters
    ----------
    top : 2D numpy array; model top elevations
    botm : 3D (nlay, nrow, ncol) array; model bottom elevations
    nodata : scalar
        Value indicating no data in top and botm, by default -9999
    minimum_layer_thickness : scalar
        Minimum layer thickness to enforce, by default 1
    drop_thin_cells : bool
        Exclude cells thinner than minimum_layer_thickness + tol, by default True
    tol : float
        Tolerance for the minimum thickness comparison, by default 1e-4

    Returns
    -------
    idomain : np.ndarray (int)
    """
    top = top.copy()
    botm = botm.copy()
    top[top == nodata] = np.nan
    botm[botm == nodata] = np.nan
    criteria = np.isnan(botm)

    # compute layer thicknesses, considering pinched cells (nans)
    b = get_layer_thicknesses(top, botm)
    criteria = criteria | np.isnan(b)  # cells without thickness values

    if drop_thin_cells:
        criteria = criteria | (b < minimum_layer_thickness + tol)
    idomain = np.abs(~criteria).astype(int)
    return idomain
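A small usage sketch: the cell with no botm value (and hence no thickness) is excluded:

import numpy as np

top = np.array([[10., 10.]])
botm = np.array([[[5., np.nan]],
                 [[0., 0.]]])
make_idomain(top, botm, minimum_layer_thickness=1)
# -> array([[[1, 0]],
#           [[1, 1]]])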
+
+
+
+
def get_highest_active_layer(idomain, null_value=-9999):
    """Get the highest active model layer at each
    i, j location, accounting for inactive and
    vertical pass-through cells."""
    idm = idomain.copy()
    # reset all inactive/pass-through values to a large positive value
    # for the min calc
    idm[idm < 1] = 9999
    highest_active_layer = np.argmin(idm, axis=0)
    # set locations with all inactive cells to null values
    highest_active_layer[(idm == 9999).all(axis=0)] = null_value
    return highest_active_layer
+
+
+
+
def make_irch(idomain):
    """Make an irch array for the MODFLOW 6 Recharge Package,
    which specifies the highest active model layer at each
    i, j location, accounting for inactive and
    vertical pass-through cells. Set all i, j locations
    with no active layers to 1 (MODFLOW 6 only allows
    valid layer numbers in the irch array).
    """
    irch = get_highest_active_layer(idomain, null_value=-9999)
    # set locations where all layers are inactive back to 0
    irch[irch == -9999] = 0
    irch += 1  # convert to one-based
    return irch
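For example, with an inactive top layer and a pass-through layer below it, recharge is routed to the third layer:

import numpy as np

idomain = np.array([[[0]],   # inactive
                    [[-1]],  # vertical pass-through
                    [[1]]])  # active
make_irch(idomain)  # -> array([[3]]) (one-based)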
+
+
+
+
def get_layer_thicknesses(top, botm, idomain=None):
    """For each i, j location in the grid, get thicknesses
    between pairs of subsequent valid elevation values. Make
    a thickness array of the same shape as the model grid, and assign the
    computed thicknesses for each pair of valid elevations to the
    position of the elevation representing the cell botm. For example,
    given the column of cells [nan nan 8. nan nan nan nan nan 2. nan],
    a thickness of 6 would be assigned to the second to last layer
    (position -2).

    Parameters
    ----------
    top : nrow x ncol array of model top elevations
    botm : nlay x nrow x ncol array of model botm elevations
    idomain : nlay x nrow x ncol array indicating cells to be
        included in the model solution. idomain=0 cells are converted to np.nans
        in the example column of cells above. (optional)
        If idomain is not specified, excluded cells are expected to be
        designated in the top and botm arrays as np.nans.

    Examples
    --------
    Make a fake model grid with 7 layers, but only the top and two layer bottoms specified:
    >>> top = np.reshape([[10]] * 4, (2, 2))
    >>> botm = np.reshape([[np.nan, 8., np.nan, np.nan, np.nan, 2., np.nan]] * 4, (2, 2, 7)).transpose(2, 0, 1)
    >>> result = get_layer_thicknesses(top, botm)
    >>> result[:, 0, 0]
    array([nan,  2., nan, nan, nan,  6., nan])

    Example with all layer elevations specified;
    note: this is the same result that np.diff(..., axis=0) would produce,
    except positive in the direction of the zero axis:
    >>> top = np.reshape([[10]] * 4, (2, 2))
    >>> botm = np.reshape([[9, 8., 8, 6, 3, 2., -10]] * 4, (2, 2, 7)).transpose(2, 0, 1)
    >>> result = get_layer_thicknesses(top, botm)
    >>> result[:, 0, 0]
    array([ 1.,  1.,  0.,  2.,  3.,  1., 12.])
    """
    print('computing cell thicknesses...')
    t0 = time.time()
    top = top.copy()
    botm = botm.copy()
    if idomain is not None:
        idomain = idomain >= 1
        top[~idomain[0]] = np.nan
        botm[~idomain] = np.nan
    all_layers = np.stack([top] + [b for b in botm])
    thicknesses = np.zeros_like(botm) * np.nan
    nrow, ncol = top.shape
    for i in range(nrow):
        for j in range(ncol):
            cells = all_layers[:, i, j]
            valid_b = list(-np.diff(cells[~np.isnan(cells)]))
            b_ij = np.zeros_like(cells[1:]) * np.nan
            has_top = False
            for k, elev in enumerate(cells):
                if not has_top and not np.isnan(elev):
                    has_top = True
                elif has_top and not np.isnan(elev):
                    b_ij[k - 1] = valid_b.pop(0)
            thicknesses[:, i, j] = b_ij
    thicknesses[thicknesses == 0] = 0  # get rid of -0.
    print("finished in {:.2f}s\n".format(time.time() - t0))
    return thicknesses
def populate_values(values_dict, array_shape=None):
    """Given an input dictionary with non-consecutive keys,
    make a second dictionary with consecutive keys, with values
    that are linearly interpolated from the first dictionary,
    based on the key values. For example, given {0: 1.0, 2: 2.0},
    {0: 1.0, 1: 1.5, 2: 2.0} would be returned.

    Examples
    --------
    >>> populate_values({0: 1.0, 2: 2.0}, array_shape=None)
    {0: 1.0, 1: 1.5, 2: 2.0}
    >>> populate_values({0: 1.0, 2: 2.0}, array_shape=(2, 2))
    {0: array([[1., 1.],
               [1., 1.]]),
     1: array([[1.5, 1.5],
               [1.5, 1.5]]),
     2: array([[2., 2.],
               [2., 2.]])}
    """
    sorted_layers = sorted(list(values_dict.keys()))
    values = {}
    for i in range(len(sorted_layers[:-1])):
        l1 = sorted_layers[i]
        l2 = sorted_layers[i + 1]
        v1 = values_dict[l1]
        v2 = values_dict[l2]
        layers = np.arange(l1, l2 + 1)
        interp_values = dict(zip(layers, np.linspace(v1, v2, len(layers))))

        # if an array shape is given, fill an array of that shape
        # or reshape to that shape
        if array_shape is not None:
            for k, v in interp_values.items():
                if np.isscalar(v):
                    v = np.ones(array_shape, dtype=float) * v
                else:
                    v = np.reshape(v, array_shape)
                interp_values[k] = v
        values.update(interp_values)
    return values
+
+
+
+
def voxels_to_layers(voxel_array, z_edges, model_top=None, model_botm=None, no_data_value=0,
                     extend_top=True, extend_botm=False, tol=0.1,
                     minimum_frac_active_cells=0.01):
    """Combine a voxel array (voxel_array), with no-data values and either uniform or non-uniform top
    and bottom elevations, with land-surface elevations (model_top; to form the top of the grid), and
    additional elevation surfaces forming the layering below the voxel grid (model_botm).

    * In places where the model_botm elevations are above the lowest voxel elevations,
      the voxels are given priority, and the model_botm elevations are reset to equal the lowest
      voxel elevations (effectively giving the underlying layer zero thickness).
    * Voxels with no_data_value(s) are also given zero thickness. Typically these would be cells beyond a
      no-flow boundary, or below the depth of investigation (for example, in an airborne electromagnetic survey
      of aquifer electrical resistivity). The vertical extent of the layering representing the voxel data then
      spans the highest and lowest valid voxels.
    * In places where the model_top (typically land-surface) elevations are higher than the highest valid voxel,
      the voxel layer can either be extended to the model_top (extend_top=True), or an additional layer
      can be created between the top edge of the highest voxel and model_top (extend_top=False).
    * Similarly, in places where elevations in model_botm are below the lowest valid voxel, the lowest voxel
      elevation can be extended to the highest underlying layer (extend_botm=True), or an additional layer can fill
      the gap between the lowest voxel and the highest model_botm (extend_botm=False).

    Parameters
    ----------
    voxel_array : 3D numpy array
        3D array of voxel data; could be zones or actual aquifer properties. Empty voxels
        can be marked with a no_data_value. Voxels are assumed to have the same horizontal
        discretization as the model_top and model_botm layers.
    z_edges : 3D numpy array or sequence
        Top and bottom edges of the voxels (length is voxel_array.shape[0] + 1). A sequence
        can be used to specify uniform voxel edge elevations; non-uniform top and bottom
        elevations can be specified with a 3D numpy array (similar to the botm array in MODFLOW).
    model_top : 2D numpy array
        Top elevations of the model at each row/column location.
    model_botm : 2D or 3D numpy array
        Model layer(s) underlying the voxel grid.
    no_data_value : scalar, optional
        Indicates empty voxels in voxel_array.
    extend_top : bool, optional
        Option to extend the top voxel layer to the model_top, by default True.
    extend_botm : bool, optional
        Option to extend the bottom voxel layer to the next layer below in model_botm,
        by default False.
    tol : float, optional
        Depth tolerance used in comparing the voxel edges to model_top and model_botm.
        For example, if model_top - z_edges[0] is less than tol, the model_top and top voxel
        edge will be considered equal, and no additional layer will be added, regardless of extend_top;
        by default 0.1.
    minimum_frac_active_cells : float
        Minimum fraction of cells with a thickness of > 0 for a layer to be retained,
        by default 0.01.

    Returns
    -------
    layers : 3D numpy array of shape (nlay + 1, nrow, ncol)
        Model layer elevations (vertical edges of cells), including the model top.

    Raises
    ------
    ValueError
        If z_edges is not 1D or 3D
    """
    model_top = model_top.copy()
    model_botm = model_botm.copy()
    if len(model_botm.shape) == 2:
        model_botm = np.reshape(model_botm, (1, *model_botm.shape))
    if np.any(np.isnan(z_edges)):
        raise NotImplementedError("Nan values in z_edges array not allowed!")
    z_values = np.array(z_edges)[1:]

    # convert nodata values to nans
    hasdata = voxel_array.astype(float).copy()
    hasdata[hasdata == no_data_value] = np.nan
    hasdata[~np.isnan(hasdata)] = 1
    thicknesses = -np.diff(z_edges, axis=0)

    # apply nodata to thicknesses and botm elevations
    if len(z_values.shape) == 3:
        z = hasdata * z_values
        b = hasdata * thicknesses
    elif len(z_values.shape) == 1:
        z = (hasdata.transpose(1, 2, 0) * z_values).transpose(2, 0, 1)
        b = (hasdata.transpose(1, 2, 0) * thicknesses).transpose(2, 0, 1)
    else:
        msg = 'z_edges.shape = {}; z_edges must be a 3D or 1D numpy array'
        raise ValueError(msg.format(z_edges.shape))

    assert np.all(np.isnan(b[np.isnan(b)]))
    b[np.isnan(b)] = 0
    # cumulative sum from bottom to top
    layers = np.cumsum(b[::-1], axis=0)[::-1]
    # add in the model bottom elevations
    # use the minimum values instead of the bottom layer,
    # in case there are nans in the bottom layer
    layers += np.nanmin(z, axis=0)  # botm[-1]
    # append the model bottom elevations
    layers = np.append(layers, [np.nanmin(z, axis=0)], axis=0)

    # set all voxel edges greater than land surface to land surface
    k, i, j = np.where(layers > model_top)
    layers[k, i, j] = model_top[i, j]

    # reset model bottom to lowest valid voxels, where they are lower than the model bottom
    lowest_valid_edges = np.nanmin(layers, axis=0)
    for i, layer_botm in enumerate(model_botm):
        loc = layer_botm > lowest_valid_edges
        model_botm[i][loc] = lowest_valid_edges[loc]

    # option to add another layer on top of the voxel sequence,
    # if any part of the model top is above the highest valid voxel edges
    if np.any(layers[0] < model_top - tol) and not extend_top:
        layers = np.vstack([np.reshape(model_top, (1, *model_top.shape)), layers])
    # otherwise set the top edges of the voxel sequence to be consistent with the model top
    else:
        layers[0] = model_top

    # option to add additional layers below the voxel sequence,
    # if any part of those layers in the model botm array is below the lowest valid voxel edges
    if not extend_botm:
        new_botms = [layers]
        for layer_botm in model_botm:
            # get the percentage of active cells with > 0 thickness
            pct_cells = np.sum(layers[-1] > layer_botm + tol) / layers[-1].size
            if pct_cells > minimum_frac_active_cells:
                new_botms.append(np.reshape(layer_botm, (1, *layer_botm.shape)))
        layers = np.vstack(new_botms)
    # otherwise just set the lowest voxel edges to the highest layer in model botm
    # (model botm was already set to the lowest valid voxels that were lower than the model botm;
    # this extends any voxels that were above the model botm to the model botm)
    else:
        layers[-1] = model_botm[0]

    # finally, fill any remaining nans with the next layer elevation (going upward)
    # (there might still be nans in areas that have model top and botm values but no voxel values)
    botm = fill_cells_vertically(layers[0], layers[1:])
    layers = np.vstack([np.reshape(layers[0], (1, *layers[0].shape)), botm])
    return layers
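A minimal usage sketch, with one valid voxel layer (the second voxel is no-data), uniform voxel edges, and a single underlying botm surface:

import numpy as np

voxels = np.array([[[1]],
                   [[0]]])   # second voxel is a no-data value
z_edges = [20., 10., 0.]     # uniform voxel edge elevations
top = np.array([[25.]])
botm = np.array([[-10.]])
voxels_to_layers(voxels, z_edges, model_top=top, model_botm=botm)
# -> layer edge elevations [25., 10., 10., -10.] at the single i, j location
# (the valid voxel is extended up to the model top; the no-data voxel gets zero thickness)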
def dump(filename, data):
    """Write a dictionary to a configuration file."""
    if str(filename).endswith('.yml') or str(filename).endswith('.yaml'):
        return dump_yml(filename, data)
    elif filename.endswith('.json'):
        return dump_json(filename, data)
+
+
+
+
def load_json(jsonfile):
    """Convenience function to load a json file; replacing
    some escaped characters."""
    with open(jsonfile) as f:
        return json.load(f)
+
+
+
+
def dump_json(jsonfile, data):
    """Write a dictionary to a json file."""
    with open(jsonfile, 'w') as output:
        json.dump(data, output, indent=4, sort_keys=True)
    print('wrote {}'.format(jsonfile))


def save_array(filename, arr, nodata=-9999,
               **kwargs):
    """Save an array and print that it was written."""
    if isinstance(filename, dict) and 'filename' in filename.keys():
        filename = filename.copy().pop('filename')
    t0 = time.time()
    if np.issubdtype(arr.dtype, np.unsignedinteger):
        arr = arr.copy()
        arr = arr.astype(int)
    arr[np.isnan(arr)] = nodata
    np.savetxt(filename, arr, **kwargs)
    print('wrote {}'.format(filename), end=', ')
    print("took {:.2f}s".format(time.time() - t0))
+
+
+
+
def append_csv(filename, df, **kwargs):
    """Read data from filename,
    append to the dataframe, and write the appended dataframe
    back to filename."""
    if os.path.exists(filename):
        written = pd.read_csv(filename)
        df = pd.concat([df, written], axis=0)
    df.to_csv(filename, **kwargs)
+
+
+
+
def load_cfg(cfgfile, verbose=False, default_file=None):
    """This method loads a YAML or JSON configuration file,
    applies configuration defaults from a default_file if specified,
    adds the absolute file path of the configuration file
    to the configuration dictionary, and converts any
    relative paths in the configuration dictionary to
    absolute paths, assuming the paths are relative to
    the configuration file location.

    Parameters
    ----------
    cfgfile : str
        Path to MFsetup configuration file (json or yaml)

    Returns
    -------
    cfg : dict
        Dictionary of configuration data

    Notes
    -----
    This function is used by the model instance load and setup_from_yaml
    classmethods, so that configuration defaults can be applied to the
    simulation and model blocks before they are passed to the flopy simulation
    constructor and the model constructor.
    """
    print('loading configuration file {}...'.format(cfgfile))
    source_path = Path(__file__).parent
    default_file = Path(default_file)
    check_source_files([cfgfile, source_path / default_file])

    # default configuration
    default_cfg = {}
    if default_file is not None:
        default_cfg = load(source_path / default_file)
        default_cfg['filename'] = source_path / default_file

    # for now, only apply defaults for the model and simulation blocks,
    # which are needed for the model instance constructor;
    # other defaults are applied in _set_cfg,
    # which is called by model.__init__
    # intermediate_data is needed by some tests
    apply_defaults = {'simulation', 'model', 'intermediate_data'}
    default_cfg = {k: v for k, v in default_cfg.items()
                   if k in apply_defaults}

    # recursively update defaults with information from the yaml file
    cfg = default_cfg.copy()
    user_specified_cfg = load(cfgfile)

    update(cfg, user_specified_cfg)
    cfg['model'].update({'verbose': verbose})
    cfg['filename'] = os.path.abspath(cfgfile)

    # convert relative paths in the configuration dictionary
    # to absolute paths, based on the location of the config file
    config_file_location = os.path.split(os.path.abspath(cfgfile))[0]
    cfg = set_cfg_paths_to_absolute(cfg, config_file_location)
    return cfg
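Usage sketch (mf6_defaults.yml ships with Modflow-setup; the configuration file name is hypothetical):

cfg = load_cfg('initial_config_full.yaml', verbose=False,
               default_file='mf6_defaults.yml')
print(cfg['simulation']['sim_ws'])  # now an absolute path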
+
+
+
+
def set_cfg_paths_to_absolute(cfg, config_file_location):
    version = None
    if 'simulation' in cfg:
        version = 'mf6'
    else:
        version = cfg['model'].get('version')
    if version == 'mf6':
        file_path_keys_relative_to_config = [
            'simulation.sim_ws',
            'parent.model_ws',
            'parent.simulation.sim_ws',
            'parent.headfile',
            #'setup_grid.lgr.config_file'
        ]
        model_ws = os.path.normpath(os.path.join(config_file_location,
                                                 cfg['simulation']['sim_ws']))
    else:
        file_path_keys_relative_to_config = [
            'model.model_ws',
            'parent.model_ws',
            'parent.simulation.sim_ws',
            'parent.headfile',
            'nwt.use_existing_file'
        ]
        model_ws = os.path.normpath(os.path.join(config_file_location,
                                                 cfg['model']['model_ws']))
    file_path_keys_relative_to_model_ws = [
        'setup_grid.grid_file'
    ]
    # add additional paths by looking for source_data
    # within these input blocks; convert file paths to absolute
    look_for_files_in = ['source_data',
                         'perimeter_boundary',
                         'lgr',
                         'sfrmaker_options'
                         ]
    for pckgname, pckg in cfg.items():
        if isinstance(pckg, dict):
            for input_block in look_for_files_in:
                if input_block in pckg.keys():
                    # handle LGR sub-blocks separately;
                    # if LGR configuration is specified within the yaml file
                    # (or as a dictionary), we don't want to touch it at this point
                    # (just convert filepaths to configuration files for sub-models)
                    if input_block == 'lgr':
                        for model_name, config in pckg[input_block].items():
                            if 'filename' in config:
                                file_keys = _parse_file_path_keys_from_source_data(
                                    {model_name: config})
                    else:
                        file_keys = _parse_file_path_keys_from_source_data(pckg[input_block])
                    for key in file_keys:
                        file_path_keys_relative_to_config. \
                            append('.'.join([pckgname, input_block, key]))
            for loc in ['output_files',
                        'output_folders',
                        'output_folder',
                        'output_path']:
                if loc in pckg.keys():
                    file_keys = _parse_file_path_keys_from_source_data(pckg[loc], paths=True)
                    for key in file_keys:
                        file_path_keys_relative_to_model_ws. \
                            append('.'.join([pckgname, loc, key]).strip('.'))

    # set locations that are relative to the configuration file
    cfg = _set_absolute_paths_to_location(file_path_keys_relative_to_config,
                                          config_file_location, cfg)

    # set locations that are relative to model_ws
    cfg = _set_absolute_paths_to_location(file_path_keys_relative_to_model_ws,
                                          model_ws,
                                          cfg)
    return cfg
+
+
+
def _set_path(keys, abspath, cfg):
    """From a sequence of keys that point to a file
    path in a nested dictionary, convert the file
    path at that location from relative to absolute,
    based on a provided absolute path.

    Parameters
    ----------
    keys : sequence or str of dict keys separated by '.'
        that point to a relative path
        Example: 'parent.model_ws' for cfg['parent']['model_ws']
    abspath : absolute path
    cfg : dictionary

    Returns
    -------
    updates cfg with an absolute path based on abspath,
    at the location in the dictionary specified by keys.
    """
    if isinstance(keys, str):
        keys = keys.split('.')
    d = cfg.get(keys[0])
    if d is not None:
        for level in range(1, len(keys)):
            if level == len(keys) - 1:
                k = keys[level]
                if k in d:
                    if d[k] is not None:
                        d[k] = os.path.normpath(os.path.join(abspath, d[k]))
                elif k.isdigit():
                    k = int(k)
                    if d[k] is not None:
                        d[k] = os.path.join(abspath, d[k])
            else:
                key = keys[level]
                if key in d:
                    d = d[keys[level]]
    return cfg
+
+
def _set_absolute_paths_to_location(paths, location, cfg):
    """Set relative file paths in a configuration dictionary
    to a specified location.

    Parameters
    ----------
    paths : sequence
        Sequence of dictionary keys read by _set_path,
        e.g. ['parent.model_ws', 'parent.headfile']
    location : str (path to folder)
    cfg : configuration dictionary (as read in by load_cfg)
    """
    for keys in paths:
        cfg = _set_path(keys, location, cfg)
    return cfg
+
+
def _parse_file_path_keys_from_source_data(source_data, prefix=None, paths=False):
    """Parse a source data entry in the configuration file.

    pseudo code:
    For each key or item in source_data,
        If it is a string that ends with a valid extension,
            a file is expected.
        If it is a dict or list,
            it is expected to be a file or set of files with metadata.
        For each item in the dict or list,
            If it is a string that ends with a valid extension,
                a file is expected.
            If it is a dict or list,
                a set of files corresponding to
                model layers or stress periods is expected.

    valid source data file extensions: csv, shp, tif, asc

    Parameters
    ----------
    source_data : dict
    prefix : str
        text to prepend to results, e.g.
        keys = prefix.keys
    paths : bool
        if True, overrides the check for a valid extension

    Returns
    -------
    keys
    """
    valid_extensions = ['csv', 'shp', 'tif',
                        'ref', 'dat',
                        'nc',
                        'yml', 'json',
                        'hds', 'cbb', 'cbc',
                        'grb']
    file_keys = ['filename',
                 'filenames',
                 'binaryfile',
                 'nhdplus_paths']
    keys = []
    if source_data is None:
        return []
    if isinstance(source_data, str):
        return ['']
    if isinstance(source_data, list):
        items = enumerate(source_data)
    elif isinstance(source_data, dict):
        items = source_data.items()
    for k0, v in items:
        if isinstance(v, str):
            if k0 in file_keys:
                keys.append(k0)
            elif v[-3:] in valid_extensions or paths:
                keys.append(k0)
            elif 'output' in source_data:
                keys.append(k0)
        elif isinstance(v, list):
            for i, v1 in enumerate(v):
                if k0 in file_keys:
                    keys.append('.'.join([str(k0), str(i)]))
                elif paths or isinstance(v1, str) and v1[-3:] in valid_extensions:
                    keys.append('.'.join([str(k0), str(i)]))
        elif isinstance(v, dict):
            keys += _parse_file_path_keys_from_source_data(v, prefix=k0, paths=paths)
    if prefix is not None:
        keys = ['{}.{}'.format(prefix, k) for k in keys]
    return keys
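For example:

source_data = {'filename': 'dem.tif',
               'elevation_units': 'meters'}
_parse_file_path_keys_from_source_data(source_data)
# -> ['filename']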
+
+
+
+[docs]
+defsetup_external_filepaths(model,package,variable_name,
+ filename_format,file_numbers=None,
+ relative_external_paths=True):
+"""Set up external file paths for a MODFLOW package variable. Sets paths
+ for intermediate files, which are written from the (processed) source data.
+ Intermediate files are supplied to Flopy as external files for a given package
+ variable. Flopy writes external files to a specified location when the MODFLOW
+ package file is written. This method gets the external file paths that
+ will be written by FloPy, and puts them in the configuration dictionary
+ under their respective variables.
+
+ Parameters
+ ----------
+ model : mfsetup.MF6model or mfsetup.MFnwtModel instance
+ Model with cfg attribute to update.
+ package : str
+ Three-letter package abreviation (e.g. 'DIS' for discretization)
+ variable_name : str
+ FloPy name of variable represented by external files (e.g. 'top' or 'botm')
+ filename_format : str
+ File path to the external file(s). Can be a string representing a single file
+ (e.g. 'top.dat'), or for variables where a file is written for each layer or
+ stress period, a format string that will be formated with the zero-based layer
+ number (e.g. 'botm{}.dat') for files botm0.dat, botm1.dat, ...
+ file_numbers : list of ints
+ List of numbers for the external files. Usually these represent zero-based
+ layers or stress periods.
+
+ Returns
+ -------
+ filepaths : list
+ List of external file paths
+
+ Adds intermediated file paths to model.cfg[<package>]['intermediate_data']
+ For MODFLOW-6 models, Adds external file paths to model.cfg[<package>][<variable_name>]
+ """
+ package=package.lower()
+ iffile_numbersisNone:
+ file_numbers=[0]
+
+ # in lieu of a way to get these from Flopy somehow
+ griddata_variables=['top','botm','idomain','strt',
+ 'k','k33','sy','ss']
+ transient2D_variables={'rech','recharge',
+ 'finf','pet','extdp','extwc',
+ }
+ transient3D_variables={'lakarr','bdlknc'}
+ tabular_variables={'connectiondata'}
+ transient_tabular_variables={'stress_period_data'}
+ transient_variables=transient2D_variables|transient3D_variables|transient_tabular_variables
+
+ model.get_package(package)
+ # intermediate data
+ filename_format=os.path.split(filename_format)[-1]
+ ifnotrelative_external_paths:
+ intermediate_files=[os.path.normpath(os.path.join(model.tmpdir,
+ filename_format).format(i))foriinfile_numbers]
+ else:
+ intermediate_files=[os.path.join(model.tmpdir,
+ filename_format).format(i)foriinfile_numbers]
+
+ ifvariable_nameintransient2D_variablesorvariable_nameintransient_tabular_variables:
+ model.cfg['intermediate_data'][variable_name]={per:fforper,fin
+ zip(file_numbers,intermediate_files)}
+ elifvariable_nameintransient3D_variables:
+ model.cfg['intermediate_data'][variable_name]={0:intermediate_files}
+ elifvariable_nameintabular_variables:
+ model.cfg['intermediate_data']['{}_{}'.format(package,variable_name)]=intermediate_files
+ else:
+ model.cfg['intermediate_data'][variable_name]=intermediate_files
+
+ # external array(s) read by MODFLOW
+ # (set to reflect expected locations where flopy will save them)
+ ifnotrelative_external_paths:
+ external_files=[os.path.normpath(os.path.join(model.model_ws,
+ model.external_path,
+ filename_format.format(i)))foriinfile_numbers]
+ else:
+ external_files=[os.path.join(model.model_ws,
+ model.external_path,
+ filename_format.format(i))foriinfile_numbers]
+
+ ifvariable_nameintransient2D_variablesorvariable_nameintransient_tabular_variables:
+ model.cfg['external_files'][variable_name]={per:fforper,fin
+ zip(file_numbers,external_files)}
+ elifvariable_nameintransient3D_variables:
+ model.cfg['external_files'][variable_name]={0:external_files}
+ else:
+ model.cfg['external_files'][variable_name]=external_files
+
+ ifmodel.version=='mf6':
+ # skip these for now (not implemented yet for MF6)
+ ifvariable_nameintransient3D_variables:
+ return
+ ext_files_key='external_files'
+ ifvariable_namenotintransient_variables:
+ filepaths=[{'filename':f}forfinmodel.cfg[ext_files_key][variable_name]]
+ else:
+ filepaths={per:{'filename':f}
+ forper,finmodel.cfg[ext_files_key][variable_name].items()}
+ # set package variable input (to Flopy)
+ ifvariable_nameingriddata_variables:
+ model.cfg[package]['griddata'][variable_name]=filepaths
+ elifvariable_nameintabular_variables:
+ model.cfg[package][variable_name]=filepaths[0]
+ model.cfg[ext_files_key]['{}_{}'.format(package,variable_name)]=model.cfg[ext_files_key].pop(variable_name)
+ #elif variable_name in transient_variables:
+ # filepaths = {per: {'filename': f} for per, f in
+ # zip(file_numbers, model.cfg[ext_files_key][variable_name])}
+ # model.cfg[package][variable_name] = filepaths
+ elifvariable_nameintransient_tabular_variables:
+ model.cfg[package][variable_name]=filepaths
+ model.cfg[ext_files_key]['{}_{}'.format(package,variable_name)]=model.cfg[ext_files_key].pop(variable_name)
+ else:
+ model.cfg[package][variable_name]=filepaths# {per: d for per, d in zip(file_numbers, filepaths)}
+ else:
+ filepaths=model.cfg['intermediate_data'][variable_name]
+ model.cfg[package][variable_name]=filepaths
+
+ returnfilepaths
+
+
+
+
def flopy_mf2005_load(m, load_only=None, forgive=False, check=False):
    """Execute the code in flopy.modflow.Modflow.load on an existing
    flopy.modflow.Modflow instance."""
    version = m.version
    verbose = m.verbose
    model_ws = m.model_ws

    # similar to modflow command: if file does not exist, try file.nam
    namefile_path = os.path.join(model_ws, m.namefile)
    if (not os.path.isfile(namefile_path) and
            os.path.isfile(namefile_path + '.nam')):
        namefile_path += '.nam'
    if not os.path.isfile(namefile_path):
        raise IOError('cannot find name file: ' + str(namefile_path))

    files_successfully_loaded = []
    files_not_loaded = []

    # set the reference information
    attribs = mfreadnam.attribs_from_namfile_header(namefile_path)

    # read name file
    ext_unit_dict = mfreadnam.parsenamefile(
        namefile_path, m.mfnam_packages, verbose=verbose)
    if m.verbose:
        print('\n{}\nExternal unit dictionary:\n{}\n{}\n'
              .format(50 * '-', ext_unit_dict, 50 * '-'))

    # create a dict where key is the package name, value is unitnumber
    ext_pkg_d = {v.filetype: k for (k, v) in ext_unit_dict.items()}

    # reset version based on packages in the name file
    if "NWT" in ext_pkg_d or "UPW" in ext_pkg_d:
        version = "mfnwt"
    if "GLOBAL" in ext_pkg_d:
        if version != "mf2k":
            m.glo = ModflowGlobal(m)
        version = "mf2k"
    if "SMS" in ext_pkg_d:
        version = "mfusg"
    if "DISU" in ext_pkg_d:
        version = "mfusg"
        m.structured = False
    # update the modflow version
    m.set_version(version)

    # reset unit number for glo file
    if version == "mf2k":
        if "GLOBAL" in ext_pkg_d:
            unitnumber = ext_pkg_d["GLOBAL"]
            filepth = os.path.basename(ext_unit_dict[unitnumber].filename)
            m.glo.unit_number = [unitnumber]
            m.glo.file_name = [filepth]
        else:
            # TODO: is this necessary? it's not done for LIST.
            m.glo.unit_number = [0]
            m.glo.file_name = [""]

    # reset unit number for list file
    if 'LIST' in ext_pkg_d:
        unitnumber = ext_pkg_d['LIST']
        filepth = os.path.basename(ext_unit_dict[unitnumber].filename)
        m.lst.unit_number = [unitnumber]
        m.lst.file_name = [filepth]

    # look for the free format flag in bas6
    bas_key = ext_pkg_d.get('BAS6')
    if bas_key is not None:
        bas = ext_unit_dict[bas_key]
        start = bas.filehandle.tell()
        line = bas.filehandle.readline()
        while line.startswith("#"):
            line = bas.filehandle.readline()
        if "FREE" in line.upper():
            m.free_format_input = True
        bas.filehandle.seek(start)
        if verbose:
            print("ModflowBas6 free format:{0}\n".format(m.free_format_input))

    # load dis
    dis_key = ext_pkg_d.get('DIS') or ext_pkg_d.get('DISU')
    if dis_key is None:
        raise KeyError('discretization entry not found in nam file')
    disnamdata = ext_unit_dict[dis_key]
    dis = disnamdata.package.load(
        disnamdata.filename, m,
        ext_unit_dict=ext_unit_dict, check=False)
    files_successfully_loaded.append(disnamdata.filename)
    if m.verbose:
        print('   {:4s} package load...success'.format(dis.name[0]))
    m.setup_grid()  # reset model grid now that DIS package is loaded
    assert m.pop_key_list.pop() == dis_key
    ext_unit_dict.pop(dis_key)

    if load_only is None:
        # load all packages/files
        load_only = ext_pkg_d.keys()
    else:  # check items in list
        if not isinstance(load_only, list):
            load_only = [load_only]
        not_found = []
        for i, filetype in enumerate(load_only):
            load_only[i] = filetype = filetype.upper()
            if filetype not in ext_pkg_d:
                not_found.append(filetype)
        if not_found:
            raise KeyError(
                "the following load_only entries were not found "
                "in the ext_unit_dict: " + str(not_found))

    # zone, mult, pval
    if "PVAL" in ext_pkg_d:
        m.mfpar.set_pval(m, ext_unit_dict)
        assert m.pop_key_list.pop() == ext_pkg_d.get("PVAL")
    if "ZONE" in ext_pkg_d:
        m.mfpar.set_zone(m, ext_unit_dict)
        assert m.pop_key_list.pop() == ext_pkg_d.get("ZONE")
    if "MULT" in ext_pkg_d:
        m.mfpar.set_mult(m, ext_unit_dict)
        assert m.pop_key_list.pop() == ext_pkg_d.get("MULT")

    # try loading packages in ext_unit_dict
    for key, item in ext_unit_dict.items():
        if item.package is not None:
            if item.filetype in load_only:
                if forgive:
                    try:
                        package_load_args = \
                            list(inspect.getfullargspec(item.package.load))[0]
                        if "check" in package_load_args:
                            item.package.load(
                                item.filename, m,
                                ext_unit_dict=ext_unit_dict, check=False)
                        else:
                            item.package.load(
                                item.filename, m,
                                ext_unit_dict=ext_unit_dict)
                        files_successfully_loaded.append(item.filename)
                        if m.verbose:
                            print('   {:4s} package load...success'
                                  .format(item.filetype))
                    except Exception as e:
                        m.load_fail = True
                        if m.verbose:
                            print('   {:4s} package load...failed\n{!s}'
                                  .format(item.filetype, e))
                        files_not_loaded.append(item.filename)
                else:
                    package_load_args = \
                        list(inspect.getfullargspec(item.package.load))[0]
                    if "check" in package_load_args:
                        item.package.load(
                            item.filename, m,
                            ext_unit_dict=ext_unit_dict, check=False)
                    else:
                        item.package.load(
                            item.filename, m,
                            ext_unit_dict=ext_unit_dict)
                    files_successfully_loaded.append(item.filename)
                    if m.verbose:
                        print('   {:4s} package load...success'
                              .format(item.filetype))
            else:
                if m.verbose:
                    print('   {:4s} package load...skipped'
                          .format(item.filetype))
                files_not_loaded.append(item.filename)
        elif "data" not in item.filetype.lower():
            files_not_loaded.append(item.filename)
            if m.verbose:
                print('   {:4s} package load...skipped'
                      .format(item.filetype))
        elif "data" in item.filetype.lower():
            if m.verbose:
                print('   {} file load...skipped\n      {}'
                      .format(item.filetype,
                              os.path.basename(item.filename)))
            if key not in m.pop_key_list:
                # do not add unit number (key) if it already exists
                if key not in m.external_units:
                    m.external_fnames.append(item.filename)
                    m.external_units.append(key)
                    m.external_binflag.append("binary"
                                              in item.filetype.lower())
                    m.external_output.append(False)
        else:
            raise KeyError('unhandled case: {}, {}'.format(key, item))

    # pop binary output keys and any external file units that are now
    # internal
    for key in m.pop_key_list:
        try:
            m.remove_external(unit=key)
            ext_unit_dict.pop(key)
        except KeyError:
            if m.verbose:
                print('Warning: external file unit {} does not exist in '
                      'ext_unit_dict.'.format(key))

    # write message indicating packages that were successfully loaded
    if m.verbose:
        print('')
        print('   The following {0} packages were successfully loaded.'
              .format(len(files_successfully_loaded)))
        for fname in files_successfully_loaded:
            print('      ' + os.path.basename(fname))
        if len(files_not_loaded) > 0:
            print('   The following {0} packages were not loaded.'
                  .format(len(files_not_loaded)))
            for fname in files_not_loaded:
                print('      ' + os.path.basename(fname))
    if check:
        m.check(f='{}.chk'.format(m.name), verbose=m.verbose, level=0)

    # return model object
    return m
+
+
+
+
+def flopy_mfsimulation_load(sim, model, strict=True, load_only=None,
+                            verify_data=False):
+    """Execute the code in flopy.mf6.MFSimulation.load on
+    existing instances of flopy.mf6.MFSimulation and flopy.mf6.MF6model."""
+
+    instance = sim
+    if not isinstance(model, list):
+        model_instances = [model]
+    else:
+        model_instances = model
+    version = sim.version
+    exe_name = sim.exe_name
+    verbosity_level = instance.simulation_data.verbosity_level
+
+    if verbosity_level.value >= VerbosityLevel.normal.value:
+        print('loading simulation...')
+
+    # build case consistent load_only dictionary for quick lookups
+    load_only = PackageContainer._load_only_dict(load_only)
+
+    # load simulation name file
+    if verbosity_level.value >= VerbosityLevel.normal.value:
+        print('  loading simulation name file...')
+    instance.name_file.load(strict)
+
+    # load TDIS file
+    tdis_pkg = 'tdis{}'.format(mfstructure.MFStructure().
+                               get_version_string())
+    tdis_attr = getattr(instance.name_file, tdis_pkg)
+    instance._tdis_file = mftdis.ModflowTdis(instance,
+                                             filename=tdis_attr.get_data())
+
+    instance._tdis_file._filename = instance.simulation_data.mfdata[
+        ('nam', 'timing', tdis_pkg)].get_data()
+    if verbosity_level.value >= VerbosityLevel.normal.value:
+        print('  loading tdis package...')
+    instance._tdis_file.load(strict)
+
+    # load models
+    try:
+        model_recarray = instance.simulation_data.mfdata[('nam', 'models',
+                                                          'models')]
+        models = model_recarray.get_data()
+    except MFDataException as mfde:
+        message = 'Error occurred while loading model names from the ' \
+                  'simulation name file.'
+        raise MFDataException(mfdata_except=mfde,
+                              model=instance.name,
+                              package='nam',
+                              message=message)
+    for item in models:
+        # resolve model working folder and name file
+        path, name_file = os.path.split(item[1])
+
+        # get the existing model instance
+        # corresponding to its entry in the simulation name file
+        # (in flopy the model instance is obtained from PackageContainer.model_factory below)
+        model_obj = [m for m in model_instances if m.namefile == name_file]
+        if len(model_obj) == 0:
+            print('model {} attached to {} not found in {}'.format(item, instance, model_instances))
+            return
+        model_obj = model_obj[0]
+        #model_obj = PackageContainer.model_factory(item[0][:-1].lower())
+
+        # load model
+        if verbosity_level.value >= VerbosityLevel.normal.value:
+            print('  loading model {}...'.format(item[0].lower()))
+
+        instance._models[item[2]] = flopy_mf6model_load(instance, model_obj,
+                                                        strict=strict,
+                                                        model_rel_path=path,
+                                                        load_only=load_only)
+
+        # original flopy code to load model
+        #instance._models[item[2]] = model_obj.load(
+        #    instance,
+        #    instance.structure.model_struct_objs[item[0].lower()], item[2],
+        #    name_file, version, exe_name, strict, path, load_only)
+
+    # load exchange packages and dependent packages
+    try:
+        exchange_recarray = instance.name_file.exchanges
+        has_exch_data = exchange_recarray.has_data()
+    except MFDataException as mfde:
+        message = 'Error occurred while loading exchange names from the ' \
+                  'simulation name file.'
+        raise MFDataException(mfdata_except=mfde,
+                              model=instance.name,
+                              package='nam',
+                              message=message)
+    if has_exch_data:
+        try:
+            exch_data = exchange_recarray.get_data()
+        except MFDataException as mfde:
+            message = 'Error occurred while loading exchange names from ' \
+                      'the simulation name file.'
+            raise MFDataException(mfdata_except=mfde,
+                                  model=instance.name,
+                                  package='nam',
+                                  message=message)
+        for exgfile in exch_data:
+            if load_only is not None and not \
+                    PackageContainer._in_pkg_list(load_only, exgfile[0],
+                                                  exgfile[2]):
+                if instance.simulation_data.verbosity_level.value >= \
+                        VerbosityLevel.normal.value:
+                    print('    skipping package {}...'
+                          .format(exgfile[0].lower()))
+                continue
+            # get exchange type by removing numbers from exgtype
+            exchange_type = ''.join([char for char in exgfile[0] if
+                                     not char.isdigit()]).upper()
+            # get exchange number for this type
+            if exchange_type not in instance._exg_file_num:
+                exchange_file_num = 0
+                instance._exg_file_num[exchange_type] = 1
+            else:
+                exchange_file_num = instance._exg_file_num[exchange_type]
+                instance._exg_file_num[exchange_type] += 1
+
+            exchange_name = '{}_EXG_{}'.format(exchange_type,
+                                               exchange_file_num)
+            # find the package class that corresponds to this exchange type
+            package_obj = PackageContainer.package_factory(
+                exchange_type.replace('-', '').lower(), '')
+            if not package_obj:
+                message = 'An error occurred while loading the ' \
+                          'simulation name file. Invalid exchange type ' \
+                          '"{}" specified.'.format(exchange_type)
+                type_, value_, traceback_ = sys.exc_info()
+                raise MFDataException(instance.name,
+                                      'nam',
+                                      'nam',
+                                      'loading simulation name file',
+                                      exchange_recarray.structure.name,
+                                      inspect.stack()[0][3],
+                                      type_, value_, traceback_, message,
+                                      instance._simulation_data.debug)
+
+            # build and load exchange package object
+            exchange_file = package_obj(instance, exgtype=exgfile[0],
+                                        exgmnamea=exgfile[2],
+                                        exgmnameb=exgfile[3],
+                                        filename=exgfile[1],
+                                        pname=exchange_name,
+                                        loading_package=True)
+            if verbosity_level.value >= VerbosityLevel.normal.value:
+                print('  loading exchange package {}...'
+                      .format(exchange_file._get_pname()))
+            exchange_file.load(strict)
+            # Flopy >= 3.9
+            if hasattr(instance, '_package_container'):
+                instance._package_container.add_package(exchange_file)
+            instance._exchange_files[exgfile[1]] = exchange_file
+
+    # load simulation packages
+    solution_recarray = instance.simulation_data.mfdata[('nam',
+                                                         'solutiongroup',
+                                                         'solutiongroup'
+                                                         )]
+
+    try:
+        solution_group_dict = solution_recarray.get_data()
+    except MFDataException as mfde:
+        message = 'Error occurred while loading solution groups from ' \
+                  'the simulation name file.'
+        raise MFDataException(mfdata_except=mfde,
+                              model=instance.name,
+                              package='nam',
+                              message=message)
+    for solution_group in solution_group_dict.values():
+        for solution_info in solution_group:
+            if load_only is not None and not PackageContainer._in_pkg_list(
+                    load_only, solution_info[0], solution_info[2]):
+                if instance.simulation_data.verbosity_level.value >= \
+                        VerbosityLevel.normal.value:
+                    print('    skipping package {}...'
+                          .format(solution_info[0].lower()))
+                continue
+            ims_file = mfims.ModflowIms(instance, filename=solution_info[1],
+                                        pname=solution_info[2])
+            if verbosity_level.value >= VerbosityLevel.normal.value:
+                print('  loading ims package {}...'
+                      .format(ims_file._get_pname()))
+            ims_file.load(strict)
+
+    instance.simulation_data.mfpath.set_last_accessed_path()
+    if verify_data:
+        instance.check()
+    return instance
+def add_version_to_fileheader(filename, model_info=None):
+    """Add modflow-setup, flopy and optionally model
+    version info to an existing file header denoted by
+    the comment characters ``#``, ``!``, or ``//``.
+    """
+    tempfile = str(filename) + '.temp'
+    shutil.copy(filename, tempfile)
+    with open(tempfile) as src:
+        with open(filename, 'w') as dest:
+            if model_info is None:
+                header = ''
+            else:
+                header = f'# {model_info}\n'
+            read_header = True
+            for line in src:
+                if read_header and len(line.strip()) > 0 and \
+                        line.strip()[0] in {'#', '!', '//'}:
+                    if model_info is None or model_info not in line:
+                        header += line
+                elif read_header:
+                    if 'modflow-setup' not in header:
+                        headerlist = header.strip().split('\n')
+                        if 'flopy' in header.lower():
+                            pos, flopy_info = [(i, s) for i, s in enumerate(headerlist)
+                                               if 'flopy' in s.lower()][0]
+                            #flopy_info = header.strip().split('\n')[-1]
+                            if 'version' not in flopy_info.lower():
+                                flopy_version = f'flopy version {flopy.__version__}'
+                                flopy_info = flopy_info.lower().replace('flopy',
+                                                                        flopy_version)
+                                headerlist[pos] = flopy_info
+
+                            #header = '\n'.join(header.split('\n')[:-2] +
+                            #                   [flopy_info + '\n'])
+                            mfsetup_text = '# via '
+                            pos += 1  # insert mfsetup header after flopy
+                        else:
+                            mfsetup_text = '# File created by '
+                            pos = -1  # insert mfsetup header at end
+                        mfsetup_text += 'modflow-setup version {}'.format(mfsetup.__version__)
+                        mfsetup_text += ' at {:%Y-%m-%d %H:%M:%S}'.format(dt.datetime.now())
+                        headerlist.insert(pos, mfsetup_text)
+                        header = '\n'.join(headerlist) + '\n'
+                    dest.write(header)
+                    read_header = False
+                    dest.write(line)
+                else:
+                    dest.write(line)
+    os.remove(tempfile)
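+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): stamp a hypothetical MODFLOW input file with version information.
+# Assumes 'pleasant.dis' exists and begins with '#' comment lines.
+def _example_add_version_to_fileheader():
+    add_version_to_fileheader('pleasant.dis',
+                              model_info='Pleasant Lake inset model')
+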
+
+
+
+
+def remove_file_header(filename):
+    """Remove the header of a MODFLOW input file,
+    to allow comparison between files that have different
+    headers but are otherwise the same, for example."""
+    backup_file = str(filename) + '.backup'
+    shutil.copy(filename, backup_file)
+    with open(backup_file) as src:
+        with open(filename, 'w') as dest:
+            for line in src:
+                if not line.strip().startswith('#'):
+                    dest.write(line)
+    os.remove(backup_file)
+"""
+Code for creating and working with regular (structured) grids. Focus is on the 2D representation of
+the grid in the cartesian plane. For methods involving layering (in the vertical dimension), see
+the discretization module.
+"""
+importcollections
+importtime
+importwarnings
+frompathlibimportPath
+
+importgeopandasasgpd
+importgisutils
+importnumpyasnp
+importpandasaspd
+importpyproj
+importshapely
+fromflopy.discretizationimportStructuredGrid
+fromflopy.mf6.utils.binarygrid_utilimportMfGrdFile
+fromgeopandas.geodataframeimportGeoDataFrame
+fromgisutilsimportdf2shp,get_proj_str,project,shp2df
+frompackagingimportversion
+fromrasterioimportAffine
+fromscipyimportspatial
+fromshapely.geometryimportMultiPolygon,Point,Polygon,box
+
+frommfsetupimportfileioasfileio
+
+from.mf5to6importget_model_length_units
+from.unitsimportconvert_length_units
+from.utilsimportget_input_arguments
+
+
+
+class MFsetupGrid(StructuredGrid):
+    """Class representing a structured grid. Extends flopy.discretization.StructuredGrid
+    to facilitate GIS operations in a projected (real-world) coordinate reference system (CRS).
+
+    Parameters
+    ----------
+    delc : ndarray
+        1D numpy array of grid spacing along a column (len nrow), in CRS units.
+    delr : ndarray
+        1D numpy array of grid spacing along a row (len ncol), in CRS units.
+    top : ndarray
+        2D numpy array of model top elevations
+    botm : ndarray
+        3D numpy array of model bottom elevations
+    idomain : ndarray
+        3D numpy array of model idomain values
+    laycbd : ndarray
+        (Modflow 2005 and earlier style models only):
+        LAYCBD is a flag, with one value for each model layer,
+        that indicates whether or not a layer has a Quasi-3D
+        confining bed below it. 0 indicates no confining bed;
+        not zero indicates a confining bed.
+        LAYCBD for the bottom layer must be 0.
+    lenuni : int, optional
+        MODFLOW length units variable. See
+        `the Online Guide to MODFLOW <https://water.usgs.gov/ogw/modflow-nwt/MODFLOW-NWT-Guide/index.html?beginners_guide_to_modflow.htm>`_
+    epsg : int, optional
+        EPSG code for the model CRS
+    proj_str : str, optional
+        PROJ string for model CRS. In general, a spatial reference ID
+        (such as an EPSG code) or Well-Known Text (WKT) string is preferred
+        over a PROJ string (see References)
+    prj : str, optional
+        Filepath for ESRI projection file (containing wkt) describing model CRS
+    wkt : str, optional
+        Well-known text string describing model CRS.
+    crs : obj, optional
+        A Python int, dict, str, or pyproj.crs.CRS instance
+        passed to :meth:`pyproj.crs.CRS.from_user_input`
+        Can be any of:
+
+          - PROJ string
+          - Dictionary of PROJ parameters
+          - PROJ keyword arguments for parameters
+          - JSON string with PROJ parameters
+          - CRS WKT string
+          - An authority string (e.g. 'epsg:4326')
+          - An EPSG integer code (e.g. 4326)
+          - A tuple of ("auth_name", "auth_code") (e.g. ('epsg', '4326'))
+          - An object with a `to_wkt` method.
+          - A :class:`pyproj.crs.CRS` class
+
+    xoff, yoff : float, float, optional
+        Model grid offset (location of lower left corner), by default 0.0, 0.0
+    xul, yul : float, float, optional
+        Model grid offset (location of upper left corner), by default None
+    angrot : float, optional
+        Rotation of the model grid, in degrees counter-clockwise about the lower left corner.
+        Non-zero rotation values require input of xoff, yoff (xul, yul not supported).
+        By default 0.0
+
+    References
+    ----------
+    https://proj.org/faq.html#what-is-the-best-format-for-describing-coordinate-reference-systems
+
+    """
+
+    def __init__(self, delc, delr, top=None, botm=None, idomain=None,
+                 laycbd=None, lenuni=None, binary_grid_file=None,
+                 epsg=None, proj_str=None, prj=None, wkt=None, crs=None,
+                 xoff=0.0, yoff=0.0, xul=None, yul=None, angrot=0.0):
+        super(MFsetupGrid, self).__init__(delc=np.array(delc), delr=np.array(delr),
+                                          top=top, botm=botm, idomain=idomain,
+                                          laycbd=laycbd, lenuni=lenuni,
+                                          epsg=epsg, proj4=proj_str, prj=prj,
+                                          xoff=xoff, yoff=yoff, angrot=angrot
+                                          )
+
+        # properties
+        self._crs = None
+        # pass all CRS representations through pyproj.CRS.from_user_input
+        # to convert to pyproj.CRS instance
+        self.crs = get_crs(crs=crs, epsg=epsg, prj=prj, wkt=wkt, proj_str=proj_str)
+
+        # other CRS-related properties are set in the flopy Grid base class
+        self._vertices = None
+        self._polygons = None
+        self._dataframe = None
+        # cached intercell connections (see the intercell_connections property)
+        self._intercell_connections = None
+
+        # MODFLOW 6 binary grid file, for getting intercell connections
+        # (needed for reading cell budget files)
+        self.binary_grid_file = binary_grid_file
+
+        # if no epsg, set from proj4 string if possible
+        #if epsg is None and proj_str is not None and 'epsg' in proj_str.lower():
+        #    self.epsg = int(proj_str.split(':')[1])
+
+        # in case the upper left corner is known but the lower left corner is not
+        if xul is not None and yul is not None:
+            xll = self._xul_to_xll(xul)
+            yll = self._yul_to_yll(yul)
+            self.set_coord_info(xoff=xll, yoff=yll, epsg=epsg, proj4=proj_str, angrot=angrot)
+
+    def __eq__(self, other):
+        if not isinstance(other, StructuredGrid):
+            return False
+        if not np.allclose(other.xoffset, self.xoffset):
+            return False
+        if not np.allclose(other.yoffset, self.yoffset):
+            return False
+        if not np.allclose(other.angrot, self.angrot):
+            return False
+        if not other.crs == self.crs:
+            return False
+        if not np.array_equal(other.delr, self.delr):
+            return False
+        if not np.array_equal(other.delc, self.delc):
+            return False
+        return True
+
+    def __repr__(self):
+        txt = ''
+        if self.nlay is not None:
+            txt += f'{self.nlay:d} layer(s), '
+        txt += f'{self.nrow:d} row(s), {self.ncol:d} column(s)\n'
+        txt += (f'delr: [{self.delr[0]:.2f}...{self.delr[-1]:.2f}]'
+                f' {self.units}\n'
+                f'delc: [{self.delc[0]:.2f}...{self.delc[-1]:.2f}]'
+                f' {self.units}\n'
+                )
+        txt += f'CRS: {self.crs}\n'
+        txt += f'length units: {self.length_units}\n'
+        txt += f'xll: {self.xoffset}; yll: {self.yoffset}; rotation: {self.rotation}\n'
+        txt += 'Bounds: {}\n'.format(self.extent)
+        return txt
+
+    def __str__(self):
+        return StructuredGrid.__repr__(self)
+
+    @property
+    def xul(self):
+        x0 = self.xyedges[0][0]
+        y0 = self.xyedges[1][0]
+        x0r, y0r = self.get_coords(x0, y0)
+        return x0r
+
+    @property
+    def yul(self):
+        x0 = self.xyedges[0][0]
+        y0 = self.xyedges[1][0]
+        x0r, y0r = self.get_coords(x0, y0)
+        return y0r
+
+    @property
+    def bbox(self):
+        """Shapely polygon bounding box of the model grid."""
+        return get_grid_bounding_box(self)
+
+    @property
+    def bounds(self):
+        """Grid bounding box in the order used by shapely.
+        """
+        x0, x1, y0, y1 = self.extent
+        return x0, y0, x1, y1
+
+    @property
+    def size(self):
+        if self.nlay is None:
+            return self.nrow * self.ncol
+        return self.nlay * self.nrow * self.ncol
+
+    @property
+    def transform(self):
+        """Rasterio Affine object (same as transform attribute of rasters).
+        """
+        return get_transform(self)
+
+    @property
+    def crs(self):
+        """pyproj.crs.CRS instance describing the coordinate reference system
+        for the model grid.
+        """
+        return self._crs
+
+    @crs.setter
+    def crs(self, crs):
+        """Get a pyproj CRS instance from various inputs
+        (epsg, proj string, wkt, etc.).
+
+        crs : obj, optional
+            Coordinate reference system for model grid.
+            A Python int, dict, str, or pyproj.crs.CRS instance
+            passed to pyproj.crs.CRS.from_user_input
+            See http://pyproj4.github.io/pyproj/stable/api/crs/crs.html#pyproj.crs.CRS.from_user_input.
+            Can be any of:
+              - PROJ string
+              - Dictionary of PROJ parameters
+              - PROJ keyword arguments for parameters
+              - JSON string with PROJ parameters
+              - CRS WKT string
+              - An authority string (e.g. 'epsg:4326')
+              - An EPSG integer code (e.g. 4326)
+              - A tuple of ("auth_name", "auth_code") (e.g. ('epsg', '4326'))
+              - An object with a `to_wkt` method.
+              - A :class:`pyproj.crs.CRS` class
+        """
+        crs = get_crs(crs=crs)
+        self._crs = crs
+
+    @property
+    def proj_str(self):
+        if self.crs is not None:
+            return self.crs.to_proj4()
+
+    @property
+    def wkt(self):
+        if self.crs is not None:
+            return self.crs.to_wkt(pretty=True)
+
+    @property
+    def length_units(self):
+        return get_crs_length_units(self.crs)
+
+    @property
+    def vertices(self):
+        """Vertices for grid cell polygons."""
+        if self._vertices is None:
+            self._set_vertices()
+        return self._vertices
+
+    @property
+    def polygons(self):
+        """Shapely polygons for the grid cells."""
+        if self._polygons is None:
+            self._set_polygons()
+        return self._polygons
+
+    @property
+    def dataframe(self):
+        """Pandas DataFrame of grid cell polygons
+        with i, j locations."""
+        if self._dataframe is None:
+            self._dataframe = self.get_dataframe(layers=True)
+        return self._dataframe
+
+    @property
+    def intercell_connections(self):
+        """Pandas DataFrame of flow connections between grid cells."""
+        if self._intercell_connections is None:
+            self._intercell_connections = self.get_intercell_connections()
+        return self._intercell_connections
+
+    @property
+    def top(self):
+        return self._top
+
+    @top.setter
+    def top(self, top):
+        self._top = top
+
+    @property
+    def botm(self):
+        return self._botm
+
+    @botm.setter
+    def botm(self, botm):
+        if (self._StructuredGrid__nrow, self._StructuredGrid__ncol) != botm.shape[1:]:
+            raise ValueError("botm array shape is inconsistent with the model grid")
+        self._StructuredGrid__nlay = botm.shape[0]
+        if self._laycbd.size != botm.shape[0]:
+            self._laycbd = np.zeros(botm.shape[0], dtype=int)
+        self._botm = botm
+
+    def get_intercell_connections(self, binary_grid_file=None):
+        """Get the connections between cells in a MODFLOW 6
+        structured grid, from the information in a binary grid file.
+
+        Parameters
+        ----------
+        binary_grid_file : str or pathlike
+            MODFLOW 6 binary grid file
+
+        Returns
+        -------
+        df : DataFrame
+            Intercell connections, with the following columns:
+
+            ==== =============================================================
+            n    from zero-based node number
+            kn   from zero-based layer
+            in   from zero-based row
+            jn   from zero-based column
+            m    to zero-based node number
+            km   to zero-based layer
+            im   to zero-based row
+            jm   to zero-based column
+            ==== =============================================================
+
+        Raises
+        ------
+        ValueError
+            If no binary grid file has been supplied, either as an argument
+            or at instantiation of the grid.
+        """
+        if binary_grid_file is not None:
+            self.binary_grid_file = binary_grid_file
+        if self.binary_grid_file is None:
+            raise ValueError("A MODFLOW 6 binary_grid_file "
+                             "is needed to get intercell connections. "
+                             "Either call get_intercell_connections with a "
+                             "binary_grid_file argument, or re-instantiate "
+                             "the grid with a binary_grid_file argument.")
+        self._intercell_connections = get_intercell_connections(self.binary_grid_file)
+        return self._intercell_connections
+
+
+
+    def get_dataframe(self, layers=True):
+        """Get a pandas DataFrame of grid cell polygons
+        with i, j locations.
+
+        Parameters
+        ----------
+        layers : bool
+            If True, return a row for each k, i, j location
+            and a 'k' column; if False, only return i, j
+            locations with no 'k' column. By default, True
+
+        Returns
+        -------
+        df : GeoDataFrame
+            GeoDataFrame with k, i, j columns and a geometry column
+            containing a shapely polygon representation of each model cell.
+        """
+        # get dataframe of model grid cells
+        i, j = np.indices((self.nrow, self.ncol))
+        geoms = self.polygons
+        df = gpd.GeoDataFrame({'i': i.ravel(),
+                               'j': j.ravel(),
+                               'geometry': geoms}, crs=self.crs)
+        if layers and self.nlay is not None:
+            # add layer information
+            dfs = []
+            for k in range(self.nlay):
+                layer_df = df.copy()
+                layer_df['k'] = k
+                dfs.append(layer_df)
+            df = pd.concat(dfs)
+            df = df[['k', 'i', 'j', 'geometry']].copy()
+        return df
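+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): build a small uniform grid and use the GIS-oriented properties
+# that MFsetupGrid adds. The origin and EPSG code here are hypothetical.
+def _example_mfsetup_grid():
+    grid = MFsetupGrid(delc=np.ones(10) * 250., delr=np.ones(12) * 250.,
+                       xoff=550000., yoff=388000., angrot=0., crs=5070)
+    print(grid.bounds)  # (x0, y0, x1, y1), shapely ordering
+    df = grid.get_dataframe(layers=False)  # one polygon per i, j location
+    return grid, df
+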
+def get_ij(grid, x, y, local=False):
+    """Return the row and column of a point or sequence of points
+    in real-world coordinates.
+
+    Parameters
+    ----------
+    grid : flopy.discretization.StructuredGrid instance
+    x : scalar or sequence of x coordinates
+    y : scalar or sequence of y coordinates
+    local : bool (optional)
+        If True, x and y are in local coordinates (defaults to False)
+
+    Returns
+    -------
+    i : row or sequence of rows (zero-based)
+    j : column or sequence of columns (zero-based)
+    """
+    xc, yc = grid.xcellcenters, grid.ycellcenters
+    if local:
+        x, y = grid.get_coords(x, y)
+    print('getting i, j locations...')
+    t0 = time.time()
+    xyc = np.array([xc.ravel(), yc.ravel()]).transpose()
+    pxy = np.array([x, y]).transpose()
+    kdtree = spatial.KDTree(xyc)
+    distance, loc = kdtree.query(pxy)
+    i, j = np.unravel_index(loc, (grid.nrow, grid.ncol))
+    print("finished in {:.2f}s\n".format(time.time() - t0))
+    return i, j
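+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): look up zero-based row, column locations for two hypothetical
+# wells. Coordinates must be in the grid's CRS (or local, with local=True).
+def _example_get_ij(modelgrid):
+    wells_x = [553000., 555250.]
+    wells_y = [389000., 390750.]
+    i, j = get_ij(modelgrid, wells_x, wells_y)
+    return i, j
+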
+
+
+
+
+def get_kij_from_node3d(node3d, nrow, ncol):
+    """For a consecutive cell number (layer-major, with columns
+    varying fastest), get the zero-based layer, row and column position.
+    """
+    node2d = node3d % (nrow * ncol)
+    k = node3d // (nrow * ncol)
+    i = node2d // ncol
+    j = node2d % ncol
+    return k, i, j
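+
+
+# Worked example (editor's addition, not part of the original module):
+# for nrow=3, ncol=4, node 17 is in layer 17 // 12 = 1; within that layer,
+# node2d = 17 % 12 = 5, so i = 5 // 4 = 1 and j = 5 % 4 = 1.
+def _example_get_kij_from_node3d():
+    k, i, j = get_kij_from_node3d(17, nrow=3, ncol=4)
+    assert (k, i, j) == (1, 1, 1)
+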
+def get_nearest_point_on_grid(x, y, transform=None,
+                              xul=None, yul=None,
+                              dx=None, dy=None, rotation=0.,
+                              offset='center', op=None):
+    """
+
+    Parameters
+    ----------
+    x : float
+        x-coordinate of point
+    y : float
+        y-coordinate of point
+    transform : Affine instance, optional
+        Affine object instance describing grid
+    xul : float
+        x-coordinate of upper left corner of the grid
+    yul : float
+        y-coordinate of upper left corner of the grid
+    dx : float
+        grid spacing in the x-direction (along rows)
+    dy : float
+        grid spacing in the y-direction (along columns)
+    rotation : float
+        grid rotation about the upper left corner, in degrees clockwise from the x-axis
+    offset : str, {'center', 'edge'}
+        Whether the point on the grid represents a cell center or corner (edge). This
+        argument is only used if xul, yul, dx, dy and rotation are supplied. If
+        an Affine transform instance is supplied, it is assumed to already incorporate
+        the offset.
+    op : function, optional
+        Function to convert fractional pixels to whole numbers (np.round, np.floor, np.ceil).
+        Defaults to np.round if offset == 'center'; otherwise defaults to np.floor.
+
+    Returns
+    -------
+    x_nearest, y_nearest : float
+        Coordinates of the nearest grid point (cell center or corner,
+        depending on offset).
+
+    """
+    # get the closest (fractional) grid cell location
+    # (in case the grid is rotated)
+    if transform is None:
+        transform = Affine(dx, 0., xul,
+                           0., dy, yul) * \
+                    Affine.rotation(rotation)
+        if offset == 'center':
+            transform *= Affine.translation(0.5, 0.5)
+    x_raster, y_raster = ~transform * (x, y)
+
+    # default to rounding for cell centers and flooring for edges,
+    # unless another operation is specified
+    if op is None:
+        op = np.round if offset == 'center' else np.floor
+
+    j = int(op(x_raster))
+    i = int(op(y_raster))
+
+    x_nearest, y_nearest = transform * (j, i)
+    return x_nearest, y_nearest
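+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): snap a point to the nearest cell corner of a hypothetical,
+# unrotated 1 km grid with its upper left corner at (950000, 2000000).
+# With offset='edge', fractional row/column positions are floored.
+def _example_get_nearest_point_on_grid():
+    x, y = get_nearest_point_on_grid(951300., 1999400.,
+                                     xul=950000., yul=2000000.,
+                                     dx=1000., dy=-1000., offset='edge')
+    assert (x, y) == (951000., 2000000.)
+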
+
+
+
+
+def get_point_on_national_hydrogeologic_grid(x, y, offset='edge', **kwargs):
+    """Given an x, y location representing the upper left
+    corner of a model grid, return the upper left corner
+    of the cell in the National Hydrogeologic Grid that
+    contains it."""
+    params = get_input_arguments(national_hydrogeologic_grid_parameters, get_nearest_point_on_grid)
+    params.update(kwargs)
+    return get_nearest_point_on_grid(x, y, offset=offset, **params)
+def rasterize(feature, grid, id_column=None,
+              include_ids=None, exclude_ids=None, names_column=None,
+              crs=None, **kwargs):
+    """Rasterize a feature onto the model grid, using
+    the rasterio.features.rasterize method. Features are intersected
+    if they contain the cell center.
+
+    Parameters
+    ----------
+    feature : str (shapefile path), list of shapely objects,
+              or dataframe with geometry column
+    id_column : str
+        Column with unique integer identifying each feature; values
+        from this column will be assigned to the output raster.
+    include_ids : sequence
+        Subset of IDs in id_column to include
+    exclude_ids : sequence
+        Subset of IDs in id_column to exclude
+    names_column : str, optional
+        By default, the IDs in id_column, or sequential integers
+        are returned. This option allows another column of strings
+        to be specified (i.e. feature names); in which case
+        an array of the strings will be returned.
+    grid : grid.StructuredGrid instance
+    crs : obj
+        A Python int, dict, str, or pyproj.crs.CRS instance
+        passed to :meth:`pyproj.crs.CRS.from_user_input`
+        Can be any of:
+
+          - PROJ string
+          - Dictionary of PROJ parameters
+          - PROJ keyword arguments for parameters
+          - JSON string with PROJ parameters
+          - CRS WKT string
+          - An authority string (e.g. 'epsg:4326')
+          - An EPSG integer code (e.g. 4326)
+          - A tuple of ("auth_name", "auth_code") (e.g. ('epsg', '4326'))
+          - An object with a `to_wkt` method.
+          - A :class:`pyproj.crs.CRS` class
+    **kwargs : keyword arguments to rasterio.features.rasterize()
+        https://rasterio.readthedocs.io/en/stable/api/rasterio.features.html
+
+    Returns
+    -------
+    2D numpy array with intersected values
+
+    """
+    try:
+        from rasterio import Affine, features
+    except ImportError:
+        print('This method requires rasterio.')
+        return
+
+    if crs is not None:
+        if version.parse(gisutils.__version__) < version.parse('0.2.0'):
+            raise ValueError("The rasterize function requires gisutils >= 0.2")
+        from gisutils import get_authority_crs
+        crs = get_authority_crs(crs)
+
+    trans = get_transform(grid)
+
+    if isinstance(feature, str) or isinstance(feature, Path):
+        df = gpd.read_file(feature)
+    elif isinstance(feature, pd.DataFrame):
+        df = feature.copy()
+        df = gpd.GeoDataFrame(df, crs=crs)
+    elif isinstance(feature, collections.abc.Iterable):
+        # list of shapefiles
+        if isinstance(feature[0], str) or isinstance(feature[0], Path):
+            # use shp2df to read multiple shapefiles
+            # then convert to gdf
+            df = shp2df(feature, dest_crs=grid.crs)
+            df = gpd.GeoDataFrame(df, crs=grid.crs)
+        else:
+            df = pd.DataFrame({'geometry': feature})
+            df = gpd.GeoDataFrame(df, crs=crs)
+    elif not isinstance(feature, collections.abc.Iterable):
+        df = pd.DataFrame({'geometry': [feature]})
+        df = gpd.GeoDataFrame(df, crs=crs)
+    else:
+        print('unrecognized feature input')
+        return
+
+    # reproject to grid crs
+    if df.crs is not None:
+        orig_crs = df.crs
+        # reprojection errors are sometimes resolved by
+        # simply calling to_crs a second time
+        try:
+            df.to_crs(grid.crs, inplace=True)
+        except Exception:
+            df.to_crs(grid.crs, inplace=True)
+        if not df['geometry'].is_valid.all():
+            df['geometry'] = [g.buffer(0) for g in df.geometry]
+        geoms_are_valid = df['geometry'].is_valid.all() & \
+            (not df.geometry.is_empty.any()) & \
+            np.isfinite(df.geometry.bounds.sum().sum())
+        if not geoms_are_valid:
+            raise ValueError('Something went wrong with reprojecting '
+                             f'the input features from\n{orig_crs}\nto\n{grid.crs}\n'
+                             'Check the input feature and model grid projections. '
+                             'If you are on a network that requires special '
+                             'SSL authentication, try running this operation '
+                             'again off-network.'
+                             )
+
+    # subset to include_ids
+    if id_column is not None and include_ids is not None:
+        df = df.loc[df[id_column].isin(include_ids)].copy()
+    if id_column is not None and exclude_ids is not None:
+        df = df.loc[~df[id_column].isin(exclude_ids)].copy()
+    # create list of GeoJSON features, with unique value for each feature
+    if id_column is None:
+        numbers = list(range(1, len(df) + 1))
+    # if IDs are strings, get a number for each one
+    # pd.DataFrame.unique() generally preserves order
+    elif df[id_column].dtype == object:
+        unique_values = df[id_column].unique()
+        values = dict(zip(unique_values, range(1, len(unique_values) + 1)))
+        numbers = [values[n] for n in df[id_column]]
+    else:
+        # enforce integers; very long NHDPlusIDs
+        # can cause trouble if they are in float64 format
+        numbers = df[id_column].values.astype('int64')
+        # add one if the lowest number is 0
+        # (zero indicates non-intersected raster cells)
+        if np.min(numbers) == 0:
+            numbers += 1
+        elif np.min(numbers) < 0:
+            raise ValueError("id_column must have positive integers!")
+        numbers = list(numbers)
+
+    geoms = list(zip(df.geometry, numbers))
+    result = features.rasterize(geoms,
+                                out_shape=(grid.nrow, grid.ncol),
+                                transform=trans, **kwargs)
+    assert result.sum(axis=(0, 1)) != 0, "Nothing was intersected!"
+    if names_column is not None:
+        names_lookup = dict(zip(numbers, df[names_column]))
+        result = [names_lookup.get(n, '') for n in result.flat]
+        result = np.reshape(result, (grid.nrow, grid.ncol))
+        result = result.astype(object)
+    return result
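+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): burn a hypothetical lakes shapefile onto the model grid. Cells
+# whose centers fall within a polygon get that polygon's id; others get 0.
+def _example_rasterize(modelgrid):
+    lakes = rasterize('lakes.shp', modelgrid, id_column='hydroid')
+    is_lake = lakes > 0
+    return lakes, is_lake
+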
+
+
+
+
+def snap_to_cell_corner(x, y, modelgrid, corner='nearest'):
+    """Move an x, y location to the nearest cell corner on
+    a rectilinear modelgrid.
+
+    Parameters
+    ----------
+    x : float
+        x coordinate, in the coordinate reference system of modelgrid.
+    y : float
+        y coordinate, in the coordinate reference system of modelgrid.
+    modelgrid : Flopy StructuredGrid instance
+    corner : str, optional
+        'upper left', 'lower right' or 'nearest', by default 'nearest'
+
+    Returns
+    -------
+    x_corner, y_corner
+        x, y location of cell corner in coordinate reference system
+        of modelgrid.
+
+    Raises
+    ------
+    ValueError
+        If x, y are outside of the model domain, or if an invalid
+        cell corner is specified.
+    """
+    if corner == 'nearest':
+        vx, vy, vz = modelgrid.xyzvertices
+        loc = np.argmin(np.sqrt((x - vx)**2 + (y - vy)**2))
+        x_corner, y_corner = vx.flat[loc], vy.flat[loc]
+        return x_corner, y_corner
+
+    x_model, y_model = modelgrid.get_local_coords(x, y)
+
+    # move away from the corner of a cell
+    # delr: column spacing along a row
+    # delc: row spacing along a column
+    # use .min() values of delr/delc because
+    # we may not be able to get the i, j location
+    # from Flopy without first backing the point away from the corner
+    # (if the x, y is initially very close to the cell corner)
+    if corner == 'upper left':
+        x_model += 1e-6  #(modelgrid.delr.min() * 0.25)
+        y_model -= 1e-6  #(modelgrid.delc.min() * 0.25)
+    elif corner == 'lower right':
+        x_model -= 1e-6  #(modelgrid.delr.min() * 0.25)
+        y_model += 1e-6  #(modelgrid.delc.min() * 0.25)
+    else:
+        raise ValueError("Only snapping to 'upper left' and "
+                         "'lower right' corners is supported")
+    # flip back to world coords
+    #x1, y1 = modelgrid.get_coords(x_model, y_model)
+    # get corresponding cell
+    pi, pj = modelgrid.intersect(x_model, y_model, local=True, forgive=True)
+    #pi, pj = modelgrid.intersect(x1, y1, forgive=True)
+    if any(np.isnan([pi, pj])):
+        raise ValueError(f"Point {x:.2f}, {y:.2f} "
+                         "is outside of the model domain!")
+    # find the vertices of that cell
+    verts = np.array(modelgrid.get_cell_vertices(pi, pj))
+    # flip to model space to easily locate the corner
+    verts_model_space = np.array([modelgrid.get_local_coords(xv, yv)
+                                  for xv, yv in verts])
+    if corner == 'upper left':
+        x_corner_model = verts_model_space[:, 0].min()
+        y_corner_model = verts_model_space[:, 1].max()
+    elif corner == 'lower right':
+        x_corner_model = verts_model_space[:, 0].max()
+        y_corner_model = verts_model_space[:, 1].min()
+    else:
+        raise ValueError("Only snapping to 'upper left' and "
+                         "'lower right' corners is supported")
+    # finally, back to world space
+    x_corner, y_corner = modelgrid.get_coords(x_corner_model, y_corner_model)
+    return x_corner, y_corner
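+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): snap a proposed inset grid corner onto a parent model grid,
+# so that the two grids align exactly. Inputs are hypothetical.
+def _example_snap_to_cell_corner(parent_modelgrid):
+    xul, yul = snap_to_cell_corner(552123.4, 390567.8, parent_modelgrid,
+                                   corner='upper left')
+    return xul, yul
+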
+
+
+
+
+def setup_structured_grid(xoff=None, yoff=None, xul=None, yul=None,
+                          nrow=None, ncol=None, nlay=None,
+                          dxy=None, delr=None, delc=None,
+                          top=None, botm=None,
+                          rotation=0.,
+                          parent_model=None, snap_to_parent=True, snap_to_NHG=False,
+                          features=None, features_shapefile=None,
+                          id_column=None, include_ids=None,
+                          buffer=1000,
+                          crs=None, epsg=None, prj=None, wkt=None,
+                          model_length_units=None,
+                          grid_file='grid.json',
+                          bbox_shapefile=None, **kwargs):
+    """Set up a structured model grid, either from an origin and specified
+    dimensions, or from a buffered area around features of interest.
+
+    Parameters
+    ----------
+    xoff, yoff : float, optional
+        x and y coordinates of the lower left corner of the grid,
+        in coordinate reference system (CRS) units, by default None
+    xul, yul : float, optional
+        x and y coordinates of the upper left corner of the grid,
+        in CRS units, by default None
+    nrow, ncol, nlay : int, optional
+        Number of rows, columns and layers, by default None
+    dxy : float, optional
+        Specified uniform row/column spacing, in model grid
+        (coordinate reference system) units, by default None
+    delr : scalar or sequence, optional
+        Column spacing along a row, in model grid
+        (coordinate reference system) units,
+        by default None
+    delc : scalar or sequence, optional
+        Row spacing along a column, in model grid
+        (coordinate reference system) units,
+        by default None
+    top : ndarray, optional
+        Model top elevations, by default None
+    botm : ndarray, optional
+        Model layer bottom elevations, by default None
+    rotation : float, optional
+        Rotation of the grid, in degrees counter-clockwise
+        about the lower left corner, by default 0.
+    parent_model : mfsetup model instance, optional
+        Parent model to align the grid with, by default None
+    snap_to_parent : bool, optional
+        Align the grid with the parent model grid, by default True
+    snap_to_NHG : bool, optional
+        Align the grid with the USGS National Hydrogeologic Grid,
+        by default False
+    features : shapely geometry or list of geometries, optional
+        Feature(s) to build the grid around, by default None
+    features_shapefile : str, optional
+        Shapefile of feature(s) to build the grid around, by default None
+    id_column : str, optional
+        Column in features_shapefile identifying the features, by default None
+    include_ids : sequence, optional
+        Feature identifiers (in id_column) to include, by default None
+    buffer : float, optional
+        Distance between the feature(s) of interest and the grid perimeter,
+        in CRS units, by default 1000
+    crs : obj, optional
+        Coordinate reference system input accepted by
+        :meth:`pyproj.crs.CRS.from_user_input`, by default None
+    epsg : int, optional
+        EPSG code for the grid CRS, by default None
+    prj : str, optional
+        ESRI projection file describing the grid CRS, by default None
+    wkt : str, optional
+        Well-known text string describing the grid CRS, by default None
+    model_length_units : str, optional
+        Model length units, by default None
+    grid_file : str, optional
+        JSON file to write the grid information to, by default 'grid.json'
+    bbox_shapefile : str, optional
+        Shapefile to write the grid bounding box to, by default None
+
+    Returns
+    -------
+    modelgrid : MFsetupGrid instance
+
+    Raises
+    ------
+    ValueError
+        If the supplied inputs are insufficient or inconsistent (for example,
+        an unprojected CRS, or no origin or features to define the grid extent).
+    """
+    print('setting up model grid...')
+    t0 = time.time()
+
+    if parent_model is None:
+        snap_to_parent = False
+    elif not np.allclose(parent_model.modelgrid.rotation, rotation):
+        snap_to_parent = False
+
+    # make sure crs is populated, then get CRS units for the grid
+    crs = get_crs(crs=crs, epsg=epsg, prj=prj, wkt=wkt)
+    if crs is None and parent_model is not None:
+        crs = parent_model.modelgrid.crs
+
+    grid_units = get_crs_length_units(crs)
+    if grid_units not in {'feet', 'meters'}:
+        raise ValueError(f'unrecognized CRS units {grid_units}: CRS must be projected in feet or meters')
+
+    # conversion from model length units
+    # to model grid (coordinate reference system) units
+    to_grid_units_inset = convert_length_units(model_length_units, grid_units)
+
+    regular = True
+    if dxy is not None:
+        delr_grid = np.round(dxy, 4)  # dxy is specified in CRS units
+        delc_grid = delr_grid
+    if delr is not None:
+        # delr is expected to be in model grid (CRS) units
+        delr_grid = np.round(np.array(delr), 4)
+        if not np.isscalar(delr_grid):
+            if len(set(delr_grid)) == 1:
+                delr_grid = delr_grid[0]
+            else:
+                regular = False
+    if delc is not None:
+        delc_grid = np.round(np.array(delc), 4)
+        if not np.isscalar(delc_grid):
+            if len(set(delc_grid)) == 1:
+                delc_grid = delc_grid[0]
+            else:
+                regular = False
+    if parent_model is not None and snap_to_parent:
+        to_grid_units_parent = convert_length_units(get_model_length_units(parent_model), grid_units)
+        # parent model grid spacing in meters
+        #parent_delr_grid = np.round(parent_model.dis.delr.array[0] * to_grid_units_parent, 4)
+        #if not parent_delr_grid % delr_grid % parent_delr_grid == 0:
+        #    raise ValueError('inset delr spacing of {} must be factor of parent spacing of {}'.format(delr_grid,
+        #                                                                                              parent_delr_grid))
+        #parent_delc_grid = np.round(parent_model.dis.delc.array[0] * to_grid_units_parent, 4)
+        #if not parent_delc_grid % delc_grid % parent_delc_grid == 0:
+        #    raise ValueError('inset delc spacing of {} must be factor of parent spacing of {}'.format(delc_grid,
+        #                                                                                              parent_delc_grid))
+
+    # option 1: make grid from xoff, yoff and specified dimensions
+    if xoff is not None and yoff is not None:
+        assert nrow is not None and ncol is not None, \
+            "Need to specify nrow and ncol if specifying xoffset and yoffset."
+        if regular:
+            height_grid = np.round(delc_grid * nrow, 4)
+            width_grid = np.round(delr_grid * ncol, 4)
+        else:
+            height_grid = np.sum(delc_grid)
+            width_grid = np.sum(delr_grid)
+
+        # optionally align grid with national hydrologic grid
+        # grids snapping to NHG must have spacings that are a factor of 1 km
+        if snap_to_NHG:
+            if rotation != 0:
+                raise ValueError(f'rotation = {rotation}: snap_to_NHG option '
+                                 'is only compatible with unrotated grids!')
+            if not (regular and np.allclose(1000 % delc_grid, 0, atol=1e-4)):
+                raise ValueError('snap_to_NHG option '
+                                 'is only compatible with uniformly spaced '
+                                 'structured grids!')
+            x, y = get_point_on_national_hydrogeologic_grid(xoff, yoff,
+                                                            offset='edge', op=np.floor)
+            xoff = x
+            yoff = y
+
+        # make a bounding box so that other important corners can be specified
+        lower_left_corner = Point(xoff, yoff)
+        unrotated_bbox = box(xoff, yoff, xoff + width_grid, yoff + height_grid)
+        # get the upper left corner
+        ur = shapely.affinity.rotate(Point(xoff, yoff + height_grid), rotation,
+                                     origin=lower_left_corner)
+        xul, yul = ur.x, ur.y
+
+    # option 2: make grid using buffered feature bounding box
+    else:
+        # read in the feature from a shapefile
+        if features is None and features_shapefile is not None:
+            bbox_filter = None
+            if parent_model is not None:
+                pmg_l, pmg_r, pmg_b, pmg_t = parent_model.modelgrid.extent
+                bbox_filter = gpd.GeoSeries(box(pmg_l, pmg_b, pmg_r, pmg_t),
+                                            crs=parent_model.modelgrid.crs)
+            df = gpd.read_file(features_shapefile, bbox=bbox_filter)
+            if id_column is not None and include_ids is not None:
+                datatype = set(type(s) for s in include_ids)
+                if len(datatype) > 1:
+                    raise ValueError(f"Inconsistent datatypes in include_ids: {include_ids}")
+                datatype = datatype.pop()
+                dtype = {id_column: datatype}
+                df = df.loc[df[id_column].astype(dtype).isin(include_ids)]
+            # inexplicable shapely.errors.GEOSException: IllegalArgumentException:
+            # Points of LinearRing do not form a closed linestring
+            # error resolved by calling to_crs twice
+            # (for mfsetup/tests/test_grid.py::test_grid_crs_units[3696-feet-meters])
+            try:
+                df.to_crs(crs, inplace=True)
+            except Exception:
+                df.to_crs(crs, inplace=True)
+            # use all features by default
+            features = df.geometry.tolist()
+        elif features is None and features_shapefile is None:
+            raise ValueError(
+                "setup_grid: need one of xoff/yoff, xul/yul, features_shapefile or "
+                "features inputs")
+        # alternatively, accept features as an argument
+        # convert multiple features to a MultiPolygon
+        if isinstance(features, list):
+            if len(features) > 1:
+                features = MultiPolygon(features)
+            else:
+                features = features[0]
+
+        # size the grid based on the bbox for features
+        # buffer and then unrotate the feature
+        buffered_features = features.buffer(buffer)
+        unrotated_features = shapely.affinity.rotate(buffered_features, -rotation,
+                                                     origin=buffered_features.centroid)
+        unrotated_bbox = box(*unrotated_features.bounds)
+
+        # Get the initial grid height and width
+        height_grid = np.round(unrotated_bbox.bounds[3] - unrotated_bbox.bounds[1])
+        width_grid = np.round(unrotated_bbox.bounds[2] - unrotated_bbox.bounds[0])
+        # initial rows and columns (prior to snapping, if specified)
+        nrow = int(np.ceil(height_grid / delc_grid))
+        ncol = int(np.ceil(width_grid / delr_grid))
+        # correct the height and width to be consistent with nrow, ncol
+        height_grid = nrow * delc_grid
+        width_grid = ncol * delr_grid
+        # make a new box with the corrected height
+        unrotated_bbox = box(unrotated_bbox.bounds[0], unrotated_bbox.bounds[1],
+                             unrotated_bbox.bounds[0] + width_grid,
+                             unrotated_bbox.bounds[1] + height_grid)
+        # Get important corners
+        # upper left corner
+        xul_ur, yul_ur = unrotated_bbox.bounds[0], unrotated_bbox.bounds[3]
+        ul = shapely.affinity.rotate(Point(xul_ur, yul_ur), rotation,
+                                     origin=buffered_features.centroid)
+        xul, yul = ul.x, ul.y
+        # lower left corner
+        xll_ur, yll_ur = unrotated_bbox.bounds[0], unrotated_bbox.bounds[1]
+        lower_left_corner = shapely.affinity.rotate(
+            Point(xll_ur, yll_ur), rotation, origin=buffered_features.centroid)
+        # lower right corner
+        xlr_ur, ylr_ur = unrotated_bbox.bounds[2], unrotated_bbox.bounds[1]
+        lower_right_corner = shapely.affinity.rotate(
+            Point(xlr_ur, ylr_ur), rotation, origin=buffered_features.centroid)
+        # xoff, yoff here for consistency with flopy model grid language
+        xoff, yoff = lower_left_corner.x, lower_left_corner.y
+
+    # align model with parent grid if there is a parent model
+    # (and not snapping to national hydrologic grid)
+    # for grids created from a buffer around a feature
+    # (without a pre-defined number of rows and columns)
+    # this likely means increasing nrow and ncol
+    if parent_model is not None and (snap_to_parent and not snap_to_NHG):
+
+        if features is not None:
+            # snap the upper left corner
+            # to ensure that grid perimeter is at least buffer distance from feature(s)
+            xul, yul = snap_to_cell_corner(xul, yul, parent_model.modelgrid,
+                                           corner='upper left')
+            ul_ur = shapely.affinity.rotate(Point(xul, yul),
+                                            -rotation,
+                                            origin=buffered_features.centroid)
+            # snap the lower right corner for the same reason
+            xlr, ylr = snap_to_cell_corner(lower_right_corner.x, lower_right_corner.y,
+                                           parent_model.modelgrid,
+                                           corner='lower right')
+            lr_ur = shapely.affinity.rotate(Point(xlr, ylr),
+                                            -rotation,
+                                            origin=buffered_features.centroid)
+            grid_height = ul_ur.y - lr_ur.y
+            grid_width = lr_ur.x - ul_ur.x
+            assert np.round(grid_height) % delc_grid == 0.
+            assert np.round(grid_width) % delr_grid == 0.
+            nrow = int(round(grid_height / delc_grid))
+            ncol = int(round(grid_width / delr_grid))
+
+            # get revised lower left corner (offset)
+            ll = shapely.affinity.rotate(Point(ul_ur.x, lr_ur.y),
+                                         rotation,
+                                         origin=buffered_features.centroid)
+            xoff, yoff = ll.x, ll.y
+
+        else:
+            xoff, yoff = snap_to_cell_corner(xoff, yoff, parent_model.modelgrid,
+                                             corner='nearest')
+            grid_height = unrotated_bbox.bounds[3] - unrotated_bbox.bounds[1]
+            xul_ur, yul_ur = xoff, yoff + grid_height
+            upper_left_corner = shapely.affinity.rotate(Point(xul_ur, yul_ur), rotation,
+                                                        origin=Point(xoff, yoff))
+            xul, yul = upper_left_corner.x, upper_left_corner.y
+
+    assert xoff is not None
+    # xoff = xul + (np.sin(np.radians(rotation)) * height_grid)
+    assert yoff is not None
+    # yoff = yul - (np.cos(np.radians(rotation)) * height_grid)
+    # check that the top left and bottom left corners are consistent with discretization
+    if np.isscalar(delr_grid):
+        pass  #assert np.allclose(np.sqrt((yul - yoff)**2 + (xul - xoff)**2),
+              #                   nrow * delc_grid)
+    else:
+        assert np.allclose(np.sqrt((yul - yoff)**2 + (xul - xoff)**2),
+                           delc_grid.sum())
+    # set the grid configuration dictionary
+    grid_cfg = {'nrow': int(nrow), 'ncol': int(ncol),
+                'nlay': nlay,
+                'delr': delr_grid, 'delc': delc_grid,
+                'xoff': xoff, 'yoff': yoff,
+                'xul': xul, 'yul': yul,
+                'rotation': rotation,
+                #'lenuni': 2,
+                'structured': True
+                }
+
+    if regular:
+        grid_cfg['delr'] = np.ones(grid_cfg['ncol'], dtype=float) * grid_cfg['delr']
+        grid_cfg['delc'] = np.ones(grid_cfg['nrow'], dtype=float) * grid_cfg['delc']
+    grid_cfg['delr'] = grid_cfg['delr'].tolist()  # for serializing to json
+    grid_cfg['delc'] = grid_cfg['delc'].tolist()
+
+    # renames for flopy modelgrid
+    renames = {'rotation': 'angrot'}
+    for k, v in renames.items():
+        if k in grid_cfg:
+            grid_cfg[v] = grid_cfg.pop(k)
+
+    # add epsg or wkt if there isn't an epsg
+    if crs is not None:
+        grid_cfg['crs'] = crs
+    elif epsg is not None:
+        grid_cfg['epsg'] = epsg
+    else:
+        warnings.warn("Coordinate Reference System information must be supplied via "
+                      "the 'crs' argument.")
+
+    # set up the model grid instance
+    grid_cfg['top'] = top
+    grid_cfg['botm'] = botm
+    grid_cfg.update(kwargs)  # update with any kwargs from function call
+    kwargs = get_input_arguments(grid_cfg, MFsetupGrid)
+    modelgrid = MFsetupGrid(**kwargs)
+    modelgrid.cfg = grid_cfg
+
+    # write grid info to json, and shapefile of bbox
+    # omit top and botm arrays from json representation of grid
+    # (which is just for the horizontal discretization)
+    del grid_cfg['top']
+    del grid_cfg['botm']
+
+    # crs needs to be cast to epsg or wkt to be serialized
+    if isinstance(crs, pyproj.CRS):
+        grid_cfg['epsg'] = grid_cfg['crs'].to_epsg()
+        if grid_cfg['epsg'] is None:
+            grid_cfg['wkt'] = grid_cfg['crs'].to_wkt()
+        del grid_cfg['crs']
+
+    fileio.dump(grid_file, grid_cfg)
+    if bbox_shapefile is not None:
+        write_bbox_shapefile(modelgrid, bbox_shapefile)
+    print("finished in {:.2f}s\n".format(time.time() - t0))
+    return modelgrid
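+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): make a uniform 100 m grid around a hypothetical shapefile of
+# lakes of interest, with a 1 km buffer between the features and the
+# grid perimeter (the shapefile name and feature id are hypothetical).
+def _example_setup_structured_grid():
+    modelgrid = setup_structured_grid(
+        features_shapefile='lakes.shp',
+        id_column='hydroid', include_ids=[600059060],
+        dxy=100., nlay=3, buffer=1000.,
+        crs=5070, model_length_units='meters',
+        grid_file='grid.json', bbox_shapefile='grid_bbox.shp')
+    return modelgrid
+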
+
+
+
+
+def get_cellface_midpoint(grid, k, i, j, direction):
+    """Return the midpoint of a vertical cell face within a structured grid.
+    For example, the midpoint for the right cell face is halfway between
+    the upper and lower right corners of the cell, halfway between the
+    top and bottom edges."""
+    if np.isscalar(k):
+        k = [k]
+    if np.isscalar(i):
+        i = [i]
+    if np.isscalar(j):
+        j = [j]
+    k = np.array(k).astype(int)
+    i = np.array(i).astype(int)
+    j = np.array(j).astype(int)
+    if isinstance(direction, str):
+        direction = [direction] * len(k)
+    x_edges_model = grid.xyedges[0]
+    x_centers_model = grid.xycenters[0]
+    y_edges_model = grid.xyedges[1]
+    y_centers_model = grid.xycenters[1]
+    model_x = []
+    model_y = []
+    for ii, jj, dn in zip(i, j, direction):
+        if dn == 'right':
+            x = x_edges_model[jj + 1]
+            y = y_centers_model[ii]
+        elif dn == 'left':
+            x = x_edges_model[jj]
+            y = y_centers_model[ii]
+        elif dn == 'top':
+            x = x_centers_model[jj]
+            y = y_edges_model[ii]
+        elif dn == 'bottom':
+            x = x_centers_model[jj]
+            y = y_edges_model[ii + 1]
+        else:
+            raise ValueError("direction needs to be right, left, top or bottom")
+        model_x.append(x)
+        model_y.append(y)
+    x, y = grid.get_coords(model_x, model_y)
+    z = grid.zcellcenters[k, i, j]
+    return x, y, z
+
+
+
+
+def get_intercell_connections(binary_grid_file):
+    """Get all of the connections between cells in a
+    MODFLOW 6 structured grid.
+
+    Parameters
+    ----------
+    binary_grid_file : str or pathlike
+        MODFLOW 6 binary grid file
+
+    Returns
+    -------
+    df : DataFrame
+        Intercell connections, with the following columns:
+
+        ==== =============================================================
+        n    from zero-based node number
+        kn   from zero-based layer
+        in   from zero-based row
+        jn   from zero-based column
+        m    to zero-based node number
+        km   to zero-based layer
+        im   to zero-based row
+        jm   to zero-based column
+        qidx index position of flow in cell budget file
+        ==== =============================================================
+    """
+    print('Getting intercell connections...')
+    ta = time.time()
+    bgf = MfGrdFile(binary_grid_file)
+    nrow = bgf.nrow
+    ncol = bgf.ncol
+    # the IA array maps cell number to connection number
+    # (one-based index number of first connection at each cell);
+    # taking the forward difference then yields nconnections per cell
+    ia = bgf._datadict['IA'] - 1
+    # Connections in the JA array correspond directly with the
+    # FLOW-JA-FACE record that is written to the budget file.
+    ja = bgf._datadict['JA'] - 1  # cell connections
+
+    all_n = []
+    m = []
+    qidx = []
+    for n in range(len(ia) - 1):
+        for ipos in range(ia[n] + 1, ia[n + 1]):
+            all_n.append(n)
+            m.append(ja[ipos])  # m is the cell that n connects to
+            qidx.append(ipos)
+    df = pd.DataFrame({'n': all_n, 'm': m, 'qidx': qidx})
+    k, i, j = get_kij_from_node3d(df['n'].values, nrow, ncol)
+    df['kn'], df['in'], df['jn'] = k, i, j
+    k, i, j = get_kij_from_node3d(df['m'].values, nrow, ncol)
+    df['km'], df['im'], df['jm'] = k, i, j
+    df.reset_index(drop=True, inplace=True)
+    print(f"Getting intercell connections took {time.time() - ta:.2f}s\n")
+    return df
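+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): tabulate the connections in a hypothetical binary grid file,
+# then pull just the vertical connections (same row and column).
+def _example_get_intercell_connections():
+    df = get_intercell_connections('model.dis.grb')
+    vertical = df.loc[(df['in'] == df['im']) & (df['jn'] == df['jm'])]
+    return vertical
+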
+
+
+
+
+def get_transform(modelgrid):
+    """Get a rasterio Affine object from a Flopy modelgrid
+    (same as transform attribute of rasters).
+    """
+    if not isinstance(modelgrid, StructuredGrid):
+        raise ValueError(
+            f"{type(modelgrid)}: Input needs to be a flopy.discretization.StructuredGrid")
+    x0 = modelgrid.xyedges[0][0]
+    y0 = modelgrid.xyedges[1][0]
+    xul, yul = modelgrid.get_coords(x0, y0)
+    return Affine(modelgrid.delr[0], 0., xul,
+                  0., -modelgrid.delc[0], yul) * \
+        Affine.rotation(-modelgrid.angrot)
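+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): an Affine transform maps (column, row) positions to x, y
+# coordinates, so model cells can be related to rasters directly.
+def _example_get_transform(modelgrid):
+    transform = get_transform(modelgrid)
+    x, y = transform * (0, 0)  # world coordinates of the upper left corner
+    return transform, x, y
+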
+def interp_weights(xyz, uvw, d=2, mask=None):
+    """Speed up interpolation vs scipy.interpolate.griddata (method='linear'),
+    by only computing the weights once:
+    https://stackoverflow.com/questions/20915502/speedup-scipy-griddata-for-multiple-interpolations-between-two-irregular-grids
+
+    Parameters
+    ----------
+    xyz : ndarray or tuple
+        x, y, z, ... locations of source data.
+        (shape n source points x ndims)
+    uvw : ndarray or tuple
+        x, y, z, ... locations of where source data will be interpolated
+        (shape n destination points x ndims)
+    d : int
+        Number of dimensions (2 for 2D, 3 for 3D, etc.)
+
+    Returns
+    -------
+    indices : ndarray of shape n destination points x 3
+        Index positions in flattened (1D) xyz array
+    weights : ndarray of shape n destination points x 3
+        Fractional weights for each row position
+        in indices. Weights in each row sum to 1
+        across the 3 columns.
+    """
+    print(f'Calculating {d}D interpolation weights...')
+    # convert input to ndarrays of the right shape
+    uvw = np.array(uvw)
+    if uvw.shape[-1] != d:
+        uvw = uvw.T
+    xyz = np.array(xyz)
+    if xyz.shape[-1] != d:
+        xyz = xyz.T
+    t0 = time.time()
+    tri = qhull.Delaunay(xyz)
+    simplex = tri.find_simplex(uvw)
+    vertices = np.take(tri.simplices, simplex, axis=0)
+    temp = np.take(tri.transform, simplex, axis=0)
+    delta = uvw - temp[:, d]
+    bary = np.einsum('njk,nk->nj', temp[:, :d, :], delta)
+    weights = np.hstack((bary, 1 - bary.sum(axis=1, keepdims=True)))
+    # round the weights,
+    # so that the weights for each simplex sum to 1
+    # sums not exactly == 1 seem to cause spurious values
+    weights = np.round(weights, 6)
+    print("finished in {:.2f}s\n".format(time.time() - t0))
+    return vertices, weights
+
+
+
+
+def interpolate(values, vtx, wts, fill_value='mean'):
+    """Apply the interpolation weights to a set of values.
+
+    Parameters
+    ----------
+    values : 1D array of length n source points (same as xyz in interp_weights)
+    vtx : indices returned by interp_weights
+    wts : weights returned by interp_weights
+    fill_value : float
+        Value used to fill in for requested points outside of the convex hull
+        of the input points (i.e., those with at least one negative weight).
+        By default 'mean', which fills with the mean of the interpolated values;
+        if None, no filling is done.
+
+    Returns
+    -------
+    interpolated values
+    """
+    result = np.einsum('nj,nj->n', np.take(values, vtx), wts)
+
+    # fill nans that might result from
+    # child grid points that are outside of the convex hull of the parent grid
+    # and for an unknown reason on the Appveyor Windows environment
+    if fill_value == 'mean':
+        fill_value = np.nanmean(result)
+    if fill_value is not None:
+        result[np.any(wts < 0, axis=1)] = fill_value
+    return result
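+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): compute the Delaunay weights once, then re-use them to
+# interpolate many arrays between the same two sets of points (for
+# example, successive stress periods of parent model heads). The grid
+# and head inputs are hypothetical.
+def _example_interp_weights_reuse(parent_grid, inset_grid, head_stack):
+    xyz = (parent_grid.xcellcenters.ravel(), parent_grid.ycellcenters.ravel())
+    uvw = (inset_grid.xcellcenters.ravel(), inset_grid.ycellcenters.ravel())
+    vtx, wts = interp_weights(xyz, uvw, d=2)
+    return [interpolate(heads.ravel(), vtx, wts) for heads in head_stack]
+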
+
+
+
+
+def regrid(arr, grid, grid2, mask1=None, mask2=None, method='linear'):
+    """Interpolate array values from one model grid to another,
+    using scipy.interpolate.griddata.
+
+    Parameters
+    ----------
+    arr : 2D numpy array
+        Source data
+    grid : flopy.discretization.StructuredGrid instance
+        Source grid
+    grid2 : flopy.discretization.StructuredGrid instance
+        Destination grid (to interpolate onto)
+    mask1 : boolean array
+        mask for source grid. Areas that are masked will be converted to
+        nans, and not included in the interpolation.
+    mask2 : boolean array
+        mask denoting active area for destination grid.
+        The mean value will be applied to inactive areas if linear interpolation
+        is used (not for integer/categorical arrays).
+    method : str
+        interpolation method ('nearest', 'linear', or 'cubic')
+    """
+    try:
+        from scipy.interpolate import griddata
+    except ImportError:
+        print('scipy not installed\ntry pip install scipy')
+        return None
+
+    arr = arr.copy()
+    # only include points specified by mask
+    x, y = grid.xcellcenters, grid.ycellcenters
+    if mask1 is not None:
+        mask1 = mask1.astype(bool)
+        arr = arr[mask1]
+        x = x[mask1]
+        y = y[mask1]
+
+    points = np.array([x.ravel(), y.ravel()]).transpose()
+
+    arr2 = griddata(points, arr.flatten(),
+                    (grid2.xcellcenters, grid2.ycellcenters),
+                    method=method, fill_value=np.nan)
+
+    # fill any areas that are nan
+    # (new active area includes some areas not in uwsp model)
+    fill = np.isnan(arr2)
+
+    # if new active area is supplied, fill areas outside of that too
+    if mask2 is not None:
+        mask2 = mask2.astype(bool)
+        fill = ~mask2 | fill
+
+    # only fill with mean value if linear interpolation used
+    # (floating point arrays)
+    if method == 'linear':
+        arr2[fill] = np.nanmean(arr2[~fill])
+    #else:
+    #    arr2[fill] = nodataval
+    return arr2
+
+
+
+
+def regrid3d(arr, grid, grid2, mask1=None, mask2=None, method='linear'):
+    """Interpolate array values from one model grid to another,
+    using scipy.interpolate.griddata.
+
+    Parameters
+    ----------
+    arr : 3D numpy array
+        Source data
+    grid : flopy.discretization.StructuredGrid instance
+        Source grid
+    grid2 : flopy.discretization.StructuredGrid instance
+        Destination grid (to interpolate onto)
+    mask1 : boolean array
+        mask for source grid. Areas that are masked will be converted to
+        nans, and not included in the interpolation.
+    mask2 : boolean array
+        mask denoting active area for destination grid.
+        The mean value will be applied to inactive areas if linear interpolation
+        is used (not for integer/categorical arrays).
+    method : str
+        interpolation method ('nearest', 'linear', or 'cubic')
+
+    Returns
+    -------
+    arr : 3D numpy array
+        Interpolated values at the x, y, z locations in grid2.
+    """
+    try:
+        from scipy.interpolate import griddata
+    except ImportError:
+        print('scipy not installed\ntry pip install scipy')
+        return None
+
+    assert len(arr.shape) == 3, "input array must be 3d"
+    if grid2.botm is None:
+        raise ValueError('regrid3d: grid2.botm is None; grid2 must have cell bottom elevations')
+
+    # source model grid points
+    px, py, pz = grid.xyzcellcenters
+
+    # pad z cell centers to avoid nans
+    # from dest cells that are above or below source cells
+    # pad top by top layer thickness
+    b1 = grid.top - grid.botm[0]
+    top = pz[0] + b1
+    # pad botm by botm layer thickness
+    if grid.shape[0] > 1:
+        b2 = -np.diff(grid.botm[-2:], axis=0)[0]
+    else:
+        b2 = b1
+    botm = pz[-1] - b2
+    pz = np.vstack([[top], pz, [botm]])
+    nlay, nrow, ncol = pz.shape
+    px = np.tile(px, (nlay, 1, 1))
+    py = np.tile(py, (nlay, 1, 1))
+
+    # pad the source array (and mask) on the top and bottom
+    # so that dest cells above and below the top/bottom cell centers
+    # will be within the interpolation space
+    # (source x, y, z locations already contain this pad)
+    arr = np.pad(arr, pad_width=1, mode='edge')[:, 1:-1, 1:-1]
+
+    # apply the mask
+    if mask1 is not None:
+        mask1 = mask1.astype(bool)
+        # tile the mask to nlay x nrow x ncol
+        if len(mask1.shape) == 2:
+            mask1 = np.tile(mask1, (nlay, 1, 1))
+        # pad the mask vertically to match the source array
+        elif (len(mask1.shape) == 3) and (mask1.shape[0] == (nlay - 2)):
+            mask1 = np.pad(mask1, pad_width=1, mode='edge')[:, 1:-1, 1:-1]
+        arr = arr[mask1]
+        px = px[mask1]
+        py = py[mask1]
+        pz = pz[mask1]
+    else:
+        px = px.ravel()
+        py = py.ravel()
+        pz = pz.ravel()
+        arr = arr.ravel()
+
+    # dest modelgrid points
+    x, y, z = grid2.xyzcellcenters
+    nlay, nrow, ncol = z.shape
+    x = np.tile(x, (nlay, 1, 1))
+    y = np.tile(y, (nlay, 1, 1))
+
+    # interpolate inset boundary heads from 3D parent head solution
+    arr2 = griddata((px, py, pz), arr,
+                    (x, y, z), method='linear')
+    # get the locations of any bad values
+    bk, bi, bj = np.where(np.isnan(arr2))
+    bx = x[bk, bi, bj]
+    by = y[bk, bi, bj]
+    bz = z[bk, bi, bj]
+    # tweak the result slightly to resolve any apparent triangulation errors
+    fixed = griddata((px, py, pz), arr,
+                     (bx + 0.0001, by + 0.0001, bz + 0.0001), method='linear')
+    arr2[bk, bi, bj] = fixed
+
+    # fill any remaining areas that are nan
+    # (new active area includes some areas not in uwsp model)
+    fill = np.isnan(arr2)
+
+    # if new active area is supplied, fill areas outside of that too
+    if mask2 is not None:
+        mask2 = mask2.astype(bool)
+        fill = ~mask2 | fill
+
+    # only fill with mean value if linear interpolation used
+    # (floating point arrays)
+    if method == 'linear':
+        arr2[fill] = np.nanmean(arr2[~fill])
+    return arr2
+
+
+
+
+class Interpolator:
+    """Speed up barycentric interpolation similar to scipy.interpolate.griddata
+    (method='linear'), by computing the weights once and then re-using them for
+    successive interpolation with the same source and destination points.
+
+    Parameters
+    ----------
+    xyz : ndarray or tuple
+        x, y, z, ... locations of source data.
+        (shape n source points x ndims)
+    uvw : ndarray or tuple
+        x, y, z, ... locations of where source data will be interpolated
+        (shape n destination points x ndims)
+    d : int
+        Number of dimensions (2 for 2D, 3 for 3D, etc.)
+    source_values_mask : boolean array
+        Boolean array of same structure as the `source_values` array
+        input to the :meth:`~mfsetup.interpolate.Interpolator.interpolate` method,
+        with the same number of active values as the size of `xyz`.
+
+    Notes
+    -----
+    The methods employed are based on this Stack Overflow post:
+    https://stackoverflow.com/questions/20915502/speedup-scipy-griddata-for-multiple-interpolations-between-two-irregular-grids
+
+    """
+    def __init__(self, xyz, uvw, d=2, source_values_mask=None):
+
+        self.xyz = xyz
+        self.uvw = uvw
+        self.d = d
+
+        # properties
+        self._interp_weights = None
+        self._source_values_mask = None
+        self.source_values_mask = source_values_mask
+
+    @property
+    def interp_weights(self):
+        """Calculate the interpolation weights."""
+        if self._interp_weights is None:
+            self._interp_weights = interp_weights(self.xyz, self.uvw, self.d)
+        return self._interp_weights
+
+    @property
+    def source_values_mask(self):
+        return self._source_values_mask
+
+    @source_values_mask.setter
+    def source_values_mask(self, source_values_mask):
+        if source_values_mask is not None and \
+                np.sum(source_values_mask) != len(self.xyz[0]):
+            raise ValueError('source_values_mask must contain the same number '
+                             'of True (active) values as there are source (xyz) points')
+        self._source_values_mask = source_values_mask
+
+
+    def interpolate(self, source_values, method='linear'):
+        """Interpolate values in source_values to the
+        destination points in the *uvw* attribute.
+
+        Parameters
+        ----------
+        source_values : ndarray
+            Values to be interpolated to destination points. Array must be the same size as
+            the number of source points, or the number of active points within source points,
+            as defined by the `source_values_mask` array input to the :class:`~mfsetup.interpolate.Interpolator`.
+        method : str ('linear', 'nearest')
+            Interpolation method. With 'linear' a triangular mesh is discretized around
+            the source points, and barycentric weights representing the influence of the *d* +1
+            source points on each destination point (where *d* is the number of dimensions),
+            are computed. With 'nearest', the input is simply passed to :meth:`scipy.interpolate.griddata`.
+
+        Returns
+        -------
+        interpolated : 1D numpy array
+            Array of interpolated values at the destination locations.
+        """
+        if self.source_values_mask is not None:
+            source_values = source_values.flatten()[self.source_values_mask.flatten()]
+        if method == 'linear':
+            interpolated = interpolate(source_values, *self.interp_weights,
+                                       fill_value=None)
+        elif method == 'nearest':
+            interpolated = griddata(self.xyz, source_values,
+                                    self.uvw, method=method)
+        return interpolated
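+
+
+# Illustrative usage sketch (editor's addition, not part of the original
+# module): the class version of the same pattern; weights are computed
+# lazily on the first interpolate() call and cached for later calls.
+# The point sets and arrays are hypothetical.
+def _example_interpolator(xyz, uvw, arrays):
+    interp = Interpolator(xyz, uvw, d=2)
+    return [interp.interpolate(a, method='linear') for a in arrays]
+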
+
+
+
+
+if __name__ == '__main__':
+    """Example from Stack Overflow. In this example, both
+    xyz and uvw have points in 3 dimensions. (npoints x ndim)"""
+    m, n, d = int(3.5e4), int(3e3), 3
+    # make sure no new grid point is extrapolated
+    bounding_cube = np.array(list(itertools.product([0, 1], repeat=d)))
+    xyz = np.vstack((bounding_cube,
+                     np.random.rand(int(m - len(bounding_cube)), d)))
+    f = np.random.rand(m)
+    g = np.random.rand(m)
+    uvw = np.random.rand(n, d)
+
+    vtx, wts = interp_weights(xyz, uvw, d)
+
+    np.allclose(interpolate(f, vtx, wts), griddata(xyz, f, uvw))
+
+class MF6model(MFsetupMixin, mf6.ModflowGwf):
+    """Class representing a MODFLOW-6 model.
+    """
+    default_file = 'mf6_defaults.yml'
+
+    def __init__(self, simulation=None, modelname='model', parent=None, cfg=None,
+                 exe_name='mf6', load=False,
+                 version='mf6', lgr=False, **kwargs):
+        defaults = {'simulation': simulation,
+                    'parent': parent,
+                    'modelname': modelname,
+                    'exe_name': exe_name,
+                    'version': version,
+                    'lgr': lgr}
+        # load the configuration, if supplied
+        if cfg is not None:
+            if not isinstance(cfg, dict):
+                cfg = self.load_cfg(cfg)
+            cfg = self._parse_model_kwargs(cfg)
+            defaults.update(cfg['model'])
+            kwargs = {k: v for k, v in kwargs.items() if k not in defaults}
+        # otherwise, pass the arguments on to the flopy constructor
+        args = get_input_arguments(defaults, mf6.ModflowGwf,
+                                   exclude='packages')
+        mf6.ModflowGwf.__init__(self, **args, **kwargs)
+        MFsetupMixin.__init__(self, parent=parent)
+
+        self._is_lgr = lgr
+        self._package_setup_order = ['tdis', 'dis', 'ic', 'npf', 'sto', 'rch', 'oc',
+                                     'chd', 'drn', 'ghb', 'sfr', 'lak', 'riv',
+                                     'wel', 'maw', 'obs']
+        # set up the model configuration dictionary,
+        # starting with the defaults
+        self.cfg = load_config(self.source_path / self.default_file)
+        self.relative_external_paths = self.cfg.get('model', {}).get('relative_external_paths', True)
+        # set the model workspace and change the working directory to there
+        self.model_ws = self._get_model_ws(cfg=cfg)
+        # update the defaults with the user-specified config. (loaded above);
+        # set up and validate the model configuration dictionary
+        self._load = load  # whether the model is being created or loaded
+        self._set_cfg(cfg)
+
+        # property attributes
+        self._idomain = None
+
+        # other attributes
+        self._features = {}  # dictionary for caching shapefile datasets in memory
+        self._drop_thin_cells = self.cfg.get('dis', {}).get('drop_thin_cells', True)
+
+        # arrays remade during this session
+        self.updated_arrays = set()
+
+        # delete the temporary 'original-files' folder
+        # if it already exists, to avoid side effects from stale files
+        if not self._is_lgr:
+            shutil.rmtree(self.tmpdir, ignore_errors=True)
+
+    def __repr__(self):
+        return MFsetupMixin.__repr__(self)
+
+    def __str__(self):
+        return MFsetupMixin.__repr__(self)
+
+    @property
+    def nlay(self):
+        return self.cfg['dis']['dimensions'].get('nlay', 1)
+
+    @property
+    def length_units(self):
+        return self.cfg['dis']['options']['length_units']
+
+    @property
+    def time_units(self):
+        return self.cfg['tdis']['options']['time_units']
+
+
+    @property
+    def perioddata(self):
+        """DataFrame summarizing stress period information.
+
+        Columns:
+
+        ============== =========================================
+        start_datetime Start date of each model stress period
+        end_datetime   End date of each model stress period
+        time           MODFLOW elapsed time, in days
+        per            Model stress period number
+        perlen         Stress period length (days)
+        nstp           Number of timesteps in stress period
+        tsmult         Timestep multiplier
+        steady         Steady-state or transient
+        oc             Output control setting for MODFLOW
+        parent_sp      Corresponding parent model stress period
+        ============== =========================================
+        """
+        if self._perioddata is None:
+            # check first for already loaded time discretization info
+            try:
+                tdis_perioddata_config = {col: getattr(self.modeltime, col)
+                                          for col in ['perlen', 'nstp', 'tsmult']}
+                nper = self.modeltime.nper
+                steady = self.modeltime.steady_state
+                default_start_datetime = self.modeltime.start_datetime
+            except:
+                tdis_perioddata_config = self.cfg['tdis']['perioddata']
+                default_start_datetime = self.cfg['tdis']['options'].get('start_date_time',
+                                                                         '1970-01-01')
+                nper = self.cfg['tdis']['dimensions'].get('nper')
+                # steady can be input in either the tdis or sto input blocks
+                steady = self.cfg['tdis'].get('steady')
+                if steady is None:
+                    steady = self.cfg['sto'].get('steady')
+
+            parent_stress_periods = self.cfg.get('parent').get('copy_stress_periods')
+            perioddata = setup_perioddata(
+                self,
+                tdis_perioddata_config=tdis_perioddata_config,
+                default_start_datetime=default_start_datetime,
+                nper=nper, steady=steady,
+                time_units=self.time_units,
+                parent_model=self.parent,
+                parent_stress_periods=parent_stress_periods,
+            )
+            self._perioddata = perioddata
+            # reset the nper property so that it will reference the perioddata table
+            self._nper = None
+            self._perioddata.to_csv(f'{self._tables_path}/stress_period_data.csv', index=False)
+            # update the model configuration
+            if 'parent_sp' in perioddata.columns:
+                self.cfg['parent']['copy_stress_periods'] = perioddata['parent_sp'].tolist()
+
+        return self._perioddata
+
+    @property
+    def idomain(self):
+        """3D array indicating which cells will be included in the simulation.
+        Made a property so that it can be easily updated when any packages
+        it depends on change.
+        """
+        if self._idomain is None and 'DIS' in self.get_package_list():
+            self._set_idomain()
+        return self._idomain
+
+    def _set_idomain(self):
+        """Remake the idomain array from the source data,
+        accounting for nodata values in the top and botm arrays, and
+        so that cells above SFR reaches are inactive.
+
+        Also remakes irch for the recharge package."""
+        print('(re)setting the idomain array...')
+        # loop through the LGR models and inactivate the area of the parent grid for each one
+        lgr_idomain = np.ones(self.dis.idomain.array.shape, dtype=int)
+        if isinstance(self.lgr, dict):
+            for k, v in self.lgr.items():
+                lgr_idomain[v.idomain == 0] = 0
+        self._lgr_idomain2d = lgr_idomain[0]
+        idomain_from_layer_elevations = make_idomain(self.dis.top.array,
+                                                     self.dis.botm.array,
+                                                     nodata=self._nodata_value,
+                                                     minimum_layer_thickness=self.cfg['dis'].get('minimum_layer_thickness', 1),
+                                                     drop_thin_cells=self._drop_thin_cells,
+                                                     tol=1e-4)
+        # include cells that are active in the existing idomain array
+        # and cells inactivated on the basis of layer elevations
+        idomain = (self.dis.idomain.array >= 1) & \
+                  (idomain_from_layer_elevations >= 1) & \
+                  (lgr_idomain >= 1)
+        idomain = idomain.astype(int)
+
+        # remove cells that are above stream cells
+        if self.get_package('sfr') is not None:
+            idomain = deactivate_idomain_above(idomain, self.sfr.packagedata)
+
+        # inactivate any isolated cells that could cause problems with the solution
+        idomain = find_remove_isolated_cells(idomain, minimum_cluster_size=20)
+
+        # create pass-through cells in inactive cells that have an active cell above and below,
+        # by setting those cells to -1
+        idomain = create_vertical_pass_through_cells(idomain)
+
+        self._idomain = idomain
+
+        # take the updated idomain array and set cells != 1 to np.nan in the layer botm array,
+        # including lake cells;
+        # the effect is that the layer thicknesses in these cells will be set to zero.
+        # fill_cells_vertically will be run in the setup_array routine
+        # to collapse the nan cells to zero thickness
+        # (assigning their layer botm to the next valid layer botm above)
+        botm = self.dis.botm.array.copy()
+        botm[(idomain != 1)] = np.nan
+
+        # re-write the input files
+        # todo: integrate this better with setup_dis
+        # to reduce the number of times the arrays need to be remade
+        self._setup_array('dis', 'botm',
+                          data={i: arr for i, arr in enumerate(botm)},
+                          datatype='array3d', resample_method='linear',
+                          write_fmt='%.2f', dtype=float)
+        self.dis.botm = self.cfg['dis']['griddata']['botm']
+        self._setup_array('dis', 'idomain',
+                          data={i: arr for i, arr in enumerate(idomain)},
+                          datatype='array3d', resample_method='nearest',
+                          write_fmt='%d', dtype=int)
+        self.dis.idomain = self.cfg['dis']['griddata']['idomain']
+        self._mg_resync = False
+        self.setup_grid()  # reset the model grid
+
+        # rebuild irch to keep it in sync with the idomain changes
+        irch = make_irch(idomain)
+        self._setup_array('rch', 'irch',
+                          data={0: irch},
+                          datatype='array2d',
+                          write_fmt='%d', dtype=int)
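+
+        # Note (added for clarity): in the resulting idomain array, 1 marks
+        # an active cell, 0 an inactive cell, and -1 a vertical pass-through
+        # cell (inactive, but connecting the active cells above and below).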
+
+    def _update_grid_configuration_with_dis(self):
+        """Update the grid configuration with any information supplied to the DIS package
+        (so that settings specified for the DIS package have priority). This method
+        is called by MFsetupMixin.setup_grid.
+        """
+        for param in ['nlay', 'nrow', 'ncol']:
+            if param in self.cfg['dis']['dimensions']:
+                self.cfg['setup_grid'][param] = self.cfg['dis']['dimensions'][param]
+        for param in ['delr', 'delc']:
+            if param in self.cfg['dis']['griddata']:
+                self.cfg['setup_grid'][param] = self.cfg['dis']['griddata'][param]
+
+
+    def get_flopy_external_file_input(self, var):
+        """Repath intermediate external file input to the
+        external file path that MODFLOW will use. Copy the
+        file because MF6 flopy reads and writes to the same location.
+
+        Parameters
+        ----------
+        var : str
+            key in the self.cfg['intermediate_data'] dict
+
+        Returns
+        -------
+        input : dict or list of dicts
+            MODFLOW 6 external file input format
+            {'filename': <filename>}
+        """
+        pass
+    def get_package_list(self):
+        """Replicate this method in flopy.modflow.Modflow.
+        """
+        # TODO: this should reference the namfile dict
+        return [p.name[0].upper() for p in self.packagelist]
+
+
+
+    def get_raster_values_at_cell_centers(self, raster, out_of_bounds_errors='coerce'):
+        """Sample raster values at the centroids
+        of the model grid cells."""
+        values = get_values_at_points(raster,
+                                      x=self.modelgrid.xcellcenters.ravel(),
+                                      y=self.modelgrid.ycellcenters.ravel(),
+                                      points_crs=self.modelgrid.crs,
+                                      out_of_bounds_errors=out_of_bounds_errors)
+        if self.modelgrid.grid_type == 'structured':
+            values = np.reshape(values, (self.nrow, self.ncol))
+        return values
+
+
+
+    def get_raster_statistics_for_cells(self, top, stat='mean'):
+        """Compute zonal statistics for raster pixels within
+        each model cell.
+        """
+        raise NotImplementedError()
+
+
+    def create_lgr_models(self):
+        for k, v in self.cfg['setup_grid']['lgr'].items():
+            # load the config file for the LGR inset model
+            if 'filename' in v:
+                inset_cfg = load_cfg(v['filename'],
+                                     default_file='mf6_defaults.yml')
+            elif 'cfg' in v:
+                inset_cfg = copy.deepcopy(v['cfg'])
+            else:
+                raise ValueError('Unrecognized input in the lgr: subblock. '
+                                 'Supply either a configuration filename: '
+                                 'or additional yaml configuration under cfg:'
+                                 )
+            # if the LGR inset has already been created
+            if inset_cfg['model']['modelname'] in self.simulation._models:
+                return
+            inset_cfg['model']['simulation'] = self.simulation
+            if 'ims' in inset_cfg['model']['packages']:
+                inset_cfg['model']['packages'].remove('ims')
+            # set the parent configuration dictionary here
+            # (even though the parent model is explicitly set below)
+            # so that the LGR grid is snapped to the parent grid
+            inset_cfg['parent'] = {'namefile': self.namefile,
+                                   'model_ws': self.model_ws,
+                                   'version': 'mf6',
+                                   'hiKlakes_value': self.cfg['model']['hiKlakes_value'],
+                                   'default_source_data': True,
+                                   'length_units': self.length_units,
+                                   'time_units': self.time_units
+                                   }
+            inset_cfg = MF6model._parse_model_kwargs(inset_cfg)
+            kwargs = get_input_arguments(inset_cfg['model'], mf6.ModflowGwf,
+                                         exclude='packages')
+            kwargs['parent'] = self  # otherwise flopy will try to load the parent model
+            inset_model = MF6model(cfg=inset_cfg, lgr=True, load=self._load, **kwargs)
+            inset_model.setup_grid()
+            del inset_model.cfg['ims']
+            inset_model.cfg['tdis'] = self.cfg['tdis']
+            if self.inset is None:
+                self.inset = {}
+                self.lgr = {}
+
+            self.inset[inset_model.name] = inset_model
+
+            # establish the inset model layering within the parent model
+            parent_start_layer = v.get('parent_start_layer', 0)
+            # parent_end_layer is specified as the last zero-based
+            # parent layer that includes LGR refinement (not as a slice end)
+            parent_end_layer = v.get('parent_end_layer', self.nlay - 1)
+            # the layer refinement can be specified as an int, a list or a dict
+            ncppl_input = v.get('layer_refinement', 1)
+            if np.isscalar(ncppl_input):
+                ncppl = np.array([0] * self.modelgrid.nlay)
+                ncppl[parent_start_layer:parent_end_layer + 1] = ncppl_input
+            elif isinstance(ncppl_input, list):
+                if not len(ncppl_input) == self.modelgrid.nlay:
+                    raise ValueError(
+                        "Configuration input: layer_refinement specified as "
+                        "a list must include a value for every layer."
+                    )
+                ncppl = ncppl_input.copy()
+            elif isinstance(ncppl_input, dict):
+                ncppl = [ncppl_input.get(i, 0) for i in range(self.modelgrid.nlay)]
+            else:
+                raise ValueError("Configuration input: Unsupported input for "
+                                 "layer_refinement: supply an int, list or dict.")
+
+            # refined layers must be consecutive, starting from layer 1
+            is_refined = (np.array(ncppl) > 0).astype(int)
+            last_refined_layer = max(np.where(is_refined > 0)[0])
+            consecutive = all(np.diff(is_refined)[:last_refined_layer] == 0)
+            if (is_refined[0] != 1) | (not consecutive):
+                raise ValueError("Configuration input: layer_refinement must "
+                                 "include a consecutive sequence of layers, "
+                                 "starting with the top layer.")
+            # check that the specified DIS package input is consistent
+            # with the specified layer_refinement
+            specified_nlay_dis = inset_cfg['dis']['dimensions'].get('nlay')
+            # skip this check if nlay hasn't been entered into the configuration file yet
+            if specified_nlay_dis and (np.sum(ncppl) != specified_nlay_dis):
+                raise ValueError(
+                    f"Configuration input: layer_refinement of {ncppl} "
+                    f"implies {np.sum(ncppl)} inset model layers.\n"
+                    f"{specified_nlay_dis} inset model layers specified in the DIS package.")
+            # mapping between parent and inset model layers
+            # that is used for copying input from the parent model
+            inset_parent_layer_mapping = dict()
+            inset_k = -1
+            for parent_k, n_inset_lay in enumerate(ncppl):
+                for i in range(n_inset_lay):
+                    inset_k += 1
+                    inset_parent_layer_mapping[inset_k] = parent_k
+            self.inset[inset_model.name].cfg['parent']['inset_layer_mapping'] = \
+                inset_parent_layer_mapping
+            # create an idomain indicating the area of the parent grid that is LGR
+            lgr_idomain = make_lgr_idomain(self.modelgrid, self.inset[inset_model.name].modelgrid,
+                                           ncppl)
+
+            # inset model horizontal refinement from the parent resolution
+            refinement = self.modelgrid.delr[0] / self.inset[inset_model.name].modelgrid.delr[0]
+            if not np.round(refinement, 4).is_integer():
+                raise ValueError("LGR inset model spacing must be a factor of the parent model spacing.")
+            ncpp = int(refinement)
+            self.lgr[inset_model.name] = Lgr(self.nlay, self.nrow, self.ncol,
+                                             self.dis.delr.array, self.dis.delc.array,
+                                             self.dis.top.array, self.dis.botm.array,
+                                             lgr_idomain, ncpp, ncppl)
+            inset_model._perioddata = self.perioddata
+            # set the parent model top in the LGR area to the bottom of the LGR area.
+            # this is an initial draft;
+            # the bottom elevations are readjusted in sourcedata.py
+            # when the inset model DIS package botm array is set up
+            # (set to the mean of the inset model bottom elevations
+            # within each parent cell)
+            # number of layers in the parent model with LGR
+            n_parent_lgr_layers = np.sum(np.array(ncppl) > 0)
+            lgr_area = self.lgr[inset_model.name].idomain == 0
+            self.dis.top[lgr_area[0]] = \
+                self.lgr[inset_model.name].botmp[n_parent_lgr_layers - 1][lgr_area[0]]
+            # set the parent model layers in the LGR area to zero thickness
+            new_parent_botm = self.dis.botm.array.copy()
+            for k in range(n_parent_lgr_layers):
+                new_parent_botm[k][lgr_area[0]] = self.dis.top[lgr_area[0]]
+            self.dis.botm = new_parent_botm
+            self._update_top_botm_external_files()
+
+
+    def _update_top_botm_external_files(self):
+        """Update the external files after assigning new elevations to the
+        Discretization Package top and botm arrays; adjust idomain as needed."""
+        # reset the model top
+        # (this step may not be needed if the "original top" functionality
+        # is limited to cases where there is a lake package,
+        # or if the "original top"/"lake bathymetry" functionality is eliminated
+        # and we instead require the top to be pre-processed)
+        original_top_file = Path(self.external_path,
+                                 f"{self.name}_{self.cfg['dis']['top_filename_fmt']}.original")
+        original_top_file.unlink(missing_ok=True)
+        self._setup_array('dis', 'top',
+                          data={0: self.dis.top.array},
+                          datatype='array2d', resample_method='linear',
+                          write_fmt='%.2f', dtype=float)
+        # _set_idomain() regenerates the external files for the botm array
+        self._set_idomain()
+
+
+    def setup_lgr_exchanges(self):
+
+        for inset_name, inset_model in self.inset.items():
+
+            # update the cell information for computing any bottom exchanges
+            self.lgr[inset_name].top = inset_model.dis.top.array
+            self.lgr[inset_name].botm = inset_model.dis.botm.array
+            # update only the layers of the parent model below the child model
+            parent_top_below_child = np.sum(self.lgr[inset_name].ncppl > 0) - 1
+            self.lgr[inset_name].botmp[parent_top_below_child:] = \
+                self.dis.botm.array[parent_top_below_child:]
+
+            # get the exchange data
+            exchangelist = self.lgr[inset_name].get_exchange_data(angldegx=True, cdist=True)
+
+            # make a dataframe for concise unpacking of the cellids
+            columns = ['cellidm1', 'cellidm2', 'ihc', 'cl1', 'cl2', 'hwva', 'angldegx', 'cdist']
+            exchangedf = pd.DataFrame(exchangelist, columns=columns)
+
+            # unpack the cellids and get their respective idomain values
+            k1, i1, j1 = zip(*exchangedf['cellidm1'])
+            k2, i2, j2 = zip(*exchangedf['cellidm2'])
+            # limit the connections to active cells
+            active1 = self.idomain[k1, i1, j1] >= 1
+            active2 = inset_model.idomain[k2, i2, j2] >= 1
+
+            # screen out connections involving an inactive cell
+            active_connections = active1 & active2
+            nexg = active_connections.sum()
+            active_exchangelist = [l for i, l in enumerate(exchangelist) if active_connections[i]]
+
+            # arguments to ModflowGwfgwf
+            kwargs = {'exgtype': 'gwf6-gwf6',
+                      'exgmnamea': self.name,
+                      'exgmnameb': inset_name,
+                      'nexg': nexg,
+                      'auxiliary': [('angldegx', 'cdist')],
+                      'exchangedata': active_exchangelist
+                      }
+            kwargs = get_input_arguments(kwargs, mf6.ModflowGwfgwf)
+
+            # set up the exchange package
+            gwfgwf = mf6.ModflowGwfgwf(self.simulation, **kwargs)
+
+            # set up a Mover Package if needed
+            self.setup_simulation_mover(gwfgwf)
+
+
+    def setup_dis(self, **kwargs):
+        """Set up the DIS package.
+        """
+        package = 'dis'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+
+        # resample the top from the DEM
+        if self.cfg['dis']['remake_top']:
+            self._setup_array(package, 'top', datatype='array2d',
+                              resample_method='linear',
+                              write_fmt='%.2f')
+
+        # make the botm array
+        self._setup_array(package, 'botm', datatype='array3d',
+                          resample_method='linear',
+                          write_fmt='%.2f')
+
+        # set the number of layers to the length of the created botm array.
+        # this needs to be set prior to setting up the idomain,
+        # otherwise idomain may have the wrong number of layers
+        self.cfg['dis']['dimensions']['nlay'] = len(self.cfg['dis']['griddata']['botm'])
+
+        # initial idomain input for creating a dis package instance
+        self._setup_array(package, 'idomain', datatype='array3d', write_fmt='%d',
+                          resample_method='nearest',
+                          dtype=int)
+
+        # put together the keyword arguments for the dis package
+        kwargs = self.cfg['grid'].copy()  # nrow, ncol, delr, delc
+        kwargs.update(self.cfg['dis'])
+        kwargs.update(self.cfg['dis']['dimensions'])  # nper, nlay, etc.
+        kwargs.update(self.cfg['dis']['griddata'])
+
+        # rename the modelgrid arguments to their dis package equivalents
+        remaps = {'xoff': 'xorigin',
+                  'yoff': 'yorigin',
+                  'rotation': 'angrot'}
+
+        for k, v in remaps.items():
+            if v not in kwargs:
+                kwargs[v] = kwargs.pop(k)
+        kwargs['length_units'] = self.length_units
+        # get the arguments for the flopy version of ModflowGwfdis,
+        # but instantiate with the modflow-setup subclass of ModflowGwfdis
+        kwargs = get_input_arguments(kwargs, mf6.ModflowGwfdis)
+        dis = ModflowGwfdis(model=self, **kwargs)
+        self._mg_resync = False
+        self._reset_bc_arrays()
+        self._set_idomain()
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return dis
+
+
+    def setup_ic(self, **kwargs):
+        """
+        Sets up the IC package.
+        """
+        package = 'ic'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+
+        kwargs = self.cfg[package]
+        kwargs.update(self.cfg[package]['griddata'])
+        kwargs['source_data_config'] = kwargs['source_data']
+        kwargs['filename_fmt'] = kwargs['strt_filename_fmt']
+
+        # make the starting heads array
+        strt = setup_strt(self, package, **kwargs)
+
+        ic = mf6.ModflowGwfic(self, strt=strt)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return ic
+
+
+
+    def setup_npf(self, **kwargs):
+        """
+        Sets up the NPF package.
+        """
+        package = 'npf'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+        hiKlakes_value = float(self.cfg['parent'].get('hiKlakes_value', 1e4))
+
+        # make the k array
+        self._setup_array(package, 'k', vmin=0, vmax=hiKlakes_value,
+                          resample_method='linear',
+                          datatype='array3d', write_fmt='%.6e')
+
+        # make the k33 array (kv)
+        self._setup_array(package, 'k33', vmin=0, vmax=hiKlakes_value,
+                          resample_method='linear',
+                          datatype='array3d', write_fmt='%.6e')
+
+        kwargs = self.cfg[package]['options'].copy()
+        kwargs.update(self.cfg[package]['griddata'].copy())
+        kwargs = get_input_arguments(kwargs, mf6.ModflowGwfnpf)
+        npf = mf6.ModflowGwfnpf(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return npf
+
+
+
+    def setup_sto(self, **kwargs):
+        """
+        Sets up the STO package.
+        """
+        if np.all(self.perioddata['steady']):
+            print('Skipping the STO package, no transient stress periods...')
+            return
+
+        package = 'sto'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+
+        # make the sy array
+        self._setup_array(package, 'sy', datatype='array3d', resample_method='linear',
+                          write_fmt='%.6e')
+
+        # make the ss array
+        self._setup_array(package, 'ss', datatype='array3d', resample_method='linear',
+                          write_fmt='%.6e')
+
+        kwargs = self.cfg[package]['options'].copy()
+        kwargs.update(self.cfg[package]['griddata'].copy())
+        # get the steady/transient info from the perioddata table
+        # (which parses it from either the DIS or STO input,
+        # to allow a consistent input structure with mf2005)
+        kwargs['steady_state'] = {k: v for k, v in zip(self.perioddata['per'], self.perioddata['steady']) if v}
+        kwargs['transient'] = {k: not v for k, v in zip(self.perioddata['per'], self.perioddata['steady'])}
+        kwargs = get_input_arguments(kwargs, mf6.ModflowGwfsto)
+        sto = mf6.ModflowGwfsto(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return sto
+
+
+
+    def setup_rch(self, **kwargs):
+        """
+        Sets up the RCH package.
+        """
+        package = 'rch'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+
+        # make the irch array
+        irch = make_irch(self.idomain)
+
+        self._setup_array('rch', 'irch',
+                          data={0: irch},
+                          datatype='array2d',
+                          write_fmt='%d', dtype=int)
+
+        # make the recharge array
+        self._setup_array(package, 'recharge', datatype='transient2d',
+                          resample_method='nearest', write_fmt='%.6e',
+                          write_nodata=0.)
+
+        kwargs = self.cfg[package].copy()
+        kwargs.update(self.cfg[package]['options'])
+        kwargs = get_input_arguments(kwargs, mf6.ModflowGwfrcha)
+        rch = mf6.ModflowGwfrcha(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return rch
+
+
+
+    def setup_lak(self, **kwargs):
+        """
+        Sets up the Lake package.
+        """
+        package = 'lak'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+        if self.lakarr.sum() == 0:
+            print("lakes_shapefile not specified, or no lakes in model area")
+            return
+
+        # option to write connectiondata to an external file
+        external_files = self.cfg['lak']['external_files']
+        horizontal_connections = self.cfg['lak']['horizontal_connections']
+
+        # source data
+        source_data = self.cfg['lak']['source_data']
+
+        # munge the lake package input;
+        # returns a dataframe with information for each lake
+        self.lake_info = setup_lake_info(self)
+
+        # returns a dataframe with connection information
+        connectiondata = setup_lake_connectiondata(self, for_external_file=external_files,
+                                                   include_horizontal_connections=horizontal_connections)
+        # the lakeno column will have a '#' in front if for_external_file=True
+        lakeno_col = [c for c in connectiondata.columns if 'lakeno' in c][0]
+        nlakeconn = connectiondata.groupby(lakeno_col).count().iconn.to_dict()
+        offset = 0 if external_files else 1
+        self.lake_info['nlakeconn'] = [nlakeconn[id - offset] for id in self.lake_info['lak_id']]
+
+        # set up the tab files
+        if 'stage_area_volume_file' in source_data:
+            tab_files = setup_lake_tablefiles(self, source_data['stage_area_volume_file'])
+
+            # tabfiles aren't rewritten by flopy on package write
+            self.cfg['lak']['tab_files'] = tab_files
+            # kludge to deal with the ugliness of lake package external file handling
+            # (need to give the path relative to model_ws, not the folder that flopy is working in)
+            tab_files_argument = [os.path.relpath(f) for f in tab_files]
+        else:
+            tab_files = None
+        # todo: implement lake outlets with SFR
+
+        # perioddata
+        self.lake_fluxes = setup_lake_fluxes(self)
+        lakeperioddata = get_lakeperioddata(self.lake_fluxes)
+
+        # set up the external files
+        connectiondata_cols = [lakeno_col, 'iconn', 'k', 'i', 'j', 'claktype', 'bedleak',
+                               'belev', 'telev', 'connlen', 'connwidth']
+        if external_files:
+            # get the file path (allowing for different external file locations, specified name format, etc.)
+            filepath = self.setup_external_filepaths(package, 'connectiondata',
+                                                     self.cfg[package]['connectiondata_filename_fmt'])
+            connectiondata[connectiondata_cols].to_csv(filepath[0]['filename'], index=False, sep=' ')
+            # make a copy for the intermediate data folder, for consistency with mf-2005
+            shutil.copy(filepath[0]['filename'], self.cfg['intermediate_data']['output_folder'])
+        else:
+            connectiondata_cols = connectiondata_cols[:2] + ['cellid'] + connectiondata_cols[5:]
+            self.cfg[package]['connectiondata'] = connectiondata[connectiondata_cols].values.tolist()
+
+        # set up the input arguments
+        kwargs = self.cfg[package].copy()
+        options = self.cfg[package]['options'].copy()
+        renames = {'budget_fileout': 'budget_filerecord',
+                   'stage_fileout': 'stage_filerecord'}
+        for k, v in renames.items():
+            if k in options:
+                options[v] = options.pop(k)
+        kwargs.update(options)
+        kwargs['time_conversion'] = convert_time_units(self.time_units, 'seconds')
+        kwargs['length_conversion'] = convert_length_units(self.length_units, 'meters')
+        kwargs['nlakes'] = len(self.lake_info)
+        kwargs['noutlets'] = 0  # not implemented
+        # [lakeno, strt, nlakeconn, aux, boundname]
+        packagedata_cols = ['lak_id', 'strt', 'nlakeconn']
+        if kwargs.get('boundnames'):
+            packagedata_cols.append('name')
+        packagedata = self.lake_info[packagedata_cols]
+        packagedata['lak_id'] -= 1  # convert to zero-based
+        kwargs['packagedata'] = packagedata.values.tolist()
+        if tab_files is not None:
+            kwargs['ntables'] = len(tab_files)
+            kwargs['tables'] = [(i, f)
+                                for i, f in enumerate(tab_files)]
+        kwargs['outlets'] = None  # not implemented
+        kwargs['perioddata'] = lakeperioddata
+
+        # observations
+        kwargs['observations'] = setup_mf6_lake_obs(kwargs)
+
+        kwargs = get_input_arguments(kwargs, mf6.ModflowGwflak)
+        lak = mf6.ModflowGwflak(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return lak
+
+
+
+
+    def setup_chd(self, **kwargs):
+        """Set up the CHD Package.
+        """
+        return self._setup_basic_stress_package(
+            'chd', mf6.ModflowGwfchd, ['head'], **kwargs)
+
+    def setup_drn(self, **kwargs):
+        """Set up the Drain Package.
+        """
+        return self._setup_basic_stress_package(
+            'drn', mf6.ModflowGwfdrn, ['elev', 'cond'], **kwargs)
+
+    def setup_ghb(self, **kwargs):
+        """Set up the General Head Boundary Package.
+        """
+        return self._setup_basic_stress_package(
+            'ghb', mf6.ModflowGwfghb, ['bhead', 'cond'], **kwargs)
+
+    def setup_riv(self, rivdata=None, **kwargs):
+        """Set up the River Package.
+        """
+        return self._setup_basic_stress_package(
+            'riv', mf6.ModflowGwfriv, ['stage', 'cond', 'rbot'],
+            rivdata=rivdata, **kwargs)
+
+    def setup_wel(self, **kwargs):
+        """Set up the Well Package.
+        """
+        return self._setup_basic_stress_package(
+            'wel', mf6.ModflowGwfwel, ['q'], **kwargs)
+
+    def setup_obs(self, **kwargs):
+        """
+        Sets up the OBS utility.
+        """
+        package = 'obs'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+
+        iobs_domain = None
+        if not kwargs['mfsetup_options']['allow_obs_in_bc_cells']:
+            # for now, discard any head observations in the same (i, j) column of cells
+            # as a non-well boundary condition,
+            # including lake package lakes and non-lake, non-well BCs
+            # (high-K lakes are excluded, since we may want head obs at those locations,
+            # to serve as pseudo lake stage observations)
+            iobs_domain = ~((self.isbc == 1) | np.any(self.isbc > 2, axis=0))
+
+        # munge the observation data
+        df = setup_head_observations(self,
+                                     obs_package=package,
+                                     obsname_column='obsname',
+                                     iobs_domain=iobs_domain,
+                                     **kwargs['source_data'],
+                                     **kwargs['mfsetup_options'])
+
+        # reformat to the flopy input format
+        obsdata = df[['obsname', 'obstype', 'id']].to_records(index=False)
+        filename = self.cfg[package]['mfsetup_options']['filename_fmt'].format(self.name)
+        obsdata = {filename: obsdata}
+
+        kwargs = self.cfg[package].copy()
+        kwargs.update(self.cfg[package]['options'])
+        kwargs['continuous'] = obsdata
+        kwargs = get_input_arguments(kwargs, mf6.ModflowUtlobs)
+        obs = mf6.ModflowUtlobs(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return obs
+    def setup_ims(self):
+        """
+        Sets up the IMS package.
+        """
+        package = 'ims'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+        kwargs = flatten(self.cfg[package])
+        # renames to cover the differences between mf6 and flopy input names
+        renames = {'csv_outer_output': 'csv_outer_output_filerecord',
+                   'csv_inner_output': 'csv_inner_output_filerecord'
+                   }
+        for k, v in renames.items():
+            if k in kwargs:
+                kwargs[v] = kwargs[k]
+        kwargs = get_input_arguments(kwargs, mf6.ModflowIms)
+        ims = mf6.ModflowIms(self.simulation, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return ims
+
+
+
+    def setup_simulation_mover(self, gwfgwf):
+        """Set up the MODFLOW-6 water mover package at the simulation level.
+        Automates set-up of the mover between SFR packages in LGR parent and inset models.
+        todo: automate set-up of the mover between SFR and lakes (within a model).
+
+        Parameters
+        ----------
+        gwfgwf : Flopy :class:`~flopy.mf6.modflow.mfgwfgwf.ModflowGwfgwf` package instance
+
+        Notes
+        -----
+        Other uses of the water mover need to be configured manually using flopy.
+        """
+        package = 'mvr'
+        print('\nSetting up the simulation water mover package...')
+        t0 = time.time()
+
+        perioddata_dfs = []
+        if self.get_package('sfr') is not None:
+            if self.inset is not None:
+                for inset_name, inset in self.inset.items():
+                    if inset.get_package('sfr'):
+                        inset_perioddata = get_mover_sfr_package_input(
+                            self, inset, gwfgwf.exchangedata.array)
+                        perioddata_dfs.append(inset_perioddata)
+                        # for each SFR reach with a connection
+                        # to a reach in another model,
+                        # set the SFR Package downstream connection to 0
+                        for i, r in inset_perioddata.iterrows():
+                            rd = self.simulation.get_model(r['mname1']).sfrdata.reach_data
+                            rd.loc[rd['rno'] == r['id1'] + 1, 'outreach'] = 0
+                            # fix the flopy connectiondata as well
+                            sfr_package = self.simulation.get_model(r['mname1']).sfr
+                            cd = sfr_package.connectiondata.array.tolist()
+                            # there should be no downstream reaches
+                            # (indicated by negative numbers)
+                            cd[r['id1']] = tuple(v for v in cd[r['id1']] if v > 0)
+                            sfr_package.connectiondata = cd
+                        # re-write the shapefile exports with the corrected routing
+                        inset.sfrdata.write_shapefiles(f'{inset._shapefiles_path}/{inset_name}')
+
+                self.sfrdata.write_shapefiles(f'{self._shapefiles_path}/{self.name}')
+
+        if len(perioddata_dfs) > 0:
+            perioddata = pd.concat(perioddata_dfs)
+            if len(perioddata) > 0:
+                kwargs = flatten(self.cfg[package])
+                # modelnames (boolean) keyword to indicate that all package names will
+                # be preceded by the model name for the package. Model names are
+                # required when the Mover Package is used with a GWF-GWF Exchange. The
+                # MODELNAME keyword should not be used for a Mover Package that is for
+                # a single GWF Model.
+                # this argument will need to be adapted for implementing a mover package within a model
+                # (between lakes and sfr)
+                kwargs['modelnames'] = True
+                kwargs['maxmvr'] = len(perioddata)  # assumes that input for period 0 applies to all periods
+                packages = set(list(zip(perioddata.mname1, perioddata.pname1)) +
+                               list(zip(perioddata.mname2, perioddata.pname2)))
+                kwargs['maxpackages'] = len(packages)
+                kwargs['packages'] = list(packages)
+                kwargs['perioddata'] = {0: perioddata.values.tolist()}  # assumes that input for period 0 applies to all periods
+                kwargs = get_input_arguments(kwargs, mf6.ModflowGwfmvr)
+                mvr = mf6.ModflowMvr(gwfgwf, **kwargs)
+                print("finished in {:.2f}s\n".format(time.time() - t0))
+                return mvr
+        else:
+            print("no packages with mover information\n")
+
+
+
+    def write_input(self):
+        """Write the model input.
+        """
+        # prior to writing output,
+        # remove any BCs in inactive cells;
+        # handle the cases of a single model or a multi-model LGR simulation
+        # by working with the simulation-level model dictionary
+        for model_name, model in self.simulation.model_dict.items():
+            pckgs = ['chd', 'drn', 'ghb', 'riv', 'wel']
+            for pckg in pckgs:
+                package_instance = getattr(model, pckg.lower(), None)
+                if package_instance is not None:
+                    external_files = model.cfg[pckg.lower()]['stress_period_data']
+                    remove_inactive_bcs(package_instance,
+                                        external_files=external_files)
+            if hasattr(model, 'obs'):
+                # handle the case of a single obs package, in which case model.obs
+                # will be a ModflowUtlobs package instance
+                try:
+                    len(model.obs)
+                    obs_packages = model.obs
+                except TypeError:
+                    obs_packages = [model.obs]
+                for obs_package_instance in obs_packages:
+                    remove_inactive_obs(obs_package_instance)
+
+            # write the model with flopy,
+            # but skip the sfr package
+            # by monkey-patching the write method
+            def skip_write(**kwargs):
+                pass
+            if hasattr(model, 'sfr'):
+                model.sfr.write = skip_write
+        self.simulation.write_simulation()
+
+        # post-flopy write actions
+        for model_name, model in self.simulation.model_dict.items():
+            # write the sfr package with SFRmaker
+            if 'SFR' in ' '.join(model.get_package_list()):
+                options = []
+                for k, b in model.cfg['sfr']['options'].items():
+                    options.append(k)
+                if 'save_flows' in options:
+                    budget_fileout = '{}.{}'.format(model_name,
+                                                    model.cfg['sfr']['budget_fileout'])
+                    stage_fileout = '{}.{}'.format(model_name,
+                                                   model.cfg['sfr']['stage_fileout'])
+                    options.append('budget fileout {}'.format(budget_fileout))
+                    options.append('stage fileout {}'.format(stage_fileout))
+                if len(model.sfrdata.observations) > 0:
+                    options.append('obs6 filein {}.{}'.format(model_name,
+                                                              model.cfg['sfr']['obs6_filein_fmt'])
+                                   )
+                model.sfrdata.write_package(idomain=model.idomain,
+                                            version='mf6',
+                                            options=options,
+                                            external_files_path=model.external_path
+                                            )
+            # add version info to the package file headers
+            files = [model.namefile]
+            files += [p.filename for p in model.packagelist]
+            files += [p[0].filename for k, p in model.simulation.package_key_dict.items()]
+            for f in files:
+                add_version_to_fileheader(f, model_info=model.header)
+
+            if not model.cfg['mfsetup_options']['keep_original_arrays']:
+                shutil.rmtree(model.tmpdir)
+
+        # label the stress periods in the tdis file with comments
+        self.perioddata.sort_values(by='per', inplace=True)
+        add_date_comments_to_tdis(self.simulation.tdis.filename,
+                                  self.perioddata.start_datetime,
+                                  self.perioddata.end_datetime
+                                  )
+
+
+
+
+    @staticmethod
+    def _parse_model_kwargs(cfg):
+
+        if isinstance(cfg['model']['simulation'], str):
+            # assume that the simulation for the model
+            # is the one simulation specified in the configuration
+            # (regardless of the name specified in the model configuration)
+            cfg['model']['simulation'] = cfg['simulation']
+        if isinstance(cfg['model']['simulation'], dict):
+            # create the simulation from the simulation block in the config dict
+            kwargs = cfg['simulation'].copy()
+            kwargs.update(cfg['simulation']['options'])
+            kwargs = get_input_arguments(kwargs, mf6.MFSimulation)
+            sim = flopy.mf6.MFSimulation(**kwargs)
+            cfg['model']['simulation'] = sim
+            sim_ws = cfg['simulation']['sim_ws']
+        # if a simulation has already been created, get the path from the instance
+        elif isinstance(cfg['model']['simulation'], mf6.MFSimulation):
+            sim_ws = cfg['model']['simulation'].simulation_data.mfpath._sim_path
+        else:
+            raise TypeError('unrecognized configuration input for simulation.')
+
+        # listing file
+        cfg['model']['list'] = os.path.join(cfg['model']['list_filename_fmt']
+                                            .format(cfg['model']['modelname']))
+
+        # newton options
+        if cfg['model']['options'].get('newton', False):
+            cfg['model']['options']['newtonoptions'] = ['']
+        if cfg['model']['options'].get('newton_under_relaxation', False):
+            cfg['model']['options']['newtonoptions'] = ['under_relaxation']
+        cfg['model'].update(cfg['model']['options'])
+        return cfg
+
+
+
+    @classmethod
+    def load_from_config(cls, yamlfile, load_only=None):
+        """Load a model from a configuration file and set of MODFLOW files.
+
+        Parameters
+        ----------
+        yamlfile : pathlike
+            Modflow-setup YAML format configuration file.
+        load_only : list
+            List of package abbreviations or package names corresponding to
+            packages that flopy will load. Default is None, which loads all
+            packages. The discretization packages will load regardless of this
+            setting. Subpackages, like time series and observations, will also
+            load regardless of this setting.
+            Example list: ['ic', 'maw', 'npf', 'oc', 'ims', 'gwf6-gwf6']
+
+        Returns
+        -------
+        m : mfsetup.MF6model instance
+        """
+        print('\nLoading simulation in {}\n'.format(yamlfile))
+        t0 = time.time()
+
+        model = cls(cfg=yamlfile, load=True)
+        if 'grid' not in model.cfg.keys():
+            model.setup_grid()
+        sim = model.cfg['model']['simulation']  # should be a flopy.mf6.MFSimulation instance
+        models = [model]
+        if isinstance(model.inset, dict):
+            for inset_name, inset in model.inset.items():
+                models.append(inset)
+
+        # execute the flopy load code on the pre-defined simulation and model instances
+        # (so that the end result is a MFsetup.MF6model instance)
+        # (kludgy)
+        sim = flopy_mfsimulation_load(sim, models, load_only=load_only)
+
+        # just return the parent model (inset models should be attached through the inset attribute,
+        # in addition to through the .simulation flopy attribute)
+        m = sim.get_model(model_name=model.name)
+        print('finished loading model in {:.2f}s'.format(time.time() - t0))
+        return m
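+
+    # Illustrative usage sketch (not part of the original source), assuming
+    # a hypothetical configuration file 'config.yaml' for a previously
+    # built model:
+    #     m = MF6model.load_from_config('config.yaml',
+    #                                   load_only=['dis', 'npf'])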
+class MFsetupMixin():
+    """Mixin class for shared functionality between MF6model and MFnwtModel.
+    Meant to be inherited by both of those classes and not called directly.
+
+    https://stackoverflow.com/questions/533631/what-is-a-mixin-and-why-are-they-useful
+    """
+    source_path = Path(__file__).parent
+    # flags in the isbc array:
+    # -1 : well
+    #  0 : no lake
+    #  1 : lak package lake (lakarr > 0)
+    #  2 : high-k lake
+    #  3 : ghb
+    #  4 : sfr
+    #  5 : riv
+    # package variable name: number
+    bc_numbers = {'wel': -1,
+                  'lak': 1,
+                  'high-k lake': 2,
+                  'ghb': 3,
+                  'sfr': 4,
+                  'riv': 5
+                  }
+    model_type = "mfsetup"
+
+    def __init__(self, parent):
+
+        # property attributes
+        self._cfg = None
+        self._nper = None
+        self._perioddata = None
+        self._sr = None
+        self._modelgrid = None
+        self._bbox = None
+        self._parent = parent
+        self._parent_layers = None
+        self._parent_default_source_data = False
+        self._parent_mask = None
+        self._lakarr_2d = None
+        self._isbc_2d = None
+        self._lakarr = None
+        self._isbc = None
+        self._lake_bathymetry = None
+        self._high_k_lake_recharge = None
+        self._nodata_value = -9999
+        self._model_ws = None
+        self._abs_model_ws = None
+        self._model_version = None  # semantic version of the model
+        self._longname = None  # long name for the model (the short name is self.name)
+        self._header = None  # header for files and repr
+        self.inset = None  # dictionary of inset models attached to an LGR parent
+        self._is_lgr = False  # flag for LGR inset models
+        self.lgr = None  # holds the flopy Lgr utility object
+        self._lgr_idomain2d = None  # array of LGR inset model locations within the parent grid
+        self.tmr = None  # holds the TMR class instance for TMR-type perimeter boundaries
+        self._load = False  # whether the model is being made or loaded from existing files
+        self.lake_info = None
+        self.lake_fluxes = None
+
+        # flopy settings
+        self._mg_resync = False
+
+        self._features = {}  # dictionary for caching shapefile datasets in memory
+
+        # arrays remade during this session
+        self.updated_arrays = set()
+
+        # cache of interpolation weights, to speed up regridding
+        self._interp_weights = None
+
+
+    def __repr__(self):
+        header = f'{self.header}\n'
+        txt = ''
+        if self.parent is not None:
+            txt += 'Parent model: {}/{}\n'.format(self.parent.model_ws, self.parent.name)
+        if self._modelgrid is not None:
+            txt += f'{self._modelgrid.__repr__()}'
+        txt += 'Packages:'
+        for pkg in self.get_package_list():
+            txt += ' {}'.format(pkg.lower())
+        txt += '\n'
+        txt += f'{self.nper:d} period(s):\n'
+        if self._perioddata is not None:
+            cols = ['per', 'start_datetime', 'end_datetime', 'perlen', 'steady', 'nstp']
+            txt += self.perioddata[cols].head(3).to_string(index=False)
+            txt += '\n   ...\n'
+            tail = self.perioddata[cols].tail(1).to_string(index=False)
+            txt += tail.split('\n')[1]
+        txt = header + txt
+        return txt
+
+    def __eq__(self, other):
+        """Test for equality to another model object."""
+        if not isinstance(other, self.__class__):
+            return False
+        # kludge: skip obs packages for now
+        # - obs packages aren't read in with the same name under which they were created
+        # - also the SFR_OBS package is handled by SFRmaker instead of Flopy;
+        #   a loaded version of a model might have SFR_OBS,
+        #   where a freshly made version may not (even though SFRmaker will write it)
+        all_packages = set(self.get_package_list()).union(other.get_package_list())
+        exceptions = {p for p in all_packages if p.lower().startswith('obs')
+                      or p.lower().endswith('obs')}
+        other_packages = [s for s in sorted(other.get_package_list())
+                          if s not in exceptions]
+        packages = [s for s in sorted(self.get_package_list())
+                    if s not in exceptions]
+        if other_packages != packages:
+            return False
+        if other.modelgrid != self.modelgrid:
+            return False
+        if other.nlay != self.nlay:
+            return False
+        if not np.array_equal(other.perioddata, self.perioddata):
+            return False
+        # TODO: add checks of actual array values and other parameters
+        for k, v in self.__dict__.items():
+            if k in ['cfg',
+                     'sfrdata',
+                     '_load',
+                     '_packagelist',
+                     '_package_paths',
+                     'package_key_dict',
+                     'package_type_dict',
+                     'package_name_dict',
+                     '_ftype_num_dict']:
+                continue
+            elif k not in other.__dict__:
+                return False
+            elif type(v) == bool:
+                if not v == other.__dict__[k]:
+                    return False
+            elif k == 'cfg':
+                continue
+            elif type(v) in [str, int, float, dict, list]:
+                if v != other.__dict__[k]:
+                    pass
+                continue
+        return True
+
+    @property
+    def nper(self):
+        if self.perioddata is not None:
+            return len(self.perioddata)
+
+    @property
+    def nrow(self):
+        if self.modelgrid.grid_type == 'structured':
+            return self.modelgrid.nrow
+
+    @property
+    def ncol(self):
+        if self.modelgrid.grid_type == 'structured':
+            return self.modelgrid.ncol
+
+    @property
+    def modelgrid(self):
+        if self._modelgrid is None:
+            self.setup_grid()
+        # trap for the instance where the default (base) modelgrid
+        # instance is attached to the flopy model
+        # (because the grid hasn't been set up yet);
+        # self._modelgrid.nlay will error in this case
+        # because of a NotImplementedError in the base class
+        elif self._modelgrid.grid_type is None:
+            pass
+        # add the layer tops and bottoms and idomain to the model grid,
+        # if they haven't been added yet
+        elif self._modelgrid.nlay is None and 'DIS' in self.get_package_list():
+            self._modelgrid._top = self.dis.top.array
+            self._modelgrid._botm = self.dis.botm.array
+            if self.version == 'mf6':
+                self._modelgrid._idomain = self.dis.idomain.array
+            elif 'bas6' in self.get_package_list():
+                self._modelgrid._idomain = self.bas6.ibound.array
+        return self._modelgrid
+
+    @property
+    def bbox(self):
+        if self._bbox is None and self.modelgrid is not None:
+            self._bbox = self.modelgrid.bbox
+        return self._bbox
+
+
+    @property
+    def parent(self):
+        return self._parent
+
+    @property
+    def parent_layers(self):
+        """Mapping between layers in the source model and
+        layers in the destination model.
+
+        Returns
+        -------
+        parent_layers : dict
+            {inset layer: parent layer}
+        """
+        if self._parent_layers is None:
+            parent_layers = None
+            botm_source_data = self.cfg['dis'].get('source_data', {}).get('botm', {})
+            nlay = self.modelgrid.nlay
+            if nlay is None:
+                nlay = self.cfg['dis']['dimensions']['nlay']
+            if self.cfg['parent'].get('inset_layer_mapping') is not None:
+                parent_layers = self.cfg['parent'].get('inset_layer_mapping')
+            elif isinstance(botm_source_data, dict) and 'from_parent' in botm_source_data:
+                parent_layers = botm_source_data.get('from_parent')
+            elif self.parent is not None and (self.parent.modelgrid.nlay == nlay):
+                parent_layers = dict(zip(range(self.parent.modelgrid.nlay),
+                                         range(nlay)))
+            else:
+                parent_layers = None
+            self._parent_layers = parent_layers
+        return self._parent_layers
+
+    @property
+    def parent_stress_periods(self):
+        """Mapping between stress periods in the source model and
+        stress periods in the destination model.
+
+        Returns
+        -------
+        parent_stress_periods : dict
+            {inset stress period: parent stress period}
+        """
+        return dict(zip(self.perioddata['per'], self.perioddata['parent_sp']))
+
+    @property
+    def package_list(self):
+        """Definitive list of packages. Get from the namefile input first
+        (as in mf6 input), then look under the model input.
+        """
+        packages = self.cfg.get('nam', {}).get('packages', [])
+        if len(packages) == 0:
+            packages = self.cfg['model'].get('packages', [])
+        return [p for p in self._package_setup_order
+                if p in packages]
+
+    @property
+    def perimeter_bc_type(self):
+        """Dictates how the perimeter boundaries are set up.
+
+        If 'head', a constant head package is created
+        from the parent model starting heads.
+        If 'flux', a specified flux boundary is created
+        from the parent model cell-by-cell flow output.
+        """
+        perimeter_boundary_type = self.cfg['model'].get('perimeter_boundary_type')
+        if perimeter_boundary_type is not None:
+            if 'head' in perimeter_boundary_type:
+                return 'head'
+            if 'flux' in perimeter_boundary_type:
+                return 'flux'
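+
+    # Illustrative configuration sketch (not in the original source); per the
+    # code above, the value is read from the model: block of the YAML config:
+    #     model:
+    #       perimeter_boundary_type: 'head'   # or 'flux'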
+
+    @property
+    def model_ws(self):
+        if self._model_ws is None:
+            self._model_ws = Path(self._get_model_ws())
+        return self._model_ws
+
+    @model_ws.setter
+    def model_ws(self, model_ws):
+        self._model_ws = model_ws
+        self._abs_model_ws = os.path.normpath(os.path.abspath(model_ws))
+
+    @property
+    def model_version(self):
+        """Semantic version of the model, using a hacked version of versioneer.
+        The version is reported using git tags for the model repository,
+        or a start_version: key specified in the configuration file (default 0).
+        The start_version or tag is then appended by the remaining information
+        in a pep440-post style version tag (e.g. the most recent git commit hash
+        for the model repository + "dirty" if the model repository has uncommitted changes).
+
+        References
+        ----------
+        https://github.com/warner/python-versioneer
+        https://github.com/warner/python-versioneer/blob/master/details.md
+        """
+        if self._model_version is None:
+            self._model_version = get_versions(path=self.model_ws,
+                                               start_version=str(self.cfg['metadata']['start_version']))
+        return self._model_version
+
+    @property
+    def longname(self):
+        if self._longname is None:
+            longname = self.cfg['metadata'].get('longname')
+            if longname is None:
+                longname = f'{self.name} model'
+            self._longname = longname
+        return self._longname
+
+    @property
+    def header(self):
+        if self._header is None:
+            version_str = self.model_version['version']
+            header = f'{self.longname} version {version_str}'
+            self._header = header
+        return self._header
+
+    @property
+    def tmpdir(self):
+        abspath = self.model_ws / 'original-arrays'
+        self.cfg['intermediate_data']['output_folder'] = str(abspath)
+        abspath.mkdir(exist_ok=True)
+        tmpdir = abspath
+        if self.relative_external_paths:
+            tmpdir = abspath.relative_to(self.model_ws)
+        return tmpdir
+
+    @property
+    def external_path(self):
+        abspath = os.path.abspath(
+            self.cfg.get('model', {}).get('external_path', 'external'))
+        if not os.path.isdir(abspath):
+            os.makedirs(abspath)
+        if self.relative_external_paths:
+            ext_path = os.path.relpath(abspath)
+        else:
+            ext_path = os.path.normpath(abspath)
+        return ext_path
+
+    @external_path.setter
+    def external_path(self, x):
+        pass  # bypass any setting in the parent class
+
+    @property
+    def interp_weights(self):
+        """For a given parent, only calculate the interpolation weights
+        once, to speed up re-gridding of arrays to the inset model."""
+        if self._interp_weights is None:
+            parent_xy, inset_xy = get_source_dest_model_xys(self.parent,
+                                                            self)
+            self._interp_weights = interp_weights(parent_xy, inset_xy)
+        return self._interp_weights
+
+    @property
+    def parent_mask(self):
+        """Boolean array indicating the window in the parent model grid (subset of cells)
+        that encompasses the inset model domain, with a surrounding buffer.
+        Used to speed up interpolation of parent grid values onto the inset model grid."""
+        if self._parent_mask is None:
+            x, y = np.squeeze(self.bbox.exterior.coords.xy)
+            pi, pj = get_ij(self.parent.modelgrid, x, y)
+            pad = 3
+            i0 = np.max([pi.min() - pad, 0])
+            i1 = np.min([pi.max() + pad + 1, self.parent.modelgrid.nrow])
+            j0 = np.max([pj.min() - pad, 0])
+            j1 = np.min([pj.max() + pad + 1, self.parent.modelgrid.ncol])
+            mask = np.zeros((self.parent.modelgrid.nrow, self.parent.modelgrid.ncol), dtype=bool)
+            mask[i0:i1, j0:j1] = True
+            self._parent_mask = mask
+        return self._parent_mask
+
+    @property
+    def nlakes(self):
+        if self.lakarr is not None:
+            return int(np.max(self.lakarr))
+        else:
+            return 0
+
+    @property
+    def _lakarr2d(self):
+        """2-D array of the areal extent of lakes. Non-zero values
+        correspond to lak package IDs."""
+        if self._lakarr_2d is None:
+            self._set_lakarr2d()
+        return self._lakarr_2d
+
+    @property
+    def lakarr(self):
+        """3-D array of lake extents in each layer. Non-zero values
+        correspond to lak package IDs. The extent of a lake in
+        each layer is based on the bathymetry and the model layer thickness.
+        """
+        if self._lakarr is None:
+            self.setup_external_filepaths('lak', 'lakarr',
+                                          self.cfg['lak']['{}_filename_fmt'.format('lakarr')],
+                                          file_numbers=list(range(self.nlay)))
+            if self.isbc is None:
+                return None
+            else:
+                self._set_lakarr()
+        return self._lakarr
+
+    @property
+    def _isbc2d(self):
+        """2-D array indicating the i, j locations of
+        boundary conditions.
+        -1 : well
+         0 : no lake
+         1 : lak package lake (lakarr > 0)
+         2 : high-k lake
+         3 : ghb
+         4 : sfr
+         5 : riv
+
+        See also the .bc_numbers attribute.
+        """
+        if self._isbc_2d is None:
+            self._set_isbc2d()
+        return self._isbc_2d
+
+    @property
+    def isbc(self):
+        """3D array indicating which cells have a boundary condition in each layer.
+        -1 : well
+         0 : no lake
+         1 : lak package lake (lakarr > 0)
+         2 : high-k lake
+         3 : ghb
+         4 : sfr
+         5 : riv
+
+        See also the .bc_numbers attribute.
+        """
+        # the DIS package is needed to set up the isbc array
+        # (to compare lake bottom elevations to layer bottoms)
+        if self.get_package('dis') is None:
+            return None
+        if self._isbc is None:
+            self._set_isbc()
+        return self._isbc
+
+    @property
+    def lake_bathymetry(self):
+        """Put the lake bathymetry setup logic here instead of in the DIS package.
+        """
+        if self._lake_bathymetry is None:
+            self._set_lake_bathymetry()
+        return self._lake_bathymetry
+
+    @property
+    def high_k_lake_recharge(self):
+        """Recharge value to apply to high-K lakes, in model units.
+        """
+        if self._high_k_lake_recharge is None and self.cfg['high_k_lakes']['simulate_high_k_lakes']:
+            if self.lake_info is None:
+                self.lake_info = setup_lake_info(self)
+            if self.lake_info is not None:
+                self.lake_fluxes = setup_lake_fluxes(self, block='high_k_lakes')
+                self._high_k_lake_recharge = self.lake_fluxes.groupby('per').mean()['highk_lake_rech'].sort_index()
+        return self._high_k_lake_recharge
+
+    def load_array(self, filename):
+        if isinstance(filename, list):
+            arrays = []
+            for f in filename:
+                arrays.append(load_array(f,
+                                         shape=(self.nrow, self.ncol),
+                                         nodata=self._nodata_value
+                                         )
+                              )
+            return np.array(arrays)
+        return load_array(filename, shape=(self.nrow, self.ncol))
+
+
+    def load_features(self, filename, bbox_filter=None,
+                      id_column=None, include_ids=None,
+                      cache=True):
+        """Load vector and attribute data from a shapefile;
+        cache it to the _features dictionary.
+        """
+        if isinstance(filename, str):
+            features_file = [filename]
+        else:
+            features_file = filename
+
+        dfs_list = []
+        for f in features_file:
+            if f not in self._features.keys():
+                if os.path.exists(f):
+                    features_crs = get_shapefile_crs(f)
+                    if bbox_filter is None:
+                        if self.bbox is not None:
+                            bbox = self.bbox
+                        elif self.parent.modelgrid is not None:
+                            bbox = self.parent.modelgrid.bbox
+                            model_crs = self.parent.modelgrid.crs
+                            assert model_crs is not None
+
+                        if features_crs != self.modelgrid.crs:
+                            bbox_filter = project(bbox, self.modelgrid.crs, features_crs).bounds
+                        else:
+                            bbox_filter = bbox.bounds
+
+                    # read the features and reproject them to the model CRS
+                    df = gpd.read_file(f)
+                    df.to_crs(self.modelgrid.crs, inplace=True)
+                    df.columns = [c.lower() for c in df.columns]
+                    if cache:
+                        print('caching data in {}...'.format(f))
+                        self._features[f] = df
+                else:
+                    print('feature input file {} not found'.format(f))
+                    return
+            else:
+                df = self._features[f]
+            if id_column is not None:
+                id_column = id_column.lower()
+                # convert any floating point dtypes to integer
+                if df[id_column].dtype == float:
+                    df[id_column] = df[id_column].astype('int64')
+                df.index = df[id_column]
+                if include_ids is not None:
+                    df = df.loc[include_ids].copy()
+            dfs_list.append(df)
+        df = pd.concat(dfs_list)
+        if len(df) == 0:
+            warnings.warn('No features loaded from {}!'.format(filename))
+        return df
+
+
+
+    def get_boundary_cells(self, exclude_inactive=False):
+        """Get the i, j locations of the cells along the model perimeter.
+
+        Returns
+        -------
+        k, i, j : 1D numpy arrays of ints
+            zero-based layer, row, column locations of the boundary cells
+        """
+        # top row, left side, right side, bottom row
+        i_top = [0] * self.ncol
+        j_top = list(range(self.ncol))
+        i_left = list(range(1, self.nrow - 1))
+        j_left = [0] * (self.nrow - 2)
+        i_right = i_left
+        j_right = [self.ncol - 1] * (self.nrow - 2)
+        i_botm = [self.nrow - 1] * self.ncol
+        j_botm = j_top
+        i = i_top + i_left + i_right + i_botm
+        j = j_top + j_left + j_right + j_botm
+
+        assert len(i) == 2 * self.nrow + 2 * self.ncol - 4
+        nlaycells = len(i)
+        k = np.array(sorted(list(range(self.nlay)) * len(i)))
+        i = np.array(i * self.nlay)
+        j = np.array(j * self.nlay)
+        assert np.sum(k[nlaycells:nlaycells * 2]) == nlaycells
+
+        if exclude_inactive:
+            if self.version == 'mf6':
+                active_cells = self.idomain[k, i, j] >= 1
+            else:
+                active_cells = self.ibound[k, i, j] >= 1
+            k = k[active_cells].copy()
+            i = i[active_cells].copy()
+            j = j[active_cells].copy()
+        return k, i, j
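+
+    # Worked example (added for clarity): for a model with nlay=3, nrow=10
+    # and ncol=12, each layer has 2*10 + 2*12 - 4 = 40 perimeter cells, so
+    # get_boundary_cells() returns arrays of 3 * 40 = 120 cell locations.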
+
+
+
+    def regrid_from_parent(self, parent_array,
+                           mask=None,
+                           method='linear'):
+        """Interpolate values in the parent array onto
+        the inset model grid, using the model grid instances
+        attached to the parent and inset models.
+
+        Parameters
+        ----------
+        parent_array : ndarray
+            Values from the parent model to be interpolated to the inset grid.
+            1 or 2-D numpy array of the same size as a
+            layer of the parent model.
+        mask : ndarray (bool)
+            1 or 2-D numpy array of the same size as a
+            layer of the parent model. True values
+            indicate cells to include in the interpolation;
+            False values indicate cells that will be
+            dropped.
+        method : str ('linear', 'nearest')
+            Interpolation method.
+        """
+        if mask is not None:
+            return regrid(parent_array, self.parent.modelgrid, self.modelgrid,
+                          mask1=mask,
+                          method=method)
+        if method == 'linear':
+            parent_values = parent_array[self.parent_mask].flatten()
+            regridded = interpolate(parent_values,
+                                    *self.interp_weights)
+        elif method == 'nearest':
+            regridded = regrid(parent_array, self.parent.modelgrid, self.modelgrid,
+                               method='nearest')
+        regridded = np.reshape(regridded, (self.nrow, self.ncol))
+        return regridded
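+
+    # Illustrative sketch (not in the original source): from an inset model
+    # with a parent model attached, regrid the parent model top onto the
+    # inset grid:
+    #     new_top = inset.regrid_from_parent(inset.parent.dis.top.array,
+    #                                        method='linear')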
+
+
+
+[docs]
+    def setup_external_filepaths(self, package, variable_name,
+                                 filename_format, file_numbers=None):
+        """Set up external file paths for a MODFLOW package variable. Sets paths
+        for intermediate files, which are written from the (processed) source data.
+        Intermediate files are supplied to Flopy as external files for a given package
+        variable. Flopy writes external files to a specified location when the MODFLOW
+        package file is written. This method gets the external file paths that
+        will be written by FloPy, and puts them in the configuration dictionary
+        under their respective variables.
+
+        Parameters
+        ----------
+        package : str
+            Three-letter package abbreviation (e.g. 'DIS' for discretization)
+        variable_name : str
+            FloPy name of variable represented by external files (e.g. 'top' or 'botm')
+        filename_format : str
+            File path to the external file(s). Can be a string representing a single file
+            (e.g. 'top.dat'), or for variables where a file is written for each layer or
+            stress period, a format string that will be formatted with the zero-based layer
+            number (e.g. 'botm{}.dat') for files botm0.dat, botm1.dat, ...
+        file_numbers : list of ints
+            List of numbers for the external files. Usually these represent zero-based
+            layers or stress periods.
+
+        Returns
+        -------
+        filepaths : list
+            List of external file paths
+
+        Adds intermediate file paths to model.cfg[<package>]['intermediate_data'].
+        Adds external file paths to model.cfg[<package>][<variable_name>].
+        """
+        # for lgr models, add the model name to the external filename
+        # if lgr parent or lgr inset
+        if self.lgr or self._is_lgr:
+            filename_format = '{}_{}'.format(self.name, filename_format)
+        return setup_external_filepaths(self, package, variable_name,
+                                        filename_format, file_numbers=file_numbers,
+                                        relative_external_paths=self.relative_external_paths)
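+
+    # Example usage (an illustrative sketch): register external files for the
+    # bottom elevations of a 3-layer model; the 'botm{}.dat' format string
+    # here is a hypothetical placeholder:
+    #
+    #     filepaths = m.setup_external_filepaths('dis', 'botm', 'botm{}.dat',
+    #                                            file_numbers=list(range(3)))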
+
+
+    def _get_model_ws(self, cfg=None):
+        if cfg is None:
+            cfg = self.cfg
+        if self.version == 'mf6':
+            abspath = os.path.abspath(cfg.get('simulation', {}).get('sim_ws', '.'))
+        else:
+            abspath = os.path.abspath(cfg.get('model', {}).get('model_ws', '.'))
+        if not os.path.exists(abspath):
+            os.makedirs(abspath)
+        self._abs_model_ws = os.path.normpath(abspath)
+        os.chdir(abspath)  # within a session, modflow-setup operates in the model_ws
+        if self.relative_external_paths:
+            model_ws = os.path.relpath(abspath)
+        else:
+            model_ws = os.path.normpath(abspath)
+        return Path(model_ws)
+
+    def _reset_bc_arrays(self):
+        """Reset the boundary condition property arrays, in order:
+        _lakarr2d
+        _isbc2d (depends on _lakarr2d)
+        _lake_bathymetry (depends on _isbc2d)
+        _isbc (depends on _isbc2d)
+        _lakarr (depends on _isbc and _lakarr2d)
+        """
+        self._lakarr_2d = None
+        self._isbc_2d = None  # (depends on _lakarr2d)
+        self._lake_bathymetry = None  # (depends on _isbc2d)
+        self._isbc = None  # (depends on _isbc2d)
+        self._lakarr = None  # (depends on _isbc and _lakarr2d)
+
+    def _set_cfg(self, user_specified_cfg):
+        """Load the configuration file; update the dictionary.
+        """
+        self.cfg = defaultdict(dict, self.cfg)
+
+        if isinstance(user_specified_cfg, str) or \
+                isinstance(user_specified_cfg, Path):
+            raise ValueError("Configuration should have already been loaded")
+        elif isinstance(user_specified_cfg, dict):
+            updates = user_specified_cfg.copy()
+        elif user_specified_cfg is None:
+            return
+        else:
+            raise TypeError("unrecognized input for cfg")
+
+        # if the user specifies a complexity option for IMS or NWT,
+        # don't import any defaults
+        ims_cfg = updates.get('ims', {})
+        if ims_cfg.get('options', {}).get('complexity'):
+            # delete the defaults
+            for default_block in 'nonlinear', 'linear':
+                if default_block in self.cfg['ims']:
+                    del self.cfg['ims'][default_block]
+        nwt_cfg = updates.get('nwt', {})
+        if nwt_cfg.get('options', 'specified').lower() != 'specified':
+            keep_args = {'headtol', 'fluxtol', 'maxiterout',
+                         'thickfact', 'linmeth', 'iprnwt', 'ibotav',
+                         'Continue', 'use_existing_file'}
+            self.cfg['nwt'] = {k: v for k, v in self.cfg['nwt'].items() if k in keep_args}
+
+        update(self.cfg, updates)
+        # make sure empty variables get initialized as dicts
+        for k, v in self.cfg.items():
+            if v is None:
+                self.cfg[k] = {}
+
+        if 'filename' in self.cfg:
+            config_file_path = Path(self.cfg['filename'])
+            if config_file_path.is_absolute():
+                self.cfg = set_cfg_paths_to_absolute(self.cfg, config_file_path.parent)
+
+        # mf6 models: set up or load the simulation
+        if self.version == 'mf6':
+            kwargs = self.cfg['simulation'].copy()
+            kwargs.update(self.cfg['simulation']['options'])
+            if os.path.exists('{}.nam'.format(kwargs['sim_name'])) and self._load:
+                try:
+                    kwargs = get_input_arguments(kwargs, mf6.MFSimulation.load, warn=False)
+                    self._sim = mf6.MFSimulation.load(**kwargs)
+                except Exception:
+                    # create the simulation
+                    kwargs = get_input_arguments(kwargs, mf6.MFSimulation, warn=False)
+                    self._sim = mf6.MFSimulation(**kwargs)
+            else:
+                # create the simulation
+                kwargs = get_input_arguments(kwargs, mf6.MFSimulation, warn=False)
+                self._sim = mf6.MFSimulation(**kwargs)
+
+        # load the parent model (skip if already attached)
+        if 'namefile' in self.cfg.get('parent', {}).keys():
+            self._set_parent()
+
+        output_paths = self.cfg['postprocessing']['output_folders']
+        for name, folder_path in output_paths.items():
+            if not os.path.exists(folder_path):
+                os.makedirs(folder_path)
+            setattr(self, '_{}_path'.format(name), folder_path)
+
+        # absolute path to the config file
+        self._config_path = os.path.split(os.path.abspath(str(self.cfg['filename'])))[0]
+
+        # set package keys to default dicts
+        for pkg in self._package_setup_order:
+            self.cfg[pkg] = defaultdict(dict, self.cfg.get(pkg, {}))
+
+        # other variables
+        self.cfg['external_files'] = {}
+
+        # validate the configuration
+        validate_configuration(self.cfg)
+
+    def _get_high_k_lakes(self):
+        """Get the i, j locations of any high-k lakes within the model grid.
+        """
+        lakesdata = None
+        lakes_shapefile = self.cfg['high_k_lakes'].get('source_data', {}).get('lakes_shapefile')
+        if lakes_shapefile is not None:
+            if isinstance(lakes_shapefile, str):
+                lakes_shapefile = {'filename': lakes_shapefile}
+            kwargs = get_input_arguments(lakes_shapefile, self.load_features)
+            if 'include_ids' in kwargs:  # load all lakes in the shapefile
+                kwargs.pop('include_ids')
+            lakesdata = self.load_features(**kwargs)
+        if lakesdata is not None:
+            is_high_k_lake = rasterize(lakesdata, self.modelgrid)
+            return is_high_k_lake > 0
+
+    def _set_isbc2d(self):
+        """Set up the _isbc2d array, which indicates the i, j locations
+        of boundary conditions.
+        """
+        isbc = np.zeros((self.nrow, self.ncol), dtype=int)
+
+        # high-k lakes
+        if self.cfg['high_k_lakes']['simulate_high_k_lakes']:
+            is_high_k_lake = self._get_high_k_lakes()
+            if is_high_k_lake is not None:
+                isbc[is_high_k_lake] = 2
+
+        # lake package lakes
+        isbc[self._lakarr2d > 0] = 1
+
+        # add other bcs
+        for packagename, bcnumber in self.bc_numbers.items():
+            if 'lak' not in packagename:
+                package = self.get_package(packagename)
+                if package is not None:
+                    # handle multiple instances of a package
+                    # (in MODFLOW-6)
+                    if isinstance(package, flopy.pakbase.PackageInterface):
+                        packages = [package]
+                    else:
+                        packages = package
+                    for package in packages:
+                        k, i, j = get_bc_package_cells(package)
+                        not_a_lake = np.where(isbc[i, j] != 1)
+                        i = i[not_a_lake]
+                        j = j[not_a_lake]
+                        isbc[i, j] = bcnumber
+        self._isbc_2d = isbc
+        self._set_lake_bathymetry()
+
+    def _set_isbc(self):
+        isbc = np.zeros((self.nlay, self.nrow, self.ncol), dtype=int)
+        isbc[0] = self._isbc2d
+
+        # in mf6 models, the model top is set to the lake botm,
+        # and any layers originally above the lake botm
+        # are also reset to the lake botm (given zero thickness)
+        lake_botm_elevations = self.dis.top.array
+        below = self.dis.botm.array >= lake_botm_elevations
+        if not self.version == 'mf6':
+            lake_botm_elevations = self.dis.top.array - self.lake_bathymetry
+            layer_tops = np.concatenate([[self.dis.top.array], self.dis.botm.array[:-1]])
+            # lakes must be at least 10% into a layer to get simulated in that layer
+            below = layer_tops > lake_botm_elevations + 0.1
+        for i, ibelow in enumerate(below[1:]):
+            if np.any(ibelow):
+                isbc[i + 1][ibelow] = self._isbc2d[ibelow]
+        # add other bcs
+        for packagename, bcnumber in self.bc_numbers.items():
+            if 'lak' not in packagename:
+                package = self.get_package(packagename)
+                if package is not None:
+                    # handle multiple instances of a package
+                    # (in MODFLOW-6)
+                    if isinstance(package, flopy.pakbase.PackageInterface):
+                        packages = [package]
+                    else:
+                        packages = package
+                    for package in packages:
+                        k, i, j = get_bc_package_cells(package)
+                        not_a_lake = np.where(isbc[k, i, j] != 1)
+                        k = k[not_a_lake]
+                        i = i[not_a_lake]
+                        j = j[not_a_lake]
+                        isbc[k, i, j] = bcnumber
+        self._isbc = isbc
+        self._set_lakarr()
+
+    def _set_lakarr2d(self):
+        lakarr2d = np.zeros((self.nrow, self.ncol), dtype=int)
+        if 'lak' in self.package_list:
+            lakes_shapefile = self.cfg['lak'].get('source_data', {}).get('lakes_shapefile', {}).copy()
+            if lakes_shapefile:
+                kwargs = get_input_arguments(lakes_shapefile, self.load_features)
+                lakesdata = self.load_features(**kwargs)  # caches loaded features
+                lakes_shapefile['lakesdata'] = lakesdata
+                lakes_shapefile.pop('filename')
+                kwargs = get_input_arguments(lakes_shapefile, make_lakarr2d)
+                lakarr2d = make_lakarr2d(self.modelgrid, **kwargs)
+        self._lakarr_2d = lakarr2d
+        self._set_isbc2d()
+
+    def _set_lakarr(self):
+        self.setup_external_filepaths('lak', 'lakarr',
+                                      self.cfg['lak']['lakarr_filename_fmt'],
+                                      file_numbers=list(range(self.nlay)))
+        # assign lakarr values from the 3D isbc array
+        lakarr = np.zeros((self.nlay, self.nrow, self.ncol), dtype=int)
+        for k in range(self.nlay):
+            lakarr[k][self.isbc[k] == 1] = self._lakarr2d[self.isbc[k] == 1]
+        for k, ilakarr in enumerate(lakarr):
+            save_array(self.cfg['intermediate_data']['lakarr'][0][k], ilakarr, fmt='%d')
+        self._lakarr = lakarr
+
+    def _set_lake_bathymetry(self):
+        bathymetry_file = self.cfg.get('lak', {}).get('source_data', {}).get('bathymetry_raster')
+        default_lake_depth = self.cfg['model'].get('default_lake_depth', 2)
+        if bathymetry_file is not None:
+            lmult = 1.0
+            if isinstance(bathymetry_file, dict):
+                lmult = convert_length_units(bathymetry_file.get('length_units', 0),
+                                             self.length_units)
+                bathymetry_file = bathymetry_file['filename']
+
+            # sample the pre-made bathymetry at the grid points
+            bathy = get_values_at_points(bathymetry_file,
+                                         x=self.modelgrid.xcellcenters.ravel(),
+                                         y=self.modelgrid.ycellcenters.ravel(),
+                                         points_crs=self.modelgrid.crs,
+                                         out_of_bounds_errors='coerce')
+            bathy = np.reshape(bathy, (self.nrow, self.ncol)) * lmult
+            bathy[(bathy < 0) | np.isnan(bathy)] = 0
+
+            # fill the bathymetry grid in remaining lake cells with the default lake depth;
+            # also ensure that all non-lake cells have bathy=0
+            fill = (bathy == 0) & (self._isbc2d > 0) & (self._isbc2d < 3)
+            bathy[fill] = default_lake_depth
+            bathy[(self._isbc2d < 1) | (self._isbc2d > 2)] = 0
+        else:
+            bathy = np.zeros((self.nrow, self.ncol))
+        self._lake_bathymetry = bathy
+
+    def _set_parent_modelgrid(self, mg_kwargs=None):
+        """Reset the parent model grid from keyword arguments
+        or the existing modelgrid, and the DIS package.
+        """
+        if mg_kwargs is not None:
+            kwargs = mg_kwargs.copy()
+        else:
+            kwargs = {'xoff': self.parent.modelgrid.xoffset,
+                      'yoff': self.parent.modelgrid.yoffset,
+                      'angrot': self.parent.modelgrid.angrot,
+                      'crs': self.parent.modelgrid.crs,
+                      'epsg': self.parent.modelgrid.epsg,
+                      }
+        parent_units = get_model_length_units(self.parent)
+        if 'lenuni' in self.cfg['parent']:
+            parent_units = lenuni_text[self.cfg['parent']['lenuni']]
+        elif 'length_units' in self.cfg['parent']:
+            parent_units = self.cfg['parent']['length_units']
+
+        if self.version == 'mf6':
+            self.parent.dis.length_units = parent_units
+        else:
+            self.parent.dis.lenuni = lenuni_values[parent_units]
+
+        # make sure crs is populated, then get CRS units for the grid
+        from gisutils import get_authority_crs
+        if kwargs.get('crs') is not None:
+            kwargs['crs'] = get_authority_crs(kwargs['crs'])
+        elif kwargs.get('epsg') is not None:
+            kwargs['crs'] = get_authority_crs(kwargs['epsg'])
+        # no parent CRS info; assume the parent model is in the same CRS
+        # as the inset model
+        elif self.cfg['setup_grid'].get('crs') is not None:
+            kwargs['crs'] = get_authority_crs(self.cfg['setup_grid']['crs'])
+        elif self.cfg['setup_grid'].get('epsg') is not None:
+            kwargs['crs'] = get_authority_crs(self.cfg['setup_grid']['epsg'])
+        else:
+            raise ValueError('No coordinate reference input in setup_grid: or parent: '
+                             'SpatialReference: blocks of the configuration file. Supply '
+                             'at least coordinate reference information to the '
+                             'setup_grid: crs: item.')
+
+        parent_grid_units = kwargs['crs'].axis_info[0].unit_name
+
+        if 'foot' in parent_grid_units.lower() or 'feet' in parent_grid_units.lower():
+            parent_grid_units = 'feet'
+        elif 'metre' in parent_grid_units.lower() or 'meter' in parent_grid_units.lower():
+            parent_grid_units = 'meters'
+        else:
+            raise ValueError(f'unrecognized CRS units {parent_grid_units}: '
+                             'CRS must be projected in feet or meters')
+
+        # convert the parent DIS package spacing to the parent grid CRS units
+        lmult = convert_length_units(parent_units, parent_grid_units)
+        kwargs['delr'] = self.parent.dis.delr.array * lmult
+        kwargs['delc'] = self.parent.dis.delc.array * lmult
+        kwargs['top'] = self.parent.dis.top.array
+        kwargs['botm'] = self.parent.dis.botm.array
+        if hasattr(self.parent.dis, 'laycbd'):
+            kwargs['laycbd'] = self.parent.dis.laycbd.array
+        # renames for the parent modelgrid
+        renames = {'rotation': 'angrot'}
+        for k, v in renames.items():
+            if k in kwargs:
+                kwargs[v] = kwargs.pop(k)
+
+        kwargs = get_input_arguments(kwargs, MFsetupGrid, warn=False)
+        self._parent._mg_resync = False
+        self._parent._modelgrid = MFsetupGrid(**kwargs)
+
+    def _set_parent(self):
+        """Set attributes related to a parent or source model,
+        if one is specified.
+        """
+        # if it's an LGR model (where the parent is also being created),
+        # set up the parent DIS package first
+        if self._is_lgr and isinstance(self.parent, MFsetupMixin):
+            if 'DIS' not in self.parent.get_package_list():
+                dis = self.parent.setup_dis()
+
+        kwargs = self.cfg['parent'].copy()
+
+        # load the MF6 or MF2005 parent
+        if self.parent is None:
+            print('loading parent model {}...'.format(os.path.join(kwargs['model_ws'],
+                                                                   kwargs['namefile'])))
+            t0 = time.time()
+
+            # load only specified packages that the parent model has
+            packages_in_parent_namefile = get_packages(os.path.join(kwargs['model_ws'],
+                                                                    kwargs['namefile']))
+            # load at least these packages,
+            # so that there is complete information on model time and space dis
+            default_parent_packages = {'dis', 'tdis'}
+            specified_packages = set(self.cfg['model'].get('packages', set()))
+            specified_packages.update(default_parent_packages)
+
+            # get equivalent packages to load if the parent is another MODFLOW version;
+            # then flatten (a package may have more than one equivalent)
+            parent_packages = [get_package_name(p, kwargs['version'])
+                               for p in specified_packages]
+            parent_packages = {item for subset in parent_packages for item in subset}
+            if kwargs['version'] == 'mf6':
+                parent_packages.add('sto')
+            load_only = list(set(packages_in_parent_namefile).intersection(parent_packages))
+            if 'load_only' not in kwargs:
+                kwargs['load_only'] = load_only
+            if 'skip_load' in kwargs:
+                kwargs['skip_load'] = [s.lower() for s in kwargs['skip_load']]
+                kwargs['load_only'] = [pckg for pckg in kwargs['load_only']
+                                       if pckg not in kwargs['skip_load']]
+
+            if self.cfg['parent']['version'] == 'mf6':
+                sim_kwargs = kwargs.copy()
+                if 'sim_name' not in kwargs:
+                    sim_kwargs['sim_name'] = kwargs.get('simulation', 'mfsim')
+                if 'sim_ws' not in kwargs:
+                    sim_kwargs['sim_ws'] = sim_kwargs.get('model_ws', '.')
+                sim_kwargs = get_input_arguments(sim_kwargs, mf6.MFSimulation.load, warn=False)
+                parent_sim = mf6.MFSimulation.load(**sim_kwargs)
+                modelname, _ = os.path.splitext(kwargs['namefile'])
+                self._parent = parent_sim.get_model(modelname)
+            else:
+                kwargs['f'] = kwargs.pop('namefile')
+                kwargs = get_input_arguments(kwargs, fm.Modflow.load, warn=False)
+                self._parent = fm.Modflow.load(**kwargs)
+            print("finished in {:.2f}s\n".format(time.time() - t0))
+
+        # set the parent model units in the config if not entered
+        if 'length_units' not in self.cfg['parent']:
+            self.cfg['parent']['length_units'] = get_model_length_units(self.parent)
+        if 'time_units' not in self.cfg['parent']:
+            self.cfg['parent']['time_units'] = get_model_time_units(self.parent)
+
+        # set the parent model grid from mg_kwargs if not None;
+        # otherwise, convert the parent model grid to MFsetupGrid
+        mg_kwargs = self.cfg['parent'].get('SpatialReference',
+                                           self.cfg['parent'].get('modelgrid', None))
+        # check the configuration file input
+        # for consistency with the parent model DIS package input
+        # (configuration file input may be different if an existing model
+        # doesn't have a valid spatial reference in the DIS package)
+        mf6_names = {
+            'rotation': 'angrot',
+            'xoff': 'xorigin',
+            'yoff': 'yorigin'
+        }
+        if mg_kwargs is not None and (self.parent.version == 'mf6') and not \
+                mg_kwargs.get('override_dis_package_input', False):
+            for variable, mf6_name in mf6_names.items():
+                if (variable in mg_kwargs) and \
+                        ('DIS' in self.parent.get_package_list()):
+                    dis_value = getattr(self.parent.dis, mf6_name).array
+                    if not np.allclose(mg_kwargs[variable], dis_value):
+                        raise ValueError(
+                            "Configuration file entry parent: SpatialReference: "
+                            f"{variable}: {mg_kwargs[variable]} does not match {mf6_name}={dis_value} "
+                            "specified in the parent model DIS package file. Either make "
+                            "these consistent or specify override_dis_package_input: True "
+                            "in the parent: SpatialReference: configuration block.")
+        self._set_parent_modelgrid(mg_kwargs)
+
+        # set up the parent model perioddata table
+        if getattr(self.parent, 'perioddata', None) is None:
+            kwargs = self.cfg['parent'].copy()
+            kwargs['model_time_units'] = self.cfg['parent']['time_units']
+            if self.parent.version == 'mf6':
+                for var in ['perlen', 'nstp', 'tsmult']:
+                    kwargs[var] = getattr(self.parent.modeltime, var)
+                kwargs['steady'] = self.parent.modeltime.steady_state
+                kwargs['nper'] = self.parent.simulation.tdis.nper.array
+            else:
+                for var in ['perlen', 'steady', 'nstp', 'tsmult']:
+                    kwargs[var] = self.parent.dis.__dict__[var].array
+                kwargs['nper'] = self.parent.dis.nper
+            kwargs = get_input_arguments(kwargs, setup_perioddata_group)
+            kwargs['oc_saverecord'] = {}
+            if hasattr(self.parent, '_perioddata'):
+                self._parent._perioddata = setup_perioddata_group(**kwargs)
+            else:
+                self._parent.perioddata = setup_perioddata_group(**kwargs)
+
+        # default_source_data, where omitted configuration input is
+        # obtained from the parent model by default;
+        # set default_source_data to True by default if it isn't specified
+        if self.cfg['parent'].get('default_source_data') is None:
+            self.cfg['parent']['default_source_data'] = True
+        if self.cfg['parent'].get('default_source_data'):
+            self._parent_default_source_data = True
+
+        # set the number of layers from the parent if not specified
+        if self.version == 'mf6' and self.cfg['dis']['dimensions'].get('nlay') is None:
+            self.cfg['dis']['dimensions']['nlay'] = getattr(self.parent.dis.nlay, 'array',
+                                                            self.parent.dis.nlay)
+        elif self.cfg['dis'].get('nlay') is None:
+            self.cfg['dis']['nlay'] = getattr(self.parent.dis.nlay, 'array',
+                                              self.parent.dis.nlay)
+
+        # set the start date/time from the parent if not specified
+        if not self._is_lgr:
+            parent_start_date_time = self.cfg.get('parent', {}).get('start_date_time')
+            if self.version == 'mf6':
+                if self.cfg['tdis']['options'].get('start_date_time', '1970-01-01') == '1970-01-01' \
+                        and parent_start_date_time is not None:
+                    self.cfg['tdis']['options']['start_date_time'] = self.cfg['parent']['start_date_time']
+            else:
+                if self.cfg['dis'].get('start_date_time', '1970-01-01') == '1970-01-01' \
+                        and parent_start_date_time is not None:
+                    self.cfg['dis']['start_date_time'] = self.cfg['parent']['start_date_time']
+
+            # only get time dis information from the parent if
+            # no perioddata groups are specified, and nper is not specified under dimensions
+            tdis_package = 'tdis' if self.version == 'mf6' else 'dis'
+            # check if any item within the perioddata block is a dictionary
+            # (groups are subblocks within the perioddata block)
+            has_perioddata_groups = any([isinstance(k, dict)
+                                         for k in self.cfg[tdis_package]['perioddata'].values()])
+            # get the number of inset model periods
+            if not has_perioddata_groups:
+                if self.version == 'mf6':
+                    if self.cfg['tdis']['dimensions'].get('nper') is None:
+                        self.cfg['tdis']['dimensions']['nper'] = self.parent.modeltime.nper
+                    nper = self.cfg['tdis']['dimensions']['nper']
+                else:
+                    if self.cfg['dis']['nper'] is None:
+                        self.cfg['dis']['nper'] = self.dis.nper
+                    nper = self.cfg['dis']['nper']
+                # get the periods that are shared with the parent model
+                parent_periods = get_parent_stress_periods(self.parent, nper=nper,
+                                                           parent_stress_periods=self.cfg['parent'][
+                                                               'copy_stress_periods'])
+                # get time discretization info. from the parent model
+                if self.version == 'mf6':
+                    for var in ['perlen', 'nstp', 'tsmult']:
+                        if self.cfg['tdis']['perioddata'].get(var) is None:
+                            self.cfg['tdis']['perioddata'][var] = getattr(self.parent.modeltime, var)[
+                                parent_periods]
+                    # 'steady' can be specified under the sto package (as in MODFLOW-6)
+                    # or within perioddata group blocks,
+                    # but not in the tdis perioddata block itself
+                    if self.cfg['sto'].get('steady') is None:
+                        self.cfg['sto']['steady'] = self.parent.modeltime.steady_state[parent_periods]
+                else:
+                    for var in ['perlen', 'nstp', 'tsmult', 'steady']:
+                        if self.cfg['dis'].get(var) is None:
+                            self.cfg['dis'][var] = self.parent.dis.__dict__[var].array[parent_periods]
+
+    def _setup_array(self, package, var, vmin=-1e30, vmax=1e30,
+                     source_model=None, source_package=None,
+                     **kwargs):
+        return setup_array(self, package, var, vmin=vmin, vmax=vmax,
+                           source_model=source_model, source_package=source_package,
+                           **kwargs)
+
+    def _setup_basic_stress_package(self, package, flopy_package_class,
+                                    variable_columns, rivdata=None,
+                                    **kwargs):
+        print(f'\nSetting up {package.upper()} package...')
+        t0 = time.time()
+
+        # possible future support for
+        # handling filenames of multiple packages;
+        # left out for now because of the additional complexity
+        # of multiple sets of external files
+        #existing_packages = getattr(self, package, None)
+        #filename = f"{self.name}.{package}"
+        #if existing_packages is not None:
+        #    try:
+        #        len(existing_packages)
+        #        suffix = len(existing_packages) + 1
+        #    except:
+        #        suffix = 1
+        #    filename = f"{self.name}-{suffix}.{package}"
+
+        # perimeter boundary (CHD or WEL)
+        dfs = []
+        if 'perimeter_boundary' in kwargs:
+            perimeter_cfg = kwargs['perimeter_boundary']
+            if package == 'chd':
+                perimeter_cfg['boundary_type'] = 'head'
+                boundname = 'perimeter-heads'
+            elif package == 'wel':
+                perimeter_cfg['boundary_type'] = 'flux'
+                boundname = 'perimeter-fluxes'
+            else:
+                raise ValueError(f'Unsupported package for perimeter_boundary: {package.upper()}')
+            if 'inset_parent_period_mapping' not in perimeter_cfg:
+                perimeter_cfg['inset_parent_period_mapping'] = self.parent_stress_periods
+            if 'parent_start_time' not in perimeter_cfg:
+                perimeter_cfg['parent_start_date_time'] = self.parent.perioddata['start_datetime'][0]
+            self.tmr = Tmr(self.parent, self, **perimeter_cfg)
+            df = self.tmr.get_inset_boundary_values()
+
+            # add a boundname to allow the boundary flux to be tracked as an observation
+            df['boundname'] = boundname
+            dfs.append(df)
+
+        # RIV package converted from SFR input
+        elif rivdata is not None:
+            if 'name' in rivdata.stress_period_data.columns:
+                rivdata.stress_period_data['boundname'] = rivdata.stress_period_data['name']
+            dfs.append(rivdata.stress_period_data)
+
+        # set up the package from user input
+        df_sd = None
+        if 'source_data' in kwargs:
+            if package == 'wel':
+                dropped_wells_file = \
+                    kwargs.get('output_files', {}) \
+                          .get('dropped_wells_file', '{}_dropped_wells.csv').format(self.name)
+                df_sd = setup_wel_data(self,
+                                       source_data=kwargs['source_data'],
+                                       dropped_wells_file=dropped_wells_file)
+            else:
+                df_sd = setup_basic_stress_data(self, **kwargs['source_data'],
+                                                **kwargs.get('mfsetup_options', dict()))
+            if df_sd is not None and len(df_sd) > 0:
+                dfs.append(df_sd)
+        # set up the package from the parent model
+        elif self.cfg['parent'].get('default_source_data') and \
+                hasattr(self.parent, package):
+            if package == 'wel':
+                dropped_wells_file = \
+                    kwargs['output_files']['dropped_wells_file'].format(self.name)
+                df_sd = setup_wel_data(self,
+                                       dropped_wells_file=dropped_wells_file)
+            else:
+                print(f'Skipping setup of {package.upper()} Package from parent model -- not implemented.')
+            if df_sd is not None and len(df_sd) > 0:
+                dfs.append(df_sd)
+        if len(dfs) == 0:
+            print(f"{package.upper()} package:\n"
+                  "No input specified, or package configuration file input "
+                  "not understood. See the Configuration "
+                  "File Gallery in the online docs for example input. "
+                  "Note that direct input to basic stress period packages "
+                  "is currently not supported.")
+            return
+        else:
+            df = pd.concat(dfs, axis=0)
+
+        # option to write stress_period_data to external files
+        if self.version == 'mf6':
+            external_files = self.cfg[package]['mfsetup_options'].get('external_files', True)
+        else:
+            # external list or tabular type files are not supported for MODFLOW-NWT;
+            # adding support for this may require changes to Flopy
+            external_files = False
+        external_filename_fmt = self.cfg[package]['mfsetup_options']['external_filename_fmt']
+        spd = setup_flopy_stress_period_data(self, package, df,
+                                             flopy_package_class=flopy_package_class,
+                                             variable_columns=variable_columns,
+                                             external_files=external_files,
+                                             external_filename_fmt=external_filename_fmt)
+
+        kwargs = self.cfg[package]
+        if isinstance(self.cfg[package]['options'], dict):
+            kwargs.update(self.cfg[package]['options'])
+        # add an observation for perimeter BCs
+        # and any user input with a boundname column
+        obslist = []
+        obsfile = f'{self.name}.{package}.obs.output.csv'
+        if 'perimeter_boundary' in kwargs:
+            perimeter_btype = f"perimeter-{perimeter_cfg['boundary_type']}"
+            obslist.append((perimeter_btype, package, perimeter_btype))
+        if 'boundname' in df.columns:
+            unique_boundnames = df['boundname'].unique()
+            for bname in unique_boundnames:
+                obslist.append((bname, package, bname))
+        if len(obslist) > 0:
+            kwargs['observations'] = {obsfile: obslist}
+        kwargs = get_input_arguments(kwargs, flopy_package_class)
+        if not external_files:
+            kwargs['stress_period_data'] = spd
+        pckg = flopy_package_class(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return pckg
+
+
+    def setup_grid(self):
+        """Set up the attached modelgrid instance from configuration input.
+        """
+        if self.cfg['grid']:
+            cfg = self.cfg['grid']
+            cfg['rotation'] = self.cfg['grid']['angrot']
+        else:
+            cfg = self.cfg['setup_grid']
+        # update the grid configuration with any information supplied to the dis package
+        # (so that settings specified for the DIS package have priority)
+        self._update_grid_configuration_with_dis()
+        if not cfg['structured']:
+            raise NotImplementedError('Support for unstructured grids')
+        features_shapefile = cfg.get('source_data', {}).get('features_shapefile')
+        if features_shapefile is not None and 'features_shapefile' not in cfg:
+            features_shapefile['features_shapefile'] = features_shapefile['filename']
+            del features_shapefile['filename']
+            cfg.update(features_shapefile)
+        cfg['parent_model'] = self.parent
+        cfg['model_length_units'] = self.length_units
+        output_files = self.cfg['setup_grid']['output_files']
+        cfg['grid_file'] = output_files['grid_file'].format(self.name)
+        bbox_shapefile_name = Path(output_files['bbox_shapefile'].format(self.name)).name
+        cfg['bbox_shapefile'] = Path(self._shapefiles_path) / bbox_shapefile_name
+        if 'DIS' in self.get_package_list():
+            cfg['top'] = self.dis.top.array
+            cfg['botm'] = self.dis.botm.array
+
+        # if the model is an LGR inset with the default rotation=0
+        # and the LGR parent is rotated,
+        # assume that the inset model rotation should equal the parent rotation
+        # (different LGR parent/inset rotations are not allowed)
+        if self._is_lgr and (cfg['rotation'] == 0) and \
+                self.parent.modelgrid.angrot != 0:
+            cfg['rotation'] = self.parent.modelgrid.angrot
+
+        if os.path.exists(cfg['grid_file']) and self._load:
+            print('Loading model grid definition from {}'.format(cfg['grid_file']))
+            cfg.update(load(cfg['grid_file']))
+            self.cfg['grid'] = cfg
+            kwargs = get_input_arguments(self.cfg['grid'], MFsetupGrid)
+            self._modelgrid = MFsetupGrid(**kwargs)
+            self._modelgrid.cfg = self.cfg['grid']
+        else:
+            kwargs = get_input_arguments(cfg, setup_structured_grid)
+            if not set(kwargs.keys()).intersection({
+                    'features_shapefile', 'features', 'xoff', 'yoff', 'xul', 'yul'}):
+                raise ValueError(
+                    "No features_shapefile or xoff, yoff supplied "
+                    "to the setup_grid: block. Check the configuration file input, "
+                    "including for accidental indentation of the setup_grid: block.")
+            self._modelgrid = setup_structured_grid(**kwargs)
+            self.cfg['grid'] = self._modelgrid.cfg
+        # update the DIS package configuration
+        if self.version == 'mf6':
+            self.cfg['dis']['dimensions']['nrow'] = self.cfg['grid']['nrow']
+            self.cfg['dis']['dimensions']['ncol'] = self.cfg['grid']['ncol']
+        else:
+            self.cfg['dis']['nrow'] = self.cfg['grid']['nrow']
+            self.cfg['dis']['ncol'] = self.cfg['grid']['ncol']
+
+        self._reset_bc_arrays()
+
+        # set up local grid refinement
+        if 'lgr' in self.cfg['setup_grid'].keys():
+            if self.version != 'mf6':
+                raise TypeError('LGR is only supported for MODFLOW-6 models.')
+            if not self.lgr:
+                self.lgr = True
+            for key, cfg in self.cfg['setup_grid']['lgr'].items():
+                existing_inset_models = set()
+                if isinstance(self.inset, dict):
+                    existing_inset_models = {k for k, v in self.inset.items()}
+                if key not in existing_inset_models:
+                    self.create_lgr_models()
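+
+    # Example configuration (an illustrative sketch): a minimal setup_grid:
+    # block of the kind this method consumes; the origin coordinates, spacing,
+    # and EPSG code below are placeholders:
+    #
+    #     setup_grid:
+    #       xoff: 500000.      # lower left x-coordinate, in CRS units
+    #       yoff: 1200000.     # lower left y-coordinate, in CRS units
+    #       rotation: 0.
+    #       dxy: 250           # uniform cell spacing, in CRS units
+    #       crs: 'epsg:5070'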
+
+
+
+    def load_grid(self, gridfile=None):
+        """Load model grid information from a json or yml file."""
+        if gridfile is None:
+            if os.path.exists(self.cfg['setup_grid']['grid_file']):
+                gridfile = self.cfg['setup_grid']['grid_file']
+        print('Loading model grid information from {}'.format(gridfile))
+        self.cfg['grid'] = load(gridfile)
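+
+    # Example usage (an illustrative sketch): reload a previously written grid
+    # definition instead of rebuilding it; the file path is a placeholder:
+    #
+    #     m.load_grid('postproc/model_grid.json')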
+
+
+    def setup_sfr(self, **kwargs):
+        package = 'sfr'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+
+        # input
+        flowlines = self.cfg['sfr'].get('source_data', {}).get('flowlines')
+        if flowlines is not None:
+            if 'nhdplus_paths' in flowlines.keys():
+                nhdplus_paths = flowlines['nhdplus_paths']
+                for f in nhdplus_paths:
+                    if not os.path.exists(f):
+                        print('SFR setup: missing input file: {}'.format(f))
+                        nhdplus_paths.remove(f)
+                if len(nhdplus_paths) == 0:
+                    return
+
+                # create an sfrmaker.Lines instance
+                bbox_filter = project(self.bbox, self.modelgrid.crs, 'epsg:4269').bounds
+                lines = Lines.from_nhdplus_v2(NHDPlus_paths=nhdplus_paths,
+                                              bbox_filter=bbox_filter)
+            else:
+                for key in ['filename', 'filenames']:
+                    if key in flowlines:
+                        kwargs = flowlines.copy()
+                        kwargs['shapefile'] = kwargs.pop(key)
+                        check_source_files(kwargs['shapefile'])
+                        if 'epsg' not in kwargs:
+                            try:
+                                from gisutils import get_shapefile_crs
+                                shapefile_crs = get_shapefile_crs(kwargs['shapefile'])
+                            except Exception as e:
+                                print(e)
+                                msg = ('Need gis-utils >= 0.2 to get the crs'
+                                       ' for shapefile: {}\nPlease pip install '
+                                       '--upgrade gis-utils'.format(kwargs['shapefile']))
+                                print(msg)
+                        else:
+                            shapefile_crs = pyproj.crs.CRS.from_epsg(kwargs['epsg'])
+                        authority = shapefile_crs.to_authority()
+                        if authority is not None:
+                            shapefile_crs = pyproj.CRS.from_user_input(shapefile_crs.to_authority())
+
+                        bbox_filter = self.bbox.bounds
+                        if shapefile_crs != self.modelgrid.crs:
+                            bbox_filter = project(self.bbox, self.modelgrid.crs, shapefile_crs).bounds
+                        kwargs['bbox_filter'] = bbox_filter
+                        # create an sfrmaker.Lines instance
+                        kwargs = get_input_arguments(kwargs, Lines.from_shapefile)
+                        lines = Lines.from_shapefile(**kwargs)
+                        break
+        else:
+            return
+
+        # output
+        output_path = self.cfg['sfr'].get('output_path')
+        if output_path is not None:
+            if not os.path.isdir(output_path):
+                os.makedirs(output_path)
+        else:
+            output_path = self.cfg['postprocessing']['output_folders']['shapefiles']
+            self.cfg['sfr']['output_path'] = output_path
+
+        # create the isfr array (where SFR cells will be populated)
+        if self.version == 'mf6':
+            active_cells = np.sum(self.idomain >= 1, axis=0) > 0
+            # For models with LGR, set the LGR area to isfr=0
+            # to prevent SFR from being generated within the LGR area;
+            # needed for LGR models that only have refinement
+            # in some layers (in other words, active parent model cells
+            # below the LGR inset)
+            if self.lgr:
+                active_cells[self._lgr_idomain2d == 0] = 0
+        else:
+            active_cells = np.sum(self.ibound >= 1, axis=0) > 0
+        # only include active cells that don't have another boundary condition
+        # (besides the wel package)
+        isfr = active_cells & (self._isbc2d <= 0)
+
+        # kludge to get sfrmaker to work with the modelgrid
+        self.modelgrid.model_length_units = self.length_units
+
+        # create an sfrmaker.SFRData instance from the Lines instance
+        to_sfr_kwargs = self.cfg['sfr'].copy()
+        if not self.cfg['sfr'].get('sfrmaker_options'):
+            self.cfg['sfr']['sfrmaker_options'] = {}
+        to_sfr_kwargs.update(self.cfg['sfr']['sfrmaker_options'])
+        sfr = lines.to_sfr(grid=self.modelgrid,
+                           isfr=isfr,
+                           model=self,
+                           **to_sfr_kwargs)
+        if self.cfg['sfr'].get('set_streambed_top_elevations_from_dem'):
+            warnings.warn('sfr: set_streambed_top_elevations_from_dem option is now under '
+                          'sfr: sfrmaker_options',
+                          DeprecationWarning)
+            self.cfg['sfr']['sfrmaker_options']['set_streambed_top_elevations_from_dem'] = True
+        if self.cfg['sfr']['sfrmaker_options'].get('set_streambed_top_elevations_from_dem'):
+            dem_kwargs = self.cfg['sfr']['sfrmaker_options'].get('set_streambed_top_elevations_from_dem')
+            if not isinstance(dem_kwargs, dict):
+                dem_kwargs = {}
+                error_msg = (
+                    "If set_streambed_top_elevations_from_dem=True, "
+                    "a dem block is needed in source_data for the SFR package. "
+                    "Otherwise, set_streambed_top_elevations_from_dem should be "
+                    "a block with arguments to "
+                    "sfrmaker.SFRData.set_streambed_top_elevations_from_dem.")
+                assert 'dem' in self.cfg['sfr'].get('source_data', {}), error_msg
+                dem_kwargs.update(self.cfg['sfr']['source_data']['dem'])
+            sfr.set_streambed_top_elevations_from_dem(**dem_kwargs)
+        else:
+            sfr.reach_data['strtop'] = sfr.interpolate_to_reaches('elevup', 'elevdn')
+
+        # assign layers to the sfr reaches
+        botm = self.dis.botm.array.copy()
+        if self.version == 'mf6':
+            idomain = self.dis.idomain.array
+        else:
+            idomain = self.bas6.ibound.array
+        layers, new_botm = assign_layers(sfr.reach_data,
+                                         botm_array=botm,
+                                         idomain=idomain)
+        sfr.reach_data['k'] = layers
+        if new_botm is not None:
+            # run thru setup_array so that the DIS input remains open/close
+            self._setup_array('dis', 'botm',
+                              data={i: arr for i, arr in enumerate(new_botm)},
+                              datatype='array3d', write_fmt='%.2f', dtype=int)
+            # reset the bottom array in flopy (and in memory)
+            self.dis.botm = new_botm
+            # set the bottom array to external files
+            if self.version == 'mf6':
+                self.dis.botm = self.cfg['dis']['griddata']['botm']
+            else:
+                self.dis.botm = self.cfg['dis']['botm']
+            print('\nModel cell bottom elevations adjusted after assigning '
+                  'SFR reaches to layers\n(to accommodate SFR reach bottoms '
+                  'below the previous model bottom)\n')
+
+        # option to convert reaches to the River Package
+        if self.cfg['sfr'].get('to_riv'):
+            warnings.warn('sfr: to_riv option is now under sfr: sfrmaker_options',
+                          DeprecationWarning)
+            self.cfg['sfr']['sfrmaker_options']['to_riv'] = self.cfg['sfr'].get('to_riv')
+        if self.cfg['sfr'].get('sfrmaker_options', {}).get('to_riv'):
+            rivdata = sfr.to_riv(line_ids=self.cfg['sfr']['sfrmaker_options']['to_riv'],
+                                 drop_in_sfr=True)
+            # set up the RIV package from the SFRmaker-derived rivdata
+            # and any user input;
+            # do this instead of 2 separate packages
+            # to avoid having two sets of external files
+            self.setup_riv(rivdata, **self.cfg['riv'], **self.cfg['riv']['mfsetup_options'])
+            rivdata_filename = self.cfg['riv']['output_files']['rivdata_file'].format(self.name)
+            rivdata.write_table(os.path.join(self._tables_path, rivdata_filename))
+            rivdata.write_shapefiles('{}/{}'.format(self._shapefiles_path, self.name))
+
+        # optional routing input
+        # (for a complete representation of a larger or more detailed
+        # stream network that may be culled in the SFR package)
+        sd = self.cfg['sfr'].get('source_data', {})
+        routing_input_key = [k for k in sd.keys() if 'routing' in k]
+        routing_input = None
+        if len(routing_input_key) > 0:
+            routing_input = sd.get(routing_input_key[0])
+            routing = pd.read_csv(routing_input['filename'])
+            routing = dict(zip(routing[routing_input['id_column']],
+                               routing[routing_input['routing_column']]))
+            # set any values (downstream lines) not in keys (upstream lines)
+            # to 0 (outlet condition)
+            routing = {k: v if v in routing.keys() else 0
+                       for k, v in routing.items()}
+        # otherwise, use the _original_routing attached to the Lines instance
+        else:
+            routing = lines._original_routing
+
+        # add inflows
+        inflows_input = self.cfg['sfr'].get('source_data', {}).get('inflows')
+        if inflows_input is not None:
+            # resample inflows to model stress periods
+            inflows_input['id_column'] = inflows_input['line_id_column']
+            sd = TransientTabularSourceData.from_config(inflows_input,
+                                                        dest_model=self)
+            inflows_by_stress_period = sd.get_data()
+
+            missing_sites = set(inflows_by_stress_period[inflows_input['id_column']]). \
+                difference(routing.keys())
+            if any(missing_sites):
+                # cast IDs to strings for compatibility with SFRmaker > 0.11.3
+                # for now, assume IDs are numeric; future updates to SFRmaker
+                # may eventually allow for alphanumeric IDs
+                inflows_by_stress_period[inflows_input['id_column']] = \
+                    inflows_by_stress_period[inflows_input['id_column']].astype(int).astype(str)
+
+            # check if all inflow sites are included in the sfr network
+            missing_sites = set(inflows_by_stress_period[inflows_input['id_column']]). \
+                difference(routing.keys())
+            # if there are still missing sites, raise an error
+            if any(missing_sites):
+                raise KeyError(('inflow sites {} are not within the model sfr network. '
+                                'Please supply an inflows_routing source_data block '
+                                '(see the shellmound example config file)'.format(missing_sites)))
+
+            # add the resampled inflows to the SFR package
+            inflows_input['data'] = inflows_by_stress_period
+            inflows_input['flowline_routing'] = routing
+            if self.version == 'mf6':
+                inflows_input['variable'] = 'inflow'
+                method = sfr.add_to_perioddata
+            else:
+                inflows_input['variable'] = 'flow'
+                method = sfr.add_to_segment_data
+            kwargs = get_input_arguments(inflows_input.copy(), method)
+            method(**kwargs)
+
+        # add runoff
+        runoff_input = self.cfg['sfr'].get('source_data', {}).get('runoff')
+        if runoff_input is not None:
+            # resample runoff to model stress periods
+            runoff_input['id_column'] = runoff_input['line_id_column']
+            sd = TransientTabularSourceData.from_config(runoff_input,
+                                                        dest_model=self)
+            runoff_by_stress_period = sd.get_data()
+
+            # check if all sites are included in the sfr network
+            missing_sites = set(runoff_by_stress_period[runoff_input['id_column']]). \
+                difference(routing.keys())
+            if any(missing_sites):
+                warnings.warn(('runoff sites {} are not within the model sfr network. '
+                               'Please supply an inflows_routing source_data block '
+                               '(see the shellmound example config file)'.format(missing_sites)),
+                              UserWarning)
+
+            # add the resampled runoff to the SFR package
+            runoff_input['data'] = runoff_by_stress_period
+            runoff_input['flowline_routing'] = routing
+            runoff_input['variable'] = 'runoff'
+            runoff_input['distribute_flows_to_reaches'] = True
+            if self.version == 'mf6':
+                method = sfr.add_to_perioddata
+            else:
+                method = sfr.add_to_segment_data
+            kwargs = get_input_arguments(runoff_input.copy(), method)
+            method(**kwargs)
+
+        # add observations
+        observations_input = self.cfg['sfr'].get('source_data', {}).get('observations')
+        if self.version != 'mf6':
+            sfr.gage_starting_unit_number = self.cfg['gag']['starting_unit_number']
+        if observations_input is not None:
+            key = 'filename' if 'filename' in observations_input else 'filenames'
+            observations_input['data'] = observations_input[key]
+            kwargs = get_input_arguments(observations_input.copy(), sfr.add_observations)
+            obsdata = sfr.add_observations(**kwargs)
+            # resample observations to model stress periods; write to a table
+
+        # write the reach and segment data tables
+        sfr.write_tables('{}/{}'.format(self._tables_path, self.name))
+
+        # export shapefiles of the lines, routing, cell polygons, inlets and outlets
+        sfr.write_shapefiles('{}/{}'.format(self._shapefiles_path, self.name))
+
+        # create the flopy SFR package instance
+        sfr.create_modflow_sfr2(model=self, istcb2=223)
+        if self.version != 'mf6':
+            sfr_package = sfr.modflow_sfr2
+        else:
+            # pass options kwargs through to the mf6 constructor
+            kwargs = flatten({k: v for k, v in self.cfg[package].items() if k not in
+                              {'source_data', 'flowlines', 'inflows', 'observations',
+                               'inflows_routing', 'dem', 'sfrmaker_options'}})
+            kwargs = get_input_arguments(kwargs, mf6.ModflowGwfsfr)
+            sfr_package = sfr.create_mf6sfr(model=self, **kwargs)
+            # monkey patch the ModflowGwfsfr instance to behave like ModflowSfr2
+            sfr_package.reach_data = sfr.modflow_sfr2.reach_data
+
+        # attach the sfrmaker.SFRData instance as an attribute
+        self.sfrdata = sfr
+
+        # reset dependent arrays
+        self._reset_bc_arrays()
+        if self.version == 'mf6':
+            self._set_idomain()
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return sfr_package
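+
+    # Example configuration (an illustrative sketch): an sfr: block with a
+    # flowlines shapefile and DEM-based streambed tops; the file paths and
+    # column names below are placeholders:
+    #
+    #     sfr:
+    #       source_data:
+    #         flowlines:
+    #           filename: 'data/flowlines.shp'
+    #           id_column: 'COMID'
+    #           routing_column: 'tocomid'
+    #         dem:
+    #           filename: 'data/dem.tif'
+    #           elevation_units: 'meters'
+    #       sfrmaker_options:
+    #         set_streambed_top_elevations_from_dem: True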
+
+    def setup_solver(self):
+        if self.version == 'mf6':
+            solver_package = 'ims'
+        else:
+            solver_package = 'nwt'
+        assert solver_package not in self.package_list
+        setup_method_name = 'setup_{}'.format(solver_package)
+        package_setup = getattr(self, setup_method_name, None)
+        package_setup()
+
+    def setup_packages(self, reset_existing=True):
+        package_list = self.package_list
+        if not reset_existing:
+            package_list = [p for p in package_list if p.upper() not in self.get_package_list()]
+        for pkg in package_list:
+            setup_method_name = f'setup_{pkg}'
+            package_setup = getattr(self, setup_method_name, None)
+            if package_setup is None:
+                print('{} package not supported for MODFLOW version={}'.format(pkg.upper(), self.version))
+                continue
+            if not callable(package_setup):
+                package_setup = getattr(MFsetupMixin, 'setup_{}'.format(pkg.strip('6')))
+            # avoid multiple package instances for now, except for obs
+            if self.version != 'mf6' or pkg == 'obs' or not hasattr(self, pkg):
+                package_setup(**self.cfg[pkg], **self.cfg[pkg]['mfsetup_options'])
+
+
+
+    @classmethod
+    def load_cfg(cls, yamlfile, verbose=False):
+        """Loads a configuration file, with default settings
+        specific to the MFnwtModel or MF6model class.
+
+        Parameters
+        ----------
+        yamlfile : str (filepath)
+            Configuration file in YAML format with model setup information.
+        verbose : bool
+
+        Returns
+        -------
+        cfg : dict (configuration dictionary)
+        """
+        return load_cfg(yamlfile, verbose=verbose, default_file=cls.default_file)
+
+
+
+    @classmethod
+    def setup_from_yaml(cls, yamlfile, verbose=False):
+        """Make a model from scratch, using information in a yamlfile.
+
+        Parameters
+        ----------
+        yamlfile : str (filepath)
+            Configuration file in YAML format with model setup information.
+        verbose : bool
+
+        Returns
+        -------
+        m : model instance
+        """
+        cfg = cls.load_cfg(yamlfile, verbose=verbose)
+        return cls.setup_from_cfg(cfg, verbose=verbose)
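+
+    # Example usage (an illustrative sketch): build a complete model from a
+    # configuration file; the file name is a placeholder:
+    #
+    #     m = MF6model.setup_from_yaml('my_model_config.yaml')
+    #     m.write_input()  # write the MODFLOW input files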
+
+
+
+    @classmethod
+    def setup_from_cfg(cls, cfg, verbose=False):
+        """Make a model from scratch, using information in a configuration dictionary.
+
+        Parameters
+        ----------
+        cfg : dict
+            Configuration dictionary, as produced by the model.load_cfg method.
+        verbose : bool
+
+        Returns
+        -------
+        m : model instance
+        """
+        cfg_filename = Path(cfg.get('filename', '')).name
+        msg = f"\nSetting up {cfg['model']['modelname']} model"
+        if len(cfg_filename) > 0:
+            msg += f" from configuration in {cfg_filename}"
+        print(msg)
+        t0 = time.time()
+
+        m = cls(cfg=cfg)
+
+        # make a grid if one isn't already specified
+        if 'grid' not in m.cfg.keys():
+            m.setup_grid()
+
+        # establish time discretization, including TDIS setup for MODFLOW-6
+        m.setup_tdis()
+
+        # set up the solver
+        m.setup_solver()
+
+        # set up all of the packages specified in the config file
+        m.setup_packages(reset_existing=False)
+
+        # LGR inset model(s)
+        if m.inset is not None:
+            for k, v in m.inset.items():
+                if v._is_lgr:
+                    v.setup_packages()
+            m.setup_lgr_exchanges()
+
+        print('finished setting up model in {:.2f}s'.format(time.time() - t0))
+        print('\n{}'.format(m))
+        return m
+
+
+class MFnwtModel(MFsetupMixin, Modflow):
+    """Class representing a MODFLOW-NWT model"""
+    default_file = 'mfnwt_defaults.yml'
+
+    def __init__(self, parent=None, cfg=None,
+                 modelname='model', exe_name='mfnwt',
+                 version='mfnwt', model_ws='.',
+                 external_path='external/', **kwargs):
+        defaults = {'parent': parent,
+                    'modelname': modelname,
+                    'exe_name': exe_name,
+                    'version': version,
+                    'model_ws': model_ws,
+                    'external_path': external_path,
+                    }
+        # load the configuration, if supplied
+        if cfg is not None:
+            if not isinstance(cfg, dict):
+                cfg = self.load_cfg(cfg)
+            cfg = self._parse_model_kwargs(cfg)
+            defaults.update(cfg['model'])
+            kwargs = {k: v for k, v in kwargs.items() if k not in defaults}
+        # otherwise, pass arguments on to the flopy constructor
+        args = get_input_arguments(defaults, Modflow,
+                                   exclude='packages')
+        Modflow.__init__(self, **args, **kwargs)
+        MFsetupMixin.__init__(self, parent=parent)
+
+        # default configuration
+        self._package_setup_order = ['dis', 'bas6', 'upw', 'rch', 'oc',
+                                     'chd', 'ghb', 'lak', 'sfr', 'riv', 'wel', 'mnw2',
+                                     'gag', 'hyd']
+        # set up the model configuration dictionary,
+        # starting with the defaults
+        self.cfg = load(self.source_path / self.default_file)
+        self.relative_external_paths = self.cfg.get('model', {}).get('relative_external_paths', True)
+        # set the model workspace and change the working directory to there
+        self.model_ws = self._get_model_ws(cfg=cfg)
+        # update the defaults with the user-specified config. (loaded above);
+        # set up and validate the model configuration dictionary
+        self._set_cfg(cfg)
+
+        # set the list file path
+        self.lst.file_name = [self.cfg['model']['list_filename_fmt'].format(self.name)]
+
+        # the "drop thin cells" option is not available for MODFLOW-2005 models
+        self._drop_thin_cells = False
+
+        # property arrays
+        self._ibound = None
+
+        # delete the temporary 'original-files' folder
+        # if it already exists, to avoid side effects from stale files
+        shutil.rmtree(self.tmpdir, ignore_errors=True)
+
+    def __repr__(self):
+        return MFsetupMixin.__repr__(self)
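+
+    # Example usage (an illustrative sketch): create a MODFLOW-NWT model from
+    # a configuration file; the file name is a placeholder:
+    #
+    #     m = MFnwtModel(cfg='nwt_model_config.yaml')
+    #     m.setup_grid()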
+
+    @property
+    def nlay(self):
+        return self.cfg['dis'].get('nlay', 1)
+
+    @property
+    def length_units(self):
+        return lenuni_text[self.cfg['dis']['lenuni']]
+
+    @property
+    def time_units(self):
+        return itmuni_text[self.cfg['dis']['itmuni']]
+
+    @property
+    def perioddata(self):
+        """DataFrame summarizing stress period information.
+
+        Columns:
+
+        ============== =========================================
+        start_datetime Start date of each model stress period
+        end_datetime   End date of each model stress period
+        time           MODFLOW elapsed time, in days
+        per            Model stress period number
+        perlen         Stress period length (days)
+        nstp           Number of timesteps in stress period
+        tsmult         Timestep multiplier
+        steady         Steady-state or transient
+        oc             Output control setting for MODFLOW
+        parent_sp      Corresponding parent model stress period
+        ============== =========================================
+
+        TODO: the code here might still need to be adapted to
+        parallel the code in MF6model.perioddata, to work with
+        parent models that are already loaded but have no configuration.
+        """
+        if self._perioddata is None:
+            default_start_datetime = self.cfg['dis'].get('start_date_time', '1970-01-01')
+            tdis_perioddata_config = self.cfg['dis']
+            nper = self.cfg['dis'].get('nper')
+            steady = self.cfg['dis'].get('steady')
+            parent_stress_periods = None
+            if self.parent is not None:
+                parent_stress_periods = self.cfg['parent'].get('copy_stress_periods')
+            perioddata = setup_perioddata(
+                self,
+                tdis_perioddata_config=tdis_perioddata_config,
+                default_start_datetime=default_start_datetime,
+                nper=nper, steady=steady, time_units=self.time_units,
+                parent_model=self.parent,
+                parent_stress_periods=parent_stress_periods,
+            )
+            self._perioddata = perioddata
+            # reset the nper property so that it will reference the perioddata table
+            self._nper = None
+            self._perioddata.to_csv(f'{self._tables_path}/stress_period_data.csv', index=False)
+            # update the model configuration
+            if 'parent_sp' in perioddata.columns:
+                self.cfg['parent']['copy_stress_periods'] = perioddata['parent_sp'].tolist()
+
+        return self._perioddata
+
+    @property
+    def ipakcb(self):
+        """By default, write everything to one cell budget file."""
+        return self.cfg['upw'].get('ipakcb', 53)
+
+    @property
+    def ibound(self):
+        """3D array indicating which cells will be included in the simulation.
+        Made a property so that it can be easily updated when any packages
+        it depends on change.
+        """
+        if self._ibound is None and 'BAS6' in self.get_package_list():
+            self._set_ibound()
+        return self._ibound
+
+    def _set_ibound(self):
+        """Remake the ibound array from the source data,
+        screening out no-data values in the top and bottom arrays,
+        and so that cells above SFR reaches are inactive."""
+        ibound_from_layer_elevations = make_ibound(self.dis.top.array,
+                                                   self.dis.botm.array,
+                                                   nodata=self._nodata_value,
+                                                   minimum_layer_thickness=self.cfg['dis'].get(
+                                                       'minimum_layer_thickness', 1),
+                                                   tol=1e-4)
+
+        # include cells that are active in the existing ibound array
+        # and cells not inactivated on the basis of layer elevations
+        ibound = (self.bas6.ibound.array > 0) & (ibound_from_layer_elevations >= 1)
+        ibound = ibound.astype(int)
+
+        # remove cells that coincide with lakes
+        ibound[self.isbc == 1] = 0
+
+        # remove cells that are above stream cells
+        if self.get_package('sfr') is not None:
+            ibound = deactivate_idomain_above(ibound, self.sfr.reach_data)
+        # remove cells that are above ghb cells
+        if self.get_package('ghb') is not None:
+            ibound = deactivate_idomain_above(ibound, self.ghb.stress_period_data[0])
+
+        # inactivate any isolated cells that could cause problems with the solution
+        ibound = find_remove_isolated_cells(ibound, minimum_cluster_size=20)
+
+        self._ibound = ibound
+        # re-write the input files
+        self._setup_array('bas6', 'ibound', resample_method='nearest',
+                          data={i: arr for i, arr in enumerate(ibound)},
+                          datatype='array3d', write_fmt='%d', dtype=int)
+        self.bas6.ibound = self.cfg['bas6']['ibound']
+
+    def _set_parent(self):
+        """Set attributes related to a parent or source model,
+        if one is specified."""
+
+        if self.cfg['parent'].get('version') == 'mf6':
+            raise NotImplementedError("MODFLOW-6 parent models")
+
+        kwargs = self.cfg['parent'].copy()
+        kwargs['f'] = kwargs.pop('namefile')
+        # load only specified packages that the parent model has
+        packages_in_parent_namefile = get_packages(os.path.join(kwargs['model_ws'],
+                                                                kwargs['f']))
+        load_only = list(set(packages_in_parent_namefile).intersection(
+            set(self.cfg['model'].get('packages', set()))))
+        if 'load_only' not in kwargs:
+            kwargs['load_only'] = load_only
+        if 'skip_load' in kwargs:
+            kwargs['skip_load'] = [s.lower() for s in kwargs['skip_load']]
+            kwargs['load_only'] = [pckg for pckg in kwargs['load_only']
+                                   if pckg not in kwargs['skip_load']]
+        kwargs = get_input_arguments(kwargs, fm.Modflow.load, warn=False)
+
+        print('loading parent model {}...'.format(os.path.join(kwargs['model_ws'],
+                                                               kwargs['f'])))
+        t0 = time.time()
+        self._parent = fm.Modflow.load(**kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+
+        # parent model units
+        if 'length_units' not in self.cfg['parent']:
+            self.cfg['parent']['length_units'] = lenuni_text[self.parent.dis.lenuni]
+        if 'time_units' not in self.cfg['parent']:
+            self.cfg['parent']['time_units'] = itmuni_text[self.parent.dis.itmuni]
+
+        # set the parent model grid from mg_kwargs if not None;
+        # otherwise, convert the parent model grid to MFsetupGrid
+        mg_kwargs = self.cfg['parent'].get('SpatialReference',
+                                           self.cfg['parent'].get('modelgrid', None))
+        self._set_parent_modelgrid(mg_kwargs)
+
+        # parent model perioddata
+        if not hasattr(self.parent, 'perioddata'):
+            kwargs = {}
+            kwargs['start_date_time'] = self.cfg['parent'].get('start_date_time',
+                                                               self.cfg['model'].get('start_date_time',
+                                                                                     '1970-01-01'))
+            kwargs['nper'] = self.parent.nper
+            kwargs['model_time_units'] = self.cfg['parent']['time_units']
+            for var in ['perlen', 'steady', 'nstp', 'tsmult']:
+                kwargs[var] = self.parent.dis.__dict__[var].array
+            kwargs = get_input_arguments(kwargs, setup_perioddata_group)
+            kwargs['oc_saverecord'] = {}
+            self._parent.perioddata = setup_perioddata_group(**kwargs)
+
+        # default_source_data, where omitted configuration input is
+        # obtained from the parent model by default;
+        # set default_source_data to True by default if it isn't specified
+        if self.cfg['parent'].get('default_source_data') is None:
+            self.cfg['parent']['default_source_data'] = True
+        if self.cfg['parent'].get('default_source_data'):
+            self._parent_default_source_data = True
+            if self.cfg['dis'].get('nlay') is None:
+                self.cfg['dis']['nlay'] = self.parent.dis.nlay
+            parent_start_date_time = self.cfg.get('parent', {}).get('start_date_time')
+            if self.cfg['dis'].get('start_date_time', '1970-01-01') == '1970-01-01' \
+                    and parent_start_date_time is not None:
+                self.cfg['dis']['start_date_time'] = self.cfg['parent']['start_date_time']
+            if self.cfg['dis'].get('nper') is None:
+                self.cfg['dis']['nper'] = self.parent.dis.nper
+            parent_periods = get_parent_stress_periods(self.parent, nper=self.cfg['dis']['nper'],
+                                                       parent_stress_periods=self.cfg['parent']['copy_stress_periods'])
+            for var in ['perlen', 'nstp', 'tsmult', 'steady']:
+                if self.cfg['dis'].get(var) is None:
+                    self.cfg['dis'][var] = self.parent.dis.__dict__[var].array[parent_periods]
+
+    def _update_grid_configuration_with_dis(self):
+        """Update the grid configuration with any information supplied to the dis package
+        (so that settings specified for the DIS package have priority). This method
+        is called by MFsetupMixin.setup_grid.
+        """
+        for param in ['nrow', 'ncol', 'delr', 'delc']:
+            if param in self.cfg['dis']:
+                self.cfg['setup_grid'][param] = self.cfg['dis'][param]
+
+    def setup_dis(self, **kwargs):
+        """Set up the Discretization (DIS) package."""
+        package = 'dis'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+
+        # resample the top from the DEM
+        if self.cfg['dis']['remake_top']:
+            self._setup_array(package, 'top', datatype='array2d',
+                              resample_method='linear',
+                              write_fmt='%.2f')
+
+        # make the botm array
+        self._setup_array(package, 'botm', datatype='array3d',
+                          resample_method='linear',
+                          write_fmt='%.2f')
+
+        # put together the keyword arguments for the dis package
+        kwargs = self.cfg['grid'].copy()  # nrow, ncol, delr, delc
+        kwargs.update(self.cfg['dis'])  # nper, nlay, etc.
+        kwargs = get_input_arguments(kwargs, fm.ModflowDis)
+        # we need flopy to read the intermediate files
+        # (it will write the files in cfg)
+        lmult = convert_length_units('meters', self.length_units)
+        kwargs.update({'top': self.cfg['intermediate_data']['top'][0],
+                       'botm': self.cfg['intermediate_data']['botm'],
+                       'nper': self.nper,
+                       'delc': self.modelgrid.delc * lmult,
+                       'delr': self.modelgrid.delr * lmult
+                       })
+        for arg in ['perlen', 'nstp', 'tsmult', 'steady']:
+            kwargs[arg] = self.perioddata[arg].values
+
+        dis = fm.ModflowDis(model=self, **kwargs)
+        self._perioddata = None  # reset perioddata
+        self.setup_grid()  # reset the model grid
+        self._reset_bc_arrays()
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return dis
+
+
+    def setup_tdis(self, **kwargs):
+        """Accesses the perioddata property to establish time discretization.
+        Its only purpose is to conform to the same syntax as mf6 for
+        MFsetupMixin.setup_from_yaml().
+        """
+        self.perioddata
+
+    def setup_upw(self, **kwargs):
+        """Set up the Upstream Weighting (UPW) package."""
+        package = 'upw'
+        print('\nSetting up {} package...'.format(package.upper()))
+        t0 = time.time()
+        hiKlakes_value = float(self.cfg['parent'].get('hiKlakes_value', 1e4))
+
+        # copy transient variables if they were included in the config file;
+        # defaults are hard-coded, and arrays in the parent model take priority
+        # over config file values in the case that ss and sy weren't entered
+        hk = self.cfg['upw'].get('hk')
+        vka = self.cfg['upw'].get('vka')
+        default_sy = 0.1
+        default_ss = 1e-6
+
+        # determine which hk, vka to use;
+        # load the parent upw or lpf if it's needed and not loaded
+        source_package = package
+        if None in [hk, vka] and \
+                'UPW' not in self.parent.get_package_list() and \
+                'LPF' not in self.parent.get_package_list():
+            for ext, pckgcls in {'upw': fm.ModflowUpw,
+                                 'lpf': fm.ModflowLpf,
+                                 }.items():
+                pckgfile = '{}/{}.{}'.format(self.parent.model_ws, self.parent.name, ext)
+                if os.path.exists(pckgfile):
+                    upw = pckgcls.load(pckgfile, self.parent)
+                    source_package = ext
+                    break
+
+        self._setup_array(package, 'hk', vmin=0, vmax=hiKlakes_value, resample_method='linear',
+                          source_package=source_package, datatype='array3d', write_fmt='%.6e')
+        self._setup_array(package, 'vka', vmin=0, vmax=hiKlakes_value, resample_method='linear',
+                          source_package=source_package, datatype='array3d', write_fmt='%.6e')
+        if np.any(~self.dis.steady.array):
+            self._setup_array(package, 'sy', vmin=0, vmax=1, resample_method='linear',
+                              source_package=source_package,
+                              datatype='array3d', write_fmt='%.6e')
+            self._setup_array(package, 'ss', vmin=0, vmax=1, resample_method='linear',
+                              source_package=source_package,
+                              datatype='array3d', write_fmt='%.6e')
+            sy = self.cfg['intermediate_data']['sy']
+            ss = self.cfg['intermediate_data']['ss']
+        else:
+            sy = default_sy
+            ss = default_ss
+
+        upw = fm.ModflowUpw(self, hk=self.cfg['intermediate_data']['hk'],
+                            vka=self.cfg['intermediate_data']['vka'],
+                            sy=sy,
+                            ss=ss,
+                            layvka=self.cfg['upw']['layvka'],
+                            laytyp=self.cfg['upw']['laytyp'],
+                            hdry=self.cfg['upw']['hdry'],
+                            ipakcb=self.cfg['upw']['ipakcb'])
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return upw
+
+
+    def setup_mnw2(self, **kwargs):
+
+        print('setting up MNW2 package...')
+        t0 = time.time()
+
+        # added wells
+        # todo: generalize MNW2 source data input; add auto-reprojection
+        added_wells = self.cfg['mnw'].get('added_wells')
+        if added_wells is not None:
+            if isinstance(added_wells, str):
+                aw = pd.read_csv(added_wells)
+                aw.rename(columns={'name': 'comments'}, inplace=True)
+            elif isinstance(added_wells, dict):
+                added_wells = {k: v for k, v in added_wells.items() if v is not None}
+                if len(added_wells) > 0:
+                    aw = pd.DataFrame(added_wells).T
+                    aw['comments'] = aw.index
+                else:
+                    aw = None
+            elif isinstance(added_wells, pd.DataFrame):
+                aw = added_wells
+                aw['comments'] = aw.index
+            else:
+                raise IOError('unrecognized added_wells input')
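+            # Example (hedged sketch) of an added_wells dict as it might appear
+            # in the configuration file; the keys and values shown here are
+            # assumptions that mirror the parsing logic above:
+            # added_wells:
+            #   well1: {per: 0, x: 563000., y: 428000., depth: 30., flux: -500.}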
+
+            k, ztop, zbotm = 0, 0, 0
+            zpump = None
+
+            wells = aw.groupby('comments').first()
+            periods = aw
+            if 'x' in wells.columns and 'y' in wells.columns:
+                wells['i'], wells['j'] = self.modelgrid.intersect(wells['x'].values,
+                                                                  wells['y'].values)
+            if 'depth' in wells.columns:
+                wellhead_elevations = self.dis.top.array[wells.i, wells.j]
+                ztop = wellhead_elevations - (5 * .3048)  # 5 ft casing
+                zbotm = wellhead_elevations - wells.depth
+                zpump = zbotm + 1  # 1 meter off bottom
+            elif 'ztop' in wells.columns and 'zbotm' in wells.columns:
+                ztop = wells.ztop
+                zbotm = wells.zbotm
+                zpump = zbotm + 1
+            if 'k' in wells.columns:
+                k = wells.k
+
+            for var in ['losstype', 'pumploc', 'rw', 'rskin', 'kskin']:
+                if var not in wells.columns:
+                    wells[var] = self.cfg['mnw']['defaults'][var]
+
+            nd = fm.ModflowMnw2.get_empty_node_data(len(wells))
+            nd['k'] = k
+            nd['i'] = wells.i
+            nd['j'] = wells.j
+            nd['ztop'] = ztop
+            nd['zbotm'] = zbotm
+            nd['wellid'] = wells.index
+            nd['losstype'] = wells.losstype
+            nd['pumploc'] = wells.pumploc
+            nd['rw'] = wells.rw
+            nd['rskin'] = wells.rskin
+            nd['kskin'] = wells.kskin
+            if zpump is not None:
+                nd['zpump'] = zpump
+
+            spd = {}
+            for per, group in periods.groupby('per'):
+                spd_per = fm.ModflowMnw2.get_empty_stress_period_data(len(group))
+                spd_per['wellid'] = group.comments
+                spd_per['qdes'] = group.flux
+                spd[per] = spd_per
+            itmp = []
+            for per in range(self.nper):
+                if per in spd.keys():
+                    itmp.append(len(spd[per]))
+                else:
+                    itmp.append(0)
+
+            mnw = fm.ModflowMnw2(self, mnwmax=len(wells), ipakcb=self.ipakcb,
+                                 mnwprnt=1,
+                                 node_data=nd, stress_period_data=spd,
+                                 itmp=itmp
+                                 )
+            print("finished in {:.2f}s\n".format(time.time() - t0))
+            return mnw
+        else:
+            print('No wells specified in configuration file!\n')
+            return None
+
+    def setup_lak(self, **kwargs):
+
+        print('setting up LAKE package...')
+        t0 = time.time()
+        # if a shapefile of lakes was included,
+        # lakarr should be automatically built by the property method
+        if self.lakarr.sum() == 0:
+            print("lakes_shapefile not specified, or no lakes in model area")
+            return
+
+        # source data
+        source_data = self.cfg['lak']['source_data']
+        self.lake_info = setup_lake_info(self)
+        nlakes = len(self.lake_info)
+
+        # set up the tab files, if any
+        tab_files_argument = None
+        tab_units = None
+        start_tab_units_at = 150  # default starting number for iunittab
+        if 'stage_area_volume_file' in source_data:
+
+            tab_files = setup_lake_tablefiles(self, source_data['stage_area_volume_file'])
+            tab_units = list(range(start_tab_units_at, start_tab_units_at + len(tab_files)))
+
+            # tabfiles aren't rewritten by flopy on package write
+            self.cfg['lak']['tab_files'] = tab_files
+            # kludge to deal with ugliness of lake package external file handling
+            # (need to give path relative to model_ws, not folder that flopy is working in)
+            tab_files_argument = [os.path.relpath(f) for f in tab_files]
+
+        self.setup_external_filepaths('lak', 'lakzones',
+                                      self.cfg['lak']['{}_filename_fmt'.format('lakzones')])
+        self.setup_external_filepaths('lak', 'bdlknc',
+                                      self.cfg['lak']['{}_filename_fmt'.format('bdlknc')],
+                                      file_numbers=list(range(self.nlay)))
+
+        # make the arrays or load them
+        lakzones = make_bdlknc_zones(self.modelgrid, self.lake_info,
+                                     include_ids=self.lake_info['feat_id'],
+                                     littoral_zone_buffer_width=source_data['littoral_zone_buffer_width'])
+        save_array(self.cfg['intermediate_data']['lakzones'][0], lakzones, fmt='%d')
+
+        bdlknc = np.zeros((self.nlay, self.nrow, self.ncol))
+        # make the areal footprint of lakebed leakance from the zones (layer 1)
+        bdlknc[0] = make_bdlknc2d(lakzones,
+                                  self.cfg['lak']['source_data']['littoral_leakance'],
+                                  self.cfg['lak']['source_data']['profundal_leakance'])
+        for k in range(self.nlay):
+            if k > 0:
+                # for each underlying layer, assign profundal leakance to cells where isbc == 1
+                bdlknc[k][self.isbc[k] == 1] = self.cfg['lak']['source_data']['profundal_leakance']
+            save_array(self.cfg['intermediate_data']['bdlknc'][0][k], bdlknc[k], fmt='%.6e')
+
+        # get estimates of stage from the model top, for specifying ranges
+        stages = []
+        for lakid in self.lake_info['lak_id']:
+            loc = self.lakarr[0] == lakid
+            est_stage = self.dis.top.array[loc].min()
+            stages.append(est_stage)
+        stages = np.array(stages)
+
+        # set up stress period data
+        tol = 5  # specify lake stage range as +/- this value
+        ssmn, ssmx = stages - tol, stages + tol
+        stage_range = list(zip(ssmn, ssmx))
+
+        # set up dataset 9
+        # ssmn and ssmx values are only required for steady-state periods > 0
+        self.lake_fluxes = setup_lake_fluxes(self)
+        precip = self.lake_fluxes['precipitation'].tolist()
+        evap = self.lake_fluxes['evaporation'].tolist()
+        flux_data = {}
+        for i, steady in enumerate(self.dis.steady.array):
+            if i > 0 and steady:
+                flux_data_i = []
+                for lake_ssmn, lake_ssmx in zip(ssmn, ssmx):
+                    flux_data_i.append([precip[i], evap[i], 0, 0, lake_ssmn, lake_ssmx])
+            else:
+                flux_data_i = [[precip[i], evap[i], 0, 0]] * nlakes
+            flux_data[i] = flux_data_i
+        options = ['tableinput'] if tab_files_argument is not None else None
+
+        kwargs = self.cfg['lak']
+        kwargs['nlakes'] = len(self.lake_info)
+        kwargs['stages'] = stages
+        kwargs['stage_range'] = stage_range
+        kwargs['flux_data'] = flux_data
+        kwargs['tab_files'] = tab_files_argument  # this needs to be in the order of the lake IDs!
+        kwargs['tab_units'] = tab_units
+        kwargs['options'] = options
+        kwargs['ipakcb'] = self.ipakcb
+        kwargs['lwrt'] = 0
+        kwargs = get_input_arguments(kwargs, fm.mflak.ModflowLak)
+        lak = fm.ModflowLak(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return lak
+
+
+    def setup_chd(self, **kwargs):
+        """Set up the CHD Package.
+        """
+        return self._setup_basic_stress_package(
+            'chd', fm.ModflowChd, ['head'], **kwargs)
+
+
+
+    def setup_drn(self, **kwargs):
+        """Set up the Drain Package.
+        """
+        return self._setup_basic_stress_package(
+            'drn', fm.ModflowDrn, ['elev', 'cond'], **kwargs)
+
+
+
+    def setup_ghb(self, **kwargs):
+        """Set up the General Head Boundary Package.
+        """
+        return self._setup_basic_stress_package(
+            'ghb', fm.ModflowGhb, ['bhead', 'cond'], **kwargs)
+
+
+
+
+    def setup_riv(self, rivdata=None, **kwargs):
+        """Set up the River Package.
+        """
+        return self._setup_basic_stress_package(
+            'riv', fm.ModflowRiv, ['stage', 'cond', 'rbot'],
+            rivdata=rivdata, **kwargs)
+
+
+
+    def setup_wel(self, **kwargs):
+        """Set up the Well Package.
+        """
+        return self._setup_basic_stress_package(
+            'wel', fm.ModflowWel, ['flux'], **kwargs)
+
+
+    def setup_nwt(self, **kwargs):
+
+        print('setting up NWT package...')
+        t0 = time.time()
+        use_existing_file = self.cfg['nwt'].get('use_existing_file')
+        kwargs = self.cfg['nwt']
+        if use_existing_file is not None:
+            # set use_existing_file relative to source path
+            filepath = os.path.join(self._config_path,
+                                    use_existing_file)
+
+            assert os.path.exists(filepath), "Couldn't find {}, need a path to a NWT file".format(filepath)
+            nwt = fm.ModflowNwt.load(filepath, model=self)
+        else:
+            kwargs = get_input_arguments(kwargs, fm.ModflowNwt)
+            nwt = fm.ModflowNwt(self, **kwargs)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return nwt
+
+
+    def setup_hyd(self, **kwargs):
+        """TODO: generalize hydmod setup with specific input requirements"""
+        package = 'hyd'
+        print('setting up HYDMOD package...')
+        t0 = time.time()
+
+        iobs_domain = None
+        if not kwargs['mfsetup_options']['allow_obs_in_bc_cells']:
+            # for now, discard any head observations in same (i, j) column of cells
+            # as a non-well boundary condition,
+            # including lake package lakes and non lake, non well BCs
+            # (high-K lakes are excluded, since we may want head obs at those locations,
+            # to serve as pseudo lake stage observations)
+            iobs_domain = (self.isbc == 1) | np.any(self.isbc > 2)
+
+        # munge the observation data
+        df = setup_head_observations(self,
+                                     obs_package=package,
+                                     obsname_column='hydlbl',
+                                     iobs_domain=iobs_domain,
+                                     **kwargs['source_data'],
+                                     **kwargs['mfsetup_options'])
+
+        # create observation data recarray
+        obsdata = fm.ModflowHyd.get_empty(len(df))
+        for c in obsdata.dtype.names:
+            assert c in df.columns, "Missing observation data field: {}".format(c)
+            obsdata[c] = df[c]
+        nhyd = len(df)
+        hyd = flopy.modflow.ModflowHyd(self, nhyd=nhyd, hydnoh=-999, obsdata=obsdata)
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return hyd
+
+
+    def setup_gag(self, **kwargs):
+
+        print('setting up GAGE package...')
+        t0 = time.time()
+        # set up gage package output for all included lakes
+        ngages = 0
+        nlak_gages = 0
+        starting_unit_number = self.cfg['gag']['starting_unit_number']
+        if self.get_package('lak') is not None:
+            nlak_gages = self.lak.nlakes
+        if nlak_gages > 0:
+            ngages += nlak_gages
+            lak_gagelocs = list(np.arange(1, nlak_gages + 1) * -1)
+            lak_gagerch = [0] * nlak_gages  # dummy list to maintain index position
+            lak_outtype = [self.cfg['gag']['lak_outtype']] * nlak_gages
+            # need minus sign to tell MF to read outtype
+            lake_unit = list(-np.arange(starting_unit_number,
+                                        starting_unit_number + nlak_gages, dtype=int))
+            # TODO: make private attribute to facilitate keeping track of lake IDs
+            lak_files = ['lak{}_{}.ggo'.format(i + 1, hydroid)
+                         for i, hydroid in enumerate(self.cfg['lak']['source_data']['lakes_shapefile']['include_ids'])]
+            # update the starting unit number to avoid collisions with other gage packages
+            starting_unit_number = np.max(np.abs(lake_unit)) + 1
+
+        # need to add streams at some point
+        nstream_gages = 0
+        stream_gageseg = []
+        stream_gagerch = []
+        stream_unit = []
+        stream_outtype = []
+        stream_files = []
+        if self.get_package('sfr') is not None:
+            # observations_input = self.cfg['sfr'].get('source_data', {}).get('observations')
+            # obs_info_files = self.cfg['gag'].get('observation_data')
+            # if obs_info_files is not None:
+            #     # get obs_info_files into dictionary format
+            #     # filename: dict of column names mappings
+            #     if isinstance(obs_info_files, str):
+            #         obs_info_files = [obs_info_files]
+            #     if isinstance(obs_info_files, list):
+            #         obs_info_files = {f: self.cfg['gag']['default_columns']
+            #                           for f in obs_info_files}
+            #     elif isinstance(obs_info_files, dict):
+            #         for k, v in obs_info_files.items():
+            #             if v is None:
+            #                 obs_info_files[k] = self.cfg['gag']['default_columns']
+            #
+            #     print('Reading observation files...')
+            #     check_source_files(obs_info_files.keys())
+            #     dfs = []
+            #     for f, column_info in obs_info_files.items():
+            #         print(f)
+            #         df = read_observation_data(f,
+            #                                    column_info,
+            #                                    column_mappings=self.cfg['hyd'].get('column_mappings'))
+            #         dfs.append(df)  # cull to cols that are needed
+            #     df = pd.concat(dfs, axis=0)
+            df = self.sfrdata.observations
+            nstream_gages = len(df)
+            stream_files = ['{}.ggo'.format(site_no) for site_no in df.obsname]
+            stream_gageseg = df.iseg.tolist()
+            stream_gagerch = df.ireach.tolist()
+            stream_unit = list(np.arange(starting_unit_number,
+                                         starting_unit_number + nstream_gages, dtype=int))
+            stream_outtype = [self.cfg['gag']['sfr_outtype']] * nstream_gages
+            ngages += nstream_gages
+
+        if ngages == 0:
+            print('No gage package input.')
+            return
+
+        # create flopy gage package object
+        gage_data = fm.ModflowGage.get_empty(ncells=ngages)
+        gage_data['gageloc'] = lak_gagelocs + stream_gageseg
+        gage_data['gagerch'] = lak_gagerch + stream_gagerch
+        gage_data['unit'] = lake_unit + stream_unit
+        gage_data['outtype'] = lak_outtype + stream_outtype
+        if len(self.cfg['gag'].get('ggo_files', {})) == 0:
+            self.cfg['gag']['ggo_files'] = lak_files + stream_files
+        gag = fm.ModflowGage(self, numgage=len(gage_data),
+                             gage_data=gage_data,
+                             files=self.cfg['gag']['ggo_files'],
+                             )
+        print("finished in {:.2f}s\n".format(time.time() - t0))
+        return gag
+
+
+    def write_input(self):
+        """Write the model input.
+        """
+        # prior to writing output,
+        # remove any BCs in inactive cells
+        pckgs = ['CHD']
+        for pckg in pckgs:
+            package_instance = getattr(self, pckg.lower(), None)
+            if package_instance is not None:
+                remove_inactive_bcs(package_instance)
+
+        # write the model with flopy,
+        # but skip the sfr package
+        # by monkey-patching the write method
+        SelPackList = [p for p in self.get_package_list() if p != 'SFR']
+        super().write_input(SelPackList=SelPackList)
+
+        # write the sfr package with SFRmaker
+        # (gage package was already set up and then written by Flopy)
+        if 'SFR' in self.get_package_list():
+            self.sfrdata.write_package(write_observations_input=False)
+
+        # add version info to file headers
+        files = [self.namefile]
+        files += [p.file_name[0] for p in self.packagelist]
+        for f in files:
+            # either flopy or modflow
+            # doesn't allow headers for some packages
+            ext = Path(f).suffix
+            if ext in {'.hyd', '.gag', '.gage'}:
+                continue
+            add_version_to_fileheader(f, model_info=self.header)
+
+        if not self.cfg['mfsetup_options']['keep_original_arrays']:
+            tmpdir_path = self.tmpdir
+            shutil.rmtree(tmpdir_path)
+    @classmethod
+    def load(cls, yamlfile, load_only=None, verbose=False, forgive=False, check=False):
+        """Load a model from a config file and set of MODFLOW files.
+        """
+        cfg = load_cfg(yamlfile, verbose=verbose, default_file=cls.default_file)  # 'mfnwt_defaults.yml'
+        print('\nLoading {} model from data in {}\n'.format(cfg['model']['modelname'], yamlfile))
+        t0 = time.time()
+
+        m = cls(cfg=cfg, **cfg['model'])
+        if 'grid' not in m.cfg.keys():
+            m.setup_grid()
+            # grid_file = cfg['setup_grid']['output_files']['grid_file']
+            # if os.path.exists(grid_file):
+            #     print('Loading model grid definition from {}'.format(grid_file))
+            #     m.cfg['grid'] = load(grid_file)
+            # else:
+            #     m.setup_grid()
+
+        m = flopy_mf2005_load(m, load_only=load_only, forgive=forgive, check=check)
+        print('finished loading model in {:.2f}s'.format(time.time() - t0))
+        return m
+def convert_freq_to_period_start(freq):
+    """convert pandas frequency to period start"""
+    if isinstance(freq, str):
+        for prefix in ['M', 'Q', 'A', 'Y']:
+            if prefix in freq.upper() and "S" not in freq.upper():
+                freq = freq.replace(prefix, "{}S".format(prefix)).upper()
+        return freq
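+# Example (hedged sketch, not from the original source): a month-end
+# frequency is expected to be shifted to month-start, while a frequency
+# that already contains 'S' is returned unchanged:
+# >>> convert_freq_to_period_start('6M')
+# '6MS'
+# >>> convert_freq_to_period_start('MS')
+# 'MS'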
+
+
+
+
+def get_parent_stress_periods(parent_model, nper=None,
+                              parent_stress_periods='all'):
+
+    parent_sp = copy.copy(parent_stress_periods)
+    parent_model_nper = parent_model.modeltime.nper
+
+    # use all stress periods from parent model
+    if isinstance(parent_sp, str) and parent_sp.lower() == 'all':
+        if nper is None:  # or nper < parent_model.nper:
+            nper = parent_model_nper
+            parent_sp = list(range(nper))
+        elif nper > parent_model_nper:
+            parent_sp = list(range(parent_model_nper))
+            for i in range(nper - parent_model_nper):
+                parent_sp.append(parent_sp[-1])
+        else:
+            parent_sp = list(range(nper))
+
+    # use only specified stress periods from parent model
+    elif isinstance(parent_sp, list):
+        # limit parent stress periods to include
+        # to those in parent model and nper specified for pfl_nwt
+        if nper is None:
+            nper = len(parent_sp)
+
+        perlen = [parent_model.modeltime.perlen[0]]
+        for i, p in enumerate(parent_sp):
+            if i == nper:
+                break
+            if p == parent_model_nper:
+                break
+            if p > 0 and p >= parent_sp[-1] and len(parent_sp) < nper:
+                parent_sp.append(p)
+                perlen.append(parent_model.modeltime.perlen[p])
+        if nper < len(parent_sp):
+            nper = len(parent_sp)
+        else:
+            n_parent_per = len(parent_sp)
+            for i in range(nper - n_parent_per):
+                parent_sp.append(parent_sp[-1])
+
+    # no parent stress periods specified;
+    # default to just using first stress period
+    # (repeating if necessary,
+    # for example if creating transient inset model with steady bc from parent)
+    else:
+        if nper is None:
+            nper = 1
+        parent_sp = [0]
+        for i in range(nper - 1):
+            parent_sp.append(parent_sp[-1])
+    assert len(parent_sp) == nper
+    return parent_sp
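+# Example (hedged sketch): with a 3-period parent model and a 5-period
+# inset, get_parent_stress_periods(parent, nper=5) is expected to repeat
+# the last parent period, returning [0, 1, 2, 2, 2].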
+
+
+
+
+def parse_perioddata_groups(perioddata_dict,
+                            **kwargs):
+    """Reorganize input in perioddata dict into
+    a list of groups (dicts).
+    """
+    perioddata_groups = []
+    defaults = {
+        'start_date_time': '1970-01-01'
+    }
+    defaults.update(kwargs)
+    group0 = defaults.copy()
+
+    valid_txt = "if transient: perlen specified or 3 of start_date_time, " \
+                "end_date_time, nper or freq;\n" \
+                "if steady: nper or perlen specified. Default perlen " \
+                "for steady-state periods is 1."
+    for k, v in perioddata_dict.items():
+        if 'group' in k.lower():
+            data = defaults.copy()
+            data.update(v)
+            if is_valid_perioddata(data):
+                data = get_input_arguments(data, setup_perioddata_group,
+                                           errors='raise')
+                perioddata_groups.append(data)
+            else:
+                print_item(k, data)
+                prefix = "perioddata input for {} must have ".format(k)
+                raise Exception(prefix + valid_txt)
+        elif 'perioddata' in k.lower():
+            perioddata_groups += parse_perioddata_groups(perioddata_dict[k], **defaults)
+        else:
+            group0[k] = v
+    if len(perioddata_groups) == 0:
+        if not is_valid_perioddata(group0):
+            print_item('perioddata:', group0)
+            prefix = "perioddata input must have "
+            raise Exception(prefix + valid_txt)
+        data = get_input_arguments(group0, setup_perioddata_group)
+        perioddata_groups = [data]
+    for group in perioddata_groups:
+        if 'steady' in group:
+            if np.isscalar(group['steady']) or group['steady'] is None:
+                group['steady'] = {0: group['steady']}
+            elif not isinstance(group['steady'], dict):
+                group['steady'] = {i: s for i, s in enumerate(group['steady'])}
+    return perioddata_groups
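+# Example (hedged sketch) of perioddata group input as it might appear after
+# loading from a configuration file; the group names and keys shown here are
+# assumptions consistent with the parsing above:
+# >>> groups = parse_perioddata_groups({
+# ...     'group 1': {'nper': 1, 'steady': True},
+# ...     'group 2': {'start_date_time': '2014-01-01',
+# ...                 'end_date_time': '2015-01-01',
+# ...                 'freq': 'MS', 'steady': False}})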
+
+
+
+
+def setup_perioddata_group(start_date_time, end_date_time=None,
+                           nper=None, perlen=None, model_time_units='days', freq=None,
+                           steady={0: True, 1: False},
+                           nstp=10, tsmult=1.5,
+                           oc_saverecord={0: ['save head last',
+                                              'save budget last']},
+                           ):
+    """Sets up time discretization for a model; outputs a DataFrame with
+    stress period dates/times and properties. Stress periods can be established
+    by explicitly specifying perlen as a list of period lengths in
+    model units. Or, stress periods can be generated via :func:`pandas.date_range`,
+    using three of the start_date_time, end_date_time, nper, and freq arguments.
+
+    Parameters
+    ----------
+    start_date_time : str or datetime-like
+        Left bound for generating stress period dates. See :func:`pandas.date_range`.
+    end_date_time : str or datetime-like, optional
+        Right bound for generating stress period dates. See :func:`pandas.date_range`.
+    nper : int, optional
+        Number of stress periods. Only used if perlen is None, or in combination with freq
+        if an end_date_time isn't specified.
+    perlen : sequence or None, optional
+        A list of stress period lengths in model time units. Or specify as None and
+        specify 3 of start_date_time, end_date_time, nper and/or freq.
+    model_time_units : str, optional
+        'days' or 'seconds'.
+        By default, 'days'.
+    freq : str or DateOffset, default None
+        For setting up uniform stress periods between a start and end date, or of length nper.
+        Same as the argument to pandas.date_range. Frequency strings can have multiples,
+        e.g. '6MS' for a 6-month interval on the start of each month.
+        See the pandas documentation for a list of frequency aliases. Note: only "start"
+        frequencies (e.g. MS vs. M for "month end") are supported.
+    steady : dict
+        Dictionary with zero-based stress periods as keys and boolean values. Similar to MODFLOW 6
+        input, the information specified for a period will continue to apply until
+        information for another period is specified.
+    nstp : int or sequence
+        Number of timesteps in a stress period. Must be an integer if perlen=None.
+    tsmult : int, float or sequence
+        Timestep multiplier for a stress period. Must be a scalar if perlen=None.
+    oc_saverecord : dict
+        Dictionary with zero-based stress periods as keys and output control options as values.
+        Similar to MODFLOW 6 input, the information specified for a period will
+        continue to apply until information for another period is specified.
+
+    Returns
+    -------
+    perioddata : pandas.DataFrame
+        DataFrame summarizing stress period information. Data columns:
+
+        ================== ================ ==============================================
+        **start_datetime** pandas datetimes start date/time of each stress period
+        **end_datetime**   pandas datetimes end date/time of each stress period
+        **time**           float            cumulative MODFLOW time at end of period
+        **per**            int              zero-based stress period
+        **perlen**         float            stress period length in model time units
+        **nstp**           int              number of timesteps in the stress period
+        **tsmult**         int              timestep multiplier for stress period
+        **steady**         bool             True=steady-state, False=transient
+        **oc**             dict             MODFLOW 6 output control options
+        ================== ================ ==============================================
+
+    Notes
+    -----
+    *Initial steady-state period*
+
+    If the first stress period is specified as steady-state (``steady[0] == True``),
+    the period length (perlen) in MODFLOW time is automatically set to 1. If subsequent
+    stress periods are specified, or if no end date is specified, the end date for
+    the initial steady-state stress period is set equal to the start date. In the latter case,
+    the assumption is that the specified start date represents the start of the transient simulation,
+    and the initial steady-state (which is time-invariant anyway) is intended to produce a valid
+    starting condition. If only a single steady-state stress period is specified with an end date,
+    then that end date is retained.
+
+    *MODFLOW time vs. real time*
+
+    The ``time`` column of the output DataFrame represents time in the MODFLOW simulation,
+    which cannot have zero lengths for any period. Therefore, initial steady-state periods
+    are automatically assigned lengths of one (as described above), and MODFLOW time is incremented
+    accordingly. If the model has an initial steady-state period, this means that subsequent MODFLOW
+    times will be 1 time unit greater than the actual date-times.
+
+    *End dates*
+
+    The specified ``end_date_time`` represents the right bound of the time discretization,
+    or in other words, the time increment *after* the last time increment to be
+    simulated. For example, ``end_date_time='2019-01-01'`` would mean that
+    ``'2018-12-31'`` is the last date simulated by the model
+    (which ends at ``2019-01-01 00:00:00``).
+
+
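+    Examples
+    --------
+    A minimal sketch (not from the original source): twelve monthly
+    transient periods for 2014, preceded by an initial steady-state period:
+
+    >>> perioddata = setup_perioddata_group(
+    ...     '2014-01-01', '2015-01-01', freq='MS',
+    ...     steady={0: True, 1: False}, nstp=1, tsmult=1)
+    >>> len(perioddata)  # 1 steady-state + 12 monthly periods
+    13
+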
+
+ """
+    specified_start_datetime = None
+    if start_date_time is not None:
+        specified_start_datetime = pd.Timestamp(start_date_time)
+    elif end_date_time is None:
+        raise ValueError('If no start_datetime, must specify end_datetime')
+    specified_end_datetime = None
+    if end_date_time is not None:
+        specified_end_datetime = pd.Timestamp(end_date_time)
+
+    # if times are specified by start & end dates and freq,
+    # nper is determined by pd.date_range
+    if all({specified_start_datetime, specified_end_datetime, freq}):
+        nper = None
+    freq = convert_freq_to_period_start(freq)
+    oc = oc_saverecord
+    if not isinstance(steady, dict):
+        steady = {i: v for i, v in enumerate(steady)}
+
+    # nstp and tsmult need to be lists
+    if not np.isscalar(nstp):
+        nstp = list(nstp)
+    if not np.isscalar(tsmult):
+        tsmult = list(tsmult)
+
+    txt = "Specify perlen as a list of lengths in model units, or\nspecify 3 " \
+          "of start_date_time, end_date_time, nper and/or freq."
+
+    # Explicitly specified stress period lengths
+    start_datetime = []  # datetimes at period starts
+    end_datetime = []  # datetimes at period ends
+    if perlen is not None:
+        if np.isscalar(perlen):
+            perlen = [perlen]
+        start_datetime = [specified_start_datetime]
+        if len(perlen) > 1:
+            for i, length in enumerate(perlen):
+                # initial steady-state period:
+                # set perlen to 1
+                # and start/end dates to be equal
+                if i == 0 and steady[0]:
+                    next_start = start_datetime[i]
+                    perlen[0] = 1
+                else:
+                    next_start = start_datetime[i] + \
+                        pd.Timedelta(length, unit=model_time_units)
+                start_datetime.append(next_start)
+            end_datetime = pd.to_datetime(start_datetime[1:])
+            start_datetime = pd.to_datetime(start_datetime[:-1])
+        # single specified stress period length
+        else:
+            end_datetime = [specified_start_datetime + pd.Timedelta(perlen[0],
+                                                                    unit=model_time_units)]
+        time = np.cumsum(perlen)  # time at end of period, in MODFLOW units
+
+    # single steady-state period
+    elif nper == 1 and steady[0]:
+        perlen = [1]
+        time = [1]
+        start_datetime = pd.to_datetime([specified_start_datetime])
+        if specified_end_datetime is not None:
+            end_datetime = pd.to_datetime([specified_end_datetime])
+        else:
+            end_datetime = pd.to_datetime([specified_start_datetime])
+
+    # Set up datetimes based on 3 of start_date_time, specified_end_datetime, nper and/or freq (scalar perlen)
+    else:
+        assert np.isscalar(nstp), "nstp: {}; nstp must be a scalar if perlen " \
+                                  "is not specified explicitly as a list.\n{}".format(nstp, txt)
+        assert np.isscalar(tsmult), "tsmult: {}; tsmult must be a scalar if perlen " \
+                                    "is not specified explicitly as a list.\n{}".format(tsmult, txt)
+        periods = None
+        if specified_end_datetime is None:
+            # start_date_time, periods and freq
+            # (i.e. nper periods of length perlen starting on start_date)
+            if freq is not None:
+                periods = nper
+            else:
+                raise ValueError("Unrecognized input for perlen: {}.\n{}".format(perlen, txt))
+        else:
+            # specified_end_datetime and freq and periods
+            if specified_start_datetime is None:
+                periods = nper + 1
+            # start_date_time, specified_end_datetime and uniform periods
+            # (i.e. nper periods of uniform length between start_date_time and specified_end_datetime)
+            elif freq is None:
+                periods = nper  # -1 if steady[0] else nper
+            # start_date_time, specified_end_datetime and frequency
+            elif freq is not None:
+                pass
+        datetimes = pd.date_range(specified_start_datetime, specified_end_datetime,
+                                  periods=periods, freq=freq)
+        # if end_datetime, periods and freq were specified
+        if specified_start_datetime is None:
+            specified_start_datetime = datetimes[0]
+            start_datetime = datetimes[:-1]
+            end_datetime = datetimes[1:]
+            time_edges = getattr((datetimes - start_datetime[0]),
+                                 model_time_units).tolist()
+            perlen = np.diff(time_edges)
+            # time is elapsed time at the end of each period
+            time = time_edges[1:]
+        else:
+            start_datetime = datetimes
+            end_datetime = pd.to_datetime(datetimes[1:].tolist() +
+                                          [specified_end_datetime])
+            # Edge case of the end date falling on the start date freq
+            # (zero-length sp at end)
+            if end_datetime[-1] == start_datetime[-1]:
+                start_datetime = start_datetime[:-1]
+                end_datetime = end_datetime[:-1]
+            time_edges = getattr((end_datetime - start_datetime[0]),
+                                 model_time_units).tolist()
+            time_edges = [0] + time_edges
+            perlen = np.diff(time_edges)
+            # time is elapsed time at the end of each period
+            time = time_edges[1:]
+        # if len(datetimes) == 1:
+        #     perlen = [(specified_end_datetime - specified_start_datetime).days]
+        #     time = np.array(perlen)
+
+        # if the first period is steady-state,
+        # insert it at the beginning of the generated range
+        # (only do for pd.date_range-based discretization)
+        if steady[0]:
+            start_datetime = [start_datetime[0]] + start_datetime.tolist()
+            end_datetime = [start_datetime[0]] + end_datetime.tolist()
+            perlen = [1] + list(perlen)
+            time = [1] + (np.array(time) + 1).tolist()
+            if isinstance(nstp, list):
+                nstp = [1] + nstp
+            if isinstance(tsmult, list):
+                tsmult = [1] + tsmult
+
+    perioddata = pd.DataFrame({
+        'start_datetime': start_datetime,
+        'end_datetime': end_datetime,
+        'time': time,
+        'per': range(len(time)),
+        'perlen': np.array(perlen).astype(float),
+        'nstp': nstp,
+        'tsmult': tsmult,
+    })
+
+    # specify steady-state or transient for each period, filling empty
+    # periods with the previous state (same logic as MF6 input)
+    issteady = [steady[0]]
+    for i in range(len(perioddata)):
+        issteady.append(steady.get(i, issteady[i]))
+    perioddata['steady'] = issteady[1:]
+    perioddata['steady'] = perioddata['steady'].astype(bool)
+
+    # set up output control, using the previous value to fill empty periods
+    # (same as MF6)
+    oclist = [None]
+    for i in range(len(perioddata)):
+        oclist.append(oc.get(i, oclist[i]))
+    perioddata['oc'] = oclist[1:]
+
+    # correct nstp and tsmult to be 1 for steady-state periods
+    perioddata.loc[perioddata.steady.values, 'nstp'] = 1
+    perioddata.loc[perioddata.steady.values, 'tsmult'] = 1
+    return perioddata
+
+
+
+
+def concat_periodata_groups(perioddata_groups, time_units='days'):
+    """Concatenate multiple perioddata DataFrames, but sort the
+    result on (absolute) datetimes and increment model time and stress period
+    numbers accordingly."""
+
+    # update any missing variables in the groups with global variables
+    group_dfs = []
+    for i, group in enumerate(perioddata_groups):
+        group.update({'model_time_units': time_units,
+                      })
+        df = setup_perioddata_group(**group)
+        group_dfs.append(df)
+
+    df = pd.concat(group_dfs).sort_values(by=['end_datetime'])
+    perlen = np.ones(len(df))
+    perlen[~df.steady.values] = df.loc[~df.steady.values, 'perlen']
+    df['time'] = np.cumsum(perlen)
+    df['per'] = range(len(df))
+    df.index = range(len(df))
+    return df
+
+
+
+
+def setup_perioddata(model,
+                     tdis_perioddata_config,
+                     default_start_datetime=None,
+                     nper=None,
+                     steady=None, time_units='days',
+                     oc_saverecord=None, parent_model=None,
+                     parent_stress_periods=None,
+                     ):
+    """Sets up the perioddata DataFrame that is used to reference model
+    stress period start and end times to real date times.
+
+    Parameters
+    ----------
+    model : Modflow-setup model instance
+        Model being set up.
+    tdis_perioddata_config : dict
+        ``perioddata:``, ``tdis:`` (MODFLOW 6 models) or ``dis:`` (MODFLOW-2005 models)
+        block from the Modflow-setup configuration file.
+    default_start_datetime : str, optional
+        Start date for the model from the tdis: options: block in the configuration file,
+        or the ``model.modeltime.start_datetime`` Flopy attribute. Only used
+        where start_datetime information is missing, for example if a group
+        for an initial steady-state period in ``tdis_perioddata_config``
+        doesn't have a start_datetime: entry. By default, None, in which case
+        the default start_datetime of 1970-01-01 may be applied by
+        :py:func:`setup_perioddata_group`.
+    nper : int, optional
+        Number of stress periods. Only used if nper is specified in the
+        tdis: dimensions: block of the configuration file and
+        not in a perioddata group.
+    steady : bool, sequence or dict
+        Whether each period is steady-state or transient. Only used
+        if steady is specified in the tdis: or sto: configuration file
+        blocks (MODFLOW 6 models) or the dis: block (MODFLOW-2005 models),
+        and not in perioddata groups.
+    time_units : str, optional
+        Model time units, by default 'days'.
+    oc_saverecord : dict, optional
+        Output control settings, keyed by stress period. Only
+        used to record this information in the stress period data table.
+    parent_model : flopy model instance, optional
+        Parent model, if the model is an inset.
+    parent_stress_periods : list of ints, optional
+        Parent model stress periods to apply to the inset model
+        (read from the parent: copy_stress_periods: item in the
+        configuration file).
+
+    Returns
+    -------
+    perioddata : DataFrame
+        Table of stress period information with columns:
+
+        ============== =========================================
+        start_datetime Start date of each model stress period
+        end_datetime   End date of each model stress period
+        time           MODFLOW elapsed time, in days [#f1]_
+        per            Model stress period number
+        perlen         Stress period length (days)
+        nstp           Number of timesteps in stress period
+        tsmult         Timestep multiplier
+        steady         Steady-state or transient
+        oc             Output control setting for MODFLOW
+        parent_sp      Corresponding parent model stress period
+        ============== =========================================
+
+    Notes
+    -----
+    perioddata is also saved to stress_period_data.csv in the tables folder
+    (usually `/tables`).
+
+    .. rubric:: Footnotes
+
+    .. [#f1] MODFLOW elapsed time includes the time lengths specified for
+       any steady-state periods (at least 1 day). Therefore, if the model
+       has an initial steady-state period with a ``perlen`` of one day,
+       the elapsed time at the model start date will already be 1 day.
+    """
+    # get start_date_time from the parent if available and start_date_time wasn't specified
+    # (only apply to tdis_perioddata_config if it wasn't specified there)
+    if tdis_perioddata_config.get('start_datetime', '1970-01-01') == '1970-01-01' and \
+            default_start_datetime != '1970-01-01':
+        tdis_perioddata_config['start_date_time'] = default_start_datetime
+
+    # option to define stress periods in a table prior to the model build
+    if 'csvfile' in tdis_perioddata_config:
+        csvfile = Path(model._config_path) / tdis_perioddata_config['csvfile']['filename']
+        perioddata = pd.read_csv(csvfile)
+        defaults = {
+            'start_datetime_column': 'start_datetime',
+            'end_datetime_column': 'end_datetime',
+            'steady_column': 'steady',
+            'nstp_column': 'nstp',
+            'tsmult_column': 'tsmult'
+        }
+
+        csv_config = tdis_perioddata_config['csvfile']
+        renames = {csv_config.get(k): v
+                   for k, v in defaults.items() if k in csv_config}
+        perioddata.rename(columns=renames, inplace=True)
+        required_cols = defaults.values()
+        for col in required_cols:
+            if col not in perioddata.columns:
+                raise KeyError(f"{col} column missing in supplied stress "
+                               f"period table {csvfile}.")
+        perioddata['start_datetime'] = pd.to_datetime(perioddata['start_datetime'])
+        perioddata['end_datetime'] = pd.to_datetime(perioddata['end_datetime'])
+        perioddata['per'] = np.arange(len(perioddata))
+        perlen = getattr((perioddata['end_datetime'] -
+                          perioddata['start_datetime']).dt,
+                         model.time_units).tolist()
+        # set the initial steady-state stress period to at least length 1
+        if perioddata['steady'][0] and perlen[0] < 1:
+            perlen[0] = 1
+        perioddata['perlen'] = perlen
+        perioddata['time'] = np.cumsum(perlen)
+        cols = ['start_datetime', 'end_datetime', 'time',
+                'per', 'perlen', 'nstp', 'tsmult', 'steady']
+        # option to supply Output Control instructions as well
+        if 'oc' in perioddata.columns:
+            cols.append('oc')
+        perioddata = perioddata[cols]
+        # some validation
+        assert np.all(perioddata['perlen'] > 0)
+        assert np.all(np.diff(perioddata['time']) > 0)
+    # define stress periods from perioddata group blocks in the configuration file
+    else:
+        perioddata_groups = parse_perioddata_groups(tdis_perioddata_config,
+                                                    nper=nper, steady=steady,
+                                                    start_date_time=default_start_datetime)
+        # set up the perioddata table from the groups
+        perioddata = concat_periodata_groups(perioddata_groups, time_units)
+
+    # assign parent model stress periods to each inset model stress period
+    parent_sp = None
+    if parent_model is not None:
+        if parent_stress_periods is not None:
+            # parent_sp has the parent model stress period corresponding
+            # to each inset model stress period (len=nper);
+            # the same parent stress period can be specified for multiple inset model periods
+            parent_sp = get_parent_stress_periods(parent_model, nper=len(perioddata),
+                                                  parent_stress_periods=parent_stress_periods)
+        elif model._is_lgr:
+            parent_sp = perioddata['per'].values
+
+    # add corresponding stress periods in the parent model, if there are any
+    perioddata['parent_sp'] = parent_sp
+    assert np.array_equal(perioddata['per'].values, np.arange(len(perioddata)))
+    return perioddata
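+# Example (hedged sketch) of a 'csvfile' block that setup_perioddata() can
+# consume, mirroring the default column names above (the custom column names
+# shown here are assumptions):
+# perioddata:
+#   csvfile:
+#     filename: 'tables/stress_periods.csv'
+#     start_datetime_column: 'start'  # renamed to 'start_datetime' on load
+#     end_datetime_column: 'end'      # renamed to 'end_datetime' on load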
+
+
+
+
+def aggregate_dataframe_to_stress_period(data, id_column, data_column, datetime_column='datetime',
+                                         end_datetime_column=None, category_column=None,
+                                         start_datetime=None, end_datetime=None, period_stat='mean',
+                                         resolve_duplicates_with='raise error'):
+    """Aggregate time-series data in a DataFrame to a single value representing
+    a period defined by a start and end date.
+
+    Parameters
+    ----------
+    data : DataFrame
+        Must have an id_column, data_column, datetime_column, and optionally,
+        an end_datetime_column.
+    id_column : str
+        Column in data with a location identifier (e.g. node or well id).
+    data_column : str or list
+        Column(s) in data with values to aggregate.
+    datetime_column : str
+        Column in data with times for each value. For downsampling of multiple values in data
+        to a longer period represented by start_datetime and end_datetime, this is all that is needed.
+        Aggregated values will include values in datetime_column that are >= start_datetime and < end_datetime.
+        In other words, datetime_column represents the start of each time interval in data.
+        Values can be strings (e.g. YYYY-MM-DD) or pandas Timestamps. By default, 'datetime'.
+    end_datetime_column : str
+        Column in data with end times for the period represented by each value. This is only needed
+        for upsampling, where the interval defined by start_datetime and end_datetime is smaller
+        than the time intervals in data. The row(s) in data that have a datetime_column value < end_datetime,
+        and an end_datetime_column value > start_datetime will be retained in aggregated.
+        Values can be strings (e.g. YYYY-MM-DD) or pandas Timestamps. By default, None.
+    start_datetime : str or pandas.Timestamp
+        Start time of the aggregation period. Only used if an aggregation start
+        and end time are not given in period_stat. If None, and no start
+        and end time are specified in period_stat, the first time in datetime_column is used.
+        By default, None.
+    end_datetime : str or pandas.Timestamp
+        End time of the aggregation period. Only used if an aggregation start
+        and end time are not given in period_stat. If None, and no start
+        and end time are specified in period_stat, the last time in datetime_column is used.
+        By default, None.
+    period_stat : str, list, or NoneType
+        Method for aggregating data. By default, 'mean'.
+
+        * Strings will be passed to DataFrame.groupby
+          as the aggregation method. For example, ``'mean'`` would result in DataFrame.groupby().mean().
+        * If period_stat is None, ``'mean'`` is used.
+        * Lists of length 2 can be used to specify a statistic for a month (e.g. ``['mean', 'august']``),
+          or for a time period that can be represented as a single string in pandas.
+          For example, ``['mean', '2014']`` would average all values in the year 2014; ``['mean', '2014-01']``
+          would average all values in January of 2014, etc. Basically, if the string
+          can be used to slice a DataFrame or Series, it can be used here.
+        * Lists of length 3 can be used to specify a statistic and a start and end date.
+          For example, ``['mean', '2014-01-01', '2014-03-31']`` would average the values for
+          the first three months of 2014.
+    resolve_duplicates_with : {'sum', 'mean', 'first', 'raise error'}
+        Method for reducing duplicates (of times, sites and measured or estimated category).
+        By default, 'raise error' will result in a ValueError if duplicates are encountered.
+        Otherwise, any aggregate method in pandas can be used (e.g. DataFrame.groupby().<method>()).
+
+    Returns
+    -------
+    aggregated : DataFrame
+        Aggregated values. Columns are the same as data, except the time column
+        is named 'start_datetime'. In other words, aggregated periods are represented by
+        their start dates (as opposed to midpoint dates or end dates).
+
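+    Examples
+    --------
+    A minimal sketch (assumed data, not from the original source):
+    average two monthly pumping values over a single stress period:
+
+    >>> data = pd.DataFrame({'well': ['w1', 'w1'],
+    ...                      'q': [-100., -200.],
+    ...                      'datetime': ['2014-01-01', '2014-02-01']})
+    >>> agg = aggregate_dataframe_to_stress_period(
+    ...     data, id_column='well', data_column='q',
+    ...     datetime_column='datetime',
+    ...     start_datetime='2014-01-01', end_datetime='2014-03-01')
+    >>> float(agg['q'].values[0])
+    -150.0
+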
+ """
+    data = data.copy()
+
+    if data.index.name == datetime_column:
+        data.sort_index(inplace=True)
+    else:
+        data.sort_values(by=datetime_column, inplace=True)
+
+    if isinstance(period_stat, str):
+        period_stat = [period_stat]
+    elif period_stat is None:
+        period_stat = ['mean']
+    else:
+        period_stat = period_stat.copy()
+    if isinstance(data_column, str):
+        data_columns = [data_column]
+    else:
+        data_columns = data_column
+
+    if len(data_columns) > 1:
+        pass
+
+    start, end = None, None
+    if isinstance(period_stat, list):
+        stat = period_stat.pop(0)
+
+        # stat for a specified period
+        if len(period_stat) == 2:
+            start, end = period_stat
+            period_data = data.loc[start:end]
+
+        # stat specified by a single item
+        elif len(period_stat) == 1:
+            period = period_stat.pop()
+            # stat for a specified month
+            if period in months.keys() or period in months.values():
+                period_data = data.loc[data.index.month == months.get(period, period)]
+
+            # stat for a period specified by a single string (e.g. '2014', '2014-01', etc.)
+            else:
+                period_data = data.loc[period]
+
+        # no time period in the source data specified for the statistic;
+        # use the start/end of the current model period
+        elif len(period_stat) == 0:
+            assert datetime_column in data.columns, \
+                "datetime_column needed for " \
+                "resampling irregular data to model stress periods"
+            if data[datetime_column].dtype == object:
+                data[datetime_column] = pd.to_datetime(data[datetime_column])
+            if end_datetime_column in data.columns and \
+                    data[end_datetime_column].dtype == object:
+                data[end_datetime_column] = pd.to_datetime(data[end_datetime_column])
+            if start_datetime is None:
+                start_datetime = data[datetime_column].iloc[0]
+            if end_datetime is None:
+                end_datetime = data[datetime_column].iloc[-1]
+            # >= includes the start datetime
+            # if there is no end_datetime column, select values that have start_datetimes within the period;
+            # this excludes values that start before the period but don't have an end date
+            if end_datetime_column not in data.columns:
+                data_overlaps_period = (data[datetime_column] < end_datetime) & \
+                                       (data[datetime_column] >= start_datetime)
+            # if some end_datetimes are missing, assume end_datetime is the period end;
+            # this assumes that missing end datetimes indicate pumping that continues to the end of the simulation
+            elif data[end_datetime_column].isna().any():
+                data.loc[data[end_datetime_column].isna(), end_datetime_column] = end_datetime
+                data_overlaps_period = (data[datetime_column] < end_datetime) & \
+                                       (data[end_datetime_column] >= start_datetime)
+            # otherwise, select values with start datetimes that are before the period end
+            # and end datetimes that are after the period start;
+            # in other words, include all values that overlap in time with the period
+            else:
+                if data[end_datetime_column].dtype == object:
+                    data[end_datetime_column] = pd.to_datetime(data[end_datetime_column])
+                data_overlaps_period = (data[datetime_column] < end_datetime) & \
+                                       (data[end_datetime_column] > start_datetime)
+            period_data = data.loc[data_overlaps_period]
+
+        else:
+            raise Exception("Unrecognized input for period_stat: {}".format(period_stat))
+
+    # create a category column if there is none, to conform to the logic below
+    categories = False
+    if category_column is None:
+        category_column = 'category'
+        period_data[category_column] = 'measured'
+    elif category_column not in period_data.columns:
+        raise KeyError('category_column: {} not in data'.format(category_column))
+    else:
+        categories = True
+
+    # compute the statistic on the data,
+    # ensuring that ids are unique in each time period
+    # by summing multiple id instances by period
+    # (only sum the data column);
+    # check for duplicates with the same time, id, and category (measured vs estimated)
+    duplicated = pd.Series(list(zip(period_data[datetime_column],
+                                    period_data[id_column],
+                                    period_data[category_column]))).duplicated()
+    aggregated = period_data.groupby(id_column).first()
+    for data_column in data_columns:
+        if any(duplicated):
+            if resolve_duplicates_with == 'raise error':
+                duplicate_info = period_data.loc[duplicated.values]
+                msg = 'The following locations are duplicates which need to be resolved:\n{}'.format(duplicate_info)
+                raise ValueError(msg)
+            period_data.index.name = None
+            by_period = period_data.groupby([id_column, datetime_column]).first().reset_index()
+            agg_groupedby = getattr(period_data.groupby([id_column, datetime_column]),
+                                    resolve_duplicates_with)(numeric_only=True)
+            by_period[data_column] = agg_groupedby[data_column].values
+            period_data = by_period
+        agg_groupedby = getattr(period_data.groupby(id_column), stat)(numeric_only=True)
+        aggregated[data_column] = agg_groupedby[data_column].values
+    # if a category column was argued, get counts of measured vs. estimated
+    # values for each measurement location, for the current stress period
+    if categories:
+        counts = period_data.groupby([id_column, category_column]).size().unstack(fill_value=0)
+        for col in 'measured', 'estimated':
+            if col not in counts.columns:
+                counts[col] = 0
+            aggregated['n_{}'.format(col)] = counts[col]
+    aggregated.reset_index(inplace=True)
+
+    # add datetime back in
+    aggregated['start_datetime'] = start if start is not None else start_datetime
+    # enforce consistent datetime dtypes
+    # (otherwise pd.concat of multiple outputs from this function may fail)
+    for col in 'start_datetime', 'end_datetime':
+        if col in aggregated.columns:
+            aggregated[col] = aggregated[col].astype('datetime64[ns]')
+
+    # drop the original datetime column, which doesn't reflect dates for period averages
+    drop_cols = [datetime_column]
+    if not categories:  # drop the category column if it was created
+        drop_cols.append(category_column)
+    aggregated.drop(drop_cols, axis=1, inplace=True)
+    return aggregated
+
+
+
+
+def aggregate_xarray_to_stress_period(data, datetime_coords_name='time',
+                                      start_datetime=None, end_datetime=None,
+                                      period_stat='mean'):
+
+    period_stat = copy.copy(period_stat)
+    if isinstance(start_datetime, pd.Timestamp):
+        start_datetime = start_datetime.strftime('%Y-%m-%d')
+    if isinstance(end_datetime, pd.Timestamp):
+        end_datetime = end_datetime.strftime('%Y-%m-%d')
+    if isinstance(period_stat, str):
+        period_stat = [period_stat]
+    elif period_stat is None:
+        period_stat = ['mean']
+
+    if isinstance(period_stat, list):
+        stat = period_stat.pop(0)
+
+        # stat for a specified period
+        if len(period_stat) == 2:
+            start, end = period_stat
+            arr = data.loc[start:end].values
+
+        # stat specified by a single item
+        elif len(period_stat) == 1:
+            period = period_stat.pop()
+            # stat for a specified month
+            if period in months.keys() or period in months.values():
+                arr = data.loc[data[datetime_coords_name].dt.month == months.get(period, period)].values
+
+            # stat for a period specified by a single string (e.g. '2014', '2014-01', etc.)
+            else:
+                arr = data.loc[period].values
+
+        # no period specified; use the start/end of the current period
+        elif len(period_stat) == 0:
+
+            assert datetime_coords_name in data.coords, \
+                "datetime_column needed for " \
+                "resampling irregular data to model stress periods"
+            # not sure if this is needed for xarray
+            if data[datetime_coords_name].dtype == object:
+                data[datetime_coords_name] = pd.to_datetime(data[datetime_coords_name])
+            # default to aggregating the whole dataset
+            # if start_ and end_datetime are not provided
+            if start_datetime is None:
+                start_datetime = data[datetime_coords_name].values[0]
+            if end_datetime is None:
+                end_datetime = data[datetime_coords_name].values[-1]
+            # >= includes the start datetime
+            # for now, in comparison to the aggregate_dataframe_to_stress_period() fn
+            # for tabular data (pandas),
+            # assume that xarray data does not have an end_datetime column
+            # (infer the end datetimes)
+            arr = data.loc[start_datetime:end_datetime].values
+
+        else:
+            raise Exception("Unrecognized input for period_stat: {}".format(period_stat))
+
+    # compute the statistic on the data
+    aggregated = getattr(arr, stat)(axis=0)
+
+    return aggregated
+
+
+
+
+def add_date_comments_to_tdis(tdis_file, start_dates, end_dates=None):
+    """Add stress period start and end dates to a tdis file as comments;
+    add modflow-setup version info to the tdis file header.
+    """
+    tempfile = tdis_file + '.temp'
+    shutil.copy(tdis_file, tempfile)
+    with open(tempfile) as src:
+        with open(tdis_file, 'w') as dest:
+            header = ''
+            read_header = True
+            for line in src:
+                if read_header and len(line) > 0 and \
+                        line.strip()[0] in {'#', '!', '//'}:
+                    header += line
+                elif 'begin options' in ' '.join(line.lower().split()):
+                    if 'modflow-setup' not in header:
+                        if 'flopy' in header.lower():
+                            mfsetup_text = '# via '
+                        else:
+                            mfsetup_text = '# File created by '
+                        mfsetup_text += 'modflow-setup version {}'.format(mfsetup.__version__)
+                        mfsetup_text += ' at {:%Y-%m-%d %H:%M:%S}'.format(dt.datetime.now())
+                        header += mfsetup_text + '\n'
+                    dest.write(header)
+                    read_header = False
+                    dest.write(line)
+                elif 'begin perioddata' in ' '.join(line.lower().split()):
+                    dest.write(line)
+                    dest.write(2 * ' ' + '# perlen nstp tsmult\n')
+
+                    for i, line in enumerate(src):
+                        if 'end perioddata' in ' '.join(line.lower().split()):
+                            dest.write(line)
+                            break
+                        else:
+                            line = 2 * ' ' + line.strip() + f'  # period {i + 1}: {start_dates[i]:%Y-%m-%d}'
+                            if end_dates is not None:
+                                line += f' to {end_dates[i]:%Y-%m-%d}'
+                            line += '\n'
+                            dest.write(line)
+                else:
+                    dest.write(line)
+    os.remove(tempfile)
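+# Example (hedged sketch): annotate a tdis file with the dates from a
+# perioddata table (the file name here is an assumption):
+# >>> add_date_comments_to_tdis('model.tdis',
+# ...                           perioddata['start_datetime'],
+# ...                           perioddata['end_datetime'])  # doctest: +SKIP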
+def get_qx_qy_qz(cell_budget_file, binary_grid_file=None,
+                 cell_connections_df=None,
+                 version='mf6',
+                 kstpkper=(0, 0),
+                 specific_discharge=False,
+                 headfile=None,
+                 modelgrid=None):
+    """Get 2 or 3D arrays of cell by cell flows across the cell faces
+    (for structured grid models).
+
+    Parameters
+    ----------
+    cell_budget_file : str, pathlike, or instance of flopy.utils.binaryfile.CellBudgetFile
+        File path or pointer to MODFLOW cell budget file.
+    binary_grid_file : str or pathlike
+        File path to MODFLOW 6 binary grid (``*.dis.grb``) file. Not needed for MFNWT.
+    cell_connections_df : DataFrame
+        DataFrame of cell connections that can be provided as an alternative to binary_grid_file,
+        to avoid having to get the connections with each call to get_qx_qy_qz. This can
+        be produced by the :meth:`mfsetup.grid.MFsetupGrid.intercell_connections` method.
+        Must have the following columns:
+
+        === =============================================================
+        n   from zero-based node number
+        kn  from zero-based layer
+        in  from zero-based row
+        jn  from zero-based column
+        m   to zero-based node number
+        km  to zero-based layer
+        im  to zero-based row
+        jm  to zero-based column
+        === =============================================================
+
+    version : str
+        MODFLOW version- 'mf6' or other. If not 'mf6', the cell budget output
+        is assumed to be formatted similar to a MODFLOW 2005 style model.
+    model_top : 2D numpy array of shape (nrow, ncol)
+        Model top elevations (only needed for modflow 2005 style models without
+        a binary grid file)
+    model_bottom_array : 3D numpy array of shape (nlay, nrow, ncol)
+        Model bottom elevations (only needed for modflow 2005 style models
+        without a binary grid file)
+    kstpkper : tuple
+        zero-based (time step, stress period)
+    specific_discharge : bool
+        Option to return arrays of specific discharge (1D vector components)
+        instead of volumetric fluxes.
+        By default, False.
+    headfile : str, pathlike, or instance of flopy.utils.binaryfile.HeadFile
+        File path or pointer to MODFLOW head file. Only required if
+        specific_discharge=True.
+    modelgrid : instance of MFsetupGrid object
+        Defaults to None; only required if specific_discharge=True.
+
+
+    Returns
+    -------
+    Qx, Qy, Qz : tuple of 2 or 3D numpy arrays
+        Volumetric or specific discharge fluxes across cell faces.
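+    Examples
+    --------
+    A hypothetical call for a MODFLOW 6 model (the file names are
+    assumptions, not from the original source):
+
+    >>> qx, qy, qz = get_qx_qy_qz('model.cbc',
+    ...                           binary_grid_file='model.dis.grb',
+    ...                           kstpkper=(0, 0))  # doctest: +SKIP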
+ """
+    msg = 'Getting discharge...'
+    if specific_discharge:
+        msg = 'Getting specific discharge...'
+    print(msg)
+    ta = time.time()
+    if version == 'mf6':
+        # get the cell connections
+        if cell_connections_df is not None:
+            df = cell_connections_df
+        elif binary_grid_file is not None:
+            df = get_intercell_connections(binary_grid_file)
+        else:
+            raise ValueError("Must specify a binary_grid_file or cell_connections_df.")
+
+        # get the flows
+        # (this constitutes almost all of the execution time for this fn)
+        t1 = time.time()
+        if isinstance(cell_budget_file, str) or isinstance(cell_budget_file, Path):
+            cbb = bf.CellBudgetFile(cell_budget_file)
+        else:
+            cbb = cell_budget_file
+        nlay, nrow, ncol = cbb.shape
+        flowja = cbb.get_data(text='FLOW-JA-FACE', kstpkper=kstpkper)[0][0, 0, :]
+        df['q'] = flowja[df['qidx']]
+        print(f"getting flows from budget file took {time.time() - t1:.2f}s\n")
+
+        # get arrays of flow through cell faces
+        # Qx (right face; TODO: confirm direction)
+        rfdf = df.loc[(df['jn'] < df['jm'])]
+        nlay = rfdf['km'].max() + 1
+        qx = np.zeros((nlay, nrow, ncol))
+        qx[rfdf['kn'].values, rfdf['in'].values, rfdf['jn'].values] = -rfdf.q.values
+
+        # Qy (front face; TODO: confirm direction)
+        ffdf = df.loc[(df['in'] < df['im'])]
+        qy = np.zeros((nlay, nrow, ncol))
+        qy[ffdf['kn'].values, ffdf['in'].values, ffdf['jn'].values] = -ffdf.q.values
+
+        # Qz (bottom face; TODO: confirm that this is downward positive)
+        bfdf = df.loc[(df['kn'] < df['km'])]
+        qz = np.zeros((nlay, nrow, ncol))
+        qz[bfdf['kn'].values, bfdf['in'].values, bfdf['jn'].values] = -bfdf.q.values
+    else:
+        if isinstance(cell_budget_file, str) or isinstance(cell_budget_file, Path):
+            cbb = bf.CellBudgetFile(cell_budget_file)
+        else:
+            cbb = cell_budget_file
+        qx = cbb.get_data(text="flow right face", kstpkper=kstpkper)[0]
+        qy = cbb.get_data(text="flow front face", kstpkper=kstpkper)[0]
+        unique_rec_names = [bs.decode().strip().lower() for bs in cbb.get_unique_record_names()]
+        if "flow lower face" in unique_rec_names:
+            qz = cbb.get_data(text="flow lower face", kstpkper=kstpkper)[0]
+        else:
+            qz = np.zeros_like(qy)
+
+    # optionally get specific discharge
+    if specific_discharge:
+        if modelgrid is None:
+            raise Exception('specific discharge calculations require a modelgrid input')
+        if headfile is None:
+            print('No headfile object provided - thickness for specific discharge calculations\n' +
+                  'will be based on the model top rather than the water table')
+            thickness = modelgrid.cell_thickness
+        else:
+            if isinstance(headfile, str) or isinstance(headfile, Path):
+                hds = bf.HeadFile(headfile).get_data(kstpkper=kstpkper)
+            else:
+                hds = headfile.get_data(kstpkper=kstpkper)
+            thickness = modelgrid.saturated_thickness(array=hds)
+
+        delr_gridp, delc_gridp = np.meshgrid(modelgrid.delr,
+                                             modelgrid.delc)
+        nlay, nrow, ncol = modelgrid.shape
+
+        # multiply the average thickness by the width (along rows or cols) to
+        # obtain the cross-sectional area on the faces
+        # https://water.usgs.gov/ogw/modflow-nwt/MODFLOW-NWT-Guide/delrdelcillustration.png
+        qy_face_areas = np.tile(delr_gridp[:-1, :], (nlay, 1, 1)) * \
+                        ((thickness[:, :-1, :] + thickness[:, 1:, :]) / 2)
+        # the above calculation results in a missing dimension (only internal faces are
+        # calculated), so we concatenate on a repetition of the final row or column
+        qy_face_areas = np.concatenate([qy_face_areas,
+                                        np.expand_dims(qy_face_areas[:, -1, :], axis=1)], axis=1)
+
+        qx_face_areas = np.tile(delc_gridp[:, :-1], (nlay, 1, 1)) * \
+                        ((thickness[:, :, :-1] + thickness[:, :, 1:]) / 2)
+        qx_face_areas = np.concatenate([qx_face_areas,
+                                        np.expand_dims(qx_face_areas[:, :, -1], axis=2)], axis=2)
+
+        # the z direction is simply delr * delc across all layers
+        qz_face_areas = np.tile(delr_gridp * delc_gridp, (nlay, 1, 1))
+
+        # divide by the areas, resulting in normalized, specific discharge
+        qx /= qx_face_areas
+        qy /= qy_face_areas
+        qz /= qz_face_areas
+
+    print(f"{msg} took {time.time() - ta:.2f}s\n")
+    return qx, qy, qz
+
+
+class Tmr:
+    """
+    Class for general telescopic mesh refinement of a MODFLOW model. Head or
+    flux fields from the parent model are interpolated to the boundary cells of the
+    inset model, which may be in any configuration (jagged, rotated, etc.).
+
+    Parameters
+    ----------
+    parent_model : flopy model instance
+        Instance of the parent model.
+        Must have a valid, attached ``modelgrid`` attribute that is an instance of
+        :class:`mfsetup.grid.MFsetupGrid`.
+    inset_model : flopy model instance
+        Instance of the inset model.
+        Must have a valid, attached ``modelgrid`` attribute that is an instance of
+        :class:`mfsetup.grid.MFsetupGrid`.
+    parent_head_file : filepath
+        MODFLOW binary head output.
+    parent_cell_budget_file : filepath
+        MODFLOW binary cell budget output.
+    parent_binary_grid_file : filepath
+        MODFLOW 6 binary grid file (``*.grb``).
+    define_connections : str, {'max_active_extent', 'by_layer'}
+        Method for defining the perimeter cells where the TMR boundary
+        condition will be applied. If 'max_active_extent', the
+        maximum footprint of the active area (including all cell
+        locations with at least one layer that is active) will be used.
+        If 'by_layer', the perimeter of the active area in each layer will be used
+        (excluding any interior clusters of active cells). The 'by_layer'
+        option is potentially problematic if some layers have substantial
+        areas of pinched-out (idomain != 1) cells, which may result
+        in perimeter boundary condition cells getting placed too close
+        to the area of interest. By default, 'max_active_extent'.
+
+    Notes
+    -----
+    """
+
+    def __init__(self, parent_model, inset_model,
+                 parent_head_file=None, parent_cell_budget_file=None,
+                 parent_binary_grid_file=None,
+                 boundary_type=None, inset_parent_period_mapping=None,
+                 parent_start_date_time=None, source_mask=None,
+                 define_connections_by='max_active_extent',
+                 shapefile=None,
+                 ):
+        self.parent = parent_model
+        self.inset = inset_model
+        self.parent_head_file = parent_head_file
+        self.parent_cell_budget_file = parent_cell_budget_file
+        self.parent_binary_grid_file = parent_binary_grid_file
+        self.define_connections_by = define_connections_by
+        self.shapefile = shapefile
+        self.boundary_type = boundary_type
+        if boundary_type is None and parent_head_file is not None:
+            self.boundary_type = 'head'
+        elif boundary_type is None and parent_cell_budget_file is not None:
+            self.boundary_type = 'flux'
+        self.parent_start_date_time = parent_start_date_time
+
+        # path for writing auxiliary output tables
+        # (boundary_cells.shp, etc.)
+        if hasattr(self.inset, '_tables_path'):
+            self._tables_path = Path(self.inset._tables_path)
+        else:
+            self._tables_path = Path(self.inset.model_ws) / 'tables'
+        self._tables_path.mkdir(exist_ok=True, parents=True)
+
+        # properties
+        self._idomain = None
+        self._inset_boundary_cells = None
+        self._inset_parent_period_mapping = inset_parent_period_mapping
+        self._interp_weights_heads = None
+        self._interp_weights_flux = None
+        self._source_mask = source_mask
+        self._inset_zone_within_parent = None
+
+    @property
+    def idomain(self):
+        """Active area of the inset model.
+        """
+        if self._idomain is None:
+            if self.inset.version == 'mf6':
+                idomain = self.inset.dis.idomain.array
+                if idomain is None:
+                    idomain = np.ones_like(self.inset.dis.botm.array, dtype=int)
+            else:
+                idomain = self.inset.bas6.ibound.array
+            self._idomain = idomain
+        return self._idomain
+
+    @property
+    def inset_boundary_cells(self):
+        if self._inset_boundary_cells is None:
+            by_layer = self.define_connections_by == 'by_layer'
+            df = self.get_inset_boundary_cells(by_layer=by_layer)
+            x, y, z = self.inset.modelgrid.xyzcellcenters
+            df['x'] = x[df.i, df.j]
+            df['y'] = y[df.i, df.j]
+            df['z'] = z[df.k, df.i, df.j]
+            self._inset_boundary_cells = df
+            self._interp_weights = None
+        return self._inset_boundary_cells
+
+    @property
+    def inset_parent_period_mapping(self):
+        nper = self.inset.nper
+        # if the mapping between source and dest model periods isn't specified,
+        # assume one-to-one mapping of stress periods between models
+        if self._inset_parent_period_mapping is None:
+            parent_periods = list(range(self.parent.nper))
+            self._inset_parent_period_mapping = {i: parent_periods[i]
+                                                 if i < self.parent.nper else parent_periods[-1]
+                                                 for i in range(nper)}
+        return self._inset_parent_period_mapping
+
+    @inset_parent_period_mapping.setter
+    def inset_parent_period_mapping(self, inset_parent_period_mapping):
+        self._inset_parent_period_mapping = inset_parent_period_mapping
+
+    @property
+    def interp_weights_flux(self):
+        """For the two main directions of flux (i, j) and the four orientations
+        of inset faces to interpolate to (right, left, top, bottom),
+        we can precalculate the interpolation weights of the combinations
+        to speed up interpolation."""
+        if self._interp_weights_flux is None:
+            self._interp_weights_flux = dict()  # we need four flux directions for the insets
+            # x, y, z locations of parent model head values for i faces
+            ipx, ipy, ipz = self.x_iface_parent, self.y_iface_parent, self.z_iface_parent
+            # x, y, z locations of parent model head values for j faces
+            jpx, jpy, jpz = self.x_jface_parent, self.y_jface_parent, self.z_jface_parent
+
+            # these are the i-direction fluxes
+            x, y, z = self.inset_boundary_cell_faces.loc[
+                self.inset_boundary_cell_faces.cellface.isin(['top', 'bottom'])][['xface', 'yface', 'zface']].T.values
+            self._interp_weights_flux['iface'] = interp_weights((ipx, ipy, ipz), (x, y, z), d=3)
+            assert not np.any(np.isnan(self._interp_weights_flux['iface'][1]))
+
+            # these are the j-direction fluxes
+            x, y, z = self.inset_boundary_cell_faces.loc[
+                self.inset_boundary_cell_faces.cellface.isin(['left', 'right'])][['xface', 'yface', 'zface']].T.values
+            self._interp_weights_flux['jface'] = interp_weights((jpx, jpy, jpz), (x, y, z), d=3)
+            assert not np.any(np.isnan(self._interp_weights_flux['jface'][1]))
+
+        return self._interp_weights_flux
+
+    @property
+    def parent_xyzcellcenters(self):
+        """Get x, y, z locations of parent cells in a buffered area
+        (defined by the _source_grid_mask property) around the
+        inset model."""
+        px, py, pz = self.parent.modelgrid.xyzcellcenters
+
+        # add an extra layer on the top and bottom
+        # for inset model cells above or below
+        # the last cell center in the vert. direction
+        # pad top by top layer thickness
+        b1 = self.parent.modelgrid.top - self.parent.modelgrid.botm[0]
+        top = pz[0] + b1
+        # pad botm by botm layer thickness
+        if self.parent.modelgrid.shape[0] > 1:
+            b2 = -np.diff(self.parent.modelgrid.botm[-2:], axis=0)[0]
+        else:
+            b2 = b1
+        botm = pz[-1] - b2
+        pz = np.vstack([[top], pz, [botm]])
+
+        nlay, nrow, ncol = pz.shape
+        px = np.tile(px, (nlay, 1, 1))
+        py = np.tile(py, (nlay, 1, 1))
+        mask = self._source_grid_mask
+        # mask already has extra top/botm layers
+        # (_source_grid_mask property)
+        px = px[mask]
+        py = py[mask]
+        pz = pz[mask]
+        return px, py, pz
+
+    @property
+    def parent_xyzcellfacecenters(self):
+        """Get x, y, z locations of the centroids of the cell faces
+        in the row and column directions in a buffered area
+        (defined by the _source_grid_mask property) around the
+        inset model. Analogous to parent_xyzcellcenters, but for
+        interpolating parent model cell-by-cell fluxes that are located
+        at the cell face centers (instead of heads that are located
+        at the cell centers).
+        """
+        k, i, j = np.indices(self.parent.modelgrid.shape)
+        xyzcellfacecenters = {}
+        for cellface in 'right', 'bottom':
+            px, py, pz = get_cellface_midpoint(self.parent.modelgrid,
+                                               k, i, j,
+                                               cellface)
+            px = np.reshape(px, self.parent.modelgrid.shape)
+            py = np.reshape(py, self.parent.modelgrid.shape)
+            pz = np.reshape(pz, self.parent.modelgrid.shape)
+            # add an extra layer on the top and bottom
+            # for inset model cells above or below
+            # the last cell center in the vert. direction
+            # pad top by top layer thickness
+            b1 = self.parent.modelgrid.top - self.parent.modelgrid.botm[0]
+            top = pz[0] + b1
+            # pad botm by botm layer thickness
+            if self.parent.modelgrid.shape[0] > 1:
+                b2 = -np.diff(self.parent.modelgrid.botm[-2:], axis=0)[0]
+            else:
+                b2 = b1
+            botm = pz[-1] - b2
+            pz = np.vstack([[top], pz, [botm]])
+
+            nlay, nrow, ncol = pz.shape
+            px = np.tile(px, (nlay, 1, 1))
+            py = np.tile(py, (nlay, 1, 1))
+            mask = self._source_grid_mask
+            # mask already has extra top/botm layers
+            # (_source_grid_mask property)
+            px = px[mask]
+            py = py[mask]
+            pz = pz[mask]
+
+            xyzcellfacecenters[cellface] = px, py, pz
+        return xyzcellfacecenters
+
+
+    @property
+    def _inset_max_active_area(self):
+        """The maximum (2D) footprint of the active area within the inset
+        model grid, where each i, j location has at least 1 active cell
+        vertically, excluding any inactive holes that are surrounded by
+        active cells.
+        """
+        # get the max footprint of active cells
+        max_active_area = np.sum(self.idomain > 0, axis=0) > 0
+        # fill any holes within the max footprint,
+        # including any LGR areas (that are inactive in this model);
+        # set min cluster size to 1 greater than number of inactive cells
+        # (to not allow any holes)
+        minimum_cluster_size = np.sum(max_active_area == 0) + 1
+        # find_remove_isolated_cells fills clusters of 1s with 0s;
+        # to fill holes, we want to look for clusters of 0s and fill with 1s
+        to_fill = ~max_active_area
+        # pad the array to fill so that exterior inactive cells
+        # (outside the active area perimeter) aren't included
+        to_fill = np.pad(to_fill, pad_width=1, mode='reflect')
+        # invert the result to get True values for active cells and filled areas
+        filled = ~find_remove_isolated_cells(to_fill, minimum_cluster_size)
+        # de-pad the result
+        filled = filled[1:-1, 1:-1]
+        max_active_area = filled
+        return max_active_area
+
+    @property
+    def inset_zone_within_parent(self):
+        """The footprint of the inset model maximum active area footprint
+        (``Tmr._inset_max_active_area``) within the parent model grid.
+        In other words, all parent cells containing one or more inset
+        model cell centers within ``Tmr._inset_max_active_area`` (ones).
+        Zeros indicate parent cells with no inset cells.
+        """
+        # get the locations of the inset model cells within _inset_max_active_area
+        x, y, z = self.inset.modelgrid.xyzcellcenters
+        x = x[self._inset_max_active_area]
+        y = y[self._inset_max_active_area]
+        pi, pj = get_ij(self.parent.modelgrid, x, y)
+        inset_zone_within_parent = np.zeros((self.parent.modelgrid.nrow,
+                                             self.parent.modelgrid.ncol), dtype=bool)
+        inset_zone_within_parent[pi, pj] = True
+        return inset_zone_within_parent
+
+
+    @property
+    def _source_grid_mask(self):
+        """Boolean array indicating the window in the parent model grid
+        (subset of cells) that encompasses the inset model domain.
+        Used to speed up interpolation of parent grid values onto the
+        inset grid."""
+        if self._source_mask is None:
+            mask = np.zeros((self.parent.modelgrid.nrow,
+                             self.parent.modelgrid.ncol), dtype=bool)
+            if hasattr(self.inset, 'parent_mask') and \
+                    (self.inset.parent_mask.shape == self.parent.modelgrid.xcellcenters.shape):
+                mask = self.inset.parent_mask
+            else:
+                l, r, b, t = self.inset.modelgrid.extent
+                x = np.array([r, r, l, l, r])
+                y = np.array([b, t, t, b, b])
+                pi, pj = get_ij(self.parent.modelgrid, x, y)
+                pad = 3
+                i0 = np.max([pi.min() - pad, 0])
+                i1 = np.min([pi.max() + pad + 1, self.parent.modelgrid.nrow])
+                j0 = np.max([pj.min() - pad, 0])
+                j1 = np.min([pj.max() + pad + 1, self.parent.modelgrid.ncol])
+                mask[i0:i1, j0:j1] = True
+            # make the mask 3D;
+            # include extra layers for the top and bottom edges of the model
+            mask3d = np.tile(mask, (self.parent.modelgrid.nlay + 2, 1, 1))
+            self._source_mask = mask3d
+        elif len(self._source_mask.shape) == 2:
+            mask3d = np.tile(self._source_mask, (self.parent.modelgrid.nlay + 2, 1, 1))
+            self._source_mask = mask3d
+        return self._source_mask
+
+    def get_inset_boundary_cells(self, by_layer=False, shapefile=None):
+        """Get a dataframe of connection information for
+        horizontal boundary cells.
+
+        Parameters
+        ----------
+        by_layer : bool
+            Controls how boundary cells will be defined. If True,
+            the perimeter of the active area in each layer will be used
+            (excluding any interior clusters of active cells). If
+            False, the maximum footprint of the active area
+            (including all cell locations with at least one layer that
+            is active) will be used.
+        """
+        print('\ngetting perimeter cells...')
+        t0 = time.time()
+        if shapefile is None:
+            shapefile = self.shapefile
+        if shapefile:
+            perimeter = gp.read_file(shapefile)
+            perimeter = perimeter[['geometry']]
+            # reproject the perimeter shapefile to the model CRS if needed
+            if perimeter.crs != self.inset.modelgrid.crs:
+                perimeter.to_crs(self.inset.modelgrid.crs, inplace=True)
+            # convert polygons to linear rings
+            # (so just the cells along the polygon exterior are selected)
+            geoms = []
+            for g in perimeter.geometry:
+                if g.type == 'MultiPolygon':
+                    g = MultiLineString([p.exterior for p in g.geoms])
+                elif g.type == 'Polygon':
+                    g = g.exterior
+                geoms.append(g)
+            # add a buffer of 1 cell width so that cells aren't missed;
+            # extra cells will get culled later,
+            # when only cells along the outer perimeter (max idomain extent)
+            # are selected
+            buffer_dist = np.mean([self.inset.modelgrid.delr.mean(),
+                                   self.inset.modelgrid.delc.mean()])
+            perimeter['geometry'] = [g.buffer(buffer_dist * 0.5) for g in geoms]
+            grid_df = self.inset.modelgrid.get_dataframe(layers=False)
+            df = gp.sjoin(grid_df, perimeter, predicate='intersects', how='inner')
+            # add layers
+            dfs = []
+            for k in range(self.inset.modelgrid.nlay):
+                kdf = df.copy()
+                kdf['k'] = k
+                dfs.append(kdf)
+            specified_bcells = pd.concat(dfs)
+            # get the active extent in each layer
+            # and the cell faces along the edge;
+            # apply those cell faces to specified_bcells
+            by_layer = True
+        else:
+            specified_bcells = None
+        if not by_layer:
+
+            # the filled maximum active footprint
+            max_active_area = self._inset_max_active_area
+
+            # pad filled idomain array with zeros around the edge
+            # so that perimeter connections are identified
+            filled = np.pad(max_active_area, 1, constant_values=0)
+            filled3d = np.tile(filled, (self.idomain.shape[0], 1, 1))
+            df = get_horizontal_connections(filled3d, connection_info=False)
+            # decrement rows and columns
+            # so that they reflect positions in the non-padded array
+            df['i'] -= 1
+            df['j'] -= 1
+        else:
+            from scipy.ndimage import binary_fill_holes
+            dfs = []
+            for k, layer_idomain in enumerate(self.idomain):
+
+                # just get the perimeter of inactive cells
+                # (exclude any interior active cells);
+                # start by filling any interior active cells
+                binary_idm = layer_idomain > 0
+                filled = binary_fill_holes(binary_idm)
+                # pad filled idomain array with zeros around the edge
+                # so that perimeter connections are identified
+                filled = np.pad(filled, 1, constant_values=0)
+                # get the cells along the inside edge
+                # of the model active area perimeter,
+                # via a Sobel filter
+                df = get_horizontal_connections(filled, connection_info=False)
+                df['k'] = k
+                # decrement rows and columns
+                # so that they reflect positions in the non-padded array
+                df['i'] -= 1
+                df['j'] -= 1
+                dfs.append(df)
+            df = pd.concat(dfs)
+
+        # cull the boundary cells identified above
+        # with the Sobel filter on the outer perimeter
+        # to just the cells specified in the shapefile
+        if specified_bcells is not None:
+            df['cellid'] = list(zip(df.k, df.i, df.j))
+            specified_bcells['cellid'] = list(zip(specified_bcells.k, specified_bcells.i, specified_bcells.j))
+            df = df.loc[df.cellid.isin(specified_bcells.cellid)]
+
+        # add layer top and bottom and idomain information
+        layer_tops = np.stack([self.inset.dis.top.array] +
+                              [l for l in self.inset.dis.botm.array])[:-1]
+        df['top'] = layer_tops[df.k, df.i, df.j]
+        df['botm'] = self.inset.dis.botm.array[df.k, df.i, df.j]
+        df['idomain'] = 1
+        if self.inset.version == 'mf6':
+            df['idomain'] = self.idomain[df.k, df.i, df.j]
+        elif 'BAS6' in self.inset.get_package_list():
+            df['idomain'] = self.inset.bas6.ibound.array[df.k, df.i, df.j]
+        df = df[['k', 'i', 'j', 'cellface', 'top', 'botm', 'idomain']]
+        # drop inactive cells
+        df = df.loc[df['idomain'] > 0]
+
+        # get cell polygons from the modelgrid;
+        # write a shapefile of boundary cells with face information
+        grid_df = self.inset.modelgrid.dataframe.copy()
+        grid_df['cellid'] = list(zip(grid_df.k, grid_df.i, grid_df.j))
+        geoms = dict(zip(grid_df['cellid'], grid_df['geometry']))
+        if 'cellid' not in df.columns:
+            df['cellid'] = list(zip(df.k, df.i, df.j))
+        df['geometry'] = [geoms[cellid] for cellid in df.cellid]
+        df = gp.GeoDataFrame(df, crs=self.inset.modelgrid.crs)
+        outshp = Path(self._tables_path, 'boundary_cells.shp')
+        df.drop('cellid', axis=1).to_file(outshp)
+        print(f"wrote {outshp}")
+        print("perimeter cells took {:.2f}s\n".format(time.time() - t0))
+        return df
+
+    def get_inset_boundary_values(self, for_external_files=False):
+
+        if self.boundary_type == 'head':
+            check_source_files([self.parent_head_file])
+            hdsobj = bf.HeadFile(self.parent_head_file)  # , precision='single')
+            all_kstpkper = hdsobj.get_kstpkper()
+
+            last_steps = {kper: kstp for kstp, kper in all_kstpkper}
+
+            # create an interpolator instance
+            cell_centers_interp = Interpolator(self.parent_xyzcellcenters,
+                                               self.inset_boundary_cells[['x', 'y', 'z']].T.values,
+                                               d=3,
+                                               source_values_mask=self._source_grid_mask)
+            # compute the weights
+            _ = cell_centers_interp.interp_weights
+
+            print('\ngetting perimeter heads...')
+            t0 = time.time()
+            dfs = []
+            parent_periods = []
+            for inset_per, parent_per in self.inset_parent_period_mapping.items():
+                print(f'for stress period {inset_per}', end=', ')
+                t1 = time.time()
+                # skip getting data if parent period is already represented
+                # (heads will be reused)
+                if parent_per in parent_periods:
+                    continue
+                else:
+                    parent_periods.append(parent_per)
+                parent_kstpkper = last_steps[parent_per], parent_per
+                parent_heads = hdsobj.get_data(kstpkper=parent_kstpkper)
+                # pad the parent heads on the top and bottom
+                # so that inset cells above and below the top/bottom cell centers
+                # will be within the interpolation space
+                # (parent x, y, z locations already contain this pad; parent_xyzcellcenters)
+                parent_heads = np.pad(parent_heads, pad_width=1, mode='edge')[:, 1:-1, 1:-1]
+
+                # interpolate inset boundary heads from the 3D parent head solution
+                heads = cell_centers_interp.interpolate(parent_heads, method='linear')
+
+                # make a DataFrame of interpolated heads at perimeter cell locations
+                df = self.inset_boundary_cells.copy()
+                df['per'] = inset_per
+                df['head'] = heads
+
+                # boundary heads must be greater than the cell bottom
+                # and idomain > 0
+                loc = (df['head'] > df['botm']) & (df['idomain'] > 0)
+                df = df.loc[loc]
+                # drop invalid heads (most likely due to dry cells)
+                valid = (df['head'] < 1e10) & (df['head'] > -1e10)
+                df = df.loc[valid]
+                dfs.append(df)
+                print("took {:.2f}s".format(time.time() - t1))
+
+            df = pd.concat(dfs)
+            # drop duplicate cells (accounting for stress periods)
+            # (cells may have connections in the x and y directions,
+            # and therefore would be listed twice)
+            df['cellid'] = list(zip(df.per, df.k, df.i, df.j))
+            duplicates = df.duplicated(subset=['cellid'])
+            df = df.loc[~duplicates, ['k', 'i', 'j', 'per', 'head']]
+            print("getting perimeter heads took {:.2f}s\n".format(time.time() - t0))
+
+
+
+        elif self.boundary_type == 'flux':
+            check_source_files([self.parent_cell_budget_file])
+            if self.parent.version == 'mf6':
+                if self.parent_binary_grid_file is None:
+                    raise ValueError('Specified flux perimeter boundary requires a '
+                                     'parent_binary_grid_file if the parent is MF6')
+                else:
+                    check_source_files([self.parent_binary_grid_file])
+            fileobj = bf.CellBudgetFile(self.parent_cell_budget_file)  # , precision='single')
+            all_kstpkper = fileobj.get_kstpkper()
+
+            last_steps = {kper: kstp for kstp, kper in all_kstpkper}
+
+            print('\ngetting perimeter fluxes...')
+            t0 = time.time()
+            dfs = []
+            parent_periods = []
+
+            # TODO: consider refactoring to move this into its own function
+            #  * handle vertical fluxes
+            #  * possibly handle a rotated inset with a different angle than the parent
+            #    (for now, assuming the grids are colinear)
+            #  * handle the geometry issues for the inset:
+            #    need to locate edge faces (x, y, z) based on which face is out
+            #    (e.g. left, right, up, down)
+
+            # TODO: refactor self.inset_boundary_cells
+            #  It's probably not ideal to have self.inset_boundary_cells
+            #  be a 'public' attribute that gets modified every stress period,
+            #  without any information tying its current state
+            #  to a specific stress period. It should either have all stress periods,
+            #  or the stress period-specific information
+            #  (the fluxes, and cell thickness if we are considering sat. thickness)
+            #  should be pulled out into a separate container.
+
+            # make a dataframe to store these
+            self.inset_boundary_cell_faces = self.inset_boundary_cells.copy()
+            # get the locations of the boundary face midpoints
+            x, y, z = get_cellface_midpoint(self.inset.modelgrid,
+                                            *self.inset_boundary_cells[['k', 'i', 'j', 'cellface']].T.values)
+            self.inset_boundary_cell_faces['x'] = x
+            self.inset_boundary_cell_faces['y'] = y
+            self.inset_boundary_cell_faces['z'] = z
+            # calculate the thickness to later get the face area
+            # TODO: consider saturated thickness instead, but this would require
+            #  interpolating parent heads to inset cell locations
+            self.inset_boundary_cell_faces['thickness'] = \
+                self.inset_boundary_cell_faces.top - self.inset_boundary_cell_faces.botm
+            # populate cell face widths
+            self.inset_boundary_cell_faces['width'] = np.nan
+            left_right_faces = self.inset_boundary_cell_faces['cellface'].isin({'left', 'right'})
+            # left and right faces are along columns
+            rows = self.inset_boundary_cell_faces.loc[left_right_faces, 'i']
+            self.inset_boundary_cell_faces.loc[left_right_faces, 'width'] = self.inset.modelgrid.delc[rows]
+            # top and bottom faces are along rows
+            top_bottom_faces = self.inset_boundary_cell_faces['cellface'].isin({'top', 'bottom'})
+            columns = self.inset_boundary_cell_faces.loc[top_bottom_faces, 'j']
+            self.inset_boundary_cell_faces.loc[top_bottom_faces, 'width'] = self.inset.modelgrid.delr[columns]
+            assert not self.inset_boundary_cell_faces['width'].isna().any()
+
+            self.inset_boundary_cell_faces['face_area'] = \
+                self.inset_boundary_cell_faces['width'] * \
+                self.inset_boundary_cell_faces['thickness']
+            # placeholder for interpolated values
+            self.inset_boundary_cell_faces['q_interp'] = np.nan
+
+            # Now handle the geometry issues for the parent:
+            # thicknesses (at cell centers);
+            # for now, taking the average thickness at a connected face
+            parent_thick = self.parent.modelgrid.cell_thickness
+
+            # interpolate parent cell center values
+            # (where the cell by cell flows and specific discharge values are located)
+            # to inset face centers along the exterior sides of the boundary cells
+            # (the edge of the inset model, where the boundary fluxes will be located)
+            px, py, pz = self.parent_xyzcellcenters
+            iface_interp = Interpolator((px, py, pz),
+                                        self.inset_boundary_cells[['x', 'y', 'z']].T.values,
+                                        d=3, source_values_mask=self._source_grid_mask
+                                        )
+            _ = iface_interp.interp_weights
+
+
+            # get a dataframe of cell connections
+            # (that can be reused with subsequent stress periods)
+            cell_connections_df = None
+            if self.parent.version == 'mf6':
+                cell_connections_df = get_intercell_connections(self.parent_binary_grid_file)
+
+            for inset_per, parent_per in self.inset_parent_period_mapping.items():
+                print(f'for stress period {inset_per}', end=', ')
+                t1 = time.time()
+                # skip getting data if parent period is already represented
+                # (fluxes will be reused)
+                if parent_per in parent_periods:
+                    continue
+                else:
+                    parent_periods.append(parent_per)
+                parent_kstpkper = last_steps[parent_per], parent_per
+
+                # get parent specific discharge for the inset area
+                qx, qy, qz = get_qx_qy_qz(self.parent_cell_budget_file,
+                                          cell_connections_df=cell_connections_df,
+                                          version=self.parent.version,
+                                          kstpkper=parent_kstpkper,
+                                          specific_discharge=True,
+                                          modelgrid=self.parent.modelgrid,
+                                          headfile=self.parent_head_file)
+
+                # pad the parent flux arrays on the top and bottom
+                # so that inset cells above and below the top/bottom cell centers
+                # will be within the interpolation space
+                qx = np.pad(qx, pad_width=1, mode='edge')[:, 1:-1, 1:-1]
+                qy = np.pad(qy, pad_width=1, mode='edge')[:, 1:-1, 1:-1]
+                qz = np.pad(qz, pad_width=1, mode='edge')[:, 1:-1, 1:-1]
+
+                # interpolate inset boundary fluxes from the 3D parent flux solution
+                t2 = time.time()
+                y_flux = iface_interp.interpolate(qy, method='linear')
+                x_flux = iface_interp.interpolate(qx, method='linear')
+                print(f"interpolation took {time.time() - t2:.2f}s")
+
+                t2 = time.time()
+                self.inset_boundary_cell_faces = self.inset_boundary_cell_faces.assign(
+                    qx_interp=x_flux,
+                    qy_interp=y_flux)
+
+                # assign q values, flipping the sign for fluxes counter to the
+                # CBB convention directions of right and bottom
+                top_faces = self.inset_boundary_cell_faces.cellface == 'top'
+                self.inset_boundary_cell_faces.loc[top_faces, 'q_interp'] = \
+                    self.inset_boundary_cell_faces.loc[top_faces, 'qy_interp']
+                bottom_faces = self.inset_boundary_cell_faces.cellface == 'bottom'
+                self.inset_boundary_cell_faces.loc[bottom_faces, 'q_interp'] = \
+                    -self.inset_boundary_cell_faces.loc[bottom_faces, 'qy_interp']
+                left_faces = self.inset_boundary_cell_faces.cellface == 'left'
+                self.inset_boundary_cell_faces.loc[left_faces, 'q_interp'] = \
+                    self.inset_boundary_cell_faces.loc[left_faces, 'qx_interp']
+                right_faces = self.inset_boundary_cell_faces.cellface == 'right'
+                self.inset_boundary_cell_faces.loc[right_faces, 'q_interp'] = \
+                    -self.inset_boundary_cell_faces.loc[right_faces, 'qx_interp']
+
+                # convert specific discharge in inset cells to Q -- flux for well package
+                self.inset_boundary_cell_faces['q'] = \
+                    self.inset_boundary_cell_faces['q_interp'] * self.inset_boundary_cell_faces['face_area']
+
+                # make a DataFrame of boundary fluxes at perimeter cell locations
+                df = self.inset_boundary_cell_faces[['k', 'i', 'j', 'idomain', 'q']].copy()
+                df['per'] = inset_per
+
+                # boundary fluxes must be in active cells;
+                # corresponding parent cells must be active too,
+                # otherwise a nan flux will be produced.
+                # drop nan fluxes, which will revert these boundary cells to the
+                # default no-flow condition in MODFLOW
+                # (consistent with the parent model cell being inactive)
+                keep = (df['idomain'] > 0) & ~df['q'].isna()
+                dfs.append(df.loc[keep].copy())
+                print(f"assigning face fluxes took {time.time() - t2:.2f}s")
+                print(f"took {time.time() - t1:.2f}s total")
+
+            df = pd.concat(dfs)
+            print("getting perimeter fluxes took {:.2f}s\n".format(time.time() - t0))
+
+        # convert to one-based and comment out the header if the dataframe
+        # will be written straight to an external file
+        if for_external_files:
+            df.rename(columns={'k': '#k'}, inplace=True)
+            df['#k'] += 1
+            df['i'] += 1
+            df['j'] += 1
+        return df
+
\ No newline at end of file
diff --git a/_sources/10min.rst.txt b/_sources/10min.rst.txt
new file mode 100644
index 00000000..2448a50e
--- /dev/null
+++ b/_sources/10min.rst.txt
@@ -0,0 +1,115 @@
+10 Minutes to Modflow-setup
+============================
+This is a short introduction to help get you up and running with Modflow-setup. A complete workflow can be found in the :ref:`Pleasant Lake Example`; additional examples of working configuration files can be found in the :ref:`Configuration File Gallery`.
+
+1) Define the model active area and coordinate reference system
+-----------------------------------------------------------------
+Depending on the problem, the model area might simply be a box enclosing features of interest and any relevant hydrologic boundaries, or an irregular shape surrounding a watershed or other feature. In either case, it may be helpful to :ref:`download hydrography first <3) Develop flowlines to represent streams>`, to ensure that the model area includes all important features. The model should be referenced to a `projected coordinate reference system (CRS) `_, ideally with length units of meters and an authority code (such as an `EPSG code `_) that unambiguously defines it.
+
+Modflow-setup provides two ways to define a model grid:
+
+ * x and y coordinates of the model origin (lower left or upper left corner), grid spacing, number of rows and columns, rotation, and CRS
+ * As a rectangular area of specified discretization surrounding a polygon shapefile of the model active area (traced by hand or developed by some other means) or a feature of interest buffered by a specified distance.
+
+The active model area is defined subsequently in the DIS package.
+
+ .. Note::
+
+ Don't forget about the farfield! Usually it is advised to include important competing sinks outside of the immediate area of interest (the nearfield), so that the solution is not over-specified by the perimeter boundary condition, and recognizing that the surface watershed boundary doesn't always coincide exactly with the groundwatershed boundary. See Haitjema (1995) and Anderson and others (2015) for more info.
+
+ .. Note::
+ Need a polygon defining a watershed? In the United States, the `Watershed Boundary Dataset `_ provides watershed delineations at various scales.
+
+
+2) Create a setup script and configuration file
+------------------------------------------------
+Usually creating the desired grid requires some iteration. We can get started on this by making a model setup script and corresponding configuration file.
+
+An initial model setup script for making the model grid:
+
+ .. literalinclude:: ../../examples/initial_grid_setup.py
+ :language: python
+ :linenos:
+
+ Download the file:
+ :download:`initial_grid_setup.py <../../examples/initial_grid_setup.py>`
+
+An initial configuration file for developing a model grid around a pre-defined active area:
+
+ .. literalinclude:: ../../examples/initial_config_poly.yaml
+ :language: yaml
+ :linenos:
+
+ Download the file:
+ :download:`initial_config_poly.yaml <../../examples/initial_config_poly.yaml>`
+
+To define a model grid using an origin, grid spacing, and dimensions, a ``setup_grid:`` block like this one could be substituted above:
+
+ .. literalinclude:: ../../examples/initial_config_box.yaml
+ :language: yaml
+ :start-at: setup_grid:
+
+ Download the file:
+ :download:`initial_config_box.yaml <../../examples/initial_config_box.yaml>`
+
+Now ``initial_grid_setup.py`` can be run repeatedly to explore different grids.
+
+
+3) Develop flowlines to represent streams
+------------------------------------------
+Next, let's get some data for setting up boundary conditions. For streams, Modflow-setup can accept any linestring shapefile that has a routing column indicating how the lines connect to one another. This can be created by hand, or in the United States, obtained from the National Hydrography Dataset Plus (NHDPlus). There are two types of NHDPlus:
+
+ - `NHDPlus version 2 `_ is mapped at the 1:100,000 scale, and is therefore suitable for larger regional models with cell sizes of ~100s of meters to ~1km. NHDPlus version 2 can be the best choice for larger model areas (greater than approx 1,000 km\ :sup:`2`), where NHDPlus HR might have too many lines. NHDPlus version 2 can be obtained from the `EPA `_.
+ - `NHDPlus High Resolution (HR) `_ is mapped at the finer 1:24,000 scale, and may therefore work better for smaller problems (discretizations of ~100 meters or less) where better alignment between the mapped lines and stream channel in the DEM is desired, and where the number of linestring features to manage won't be prohibitive. NHDPlus HR can be accessed via the `National Map Downloader `_.
+
+Preprocessing NHDPlus HR
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Currently, NHDPlus HR data, which comes in a file geodatabase (GDB), must be preprocessed into a shapefile for input to Modflow-setup and `SFRmaker `_ (which Modflow-setup uses to build the stream network). In many cases, multiple GDBs may need to be combined and undesired line features such as storm sewers culled. The `SFRmaker documentation `_ has examples of how to read and preprocess NHDPlus HR.
+
+Preprocessing NHDPlus version 2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Depending on the application, NHDPlus version 2 may not need to be preprocessed. Reasons to preprocess include:
+
+* the model area is large, and
+
+ * read times for one or more NHDPlus drainage basins are slowing the model build
+ * the DEM being used for the model top is relatively coarse, and sampling a fine DEM during the model build is prohibitive for time or space reasons.
+
+* the stream network is too dense, with too many model cells containing SFR reaches (especially a problem in the eastern US at the 1 km resolution); or there are too many ephemeral streams represented.
+* the stream network has divergences where one or more distributary lines are downstream of a confluence.
+
+The `preprocessing module in SFRmaker `_ can resolve these issues, producing a single set of culled flowlines with width and elevation information and divergences removed. The elevation functionality in the preprocessing module requires a DEM.
+
+
+4) Get a DEM
+-------------
+The `National Map Downloader `_ has 10 meter DEMs for the United States, with finer resolutions available in many areas. Typically, these come in 1 degree x 1 degree tiles. If many tiles are needed, the uGet Download Manager linked to on the National Map site can automate the downloads. Alternatively, links to the files follow a consistent format, and are therefore amenable to scripted or manual downloads (see the sketch below). For example, the tile located between -88 and -87 west and 43 and 44 north is available at:
+
+https://prd-tnm.s3.amazonaws.com/StagedProducts/Elevation/13/TIFF/current/n44w088/USGS_13_n44w088.tif
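+
+Since the links follow this pattern, a short script can generate them. A minimal sketch (assuming the pattern above holds for all tiles of interest; tile labels give the northwest corner, so ``n44w088`` spans 43 to 44 north and 87 to 88 west):
+
+.. code-block:: python
+
+    # Build National Map 1/3 arc-second DEM tile URLs for a block of tiles.
+    base_url = ("https://prd-tnm.s3.amazonaws.com/StagedProducts/Elevation/13/"
+                "TIFF/current/n{lat:02d}w{lon:03d}/"
+                "USGS_13_n{lat:02d}w{lon:03d}.tif")
+    for lat in range(44, 46):      # tile labels covering 43 to 45 north
+        for lon in range(88, 90):  # tile labels covering 87 to 89 west
+            print(base_url.format(lat=lat, lon=lon))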
+
+Making a virtual raster
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Once all of the tiles are downloaded, a virtual raster can be made that allows them to be treated as a single file, without any modifications to the original data. This is required for input to SFRmaker and Modflow-setup. For example, in `QGIS `_:
+
+ a) Load all of the tiles to verify that they are correct and cover the whole model active area.
+ b) From the ``Raster`` menu, select ``Miscellaneous > Build Virtual Raster``. This will make a virtual raster file with a ``.vrt`` extension that points to the original set of GeoTIFFs, but allows them to be treated as a single continuous raster.
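+
+The same step can also be scripted. A minimal sketch using the GDAL Python bindings (assumes GDAL is installed, with the downloaded tiles in a hypothetical ``dem_tiles/`` folder; ``osgeo.gdal.BuildVRT`` is the programmatic equivalent of the QGIS menu option above):
+
+.. code-block:: python
+
+    from glob import glob
+
+    from osgeo import gdal
+
+    tiles = sorted(glob('dem_tiles/*.tif'))  # the downloaded GeoTIFF tiles
+    vrt = gdal.BuildVRT('dem.vrt', tiles)    # make dem.vrt alongside the tiles
+    vrt = None  # dereference to flush and close the virtual raster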
+
+5) Make a minimum working configuration file and model build script
+--------------------------------------------------------------------
+Now that we have a set of flowlines and a DEM (and perhaps shapefiles for other surface water boundaries), we can fill out the rest of the configuration file to get an initial working model. Later, additional details such as more layers, a well package, observations, or other features can be added in a stepwise approach (Haitjema, 1995).
+
+ .. literalinclude:: ../../examples/initial_config_full.yaml
+ :language: yaml
+ :linenos:
+
+ Download the file:
+ :download:`initial_config_full.yaml <../../examples/initial_config_full.yaml>`
+
+A setup script for making a minimum working model. Additional functions can be added later to further customize the model outside of the Modflow-setup build step.
+
+ .. literalinclude:: ../../examples/initial_model_setup.py
+ :language: python
+ :linenos:
+
+ Download the file:
+ :download:`initial_model_setup.py <../../examples/initial_model_setup.py>`
diff --git a/_sources/api/index.rst.txt b/_sources/api/index.rst.txt
new file mode 100644
index 00000000..35ebaf46
--- /dev/null
+++ b/_sources/api/index.rst.txt
@@ -0,0 +1,25 @@
+==============
+Code Reference
+==============
+
+Model classes
+--------------
+
+.. toctree::
+
+ mfsetup.mf6model
+ mfsetup.mfnwtmodel
+ mfsetup.mfmodel
+
+
+Supporting modules
+-------------------
+
+.. toctree::
+
+ mfsetup.discretization
+ mfsetup.fileio
+ mfsetup.grid
+ mfsetup.interpolate
+ mfsetup.tdis
+ mfsetup.tmr
diff --git a/_sources/api/mfsetup.discretization.rst.txt b/_sources/api/mfsetup.discretization.rst.txt
new file mode 100644
index 00000000..b2aa93c8
--- /dev/null
+++ b/_sources/api/mfsetup.discretization.rst.txt
@@ -0,0 +1,7 @@
+mfsetup.discretization module
+=============================
+
+.. automodule:: mfsetup.discretization
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/_sources/api/mfsetup.fileio.rst.txt b/_sources/api/mfsetup.fileio.rst.txt
new file mode 100644
index 00000000..66519b04
--- /dev/null
+++ b/_sources/api/mfsetup.fileio.rst.txt
@@ -0,0 +1,7 @@
+mfsetup.fileio module
+=============================
+
+.. automodule:: mfsetup.fileio
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/_sources/api/mfsetup.grid.rst.txt b/_sources/api/mfsetup.grid.rst.txt
new file mode 100644
index 00000000..aa9ec529
--- /dev/null
+++ b/_sources/api/mfsetup.grid.rst.txt
@@ -0,0 +1,7 @@
+mfsetup.grid module
+=============================
+
+.. automodule:: mfsetup.grid
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/_sources/api/mfsetup.interpolate.rst.txt b/_sources/api/mfsetup.interpolate.rst.txt
new file mode 100644
index 00000000..1c9ef427
--- /dev/null
+++ b/_sources/api/mfsetup.interpolate.rst.txt
@@ -0,0 +1,7 @@
+mfsetup.interpolate module
+=============================
+
+.. automodule:: mfsetup.interpolate
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/_sources/api/mfsetup.mf6model.rst.txt b/_sources/api/mfsetup.mf6model.rst.txt
new file mode 100644
index 00000000..a8d0164a
--- /dev/null
+++ b/_sources/api/mfsetup.mf6model.rst.txt
@@ -0,0 +1,6 @@
+MF6model class
+================================
+
+.. automodule:: mfsetup.mf6model
+ :members:
+ :show-inheritance:
diff --git a/_sources/api/mfsetup.mfmodel.rst.txt b/_sources/api/mfsetup.mfmodel.rst.txt
new file mode 100644
index 00000000..ede29c95
--- /dev/null
+++ b/_sources/api/mfsetup.mfmodel.rst.txt
@@ -0,0 +1,6 @@
+MFsetupMixin class
+=============================
+
+.. automodule:: mfsetup.mfmodel
+ :members:
+ :show-inheritance:
diff --git a/_sources/api/mfsetup.mfnwtmodel.rst.txt b/_sources/api/mfsetup.mfnwtmodel.rst.txt
new file mode 100644
index 00000000..04bf7202
--- /dev/null
+++ b/_sources/api/mfsetup.mfnwtmodel.rst.txt
@@ -0,0 +1,6 @@
+MFnwtModel class
+================================
+
+.. automodule:: mfsetup.mfnwtmodel
+ :members:
+ :show-inheritance:
diff --git a/_sources/api/mfsetup.tdis.rst.txt b/_sources/api/mfsetup.tdis.rst.txt
new file mode 100644
index 00000000..17ef5e37
--- /dev/null
+++ b/_sources/api/mfsetup.tdis.rst.txt
@@ -0,0 +1,7 @@
+mfsetup.tdis module
+=============================
+
+.. automodule:: mfsetup.tdis
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/_sources/api/mfsetup.tmr.rst.txt b/_sources/api/mfsetup.tmr.rst.txt
new file mode 100644
index 00000000..0d8a73da
--- /dev/null
+++ b/_sources/api/mfsetup.tmr.rst.txt
@@ -0,0 +1,8 @@
+mfsetup.tmr module
+=============================
+
+.. automodule:: mfsetup.tmr
+ :members:
+ :undoc-members:
+ :show-inheritance:
+ :exclude-members: Tmr
diff --git a/_sources/concepts/index.rst.txt b/_sources/concepts/index.rst.txt
new file mode 100644
index 00000000..a7c63d76
--- /dev/null
+++ b/_sources/concepts/index.rst.txt
@@ -0,0 +1,10 @@
+==============================================
+Modflow-setup concepts and methods
+==============================================
+
+.. toctree::
+ :maxdepth: 1
+
+ Interpolation
+ Local grid refinement
+ Specifying perimeter boundary conditions
diff --git a/_sources/concepts/interp.rst.txt b/_sources/concepts/interp.rst.txt
new file mode 100644
index 00000000..9a821e4a
--- /dev/null
+++ b/_sources/concepts/interp.rst.txt
@@ -0,0 +1,23 @@
+===========================================================
+Interpolating data to the model grid
+===========================================================
+
+For most interpolation operations where geo-located data are sampled to the model grid, Modflow-setup uses a barycentric (triangular) interpolation scheme similar to :py:func:`scipy.interpolate.griddata`. This n-dimensional unstructured method allows for interpolation between grids that are aligned with different coordinate reference systems, as well as interpolation between unstructured grids. As described `here `_, setup of the barycentric interpolation involves:
+
+ 1) Construction of a triangular mesh linking the source points
+ 2) Searching the mesh to find the containing simplex for each destination point
+ 3) Computation of barycentric coordinates (weights) that describe where each destination point is in terms of the n nearest source points (where n-1 is the number of dimensions)
+ 4) Computing the interpolated values at the destination points from the source values and the weights
+
+Steps 1-3 are time-consuming. Therefore, for each interpolation problem, Modflow-setup performs these steps once and caches the results, so that step 4 can be repeated quickly on subsequent calls. This can greatly speed up, for example, the computation of hydraulic conductivity or bottom elevation values for models with many layers, or the interpolation of boundary conditions for models with many stress periods.
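+
+For illustration, here is a minimal 2-D sketch of this precompute-then-reuse pattern with :py:mod:`scipy.spatial` (a simplified stand-in, not Modflow-setup's implementation; see :py:mod:`mfsetup.interpolate` for the real one):
+
+.. code-block:: python
+
+    import numpy as np
+    from scipy.spatial import Delaunay
+
+    def interp_weights(source_xy, dest_xy):
+        """Steps 1-3: triangulate the source points and find, for each
+        destination point, the containing simplex and barycentric weights."""
+        tri = Delaunay(source_xy)
+        simplex = tri.find_simplex(dest_xy)  # -1 for points outside the hull
+        vertices = np.take(tri.simplices, simplex, axis=0)
+        temp = np.take(tri.transform, simplex, axis=0)
+        delta = dest_xy - temp[:, 2]
+        bary = np.einsum('njk,nk->nj', temp[:, :2, :], delta)
+        weights = np.hstack((bary, 1 - bary.sum(axis=1, keepdims=True)))
+        return vertices, weights
+
+    def interpolate(values, vertices, weights):
+        """Step 4: cheap to repeat for each new set of source values."""
+        return np.einsum('nj,nj->n', np.take(values, vertices), weights)
+
+    source_xy = np.random.rand(100, 2)
+    dest_xy = np.random.rand(10, 2) * 0.8 + 0.1  # stay inside the hull
+    vertices, weights = interp_weights(source_xy, dest_xy)  # slow; done once
+    for _ in range(3):  # fast; repeated (e.g. per layer or stress period)
+        print(interpolate(np.random.rand(100), vertices, weights))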
+
+A few more notes:
+ * Linear interpolation is the default method in most instances, except for recharge, which is often based on categorical data such as land cover and soil types, and therefore has nearest-neighbor as the default method.
+ * The interpolation method can generally be specified explicitly for a given dataset by including a ``resample_method`` argument. Available methods are listed in the documentation for :py:func:`scipy.interpolate.griddata`. For example, if we wanted to override the ``'nearest'`` default for the Recharge Package:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-after: # Recharge Package
+ :end-before: period_stats
+
+ * More details are available in the documentation for the :py:mod:`mfsetup.interpolate` module.
diff --git a/_sources/concepts/lgr.rst.txt b/_sources/concepts/lgr.rst.txt
new file mode 100644
index 00000000..bee938cc
--- /dev/null
+++ b/_sources/concepts/lgr.rst.txt
@@ -0,0 +1,89 @@
+===========================================================
+Local grid refinement
+===========================================================
+
+In MODFLOW 6, two groundwater models can be tightly coupled in the same simulation, which allows for efficient "local grid refinement" (LGR; Mehl and others, 2006) and "semistructured" (Feinstein and others, 2016) configurations that combine multiple structured model layers at different resolutions (Langevin and others, 2017). Locally refined areas are conceptualized as separate "child" models that are linked to the surrounding (usually coarser) "parent" model through the GWF Model Exchange (GWFGWF) Package. Similarly, "semistructured" configurations are represented by multiple linked models for each layer or group of layers with the same resolution.
+
+Modflow-setup supports local grid refinement via an ``lgr:`` subblock within the ``setup_grid:`` block of the configuration file. The ``lgr:`` subblock consists of one or more subblocks, each keyed by a linked model name and containing configuration input for that linked model. Vertical layer refinement relative to the "parent" model is also specified for each layer of the parent model.
+
+For example, the following "parent" configuration for the :ref:`Pleasant Lake Example ` creates a local refinement model named "``pleasant_lgr_inset``" that spans all layers of the parent model, at the same vertical resolution (1 inset model layer per parent model layer).
+
+.. literalinclude:: ../../../examples/pleasant_lgr_parent.yml
+ :language: yaml
+ :start-at: lgr:
+ :end-before: # Structured Discretization Package
+
+The horizontal location and resolution of the inset model are specified in the :ref:`inset model configuration file `, in the same way that they are specified for any model. In this example, the parent model has a uniform horizontal resolution of 200 meters, and the inset a uniform resolution of 40 meters (a horizontal refinement of 5 inset model cells per parent model cell). The inset model resolution must be a factor of the parent model resolution.
+
+.. image:: ../_static/pleasant_lgr.png
+ :width: 1200
+ :alt: Example of LGR refinement in all layers.
+
+.. image:: ../_static/pleasant_lgr_xsection.png
+ :width: 1200
+ :alt: Example of LGR refinement in all layers.
+
+Input from the ``lgr:`` subblock and the inset model configuration file(s) is passed to the :py:class:`Flopy Lgr Utility `, which helps create input for the GWF Model Exchange Package.
+
+Within the context of a Python session, inset model information is stored in a dictionary under an ``inset`` attribute attached to the parent model. For example, to access a Flopy model object for the above inset model from a parent model named ``model``:
+
+.. code-block:: python
+
+ inset_model = model.inset['pleasant_lgr_inset']
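+
+Since ``inset`` is a dictionary keyed by model name, any number of linked models can be iterated over. A small sketch (assuming Flopy model objects, which carry a ``modelgrid`` attribute):
+
+.. code-block:: python
+
+    for name, inset in model.inset.items():
+        nlay, nrow, ncol = inset.modelgrid.shape
+        print(name, nlay, nrow, ncol)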
+
+
+
+
+Specification of vertical refinement
+-----------------------------------------
+Vertical refinement in the LGR child grid is specified in the ``layer_refinement:`` item, as the number of child layers in each parent model layer. Currently vertical refinement is limited to even subdivision of parent model layers. Vertical refinement can be specified as an integer for uniform refinement across all parent model layers:
+
+.. code-block:: yaml
+
+ layer_refinement: 1
+
+a list with an entry for each parent layer:
+
+.. code-block:: yaml
+
+ layer_refinement: [1, 1, 1, 0, 0]
+
+or a dictionary with entries for each parent layer that is refined:
+
+.. code-block:: yaml
+
+ layer_refinement:
+   0: 1
+   1: 1
+   2: 1
+
+Parent model layers with 0 specified refinement are excluded from the child model. The list and dictionary inputs above are equivalent, as unlisted layers in the dictionary are assigned default refinement values of 0. Refinement values > 1 result in even subdivision of the parent layers. Similar to one-way coupled inset models, LGR child model layer surfaces can be discretized at the finer child resolution from the original source data. In the example below, a 9-layer child model is set within the top 4 layers of a 5-layer parent model. The parent model ``lgr:`` configuration block is specified as:
+
+.. literalinclude:: ../../../mfsetup/tests/data/pleasant_vertical_lgr_parent.yml
+ :language: yaml
+ :start-at: lgr:
+ :end-before: # Structured Discretization Package
+
+In the child model ``dis:`` configuration block, raster surfaces that were used to define the parent model layer bottoms are specified at their respective desired locations within the child model grid; Modflow-setup then subdivides these to create the desired layer configuration. The layer specification in the child model ``dis:`` block must be consistent with the ``layer_refinement:`` input in the parent model configuration (see below).
+
+.. literalinclude:: ../../../mfsetup/tests/data/pleasant_vertical_lgr_inset.yml
+ :language: yaml
+ :start-at: dis:
+ :end-before: # Recharge and Well packages are inherited
+
+The figure below shows a cross section through the model grid resulting from this configuration:
+
+.. image:: ../_static/pleasant_vlgr_xsection.png
+ :width: 1500
+ :alt: Example of partial vertical LGR refinement with layer subdivision.
+
+
+**A few notes about LGR functionality in Modflow-setup**
+
+* **Locally refined "inset" models must be aligned with the parent model grid**, which also means that their horizontal resolution must be a factor of the "parent" model resolution. Modflow-setup handles the alignment automatically by "snapping" inset model grids to the lower left corner of the parent cell containing the lower left corner of the inset model grid (the inset model origin in real-world coordinates).
+* Similarly, inset models need to align vertically with the parent model layers. Parent layers can be subdivided using values > 1 in the ``layer_refinement:`` input option.
+* Specifically, **the layer specification in the child model** ``dis:`` **block must be consistent with the** ``layer_refinement:`` **input in the parent model configuration**. For example, if a ``layer_refinement`` of ``3`` is specified for the last parent layer included in the child model domain, then the last two raster surfaces specified in the child model ``dis:`` block must be separated by two intermediate layer bottoms. Similarly, the values in ``layer_refinement:`` in the parent model configuration must sum to the ``nlay:`` specified in the child model ``dis:`` configuration block.
+* Regardless of the supplied input, the child model bottom and parent model tops are aligned to remove any vertical gaps or overlap in the numerical model grid. If a raster surface is supplied for the child model bottom, the child bottom/parent top surface is based on the mean bottom elevations sampled to the child cells within each parent cell area.
+* Child model ``layer_refinement:`` must start at the top of the parent model, and include a contiguous sequence of parent model layers.
+* Multiple inset models at different horizontal locations, and even inset models within inset models should be achievable, but have not been tested.
+* **Multi-model configurations come with costs.** Each model within a MODFLOW 6 simulation carries its own set of files, including external text array and list input files to packages. As with a single model, when using the automated parameterization functionality in `pyEMU `_, the number of files is multiplied. At some point, it may be more efficient to work with individual models, and to design the grids in such a way that boundary conditions along the model perimeters have a minimal impact on the predictions of interest.
diff --git a/_sources/concepts/perimeter-bcs.rst.txt b/_sources/concepts/perimeter-bcs.rst.txt
new file mode 100644
index 00000000..55c44183
--- /dev/null
+++ b/_sources/concepts/perimeter-bcs.rst.txt
@@ -0,0 +1,111 @@
+===========================================================
+Specifying perimeter boundary conditions from another model
+===========================================================
+
+Often the area we are trying to model is part of a larger flow system, and we must account for groundwater flow across the model boundaries. Modflow-setup allows for perimeter boundary conditions to be specified from the groundwater flow solution of another Modflow model.
+
+
+Features and Limitations
+-------------------------
+* Currently, specified head perimeter boundaries are supported via the MODFLOW Constant Head (CHD) Package; specified flux boundaries are supported via the MODFLOW Well (WEL) Package.
+* The parent model solution (providing the values for the boundaries) is assumed to align with the inset model time discretization.
+* The parent model may have different length units.
+* The parent model may be of a different MODFLOW version (e.g. MODFLOW 6 inset with a MODFLOW-NWT parent)
+* For specified head perimeter boundaries, the inset model grid need not align with the parent model grid; values from the parent model solution are interpolated linearly to the cell centers along the inset model perimeter in the x, y and z directions (using a barycentric triangular method similar to :py:func:`scipy.interpolate.griddata`). However, this means that there may be some mismatch between the parent and inset model solutions along the inset model perimeter, in places where there are abrupt or non-linear head gradients. Boundaries for inset models should always be set sufficiently far away that they do not appreciably impact the model solution in the area(s) of interest. The :ref:`LGR capability ` of Modflow-setup can help with this.
+* Specified flux boundaries are currently limited to the parent and inset models being colinear.
+* The perimeter may be irregular. For example, the edge of the model active area may follow a major surface water feature along the opposite side.
+* Specified perimeter heads in MODFLOW-NWT models will have ending heads for each stress period assigned from the starting head of the next stress period (with the last period having the same starting and ending heads). The MODFLOW 6 Constant Head Package only supports assignment of a single head per stress period. This distinction only matters for models where stress periods are subdivided by multiple timesteps.
+
+
+Configuration input
+-------------------
+Input to set up perimeter boundaries is specified in two places:
+
+1) The ``parent:`` model block, in which a parent or source model can be specified. Currently only a single parent or source model is supported. The parent or source model can be used for other properties (e.g. hydraulic conductivity) and stresses (e.g. recharge) in addition to the perimeter boundary.
+
+ Input example:
+
+ .. code-block:: yaml
+
+ parent:
+ namefile: 'pleasant.nam'
+ model_ws: 'data/pleasant/'
+ version: 'mfnwt'
+ copy_stress_periods: 'all'
+ start_date_time: '2012-01-01'
+ length_units: 'meters'
+ time_units: 'days'
+
+2) In a ``perimeter_boundary:`` sub-block for the relevant package (CHD for specified heads, or WEL for specified fluxes).
+
+ Input example (specified head):
+
+ .. code-block:: yaml
+
+ chd:
+ perimeter_boundary:
+ parent_head_file: 'data/pleasant/pleasant.hds'
+
+ Input example (specified flux, with optional shapefile defining an irregular perimeter boundary,
+ and the MODFLOW 6 binary grid file, which is required for reading the cell budget output from MODFLOW 6 parent models):
+
+ .. code-block:: yaml
+
+ wel:
+ perimeter_boundary:
+ shapefile: 'shellmound/tmr_parent/gis/irregular_boundary.shp'
+ parent_cell_budget_file: 'shellmound/tmr_parent/shellmound.cbc'
+ parent_binary_grid_file: 'shellmound/tmr_parent/shellmound.dis.grb'
+
+
+Specifying the time discretization
+------------------------------------
+By default, inset model stress period 0 is assumed to align with parent model stress period 0 (``copy_stress_periods: 'all'`` in the :ref:`configuration file ` parent block, which is the default). Alternatively, stress periods can be mapped explicitly using a dictionary. For example:
+
+.. code-block:: yaml
+
+ copy_stress_periods:
+ 0: 1
+ 1: 2
+ 2: 3
+
+where ``0: 1`` indicates that the first stress period in the inset model aligns with the second stress period in the parent model (stress period 1), etc.
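+
+When no mapping is given, inset stress periods beyond the parent model's last period reuse that last period. A sketch of the default logic (mirroring the fallback in :py:mod:`mfsetup.tmr`):
+
+.. code-block:: python
+
+    parent_nper, inset_nper = 3, 5
+    mapping = {i: i if i < parent_nper else parent_nper - 1
+               for i in range(inset_nper)}
+    assert mapping == {0: 0, 1: 1, 2: 2, 3: 2, 4: 2}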
+
+
+Specifying the locations of perimeter boundary cells
+----------------------------------------------------
+Modflow-setup provides 3 primary options for specifying the locations of perimeter cells. In all cases, boundary cells are produced by the :meth:`mfsetup.tmr.TmrNew.get_inset_boundary_cells` method, and the resulting cells (including the boundary faces) can be visualized in a GIS environment with the ``boundary_cells.shp`` shapefile that gets written to the ``tables/`` folder by default.
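+
+The exported cells can also be checked in Python rather than a GIS. A quick sketch with geopandas (assuming the default ``tables/`` output location):
+
+.. code-block:: python
+
+    import geopandas as gpd
+
+    cells = gpd.read_file('tables/boundary_cells.shp')
+    # one row per boundary cell face, with (k, i, j) indices
+    print(cells[['k', 'i', 'j', 'cellface']].head())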
+
+**1) No specification of where the perimeter boundary should be applied** (e.g. a shapefile) **and** ``by_layer: False`` **(the default)**. Perimeter BC cells are applied to active cells that coincide with the edge of the maximum areal footprint of the active model area. In places where the edge of the active area is inside of the max active footprint, no perimeter cells are applied.
+
+ Input example:
+
+ .. code-block:: yaml
+
+ chd:
+ perimeter_boundary:
+ parent_head_file: 'data/pleasant/pleasant.hds'
+
+
+**2) No specification of where the perimeter boundary should be applied**, with ``by_layer: True``. This is the same as option 1), except that the active footprint is defined by layer from the idomain array. This option is generally not recommended, as it can lead to boundary cells being placed in the model interior (along layer pinch-outs, for example). Users of this option should check the results carefully by inspecting the ``boundary_cells.shp`` shapefile output.
+
+ Input example:
+
+ .. code-block:: yaml
+
+ chd:
+ perimeter_boundary:
+ parent_head_file: 'data/pleasant/pleasant.hds'
+ by_layer: True
+
+**3) Specification of perimeter boundary cells with a shapefile**. The locations of perimeter cells can be explicitly specified this way, but they must still coincide with the edge of the active extent in each layer (Modflow-setup will not put perimeter cells in the model interior). (Open) Polyline or Polygon shapefiles can be used; in either case a buffer is used to align the supplied features with the active area edge, which is determined using the :py:func:`Sobel edge detection filter in SciPy <scipy.ndimage.sobel>`.
+
+
+ Input example:
+
+ .. code-block:: yaml
+
+ chd:
+ perimeter_boundary:
+ shapefile: 'shellmound/tmr_parent/gis/irregular_boundary.shp'
+ parent_head_file: 'shellmound/tmr_parent/shellmound.hds'
diff --git a/_sources/config-file-defaults.rst.txt b/_sources/config-file-defaults.rst.txt
new file mode 100644
index 00000000..b5889ec5
--- /dev/null
+++ b/_sources/config-file-defaults.rst.txt
@@ -0,0 +1,18 @@
+Configuration defaults
+----------------------
+The following two yaml files contain default settings for MODFLOW-6 and MODFLOW-NWT. Settings not specified in the user's configuration file are populated from these files when the configuration is loaded into an ``MF6model`` or ``MFnwtModel`` instance.
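+
+Conceptually, the merging of user input with these defaults behaves like a recursive dictionary update, where user-supplied values take precedence (a minimal sketch; Modflow-setup's actual implementation may differ in detail):
+
+.. code-block:: python
+
+    def update_defaults(defaults, user_cfg):
+        """Recursively update a defaults dictionary with user-supplied values.
+        Sketch only; Modflow-setup's actual merge logic may differ."""
+        for key, value in user_cfg.items():
+            if isinstance(value, dict) and isinstance(defaults.get(key), dict):
+                update_defaults(defaults[key], value)
+            else:
+                defaults[key] = value
+        return defaults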
+
+MODFLOW-6 configuration defaults
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+
+.. literalinclude:: ../../mfsetup/mf6_defaults.yml
+ :language: yaml
+ :linenos:
+
+MODFLOW-NWT configuration defaults
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. literalinclude:: ../../mfsetup/mfnwt_defaults.yml
+ :language: yaml
+ :linenos:
diff --git a/_sources/config-file-gallery.rst.txt b/_sources/config-file-gallery.rst.txt
new file mode 100644
index 00000000..f6e65ce8
--- /dev/null
+++ b/_sources/config-file-gallery.rst.txt
@@ -0,0 +1,110 @@
+==========================
+Configuration File Gallery
+==========================
+
+Below are example (valid) configuration files from the modflow-setup test suite. The yaml files and the datasets they reference can be found under ``modflow-setup/mfsetup/tests/data/``.
+
+Shellmound test case
+^^^^^^^^^^^^^^^^^^^^
+* 13 layer MODFLOW-6 model with no parent model
+* 9 layers specified with raster surfaces; the remaining 4 layers subdivide the raster surfaces
+* `vertical pass-through cells`_ at locations of layer pinch-outs (``drop_thin_cells: True`` option)
+* variable time discretization
+* model grid aligned with the `National Hydrologic Grid`_
+* recharge read from NetCDF source data
+* SFR network created from custom hydrography
+* WEL package created from CSV input
+
+
+.. literalinclude:: ../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :linenos:
+
+
+Shellmound TMR inset test case
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+* 13 layer MODFLOW-6 Telescopic Mesh Refinement (TMR) model with a MODFLOW-6 parent model
+* 1:1 layer mapping between parent and TMR inset (default)
+* parent model grid defined with a SpatialReference subblock (which overrides information in MODFLOW Namefile)
+* DIS package top and bottom elevations copied from parent model
+* IC, NPF, STO, RCH, and WEL packages copied from parent model (default if not specified in config file)
+* :ref:`default OC configuration `
+* variable time discretization
+* model grid aligned with the `National Hydrologic Grid`_
+* SFR network created from custom hydrography
+
+
+.. literalinclude:: ../../mfsetup/tests/data/shellmound_tmr_inset.yml
+ :language: yaml
+ :linenos:
+
+
+Pleasant Lake test case
+^^^^^^^^^^^^^^^^^^^^^^^
+* MODFLOW-6 model with local grid refinement (LGR)
+* LGR parent model is itself a Telescopic Mesh Refinement (TMR) inset from a MODFLOW-NWT model
+* Layer 1 in TMR parent model is subdivided evenly into two layers in LGR model (``botm: from_parent: 0: -0.5``). Other layers mapped explicitly between TMR parent and LGR model.
+* starting heads from LGR parent model resampled from binary output from the TMR parent
+* rch, npf, sto, and wel input copied from parent model
+* SFR package constructed from an NHDPlus v2 dataset (path to NHDPlus files in the same structure as the `downloads from the NHDPlus website`_)
+* head observations from csv files with different column names
+* LGR inset extent based on a buffer distance around a feature of interest
+* LGR inset dis, ic, npf, sto and rch packages copied from LGR parent
+* WEL package created from custom format
+* Lake package created from polygon features, bathymetry raster, stage-area-volume file and climate data from `PRISM`_.
+* Lake package observations set up automatically (output file for each lake)
+
+LGR parent model configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. literalinclude:: ../../examples/pleasant_lgr_parent.yml
+ :language: yaml
+ :linenos:
+
+LGR inset model configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. literalinclude:: ../../examples/pleasant_lgr_inset.yml
+ :language: yaml
+ :linenos:
+
+Pleasant Lake MODFLOW-NWT test case
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+* MODFLOW-NWT TMR inset from a MODFLOW-NWT model
+* Layer 1 in parent model is subdivided evenly into two layers in the inset model (``botm: from_parent: 0: -0.5``). Other layers mapped explicitly between the TMR parent and inset models.
+* starting heads resampled from binary output from the TMR parent
+* RCH, UPW and WEL input copied from parent model
+* SFR package constructed from an NHDPlus v2 dataset (path to NHDPlus files in the same structure as the `downloads from the NHDPlus website`_)
+* HYDMOD package for head observations from csv files with different column names
+* WEL package created from custom format
+* Lake package created from polygon features, bathymetry raster, stage-area-volume file and climate data from `PRISM`_.
+* Lake package observations set up automatically (output file for each lake)
+* GHB package created from polygon feature and DEM raster
+
+.. literalinclude:: ../../mfsetup/tests/data/pleasant_nwt_test.yml
+ :language: yaml
+ :linenos:
+
+Plainfield Lakes MODFLOW-NWT test case
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+* MODFLOW-NWT TMR inset from a MODFLOW-NWT model
+* Layer 1 in parent model is subdivided evenly into two layers in the inset model (``botm: from_parent: 0: -0.5``). Other layers mapped explicitly between the TMR parent and inset models.
+* starting heads resampled from binary output from the TMR parent
+* Temporally constant recharge specified from raster file, with multiplier
+* MNW2 package with dictionary input
+* UPW input copied from parent model
+* HYDMOD package for head observations from csv files with different column names
+* WEL package created from custom format and dictionary input
+* WEL package configured to use average for a specified period (period 0) and specified month (period 1 on)
+* Lake package created from polygon features, bathymetry raster, and stage-area-volume file
+* Lake package precipitation and evaporation specified directly
+* Lake package observations set up automatically (output file for each lake)
+
+.. literalinclude:: ../../mfsetup/tests/data/pfl_nwt_test.yml
+ :language: yaml
+ :linenos:
+
+.. _downloads from the NHDPlus website: https://nhdplus.com/NHDPlus/NHDPlusV2_data.php
+.. _vertical pass-through cells: https://water.usgs.gov/water-resources/software/MODFLOW-6/mf6io_6.1.0.pdf
+.. _PRISM: http://www.prism.oregonstate.edu/
+.. _National Hydrologic Grid: https://www.sciencebase.gov/catalog/item/5a95dd5de4b06990606a805e
diff --git a/_sources/config-file.rst.txt b/_sources/config-file.rst.txt
new file mode 100644
index 00000000..53460f05
--- /dev/null
+++ b/_sources/config-file.rst.txt
@@ -0,0 +1,178 @@
+The configuration file
+=======================
+
+
+The YAML format
+---------------
+The configuration file is the primary mode of user input to the ``MF6model`` and ``MFnwtModel`` classes. Input is specified in the `yaml format`_, which can be thought of as a serialized python dictionary with some additional features, including the ability to include comments. Instead of curly brackets (as in `JSON`_), white space indentation is used to denote different levels of the dictionary. Values can generally be entered more or less as they would be in python, except that dictionary keys (strings) don't need to be quoted. Numbers are parsed as integers or floating point types depending on whether they contain a decimal point. Values in square brackets are cast into python lists; curly brackets can also be used to denote dictionaries instead of white space. Comments are indicated with the `#` symbol, and can be placed on the same line as data, as in python.
+
+Modflow-setup uses the `pyyaml`_ package to parse the configuration file into the ``cfg`` dictionary attached to a model instance. The methods attached to ``MF6model``, ``MFnwtModel`` and ``MFsetupMixin`` then use the information in the ``cfg`` dictionary to set up various aspects of the model.
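+
+For example, a minimal sketch of this first step (using a configuration file from the :ref:`Configuration File Gallery`; Modflow-setup performs additional processing on the loaded dictionary):
+
+.. code-block:: python
+
+    import yaml  # pyyaml
+
+    with open('shellmound.yml') as src:
+        cfg = yaml.safe_load(src)  # nested python dictionary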
+
+
+Configuration file structure
+----------------------------
+In general, the configuration file structure is patterned after the MODFLOW input structure, especially the `input structure to MODFLOW-6`_. Larger blocks represent input to MODFLOW packages or modflow-setup features, with sub-blocks representing MODFLOW-6 input blocks (within individual packages) or individual features in modflow-setup. Naming of blocks and the variables within is intended to follow MODFLOW and Flopy naming as closely as possible; where these conflict, the MODFLOW naming conventions are used (see also the `MODFLOW-NWT Online Guide`_).
+
+
+Package blocks
+^^^^^^^^^^^^^^
+The modflow-setup configuration file is divided into blocks, which map to sub-dictionaries within the ``cfg`` dictionary that represents the whole configuration file. The blocks are generally organized as input to individual object classes in Flopy, or features specific to Modflow-setup. For example, this block would represent input to the `Simulation class`_ for MODFLOW-6:
+
+.. code-block:: yaml
+
+ simulation:
+ sim_name: 'mfsim'
+ version: 'mf6'
+ sim_ws: '../tmp/shellmound'
+
+and would be loaded into the configuration dictionary as:
+
+.. code-block:: python
+
+    cfg['simulation'] = {'sim_name': 'mfsim',
+ 'version': 'mf6',
+ 'sim_ws': '../tmp/shellmound'
+ }
+
+The above dictionary would then be fed to the Flopy `Simulation class`_ constructor as `keyword arguments (**kwargs)`_.
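+
+For illustration, the pattern looks something like this (a sketch, not Modflow-setup's exact internals; assumes Flopy is installed):
+
+.. code-block:: python
+
+    import flopy
+
+    cfg = {'simulation': {'sim_name': 'mfsim',
+                          'version': 'mf6',
+                          'sim_ws': '../tmp/shellmound'}}
+    # unpack the sub-dictionary as keyword arguments
+    sim = flopy.mf6.MFSimulation(**cfg['simulation'])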
+
+Sub-blocks
+^^^^^^^^^^
+Sub-blocks (nested dictionaries) within blocks are used to denote either input to MODFLOW-6 package blocks or input to modflow-setup features. For example, the options block below represents input to the options block for the MODFLOW-6 name file:
+
+.. code-block:: yaml
+
+ model:
+ simulation: 'shellmound'
+ modelname: 'shellmound'
+ options:
+ print_input: True
+ save_flows: True
+ newton: True
+ newton_under_relaxation: False
+ packages: ['dis',
+ 'ic',
+ 'npf',
+ 'oc',
+ 'sto',
+ 'rch',
+ 'sfr',
+ 'obs',
+ 'wel',
+ 'ims'
+ ]
+ external_path: 'external/'
+ relative_external_filepaths: True
+
+Note that some items in the model block above do not represent flopy
+input. The ``relative_external_filepaths`` item is a flag for modflow-setup that instructs it to reference external files relative to the model workspace, to avoid broken paths when the model is copied to a different location.
+
+Directly specifying MODFLOW input
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+MODFLOW input can be specified directly in the configuration file using the appropriate variables described in the `MODFLOW-6 input instructions`_ and `MODFLOW-NWT Online Guide`_. For example, in the block below, the dimensions and griddata sub-blocks would be fed directly to the `ModflowGwfdis`_ constructor in Flopy:
+
+.. code-block:: yaml
+
+ dis:
+ remake_top: True
+ options:
+ length_units: 'meters'
+ dimensions:
+ nlay: 2
+ nrow: 30
+ ncol: 35
+ griddata:
+ delr: 1000.
+ delc: 1000.
+ top: 2.
+ botm: [1, 0]
+
+
+Source_data sub-blocks
+^^^^^^^^^^^^^^^^^^^^^^
+Alternatively, ``source_data`` subblocks indicate input from general file formats (shapefiles, csvs, rasters, etc.) that needs to be mapped to the model space and time discretization. The ``source_data`` blocks are intended to be general across input types. For example, ``filename`` indicates a file path (string), regardless of the type of file, and ``filenames`` indicates a list or dictionary of files that map to model layers or stress periods. Items with the '_units' suffix indicate the units of the source data, allowing modflow-setup to convert the values to model units accordingly. In the example below, the model top would be read from the specified `GeoTIFF`_ and mapped onto the model grid via linear interpolation (the default method for model layer elevations) using the `scipy.interpolate.griddata`_ method. The model botm elevations would be read similarly, with missing layers sub-divided evenly between the specified layers. For example, the layer 7 bottom elevations would be set halfway between the layer 6 and 8 bottoms. Finally, supplying a shapefile as input to idomain instructs modflow-setup to intersect the shapefile with the model grid (using :func:`rasterio.features.rasterize`), and limit the active cells to the intersected area.
+
+.. code-block:: yaml
+
+ dis:
+ remake_top: True
+ options:
+ length_units: 'meters'
+ dimensions:
+ nlay: 13
+ nrow: 30
+ ncol: 35
+ griddata:
+ delr: 1000.
+ delc: 1000.
+ source_data:
+ top:
+ filename: 'shellmound/rasters/meras_100m_dem.tif' # DEM file; path relative to setup script
+ elevation_units: 'feet'
+ botm:
+ filenames:
+ 0: 'shellmound/rasters/vkbg_surf.tif' # Vicksburg-Jackson Group (top)
+ 1: 'shellmound/rasters/ucaq_surf.tif' # Upper Claiborne aquifer (top)
+ 2: 'shellmound/rasters/mccu_surf.tif' # Middle Claiborne confining unit (top)
+ 3: 'shellmound/rasters/mcaq_surf.tif' # Middle Claiborne aquifer (top)
+ 6: 'shellmound/rasters/lccu_surf.tif' # Lower Claiborne confining unit (top)
+ 8: 'shellmound/rasters/lcaq_surf.tif' # Lower Claiborne aquifer (top)
+ 9: 'shellmound/rasters/mwaq_surf.tif' # Middle Wilcox aquifer (top)
+ 10: 'shellmound/rasters/lwaq_surf.tif' # Lower Wilcox aquifer (top)
+ 12: 'shellmound/rasters/mdwy_surf.tif' # Midway confining unit (top)
+ elevation_units: 'feet'
+ idomain:
+ filename: 'shellmound/shps/active_area.shp'
+
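+Under the hood, sampling of raster elevations to the model grid behaves roughly like the sketch below (illustrative only; the random "source raster" points and 100 meter cell spacing are assumptions for the example):
+
+.. code-block:: python
+
+    import numpy as np
+    from scipy.interpolate import griddata
+
+    # hypothetical source raster cell centers (x, y) and elevations (z)
+    rng = np.random.default_rng(0)
+    x, y = rng.uniform(0., 1000., (2, 200))
+    z = 0.01 * x + 5.
+    # model grid cell centers at 100 m spacing
+    xi, yi = np.meshgrid(np.arange(50., 1000., 100.),
+                         np.arange(50., 1000., 100.))
+    # barycentric linear interpolation; cells outside the convex hull
+    # of the source points are assigned NaN
+    top = griddata((x, y), z, (xi, yi), method='linear')
+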
+
+Some additional notes on YAML
+---------------------------------------
+* quotes are optional for strings without special meanings. See `this reference`_ for more details.
+* ``None`` and ``none`` are parsed as strings (``'None'`` and ``'none'``)
+* null is parsed to a ``NoneType`` instance (``None``)
+* numbers in exponential format need a decimal place and a sign for the exponent to be parsed as floats.
+ For example, as of pyyaml 5.3.1:
+
+ * ``1e5`` parses to ``'1e5'``
+ * ``1.e5`` parses to ``'1.e5'``
+  * ``1.e+5`` parses to ``100000.0`` (a float)
+* sequences must be explicitly enclosed in brackets to be parsed as lists.
+ For example:
+
+ * ``12,1.2`` parses to ``'12,1.2'``
+ * ``[12,1.2]`` parses to ``[12,1.2]``
+ * ``(12,1.2)`` parses to ``"(12,1.2)"``
+ * ``{12,1.2}`` parses to ``{12: None, 1.2: None}``
+* dictionaries can be represented with indentation, but spaces are needed after the colon(s):
+
+ .. code-block:: yaml
+
+ items:
+ 0:1
+ 1:2
+
+ parses to ``'0:1 1:2'``
+
+ .. code-block:: yaml
+
+ items:
+ 0: 1
+ 1: 2
+
+  parses to ``{0: 1, 1: 2}``
+
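+These behaviors can be verified directly with pyyaml:
+
+.. code-block:: python
+
+    import yaml
+
+    yaml.safe_load('1e5')          # -> '1e5' (a string)
+    yaml.safe_load('1.e+5')        # -> 100000.0 (a float)
+    yaml.safe_load('12,1.2')       # -> '12,1.2' (a string)
+    yaml.safe_load('[12, 1.2]')    # -> [12, 1.2] (a list)
+    yaml.safe_load('0: 1\n1: 2')   # -> {0: 1, 1: 2} (a dictionary)
+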
+Using a YAML-aware text editor such as VS Code can help with these issues, for example by changing the highlighting color to indicate a string in the first dictionary example above and an interpreted python data structure in the second dictionary example.
+
+.. _JSON: https://www.json.org/json-en.html
+.. _pyyaml: https://pyyaml.org/
+.. _this reference: http://blogs.perl.org/users/tinita/2018/03/strings-in-yaml---to-quote-or-not-to-quote.html
+.. _yaml format: https://yaml.org/
+.. _GeoTIFF: https://en.wikipedia.org/wiki/GeoTIFF
+.. _input structure to MODFLOW-6: https://water.usgs.gov/water-resources/software/MODFLOW-6/mf6io_6.1.0.pdf
+.. _keyword arguments (**kwargs): https://stackoverflow.com/questions/1769403/what-is-the-purpose-and-use-of-kwargs
+.. _MODFLOW-6 input instructions: https://water.usgs.gov/water-resources/software/MODFLOW-6/mf6io_6.1.0.pdf
+.. _MODFLOW-NWT Online Guide: https://water.usgs.gov/ogw/modflow-nwt/MODFLOW-NWT-Guide/
+.. _ModflowGwf class: https://github.com/modflowpy/flopy/blob/develop/flopy/mf6/modflow/mfgwf.py
+.. _ModflowGwfdis: https://github.com/modflowpy/flopy/blob/develop/flopy/mf6/modflow/mfgwfdis.py
+.. _scipy.interpolate.griddata: https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
+.. _Simulation class: https://github.com/modflowpy/flopy/blob/develop/flopy/mf6/modflow/mfsimulation.py
diff --git a/_sources/contributing.rst.txt b/_sources/contributing.rst.txt
new file mode 100644
index 00000000..5f44e295
--- /dev/null
+++ b/_sources/contributing.rst.txt
@@ -0,0 +1,337 @@
+Contributing to modflow-setup
+=============================
+
+(Note: much of this page was cribbed from the `geopandas `_ project,
+which has similar guidelines to `pandas `_
+and `xarray `_.)
+
+Getting started
+----------------
+All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome. If an issue that interests you isn't already listed in the `Issues tab`_, consider `filing an issue`_.
+
+Bug reports and enhancement requests
+------------------------------------------------
+Bug reports are an important part of improving Modflow-setup. Having a complete bug report
+will allow others to reproduce the bug and provide insight into fixing it. See
+`this stackoverflow article `_ and
+`this blogpost `_
+for tips on writing a good bug report.
+
+Trying the bug-producing code out on the *develop* branch is often a worthwhile exercise
+to confirm the bug still exists. It is also worth searching existing bug reports and pull requests
+to see if the issue has already been reported and/or fixed.
+
+To file a bug report or enhancement request, from the issues tab on the `Modflow-setup GitHub page `_, select "New Issue".
+
+Bug reports must:
+
+#. Include a short, self-contained Python snippet reproducing the problem, along with the contents of your configuration file and the full error traceback.
+ You can format the code nicely by using `GitHub Flavored Markdown
+ `_::
+
+ ```python
+    >>> from mfsetup import MF6model
+    >>> m = MF6model.setup_from_yaml('pleasant_lgr_parent.yml')
+ ```
+
+   e.g.::
+
+       ```yaml
+       # contents of your configuration file here
+       ```
+
+       ```python
+       # full error traceback here
+       ```
+
+#. Include the version of Modflow-setup that you are running, which can be obtained with:
+
+ .. code-block:: python
+
+ import mfsetup
+ mfsetup.__version__
+
+ Depending on the issue, it may also be helpful to include information about the version(s)
+   of python, key dependencies (e.g. numpy, pandas) and your operating system. You can get the versions of packages in a conda python environment with::
+
+ conda list
+
+#. Explain why the current behavior is wrong/not desired and what you expect instead.
+
+The issue will then be visible on the `Issues tab`_ and open to comments/ideas from others.
+
+
+Code contributions
+------------------------------
+Code contributions to Modflow-setup to fix bugs, implement new features or improve existing code are encouraged. Regardless of the context, consider `filing an issue`_ first to make others aware of the problem and allow for discussion on potential approaches to addressing it.
+
+In general, Modflow-setup tries to follow the conventions of the pandas project where applicable. Contributions to Modflow-setup are likely to be accepted more quickly if they follow these guidelines.
+
+In particular, when submitting a pull request:
+
+- All existing tests should pass. Please make sure that the test
+ suite passes, both locally and on
+ `GitHub Actions `_. Status with GitHub Actions will be visible on a pull request.
+
+- New functionality should include tests. Please write reasonable
+ tests for your code and make sure that they pass on your pull request.
+
+- Classes, methods, functions, etc. should have docstrings. The first
+ line of a docstring should be a standalone summary. Parameters and
+  return values should be documented explicitly. (Note: there are admittedly more than a few places in the existing code where docstrings are missing. Docstring contributions are especially welcome!)
+
+- Follow PEP 8 when possible. For more details see
+ :ref:`below `.
+
+- Following the `FloPy Commit Message Guidelines `_ (which are similar to the `Conventional Commits `_ specification) is encouraged. Structured commit messages like these can result in more explicit commit messages that are more informative, and also facilitate automation of project maintenance tasks.
+
+- Imports should be grouped with standard library imports first,
+ 3rd-party libraries next, and modflow-setup imports third. Within each
+ grouping, imports should be alphabetized. Always use absolute
+ imports when possible, and explicit relative imports for local
+ imports when necessary in tests. Imports can be sorted automatically using the isort package with a pre-commit hook. For more details see :ref:`below `.
+
+- modflow-setup supports Python 3.7+ only.
+
+
+Seven Steps for Contributing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are seven basic steps to contributing to *modflow-setup*:
+
+1) Fork the *modflow-setup* git repository
+2) Create a development environment
+3) Install *modflow-setup* dependencies
+4) Install the modflow-setup source code
+5) Make changes to code and add tests
+6) Update the documentation
+7) Submit a Pull Request
+
+Each of these 7 steps is detailed below.
+
+
+1) Forking the *modflow-setup* repository using Git
+------------------------------------------------------
+
+To the new user, working with Git is one of the more daunting aspects of contributing to *modflow-setup*.
+It can very quickly become overwhelming, but sticking to the guidelines below will help keep the process
+straightforward and mostly trouble free. As always, if you are having difficulties please
+feel free to ask for help.
+
+The code is hosted on `GitHub `_. To
+contribute you will need to sign up for a `free GitHub account
+`_. We use `Git `_ for
+version control to allow many people to work together on the project.
+
+Some great resources for learning Git:
+
+* Software Carpentry's `Git Tutorial `_
+* `Atlassian `_
+* the `GitHub help pages `_.
+* Matthew Brett's `Pydagogue `_.
+
+Getting started with Git
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+`GitHub has instructions `__ for installing git,
+setting up your SSH key, and configuring git. All these steps need to be completed before
+you can work seamlessly between your local repository and GitHub.
+
+.. _contributing.forking:
+
+Forking
+~~~~~~~
+
+You will need your own fork to work on the code. Go to the `modflow-setup project
+page `_ and hit the ``Fork`` button. You will
+want to clone your fork to your machine::
+
+ git clone git@github.com:your-user-name/modflow-setup.git modflow-setup-yourname
+ cd modflow-setup-yourname
+    git remote add upstream https://github.com/aleaf/modflow-setup.git
+
+This creates the directory `modflow-setup-yourname` and connects your repository to
+the upstream (main project) *modflow-setup* repository.
+
+The test suite will run automatically on GitHub Actions once your pull request is
+submitted. However, if you wish to run the test suite on a branch prior to
+submitting the pull request, GitHub Actions needs to be enabled for your
+GitHub repository. Instructions for doing so are `here
+`__.
+
+Creating a branch
+~~~~~~~~~~~~~~~~~~
+
+You want your master branch to reflect only production-ready code, so create a
+feature branch for making your changes. For example::
+
+ git branch shiny-new-feature
+ git checkout shiny-new-feature
+
+The above can be simplified to::
+
+ git checkout -b shiny-new-feature
+
+This changes your working directory to the shiny-new-feature branch. Keep any
+changes in this branch specific to one bug or feature so it is clear
+what the branch brings to *modflow-setup*. You can have many shiny-new-features
+and switch in between them using the git checkout command.
+
+To update this branch, you need to retrieve the changes from the develop branch::
+
+ git fetch upstream
+ git rebase upstream/develop
+
+This will replay your commits on top of the latest modflow-setup git develop. If this
+leads to merge conflicts, you must resolve these before submitting your pull
+request. **It's a good idea to move slowly while doing this and pay attention to the messages from git.** The wrong command at the wrong time can quickly spiral into a confusing mess.
+
+If you have uncommitted changes, you will need to ``stash`` them prior
+to updating. This will effectively store your changes and they can be reapplied
+after updating.
+
+.. _contributing.dev_env:
+
+2 & 3) Creating a development environment with the required dependencies
+---------------------------------------------------------------------------
+A development environment is a virtual space where you can keep an independent installation of *modflow-setup*.
+This makes it easy to keep both a stable version of python in one place you use for work, and a development
+version (which you may break while playing with code) in another.
+
+An easy way to create a *modflow-setup* development environment is as follows:
+
+- Install either `Anaconda `_ or
+ `miniconda `_
+- Make sure that you have :ref:`cloned the repository `
+- ``cd`` to the *modflow-setup* source directory
+
+Tell conda to create a new environment, named ``modflow-setup_dev``, that has all of the python packages needed to contribute to modflow-setup. Note that in the `geopandas instructions `_, this step is broken into two parts: 2) creating the environment, and 3) installing the dependencies. By using a yaml file that includes the environment name and package requirements, these two steps can be combined::
+
+ conda env create -f requirements-dev.yml
+
+This will create the new environment, and not touch any of your existing environments,
+nor any existing python installation.
+
+To work in this environment, you need to ``activate`` it. The instructions below
+should work on Windows, Mac, and Linux::
+
+ conda activate modflow-setup_dev
+
+Once your environment is activated, you will see a confirmation message to
+indicate you are in the new development environment.
+
+To view your environments::
+
+ conda info -e
+
+To return to your base environment::
+
+ conda deactivate
+
+See the full conda docs `here `__.
+
+At this point you can easily do a *development* install, as detailed in the next sections.
+
+
+4) Installing the modflow-setup source code
+------------------------------------------------------
+
+Once dependencies are in place, install the modflow-setup source code by navigating to the git clone of the *modflow-setup* repository and (with the ``modflow-setup_dev`` environment activated) running::
+
+ pip install -e .
+
+.. note::
+ Don't forget the ``.`` after ``pip install -e``!
+
+5) Making changes and writing tests
+-------------------------------------
+
+*modflow-setup* is serious about testing and strongly encourages contributors to embrace
+`test-driven development (TDD) `_.
+This development process "relies on the repetition of a very short development cycle:
+first the developer writes an (initially failing) automated test case that defines a desired
+improvement or new function, then produces the minimum amount of code to pass that test."
+So, before actually writing any code, you should write your tests. Often the test can be
+taken from the original GitHub issue. However, it is always worth considering additional
+use cases and writing corresponding tests.
+
+In general, tests are required for code pushed to *modflow-setup*. Therefore,
+it is worth getting in the habit of writing tests ahead of time so this is never an issue.
+
+*modflow-setup* uses the `pytest testing system
+`_ and the convenient
+extensions in `numpy.testing
+`_ and `pandas.testing `_.
+
+Writing tests
+~~~~~~~~~~~~~
+
+All tests should go into the ``tests`` directory. This folder contains many
+current examples of tests, and we suggest looking to these for inspiration. In general,
+the tests in this folder aim to be organized by module (e.g. ``test_lakes.py`` for the functions in ``lakes.py``) or test case (e.g. ``test_mf6_shellmound.py`` for the :ref:`Shellmound test case`).
+
+The ``.testing`` module has some special functions to facilitate writing tests. The easiest way to verify that your code is correct is to explicitly construct the result you expect, then compare the actual result to the expected correct result.
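+
+For example, a test in this style might look like the following sketch (the arrays and the thickness computation are hypothetical, for illustration only):
+
+.. code-block:: python
+
+    import numpy as np
+    from numpy.testing import assert_allclose
+
+    def test_cell_thickness():
+        # explicitly construct the expected result, then compare
+        top = np.array([[10., 10.]])          # model top (1 row, 2 columns)
+        botm = np.array([[[5., 4.]],
+                         [[0., 0.]]])         # bottoms of 2 layers
+        thickness = -np.diff(np.concatenate([top[np.newaxis], botm]), axis=0)
+        expected = np.array([[[5., 6.]],
+                             [[5., 4.]]])
+        assert_allclose(thickness, expected)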
+
+Running the test suite
+~~~~~~~~~~~~~~~~~~~~~~
+
+The tests can then be run directly inside your Git clone (without having to
+install *modflow-setup*) by typing::
+
+ pytest
+
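+Individual test modules or tests can be selected with standard pytest options, for example::
+
+    pytest mfsetup/tests/test_lakes.py
+    pytest -k shellmound -v
+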
+6) Updating the Documentation
+-----------------------------
+
+The *modflow-setup* documentation resides in the `docs` folder. Changes to the docs are
+made by modifying the appropriate file in the `source` folder within `docs`.
+The *modflow-setup* docs use reStructuredText syntax, `which is explained here `_
+and the docstrings follow the `Numpy Docstring standard `_.
+
+Once you have made your changes, you can try building the docs using sphinx. To do so, run the following from the root of the repository::
+
+ make -C docs html
+
+The resulting html pages will be located in `docs/build/html`. It's a good practice to rebuild the docs often while writing to stay on top of any mistakes. The `reStructuredText extension in VS Code `_ is another way to continuously preview a rendered documentation page while writing.
+
+
+7) Submitting a Pull Request
+------------------------------
+
+Once you've made changes and pushed them to your forked repository, you then
+submit a pull request to have them integrated into the *modflow-setup* code base.
+
+You can find a pull request (or PR) tutorial in `GitHub's Help Docs `_.
+
+.. _contributing_style:
+
+Style Guide & Linting
+---------------------
+
+modflow-setup tries to follow the `PEP8 `_ standard. At this point, there's no enforcement of this, but I am considering implementing `Black `_, which automates a code style that is PEP8-compliant. Many editors perform automatic linting that makes following PEP8 easy.
+
+modflow-setup does use the `isort `_ package to automatically organize import statements. isort can be installed via pip::
+
+ $ pip install isort
+
+And then run with::
+
+ $ isort .
+
+from the root level of the project.
+
+Optionally (but recommended), you can set up `pre-commit hooks `_
+to automatically run ``isort`` when you make a git commit. This
+can be done by installing ``pre-commit``::
+
+ $ python -m pip install pre-commit
+
+From the root of the modflow-setup repository, you should then install the
+``pre-commit`` hooks included in *modflow-setup*::
+
+ $ pre-commit install
+
+Then ``isort`` will be run automatically each time you commit changes. You can skip these checks with ``git commit --no-verify``.
+
+.. _filing an issue: https://docs.github.com/en/free-pro-team@latest/github/managing-your-work-on-github/creating-an-issue
+.. _Issues tab: https://github.com/aleaf/modflow-setup/issues
diff --git a/_sources/examples.rst.txt b/_sources/examples.rst.txt
new file mode 100644
index 00000000..6a525d88
--- /dev/null
+++ b/_sources/examples.rst.txt
@@ -0,0 +1,11 @@
+========
+Examples
+========
+
+
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Example problems
+
+ Pleasant Lake Example
diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt
new file mode 100644
index 00000000..e42e7f24
--- /dev/null
+++ b/_sources/index.rst.txt
@@ -0,0 +1,49 @@
+.. Packaging Scientific Python documentation master file, created by
+ sphinx-quickstart on Thu Jun 28 12:35:56 2018.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+=======================
+modflow-setup |version|
+=======================
+
+
+Modflow-setup is a Python package for automating the setup of MODFLOW groundwater models from grid-independent source data including shapefiles, rasters, and other MODFLOW models that are geo-located. Input data and model construction options are summarized in a single configuration file. Source data are read from their native formats and mapped to a regular finite difference grid specified in the configuration file. An external array-based `Flopy `_ model instance with the desired packages is created from the sampled source data and configuration settings. MODFLOW input can then be written from the flopy model instance.
+
+
+.. toctree::
+ :maxdepth: 2
+ :caption: Getting Started
+
+ Philosophy
+ Installation
+ 10 Minutes to Modflow-setup <10min>
+ Examples
+ Configuration File Gallery
+
+
+.. toctree::
+ :maxdepth: 2
+ :caption: User Guide
+
+ Basic program structure and usage
+ The configuration file
+ Concepts and methods
+ Input instructions by package
+ Troubleshooting
+
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Reference
+
+ Code reference
+ Configuration file defaults
+ Release History
+ Contributing to modflow-setup
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Bibliography
+
+ References cited
diff --git a/_sources/input/basic-stress.rst.txt b/_sources/input/basic-stress.rst.txt
new file mode 100644
index 00000000..6b4fe7db
--- /dev/null
+++ b/_sources/input/basic-stress.rst.txt
@@ -0,0 +1,351 @@
+=======================================================================================
+Specifying boundary conditions with the 'basic' MODFLOW stress packages
+=======================================================================================
+
+This page describes configuration file input for the basic MODFLOW stress packages, including
+the CHD, DRN, GHB, RCH, RIV and WEL packages. The EVT package is not currently supported by Modflow-setup. The supported packages can be broadly placed into two categories. Feature or list-based packages such as CHD, DRN, GHB, RIV and WEL often represent discrete phenomena such as surface water features, pumping wells, or even lines that denote a perimeter boundary. Input to these packages in MODFLOW is tabular, consisting of a table for each stress period, with rows specifying stresses at individual grid cells representing the boundary features. In contrast, continuous or grid-based packages represent a stress field that applies to a large area, such as areal recharge. In past versions of MODFLOW, input to these packages was array-based, with values specified for all model cells, at each stress period. In MODFLOW 6, input to these packages can be array or list-based. The Recharge (RCH) Package is currently the only grid-based stress package supported by Modflow-setup. In keeping with the current structured grid-based paradigm of Modflow-setup, Modflow 6 recharge input is generated for the array-based recharge package (Langevin and others, 2017).
+
+
+
+List-based basic stress packages
+-------------------------------------
+
+Input for list-based basic stress packages follows a similar pattern to other packages.
+
+* Package blocks are named using the three-letter MODFLOW abbreviation for the package in lower case (e.g. ``chd:``, ``ghb:``, etc.).
+* Sub-blocks within the package block include:
+
+ * ``options:`` for specifying Modflow 6 options, exactly as they are described in the input instructions (Langevin and others, 2017).
+ * ``source_data:`` for specifying grid-independent source data to be mapped to the model discretization, in addition to other package input. ``source_data:`` in turn can have the following sub-blocks and items:
+ * A ``shapefile:`` block for specifying shapefile input that maps the boundary condition features in space. Items in the shapefile block include
+
+ * ``filename:`` path to the shapefile
+ * ``boundname_col:`` column in shapefile with feature names to be applied as `boundnames` in Modflow 6 input
+ * ``all_touched:`` argument to :func:`rasterio.features.rasterize` that specifies whether all intersected grid cells should be included, or just the grid cells with centers inside the feature.
+ * One or more variable columns: Optionally the shapefile can also supply steady-state variable values by feature in attribute columns named for the variables (e.g. ``'head'``, ``'bhead'``, etc.)
+
+ Example:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-at: shapefile:
+ :end-before: csvfile:
+
+    * A ``csvfile:`` block for specifying point feature locations or time-varying values for the variables. Items in the csvfile block include:
+
+ * ``filename:`` or `filenames:` path(s) to the csv file(s)
+ * ``id_column``: unique identifier associated with each feature
+ * ``datetime_column:`` date-time associated with each stress value
+      * ``end_datetime_column:`` date-time associated with the end of each stress value (optional; for rates that extend across more than one model stress period). If this is specified, ``datetime_column:`` is assumed to indicate the date-time associated with the start of each stress value.
+ * ``x_col:`` feature x-coordinate (WEL package only; default ``'x'``)
+ * ``y_col:`` feature y-coordinate (WEL package only; default ``'y'``)
+ * ``length_units:`` length units associated with the stress value (optional; if omitted no conversion is performed)
+ * ``time_units:`` time units associated with the stress value (WEL package only; optional; if omitted no conversion is performed)
+      * ``volume_units:`` volume units associated with the stress value (e.g. `gallons`) in lieu of length-based volume units (e.g., `cubic feet`) (WEL package only; optional; if omitted volumes are assumed to be in model units of L\ :sup:`3` and no conversion is performed)
+      * ``boundname_col:`` column in the csv file with feature names to be applied as `boundnames` in Modflow 6 input
+      * one or more columns for the package variables, specified in the format ``<variable>_col``, where ``<variable>`` is an input variable for the package; for example ``head_col`` for the Constant Head Package, or ``cond_col`` for the Drain or GHB packages.
+ * ``period_stats:`` a sub-block that is used to specify mapping of the input data to the model temporal discretization. Items within period stats are numbered by stress period, with the entry for each item specifying the temporal aggregation. Currently, two options are supported:
+        * aggregation of measurements falling within a stress period. For example, assigning the mean value of all input data points within the stress period. In this case, the aggregation method is simply specified as a string. While ``mean`` is typical, any of the standard numpy aggregators can be used (``min``, ``max``, etc.)
+        * aggregation of measurements from an arbitrary time window. For example, applying a long-term mean to a steady-state stress period, or a transient period representing a different time window. In this case, three items are specified: the aggregation method, the start date, and the end date (e.g. ``[mean, 2000-01-01, 2017-12-31]``)
+
+ Example:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-at: csvfile:
+ :end-before: # Drain Package
+
+
+ * Additional sub-blocks or items for specifying values for each variable
+ * In general, these sub-blocks are named for the variable (e.g. ``bhead:``).
+ * Scalar values (items) can be specified in model units, and are applied globally to the variable.
+
+ Example:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-at: cond:
+ :end-at: cond:
+
+    * Rasters can be used to specify steady-state values that vary in space; values supplied with a raster are mapped to the model grid using zonal statistics. If the raster contains projection information (GeoTIFFs are preferred in part because of this), reprojection to the model coordinate reference system (CRS) will be performed automatically as needed. Otherwise, the raster is assumed to be in the model projection. Units can optionally be specified and automatically converted; otherwise, the raster values are assumed to be in the model units. Items in the raster block include:
+
+ * ``filename:`` or `filenames:` path(s) to the raster
+ * ``length_units:`` (or ``elevation_units``; optional): length units of the raster values
+ * ``time_units:`` (optional): time units of the raster values (``cond`` variable only)
+ * ``stat:`` (optional): zonal statistic to use in sampling the raster (defaults are listed for each variable in the :ref:`Configuration defaults`)
+
+ Example:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-at: stage:
+ :end-before: mfsetup_options:
+
+ * **Not implemented yet:** NetCDF input for gridded values that vary in time and space. Due to the lack of standardization in NetCDF coordinate reference information, automatic reprojection is currently not supported for NetCDF files; the data are assumed to be in the model CRS.
+ * ``mfsetup_options:`` Configuration options for Modflow-setup. General options that apply to all basic stress packages include:
+      * ``external_files:`` Whether to write the package input as external text arrays or tables (i.e., with ``open/close`` statements). By default ``True``, except for list-based (tabular) input to MODFLOW-NWT models, where external files are not supported. Adding support for this may require changes to Flopy, which handles external list-based files differently for MODFLOW-2005 style models.
+      * ``external_filename_fmt:`` Python string format for external file names. By default, ``"{}_{:03d}.dat"``, which results in filenames such as ``wel_000.dat``, ``wel_001.dat``, ``wel_002.dat``... for stress periods 0, 1, and 2 (see the quick check below).
+
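+    As a quick check of how the default external file name format expands in python:
+
+    .. code-block:: python
+
+        >>> '{}_{:03d}.dat'.format('wel', 2)
+        'wel_002.dat'
+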
+ Other Modflow-setup options specific to individual packages are described below.
+
+Constant Head (CHD) Package
+++++++++++++++++++++++++++++++
+Input consists of specified head values that may vary in time or space.
+
+ **Required input**
+
+ * parent model head solution --or--
+ * shapefile of features --or--
+ * parent model package (not implemented yet)
+ * at least steady-state head values through one of the methods below
+
+ **Optional input**
+
+ * raster to specify steady state elevations by cell (for supplied shapefile)
+ * shapefile or csv to specify steady elevations by feature
+ * csv to specify transient elevation by feature (needs to be referenced to features in shapefile)
+
+ **Examples**
+ (also see the :ref:`Configuration File Gallery`)
+
+    Setting up a Constant Head package with perimeter heads from a parent model (Note: an additional ``source_data`` block can be added to represent other features inside of the model perimeter, as below):
+
+ .. literalinclude:: ../../../mfsetup/tests/data/pfl_nwt_test.yml
+ :language: yaml
+ :start-at: chd:
+
+ Setting up a Constant Head package from features specified in a shapefile,
+    and time-varying heads specified in a csvfile:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-after: # Constant Head Package
+ :end-before: # Drain Package
+
+
+Drain (DRN) Package
++++++++++++++++++++
+Input consists of elevations and conductances that may vary in time or space.
+
+ **Required input**
+
+ * shapefile of features --or--
+ * parent model package (not implemented yet)
+    * at least steady-state elevation and conductance values through one of the methods below
+
+ **Optional input**
+
+ * global conductance value specified directly
+ * raster to specify steady state elevation by cell (for supplied shapefile)
+ * shapefile or csv to specify steady elevations by feature
+ * csv to specify transient elevation by feature (needs to be referenced to features in shapefile)
+
+ **Examples**
+ (also see the :ref:`Configuration File Gallery`)
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-after: # Drain Package
+ :end-before: # General Head Boundary Package
+
+General Head Boundary (GHB) Package
++++++++++++++++++++++++++++++++++++++
+Input consists of head elevations and conductances that may vary in time or space.
+
+ **Required input**
+
+ * shapefile of features --or--
+ * parent model package (not implemented yet)
+ * at least steady-state head and conductance values through one of the methods below
+
+ **Optional input**
+
+ * global conductance value specified directly
+ * shapefile or csv to specify steady elevations and conductances by feature --or--
+ * rasters to specify steady state elevations or conductances by cell (for supplied shapefile)
+ * csv to specify transient elevations or conductances by feature (needs to be referenced to features in shapefile)
+
+ **Examples**
+ (also see the :ref:`Configuration File Gallery`)
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-after: # General Head Boundary Package
+ :end-before: # River Package
+
+River (RIV) package
+++++++++++++++++++++
+Input consists of stages, river bottom elevations, and conductances that may vary in time or space.
+
+ **Required input**
+
+ * shapefile of features --or--
+ * ``to_riv:`` block under ``sfrmaker_options:`` with an ``sfr:`` block (see configuration gallery)
+ * parent model package (not implemented yet)
+
+ **Optional input**
+
+ * global conductance value specified directly
+ * ``default_rbot_thick`` argument to set a uniform riverbed thickness (``rbot = stage - uniform thickness``)
+ * shapefile or csv to specify steady heads, conductances and rbots by feature --or--
+ * rasters to specify steady heads, conductances and rbots by cell (for supplied shapefile)
+ * csv to specify transient heads, conductances and rbots by feature (needs to be referenced to features in shapefile)
+
+ **Examples**
+ (also see the :ref:`Configuration File Gallery`)
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-after: # River Package
+ :end-before: # Well Package
+
+ Example of setting up the RIV package using SFRmaker (via the ``sfr:`` block):
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound_tmr_inset.yml
+ :language: yaml
+ :start-at: sfr:
+ :end-at: to_riv:
+
+
+Well (WEL) Package
+++++++++++++++++++++
+Input consists of flux rates that may vary in time or space.
+
+ **Required input**
+
+ * parent model cell by cell flow solution (not implemented yet) --or--
+ * parent model WEL package
+ * steady-state or transient flux values through one of the methods below
+
+ **Optional input**
+
+ * temporal discretization (default is to use the average rate(s) for each stress period)
+ * vertical discretization (default is to distribute fluxes vertically by the individual transmissivities of the intersection(s) of the well open interval with the model layers.)
+
+ **Flux input options with examples**
+ (also see the :ref:`Configuration File Gallery`)
+
+ * Fluxes translated from a parent model WEL package
+    * This input option is very simple. A parent model with a well package is needed, and ``default_source_data: True`` must be specified in the ``parent:`` block. Then, fluxes from the parent model are simply mapped to the inset model grid, based on the parent model cell centers, and the stress period mappings specified in the ``parent:`` block. Well package options can still be specified in a ``wel:`` block.
+ * Examples:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/pleasant_mf6_test.yml
+ :language: yaml
+ :lines: 119-123
+
+ * CSV input from one or more files (``csvfiles:`` block)
+ * multiple files can be specified using a list, but column names and units must be consistent
+ * input for column names and units is the same for the general ``csvfile:`` block described above
+ * temporal discretization is specified using a ``period_stats:`` sub-block
+ * spatial discretization for open intervals spanning multiple layers is specified using a ``vertical_flux_distribution:`` sub-block
+ * Examples:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-after: # Well Package
+ :end-before: # Output Control Package
+
+ * Perimeter boundary fluxes from a parent model solution:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound_tmr_inset.yml
+ :language: yaml
+ :start-at: wel:
+
+
+ Similar to the Constant Head Package, a ``perimeter_boundary`` block can be mixed with the other input blocks described here to simulate pumping or injection inside of the model perimeter.
+
+ * ``wdnr_dataset`` block
+ .. note::
+ This is a custom option from early versions of Modflow-setup, and is likely to be generalized into a combined shapefile (or CSV site information file) and CSV timeseries input option similar to the other basic stress packages.
+
+ * site information is specified in a shapefile formatted like ``csls_sources_wu_pts.shp`` below
+ * pumping rates are specified by month in a CSV file formatted like ``master_wu.csv`` below
+ * temporal discretization is specified with a ``period_stats:`` block similar to the ``csvfiles:`` option
+ * vertical discretization is specified with a ``vertical_flux_distribution:`` block similar to the ``csvfiles:`` option
+
+ * Example:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/pfl_nwt_test.yml
+ :language: yaml
+ :lines: 113-118
+
+ **The** ``vertical_flux_distribution:`` **sub-block**
+    * This sub-block specifies how Well Package fluxes should be distributed vertically.
+ * Items/options include:
+      * ``across_layers:`` If ``True``, fluxes will be distributed among the layers intersected by the well open interval. If ``False``, fluxes for a well will be placed in the layer containing the open interval midpoint.
+      * ``distribute_by:`` ``'transmissivity'`` (default) to distribute fluxes based on the transmissivities of the open interval/layer intersections (see the sketch below); ``'thickness'`` to distribute fluxes based on intersection thicknesses. Only relevant with ``across_layers: True``.
+      * ``minimum_layer_thickness:`` Minimum layer thickness for placing a well (by default, 2 model length units). Wells in layers thinner than this will be relocated to the thickest layer at their row, column locations. If no thicker layer exists at the row, column location, the wells are dropped and reported in *_dropped_wells.csv*.
+
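+A conceptual sketch of the default transmissivity weighting (not Modflow-setup's exact code; the hydraulic conductivity and thickness values are invented for illustration):
+
+.. code-block:: python
+
+    import numpy as np
+
+    # distribute a total pumping rate of -1000. between two layers
+    # intersected by a well open interval, weighting by T = K * b
+    k = np.array([10., 1.])   # horizontal K of the intersected layers
+    b = np.array([5., 20.])   # open-interval thickness within each layer
+    T = k * b
+    q = -1000. * T / T.sum()  # -> approximately [-714.3, -285.7]
+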
+
+Grid-based basic stress packages
+-------------------------------------
+The Recharge (RCH) Package is currently the only grid-based stress package supported by Modflow-setup.
+
+
+Recharge (RCH) Package
+++++++++++++++++++++++++
+
+Direct input
+@@@@@@@@@@@@@@@@
+As with other grid-based input such as aquifer properties, input to the recharge package can be specified directly as it would in Flopy. This may be useful for setting up a test model quickly. For example, a single scalar value could be entered to apply to all locations across all periods:
+
+.. code-block:: yaml
+
+ rch:
+ recharge: 0.001
+
+Or global scalar values could be entered by stress period:
+
+.. code-block:: yaml
+
+ rch:
+ recharge:
+ 0: 0.001
+ 1: 0.01
+
+In the above example, ``0.01`` would also be applied to all subsequent stress periods.
+
+Grid-independent input
+@@@@@@@@@@@@@@@@@@@@@@@@@@@
+Modflow-setup currently supports three methods for entering spatially-referenced recharge input not mapped to the model grid.
+
+ * Recharge translated from a parent model RCH package
+ * This input option is very simple. A parent model with a recharge package is needed, and ``default_source_data: True`` must be specified in the ``parent:`` block. Then, fluxes from the parent model are simply mapped to the inset model grid, based on the parent model cell centers, and the stress period mappings specified in the ``parent:`` block. Recharge package options can still be specified in a ``rch:`` block.
+
+ * Raster input by stress period
+    * A raster of spatially varying recharge values can be supplied for one or more model stress periods. Similar to the direct input, specified recharge will be applied to subsequent periods where recharge is not specified.
+    * If the raster contains projection information (GeoTIFFs are preferred in part because of this), any reprojection to the model coordinate reference system (CRS) will be performed automatically as needed. Otherwise, the raster is assumed to be in the model projection.
+ * Input items include:
+ * ``length_units:`` input recharge length units (optional; if omitted no conversion is performed)
+ * ``time_units:`` input recharge time units (optional; if omitted no conversion is performed)
+      * ``mult:`` optional multiplier value that applies to all stress periods.
+ * ``resample_method:`` method for resampling the data from the source grid to model grid. (optional; by default, ``'nearest'``)
+
+ * Examples:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/pfl_nwt_test.yml
+ :language: yaml
+ :lines: 99-106
+
+ * NetCDF input
+ * NetCDF input can be supplied for gridded values that vary in time and space.
+    * Automatic reprojection is supported for Climate and Forecast (CF) 1.8-compliant NetCDF files (that work with the :py:meth:`pyproj.CRS.from_cf() ` constructor), or files that have a `'crs_wkt'` or `'proj4_string'` grid mapping variable (the latter includes many or most Soil Water Balance Code models).
+ * Otherwise, coordinate reference information can be supplied via the ``crs:`` item (using any valid input to :py:class:`pyproj.crs.CRS`), and the data will be reprojected to the model coordinate reference system.
+
+ * Input items include:
+ * ``variable:`` name of variable in NetCDF file containing the recharge values.
+ * ``length_units:`` input recharge length units (optional; if omitted no conversion is performed)
+ * ``time_units:`` input recharge time units (optional; if omitted no conversion is performed)
+      * ``crs``: coordinate reference system (CRS) of the NetCDF file (optional; only needed if the NetCDF file is in a different CRS than the model *and* automatic reprojection from the internal `grid mapping `_ isn't working).
+ * ``resample_method:`` method for resampling the data from the source grid to model grid. (optional; by default, ``'nearest'``)
+ * ``period_stats:`` a sub-block that is used to specify mapping of the input data to the model temporal discretization. Items within period stats are numbered by stress period, with the entry for each item specifying the temporal aggregation. Currently, two options are supported:
+        * aggregation of measurements falling within a stress period. For example, assigning the mean value of all input data points within the stress period. In this case, the aggregation method is simply specified as a string. While ``mean`` is typical, any of the standard numpy aggregators can be used (``min``, ``max``, etc.)
+        * aggregation of measurements from an arbitrary time window. For example, applying a long-term mean to a steady-state stress period, or a transient period representing a different time window. In this case, three items are specified: the aggregation method, the start date, and the end date (e.g. ``[mean, 2000-01-01, 2017-12-31]``; see below for an example)
+
+ * Examples:
+
+ .. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+ :language: yaml
+ :start-after: # Recharge Package
+ :end-before: # Streamflow Routing Package
diff --git a/_sources/input/dis.rst.txt b/_sources/input/dis.rst.txt
new file mode 100644
index 00000000..73c85c20
--- /dev/null
+++ b/_sources/input/dis.rst.txt
@@ -0,0 +1,162 @@
+=======================================================================================
+Time and space discretization
+=======================================================================================
+
+This page describes spatial and temporal discretization input options to the Discretization (DIS) and Time Discretization (TDIS) Packages. Specification of the model active area in the DIS Package (MODFLOW 6) and BAS6 Package (MODFLOW-2005/NWT) is also covered. As always, additional input examples can be found in the :ref:`Configuration File Gallery` and :ref:`Configuration defaults` pages.
+
+As stated previously, a key paradigm of Modflow-setup is the setup of space and time discretization during the automated model build, from grid-independent inputs. This allows different discretization schemes to be readily tested without extensive modification of the inputs.
+
+Spatial Discretization
+----------------------
+Similar to other packages, input to the Discretization Package follows the structure of MODFLOW and Flopy. For MODFLOW 6 models, the "Options", "Dimensions" and "Griddata" input blocks are represented as sub-blocks within the ``dis:`` block. Within these blocks, model inputs can be specified directly, as long as they are consistent with the definition of the model grid. For example, if ``nlay: 2`` is specified, then the layer bottoms (``botm``) must be specified as two scalar values, or two ``nrow`` x ``ncol`` arrays:
+
+.. code-block:: yaml
+
+    dis:
+      options:
+        length_units: 'meters'
+      dimensions:
+        nlay: 2
+        nrow: 30
+        ncol: 35
+      griddata:
+        delr: 1000.
+        delc: 1000.
+        top: 2.
+        botm: [1, 0]
+
+More commonly, only ``delr`` and ``delc`` are specified in the ``griddata:`` block, and geolocated, grid-independent raster surfaces are supplied in a ``source_data`` sub-block. Modflow-setup then interpolates values from these surfaces to the grid cell centers.
+
+.. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+    :language: yaml
+    :start-after: Discretization Package
+    :end-before: # Temporal Discretization Package
+
+**A few notes:**
+
+ * By default, linear interpolation is used to sample raster values to the grid cell centers.
+ * If a more sophisticated sampling strategy is desired (for example, computing mean elevations for the model top with zonal statistics), the respective surfaces should be pre-processed prior to input to Modflow-setup. This is by design: it avoids adding complexity to the Modflow-setup codebase, and expensive operations like zonal statistics can greatly slow a model build but often only need to be done infrequently (in contrast to other changes where rapid iteration may be helpful).
+ * GeoTIFFs are generally best, because they include complete projection information (including the coordinate reference system) and typically use less disk space than other raster formats.
+ * If an ``elevation_units:`` item is included, elevation values in the rasters will be converted to the model length units.
+ * The most straightforward way to input layer elevations is to simply assign a raster surface to each layer (a combined ``source_data:`` sketch follows this list):
+
+   .. code-block:: yaml
+
+       botm:
+         filenames:
+           0: bottom_of_layer_0.tif
+           1: bottom_of_layer_1.tif
+           ...
+
+ * Alternatively, multiple model layers can be inserted between key layer surfaces by simply skipping those numbers. In this example, Modflow-setup creates three layers of equal thickness between the two specified surfaces:
+
+   .. code-block:: yaml
+
+       botm:
+         filenames:
+           0: bottom_of_layer_0.tif
+           # layer 1 bottom is created by Modflow-setup
+           # layer 2 bottom is created by Modflow-setup
+           3: bottom_of_layer_3.tif
+           ...
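+
+Putting these notes together, a sketch of a complete ``source_data:`` sub-block for the model top and layer bottoms might look like the following (the file names and units are hypothetical):
+
+.. code-block:: yaml
+
+    dis:
+      source_data:
+        top:
+          filename: 'rasters/land_surface_elevations.tif'  # hypothetical GeoTIFF
+          elevation_units: 'feet'  # converted to the model length units
+        botm:
+          filenames:
+            0: 'rasters/bottom_of_layer_0.tif'
+            3: 'rasters/bottom_of_layer_3.tif'  # bottoms 1 and 2 interpolated
+          elevation_units: 'feet'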
+
+Adopting layering from a parent model
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Similar to other input, layer bottoms can be resampled from a parent model. This ``source_data:`` block would simply adopt the same layering scheme as the parent model:
+
+.. code-block:: yaml
+
+    source_data:
+      top: from_parent
+      botm: from_parent
+
+The parent model layering can also be subdivided by mapping pairs of ``inset: parent`` model layers using a dictionary (YAML sub-block):
+
+.. code-block:: yaml
+
+    source_data:
+      top: from_parent
+      botm:
+        from_parent:
+          0: -0.5  # bottom of layer 0 in pfl_nwt is positioned at half the thickness of parent layer 0
+          1: 0  # bottom of layer 1 corresponds to bottom of layer 0 in the parent
+          2: 1
+          3: 2
+          4: 3
+
+In this case, the top layer of the parent model is subdivided into two layers in the inset model. A negative number is used on the parent model side because layer 0 of the parent model (the first layer bottom) coincides with the second layer bottom of the inset model (layer 1). A value of ``-0.5`` places the first inset model layer bottom at half the thickness of the parent model layer; other values between ``-1.`` and ``0.`` could be used to move this surface up or down within the parent model layer, or multiple inset model layers could be specified within the first parent model layer:
+
+.. code-block:: yaml
+
+    source_data:
+      top: from_parent
+      botm:
+        from_parent:
+          0: -0.9  # bottom of layer 0 set at 10% of the depth of parent layer 0
+          1: -0.3  # bottom of layer 1 set at 70% of the depth of parent layer 0
+          2: 0
+          3: 1
+          4: 2
+
+MODFLOW-2005/NWT input
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Specification of ``source_data:`` blocks is the same for MODFLOW 6 and MODFLOW-2005 style models, except that the latter wouldn't contain an ``idomain:`` sub-block. Specification of other inputs generally follows Flopy (for example, :py:class:`~flopy.modflow.mfdis.ModflowDis`). A ``dis:`` block equivalent to the first example given above would look like:
+
+.. code-block:: yaml
+
+    dis:
+      length_units: 'meters'
+      nlay: 2
+      nrow: 30
+      ncol: 35
+      delr: 1000.
+      delc: 1000.
+      top: 2.
+      botm: [1, 0]
+
+.. note::
+   The ``length_units:`` item is specific to Modflow-setup; in a MODFLOW-2005 context, Modflow-setup takes this input and passes the appropriate value of ``lenuni`` to Flopy (which writes the MODFLOW input).
+
+Modflow-setup specific input
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+* ``drop_thin_cells:`` Option in MODFLOW 6 models to remove cells thinner than a minimum layer thickness from the model solution.
+* ``minimum_layer_thickness:`` Minimum layer thickness to allow in the model. In MODFLOW 6 models, if ``drop_thin_cells: True``, layers thinner than this will be collapsed to zero thickness, and their cells either made inactive (``idomain=0``) or, if they lie between two layers of greater than the minimum thickness, converted to vertical pass-through cells (``idomain=-1``). In MODFLOW-2005 models, or if ``drop_thin_cells: False``, thin layers will be expanded downward to the minimum thickness.
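+
+For example, these items are entered at the top level of the ``dis:`` block. A minimal sketch (the thickness value is arbitrary, in model length units):
+
+.. code-block:: yaml
+
+    dis:
+      drop_thin_cells: True
+      minimum_layer_thickness: 1.  # arbitrary example value
+      # ... remaining dis input as described above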
+
+Time Discretization
+----------------------
+In MODFLOW 6, time discretization is specified at the simulation level, in its own Time Discretization (TDIS) Package. In MODFLOW-2005/NWT, time discretization is specified in the Discretization Package. Accordingly, in Modflow-setup, time discretization is specified in the appropriate package block for the model version.
+
+Specifying stress period information directly
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Input to the DIS and TDIS packages follows the MODFLOW structure. For simple steady-state models, time discretization can be specified directly to the DIS or TDIS packages using their respective Flopy inputs (:py:class:`~flopy.modflow.mfdis.ModflowDis`; :py:class:`~flopy.mf6.modflow.mftdis.ModflowTdis`). This example from the :ref:`Configuration File Gallery` shows direct specification of stress period information to the Discretization Package:
+
+.. literalinclude:: ../../../mfsetup/tests/data/pfl_nwt_test.yml
+    :language: yaml
+    :start-after: arguments to flopy.modflow.ModflowDis
+    :end-before: bas6:
+
+Specifying uniform stress period frequencies by group
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For transient models, we often want to combine an initial steady-state period with subsequent transient periods, which may be of variable lengths. To facilitate this, Modflow-setup has a ``perioddata:`` sub-block that can in turn contain multiple sub-blocks representing stress period "groups". Each group in the ``perioddata:`` sub-block contains the information needed to generate one or more stress periods at a specified frequency and time datum (for example, months, days, every 7 days, etc.). Input to transient groups is based on the :py:func:`pandas.date_range` function, where three of the four parameters ``start_date_time``, ``end_date_time``, ``freq`` and ``nper`` must be defined. For example, this sequence of blocks from the :ref:`Configuration File Gallery` generates an initial steady-state period, followed by a 9-year "spin-up" period between two dates, and then biannual stress periods spanning another specified set of dates. Time-step information is also specified, using the MODFLOW variable names.
+
+.. literalinclude:: ../../../mfsetup/tests/data/shellmound.yml
+    :language: yaml
+    :start-after: # Temporal Discretization Package
+    :end-before: # Initial Conditions Package
+
+The ``perioddata:`` sub-block can be used within a ``tdis:`` block for MODFLOW 6 models, or a ``dis:`` block for MODFLOW-2005 style models.
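+
+For orientation, a hedged sketch of a ``perioddata:`` sub-block with two groups is shown below; the group names, dates, and time-stepping values are illustrative rather than prescriptive (see the gallery example above for working input):
+
+.. code-block:: yaml
+
+    tdis:
+      options:
+        time_units: 'days'
+        start_date_time: '2000-01-01'
+      perioddata:
+        group 1:  # initial steady-state period
+          nper: 1
+          perlen: 1
+          nstp: 1
+          steady: True
+        group 2:  # monthly transient periods for one year
+          start_date_time: '2000-01-01'
+          end_date_time: '2001-01-01'
+          freq: 'MS'  # pandas "month start" frequency
+          nstp: 5
+          tsmult: 1.5
+          steady: False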
+
+Specifying pre-defined stress periods from a CSV file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In some model applications, irregular stress periods may be needed that would require many groups to be specified using the above ``perioddata:`` sub-block. In these cases, a stress period data table can be pre-defined and input as a CSV file:
+
+.. literalinclude:: ../../../mfsetup/tests/data/shellmound_tmr_inset.yml
+    :language: yaml
+    :start-after: drop_thin_cells: True
+    :end-before: sfr:
+
+An example of a valid table is shown below. Note that only the columns listed in the above ``csvfile:`` block are actually needed. ``perlen`` and ``time`` are calculated internally by Modflow-setup; output control (``oc``) can be specified here or in the ``oc:`` package block.
+
+.. csv-table:: Example stress period data
+    :file: ../../../mfsetup/tests/data/shellmound/tmr_parent/tables/stress_period_data.csv
+    :header-rows: 1
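+
+For reference, the ``csvfile:`` input itself is compact; a hedged sketch, patterned on the gallery example above (the exact item names and file path are illustrative):
+
+.. code-block:: yaml
+
+    tdis:
+      perioddata:
+        csvfile:
+          filename: 'tables/stress_period_data.csv'  # hypothetical path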
diff --git a/_sources/input/ic.rst.txt b/_sources/input/ic.rst.txt
new file mode 100644
index 00000000..35b50196
--- /dev/null
+++ b/_sources/input/ic.rst.txt
@@ -0,0 +1,88 @@
+=======================================================================================
+Initial Conditions
+=======================================================================================
+
+Similar to other packages, input of initial conditions follows the structure of MODFLOW and Flopy. Setting the starting heads from the model top is often a good way to go initially. After the model has been run, starting heads can then be :ref:`updated from the initial model head output ` to improve convergence on subsequent runs.
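+
+For example, a uniform starting head could be entered directly, following the MODFLOW 6/Flopy structure in the same way as the DIS examples (a sketch; the value is arbitrary):
+
+.. code-block:: yaml
+
+    ic:
+      griddata:
+        strt: 300.  # uniform starting head, in model length units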
+
+ .. Note::
+
+ With any transient model, an :ref:`initial steady-state stress period