-
Others can no doubt be of more help, but here are my thoughts:
I'd try / think through the above suggestions first, then if they don't help, you could also try the following:
Chris.
-
I think @cnicol-gwlogic is right on it as far as how to fix the problem. Just some thoughts on why this might be happening:
In theory localization should help increase the degrees of freedom but Ive not seen that in practice (yet). And the spurious correlation thing is really only a problem with estimating posterior uncertainty - the large correlations (needed to match data) will usually feature strongly in the first several eigen components of the approximate jacobian so you can usually fit data pretty well with a small ensemble (unless you have heaps of high quality and diverse data...). I pretty regularly use 100K's of parameters with ensembles of size 100-300 and can find a decent fit to the important obs most of the time (as long as I define the objective function appropriately) And a word of caution on the reweighting - if you are using noise realizations, be careful not to set weights so low that highly improbable/implausible values of observation+noise are realized (the converse can also be a problem: the weights are so high, the implied noise is under represented). You can prevent this in several ways (like providing a "standard_deviation" column in the observation data of version 2 control file, supplying your own obs+noise ensemble, etc). |
-
@cnicol-gwlogic @jtwhite79 Many thanks to both of you for your suggestions! I definitely agree that the objective function weighting should be revisited - the observation dataset is rich, and the critical periods/locations are probably under-emphasized. It seems like the prior ensemble can really make or break the calibration. I was sampling the prior ensemble just based on values from a previous calibration, with no defined spatial covariances, and I'm realizing that wasn't too smart either.
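For what it's worth, a minimal sketch of drawing a spatially correlated prior for pilot-point parameters with pyemu; the variogram contribution/range, the file names, and the assumption that pilot-point coordinates are available in a CSV are all illustrative, not taken from this thread:

```python
import pandas as pd
import pyemu

# Load the control file and the pilot point locations (names are hypothetical)
pst = pyemu.Pst("model.pst")
pp_df = pd.read_csv("pp_locs.csv")  # columns: parnme, x, y

# Exponential variogram; contribution and range (a) should reflect the expected
# variance and correlation length of the property field being estimated
v = pyemu.geostats.ExpVario(contribution=1.0, a=2500.0)
gs = pyemu.geostats.GeoStruct(variograms=[v])

# Covariance matrix between pilot points implied by the geostructure
cov = gs.covariance_matrix(x=pp_df.x, y=pp_df.y, names=pp_df.parnme)

# Draw a correlated prior parameter ensemble centered on the control-file values
pe = pyemu.ParameterEnsemble.from_gaussian_draw(pst=pst, cov=cov, num_reals=300)
pe.to_csv("prior_pe.csv")
```

The resulting CSV can then be supplied to pestpp-ies as the prior parameter ensemble (e.g. via the ies_parameter_ensemble option) instead of letting it draw an uncorrelated prior from the bounds.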
-
Originally, and probably naively, I thought ies would be great for models using many pilot points, compared to traditional PEST/GLM. However, I'm starting to realize that you can't just throw tons of adjustable pilot points at ies and expect it to take full advantage of the higher degree of parameterization/refinement.
For some context, I've been recalibrating a model where poor fits at certain targets are caused by a lack of resolution in the original pilot-point fields. After increasing the pilot-point refinement, I was able to improve the fits by making manual adjustments to a few local pilot points around the poorly fit targets. However, putting the refined model through ies hasn't resulted in an improvement so far, and I suspect there are things I could be doing a lot better in how my calibration is set up:
I'm wondering if anyone has thoughts w.r.t. the above, or has had a similar experience using ies with models parameterized by big pilot point fields?