5.2.0 beta #233
Replies: 9 comments · 8 replies
-
Thanks for this @jtwhite79. One thing I have run into with it so far is for a MOU case in which I apply the following in combination:
No urgency from my point of view, as I will try to feed pestpp-mou a dv_par CSV from earlier runs, or make one myself; it will be interesting to see what it does on the next generation. Anyway, thanks again for your efforts, amazing stuff.
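For reference, a minimal sketch of seeding pestpp-mou with an existing decision-variable population; mou_dv_population_file is the standard pestpp-mou control for this, but the control-file and CSV names here are hypothetical:

```python
import pyemu

# sketch: point pestpp-mou at a decision-variable population from
# earlier runs instead of drawing a fresh initial population
pst = pyemu.Pst("my_model.pst")                               # hypothetical name
pst.pestpp_options["mou_dv_population_file"] = "prior_dv_pop.csv"
pst.write("my_model.pst")
```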
-
Hey there, thanks for the work on the new release. I had not tried autoadaloc on my model for a while, but I tried it with the new exe last night and ended up with the following. Is the "bad allocation" in reference to memory? Did I exceed my 256 GB? Is num_threads=4 too many? Do I need to scale back npars to use autoadaloc? Did I mess something else up... and if so, any suggestions on where to look?
```
--- initialization complete ---
--- starting solve for iteration: 1 ---
...current lambda: 0.1
...starting automatic adaptive localization calculations
Note: 3000 parameters have no nonzero entries in autoadaloc localizer
automatic adaptive localization calculations done
...preparing fast-lookup containers for threaded localization solve
Error condition prevents further execution:
*bad allocation*
Traceback (most recent call last):
  File "condor_utils_restart.py", line 665, in <module>
    nqueue=nqueue, xq=xq, bin_dir = '.')
  File "condor_utils_restart.py", line 252, in run_pest
    pyemu.os_utils.run(f"{pest_exe} {pst_name}.pst /h :{port}",
        cwd=m['m_d'])
  File "D:\wes\pyemu\pyemu\utils\os_utils.py", line 126, in run
    raise Exception("run() returned non-zero: {0}".format(ret_val))
Exception: run() returned non-zero: 1
```
On Tue, Feb 7, 2023 at 4:39 PM J Dub wrote:

> that makes sense. What we've been doing is seeding the initial population with members that will be feasible - mou does much better finding the edges of the feasible region from within it, rather than trying to find a feasible region...
>
> Either way, I'll fix this transform issue and post some new binaries asap...
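If the "bad allocation" does turn out to be memory, a minimal sketch of dialing back the settings mentioned above; the option names are standard pestpp-ies controls, but the values are purely illustrative:

```python
import pyemu

# sketch only: fewer threads and a smaller ensemble shrink the memory
# footprint of the threaded autoadaloc/localization containers
pst = pyemu.Pst("my_model.pst")               # hypothetical control file name
pst.pestpp_options["ies_autoadaloc"] = True
pst.pestpp_options["ies_num_threads"] = 2     # was 4 in the run above
pst.pestpp_options["ies_num_reals"] = 100     # fewer reals -> smaller localizer
pst.write("my_model.pst")
```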
-
Thank you for your work on multithreading the multimodal upgrades. I will give it a try.
-
Hi, thanks in advance.
-
That is correct. In general the multimodal solve should be faster now even without multithreading...but some early testing shows that the threading can also help...
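A minimal sketch of trying the threaded multimodal solve, assuming the standard pestpp-ies controls (the control-file name and values here are illustrative only):

```python
import pyemu

# sketch: enable the multimodal solve and give it worker threads;
# each realization is solved against a local neighborhood of reals
pst = pyemu.Pst("my_model.pst")                    # hypothetical name
pst.pestpp_options["ies_multimodal_alpha"] = 0.25  # neighborhood fraction
pst.pestpp_options["ies_num_threads"] = 8
pst.write("my_model.pst")
```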
-
So, multimodal stuff... how many "upgrade calculations" will be done? nreals*nlambdas? Presumably my upgrades are crippled by my granular localization, but unfortunately that is integral to the question I am asking. I have 140,000 pars, am using ies_num_threads=15, and my CPU is at about 50%. Any suggestions on how to speed this up? I'm estimating it could take days for a single upgrade (good thing it is ies and I only need 1 or 2!).
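As a back-of-envelope only (the actual count depends on which lambdas and scale factors pestpp-ies tests):

```python
# back-of-envelope: if each realization gets its own localized solve
# for each lambda tested, the per-iteration count is roughly
nreals, nlambdas = 100, 3   # illustrative values, not from this thread
print(nreals * nlambdas)    # -> 300 localized upgrade solves
```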
-
Yeesh. Multimodal localized upgrades are probably the slowest possible options you could use! By using both multimodal and localized solutions, ies is upgrading each realization, one at a time, through a fully localized solve. It's gonna be slow-as, and I'm not sure what else can be done to help this...if you can ditch the localization, it would be heaps faster...
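A minimal sketch of "ditching the localization", assuming a localizer was supplied via the standard ies_localizer option (control-file name hypothetical):

```python
import pyemu

# sketch: drop any explicit localizer (and autoadaloc) so the
# multimodal solve runs unlocalized
pst = pyemu.Pst("my_model.pst")                 # hypothetical name
pst.pestpp_options.pop("ies_localizer", None)   # remove if present
pst.pestpp_options["ies_autoadaloc"] = False
pst.write("my_model.pst")
```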
-
Yep, fast as without localization! So, I suspect "spurious correlation" becomes even more of a problem given the subset of reals used to define the local/neighborhood gradient. Need heaps of reals and a big hood (50-100)?
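For a rough sense of scale on the "big hood" question, assuming the neighborhood is set as a fraction of the ensemble (e.g. via ies_multimodal_alpha):

```python
# rough sizing: neighborhood ~= alpha * num_reals (values illustrative)
nreals = 300
for alpha in (0.1, 0.25, 0.5):
    print(f"alpha={alpha}: ~{int(alpha * nreals)} reals per local gradient")
```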
-
That's a valid point Wes, but what I'm thinking currently is that if we drive each realization toward a unique objective function (with a unique weight vector) and use a local gradient approximation specific to that realization, maybe we do incur some spurious correlations from a small neighborhood, but maybe we occupy a larger (potentially non-gaussian) posterior region in par space (and, more importantly, a wider, multimodal forecast posterior)...and maybe that is more important? I don't know if there will ever be a definitive answer to this, but in concept it seems interesting!
-
This is a beta release for the 5.2 version. There has been a lot of refactoring in the `EnsembleMethod` class to support weight ensembles throughout the code, new support for rebalancing weights by realization (#216), as well as a lot of refactoring in the multimodal solve to support multithreading (#221). Also some new approaches and refactoring in the autoadaloc calculations (#225). Several bug fixes along the way too (#227, #231, @cnicol-gwlogic's fix in mou risk-as-an-obj, etc). Please provide any feedback in the discussion - thanks in advance!

This discussion was created from the release 5.2.0 beta.
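A hypothetical sketch of supplying a weight ensemble to pestpp-ies; note that the option name "ies_weight_ensemble" is an assumption on the editor's part, so check the 5.2.0 release notes for the exact control-variable name:

```python
import pyemu

# hypothetical: per-realization weight vectors supplied as a CSV;
# the option name below is an assumption, not confirmed by this thread
pst = pyemu.Pst("my_model.pst")
pst.pestpp_options["ies_weight_ensemble"] = "weights.csv"
pst.write("my_model.pst")
```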