autoadaloc and maxsing musings #225
Replies: 1 comment 2 replies
-
Are the correlation coefficients finished by 20:00:19, or is that just the start of the "shuffling" process? The autoadaloc process quickly picks out a few "fully localized" pars. Most of these are mapped to a single obs in my absurdly granular "manual" localizer. I assume the empirical sensitivity of each of these pars to the single obs it is meant to influence is below the "background" correlation?
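For what it's worth, here is my mental model of the correlation-coefficient step as a numpy sketch. This is not the actual pestpp-ies implementation, just an illustration of an empirical par-to-obs Pearson correlation from ensembles; all sizes and the toy relationship between pars and obs are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_par, n_obs = 50, 6, 4

# hypothetical ensembles: rows are realizations, columns are pars/obs
par_en = rng.standard_normal((n_real, n_par))
# toy forward model: first n_obs pars each drive one obs, plus noise
obs_en = par_en[:, :n_obs] * 2.0 + 0.1 * rng.standard_normal((n_real, n_obs))

def standardize(en):
    """Center and scale each column (population std)."""
    return (en - en.mean(axis=0)) / en.std(axis=0)

# after standardizing, the cross-correlation matrix is a dot product
cc = standardize(par_en).T @ standardize(obs_en) / n_real  # shape (n_par, n_obs)
```

A "fully localized" par would then be one whose correlation with its mapped obs doesn't rise above the background level of spurious correlations.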
A bit further along it starts slowing down, taking about 50 sec (per thread?), presumably because there are more obs to consider for each par? (Note: the counter may be backwards, as fewer get "done" with each iteration.) Based on my limited understanding, this is where I would think we don't need to process all 156994 parameters, but could instead use a subset based on singular values. Isn't the empirical Jacobian already calculated at this point?
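To make the singular-value musing concrete, here is a hedged sketch of maxsing/eigthresh-style truncation of an ensemble-approximated Jacobian. The matrix, sizes, and cutoffs are all invented for illustration; this is not pestpp-ies code, just the standard truncated-SVD idea those options control:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical ensemble-approximated Jacobian-like matrix (n_obs x n_par)
A = rng.standard_normal((40, 12))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

maxsing = 5        # hard cap on retained singular values (maxsing-like)
eigthresh = 1e-3   # relative singular-value cutoff (eigthresh-like)

# keep components above the relative threshold, capped at maxsing
keep = min(maxsing, int(np.sum(s / s[0] > eigthresh)))
A_trunc = U[:, :keep] @ np.diag(s[:keep]) @ Vt[:keep, :]
```

The musing above would amount to restricting the expensive per-par calcs to the subspace spanned by those retained components, rather than looping over every one of the 156994 parameters.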
Another big chunk of time comes during the (very granular) upgrade. What determines the number of parts (presumably mapped par/obs relationships, number of threads, something else)? (Note: more parts get "done" with each iteration.)
-
I am an unapologetic fan boi of autoadaloc. But man, it can take some time for big problems! I'm also pretty bad at "seeing through the maths" which results in some misguided musings, so feel free to disregard!
It appears the autoadaloc calcs are done for all pars. But we don't really care about spurious correlations if the sensitivities don't even meet subspace reg constraints (other than to define the distribution of correlation coefficients), right? I wonder if we could assume the same std criterion holds for the distribution of correlation coefficients calculated from a smaller number of samples (or scale it somehow)?
I'm imagining something like doing the calcs in blocks; obvious late-night pseudo-code: