-
Hi, is there a way in lmfit to calculate the covariance matrix and confidence intervals for a given set of parameter values, without first running a fit to convergence? I'm aware that error estimates for unconverged parameter values are not reliable, but I have two use cases where this would be useful nonetheless. Case 1: Debugging models. When developing models, I sometimes want an estimate of the confidence intervals and the correlation coefficient matrix for given parameter values, for example to debug why a model calculation fails or why the minimizer seems to behave oddly. Case 2: Identifying that a model has strongly correlated parameters. In some of my models, parameters are highly correlated from the get-go, which would be beneficial to see early, before starting parameter estimation. This is also useful when trying to find good initial values for parameters. Thanks for any insight on this!
-
Not really. When evaluating covariance and confidence intervals explicitly (see, for example, https://github.com/lmfit/lmfit-py/blob/master/lmfit/confidence.py), one moves each parameter away from its optimal value (typically by its expected 1-sigma uncertainty) and then optimizes all the other parameter values in response to that change. The covariance measures how parameter A responds when parameter B is changed from its optimal value. You have to make those measurements. The magic of leastsq is that it estimates those derivatives as part of the fit and builds the covariance matrix from them at the solution.
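As a minimal sketch of driving that explicit machinery from user code (the toy model and data below are invented for illustration), note that conf_interval() takes a Minimizer and a completed fit result:

```python
import numpy as np
import lmfit

# Invented toy problem: a straight line with noise.
np.random.seed(1)
x = np.linspace(0, 10, 101)
y = 3.0 * x + 1.5 + np.random.normal(scale=0.5, size=x.size)

def residual(params):
    v = params.valuesdict()
    return v['slope'] * x + v['offset'] - y

params = lmfit.Parameters()
params.add('slope', value=1.0)
params.add('offset', value=0.0)

# conf_interval() needs a converged result: it steps each parameter
# away from its best-fit value and re-optimizes all the others.
mini = lmfit.Minimizer(residual, params)
result = mini.minimize(method='leastsq')
ci = lmfit.conf_interval(mini, result)
print(lmfit.ci_report(ci))
```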
-
This is the code we use to calculate the covariance matrix using numdifftools; the functions used here are all part of lmfit/minimizer.py.
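As a rough sketch of the same idea outside lmfit's internals (this helper is hypothetical, not part of the lmfit API), a covariance estimate at an arbitrary parameter vector can be formed from the inverse Hessian of half the chi-square, computed with numdifftools:

```python
import numpy as np
import numdifftools as nd

# Hypothetical helper (not lmfit API): covariance estimate at an arbitrary
# parameter vector from the inverse Hessian of half the chi-square. Away
# from a true minimum the Hessian may not be positive definite, so treat
# the result with the caution discussed above.
def covariance_at(pvec, chisqr):
    hess = nd.Hessian(lambda p: 0.5 * chisqr(p))(pvec)
    return np.linalg.inv(hess)

# usage with some chi-square function of a parameter vector:
# cov = covariance_at(np.array([13.0, 2.0, 0.0, 0.02]), chisqr)
# sigmas = np.sqrt(np.diag(cov))
```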
-
Thanks for the answers! I don't know lmfit well enough yet to make the covariance calculation code above work, but I found out that using Nelder-Mead with a large tolerance stops the optimizer at the start values, so this sort of does what I want (using the doc_fitting_withreport.py example), although it's not a very elegant solution:
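A minimal sketch of this trick, loosely based on the doc_fitting_withreport.py example (the data and starting values here are made up):

```python
import numpy as np
import lmfit

# Made-up data in the spirit of doc_fitting_withreport.py: a decaying sine.
np.random.seed(0)
x = np.linspace(0.0, 250.0, 1001)
data = (14.0 * np.sin(0.44 + x / 5.5) * np.exp(-x * x * 0.032**2)
        + np.random.normal(scale=0.3, size=x.size))

def residual(pars, x, data):
    v = pars.valuesdict()
    model = (v['amp'] * np.sin(v['shift'] + x / v['period'])
             * np.exp(-x * x * v['decay']**2))
    return model - data

params = lmfit.Parameters()
params.add('amp', value=13.0)
params.add('period', value=2.0)
params.add('shift', value=0.0)
params.add('decay', value=0.02)

# A huge tolerance makes Nelder-Mead quit essentially at the start values;
# with calc_covar=True (the default) and numdifftools installed, lmfit then
# estimates the covariance matrix at that (unconverged) point.
result = lmfit.minimize(residual, params, args=(x, data),
                        method='nelder', tol=1e4, calc_covar=True)
print(lmfit.fit_report(result))
```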
-
Hey, dear all! I am currently working on inverse problems in the context of Gaussian processes, and I would like to estimate the uncertainty associated with the source estimates. The source estimate is a Gaussian posterior: its mean vector is the source estimate (the output of the inverse-problem method), and its covariance matrix is an N×N matrix. As a starting point, I estimate confidence intervals from the diagonal elements of this matrix, which are variances, taking their square roots to obtain standard deviations. However, I believe that ignoring the off-diagonal elements of the posterior covariance matrix could lead to inaccurate uncertainty estimates, as I think they have a significant impact on the uncertainty. My question is: how can we estimate confidence intervals or ellipses that take the off-diagonal elements of the posterior covariance into account, i.e. using the full covariance matrix? Thank you very much in advance for your valuable insights.
-
@IsmailHuseynov It is not clear to me that your question is related to the original question from 3 years ago about using lmfit to measure covariance without first finding an optimal solution. If you think it is related, can you explain why that is the case? Can you confirm that your question is actually about lmfit? I also have trouble determining what the real question here is. You seem to understand all the terms you are using, but you do not say how you obtained the covariance matrix. Anyway, you could read our code and see that we do use the diagonal elements for uncertainties and the off-diagonal elements for correlations - the covariance matrix does take the correlation of pairs of variables into account.
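For the ellipse part of the question, a minimal sketch that is independent of lmfit (the numbers are made up): the standard construction takes the eigendecomposition of the 2x2 covariance block for a parameter pair, so the off-diagonal term sets the tilt of the ellipse:

```python
import numpy as np

# Hypothetical helper: points on the n-sigma ellipse of a 2-D Gaussian
# with mean `mu` and 2x2 covariance `cov`.
def confidence_ellipse(mu, cov, nsigma=2.0, npoints=200):
    evals, evecs = np.linalg.eigh(cov)                 # principal axes
    theta = np.linspace(0.0, 2.0 * np.pi, npoints)
    circle = np.stack([np.cos(theta), np.sin(theta)])  # unit circle
    # stretch by nsigma*sqrt(eigenvalue) along each axis, then rotate
    ellipse = evecs @ (nsigma * np.sqrt(evals)[:, None] * circle)
    return mu[0] + ellipse[0], mu[1] + ellipse[1]

mu = np.array([1.0, 2.0])
cov = np.array([[0.04, 0.03],
                [0.03, 0.09]])   # the off-diagonal term tilts the ellipse
ex, ey = confidence_ellipse(mu, cov)
```

Note that for a given coverage probability in two dimensions, the scale factor should come from the chi-square distribution with 2 degrees of freedom rather than from the familiar 1-D sigma levels.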