Replies: 9 comments 17 replies
-
As a quick test, I removed the mutation in `make_funcargs`, and all tests still pass.
-
Yuck. Is "a=3" a feature of the model? I doubt it.
Why did you not pass a set of parameters? Make a parameters object and use that. The normal usage would be something like:
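```python
# e.g., with the values from the example under discussion
# (`model` and `x` as defined there):
params = model.make_params(a=3, b=2)
model.eval(params, x=x)
```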
Actually, I think it might be. You have `p = model.make_params(a=3, b=2)`, which makes `a=3` the initial value of the parameter `a`.
There is an ambiguity that you have introduced about whether to use the value of `b` from the `Parameters` object or the value passed as a keyword argument.
Well, there are lots of things you could do. Making a copy is one option. Another option would be to make a parameters object and use that, perhaps something like:
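```python
# A sketch: build a fresh Parameters object with exactly the values you
# want (assuming the same `model` and `x` as in the example above):
import lmfit

p = lmfit.Parameters()
p.add("a", value=3)
p.add("b", value=0)
model.eval(p, x=x)
```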
-
I would assume that this is one of the main points of the library: fit a model curve to some data, extract the model parameters, and then make predictions using the model. Maybe my use case of "evaluate the model, but with this parameter set to some other value" is slightly unusual. But it's something that I do constantly during exploratory data analysis, when trying to figure out if the model I'm using is even right for the data, or if my model has a bug.
I think that would be safer than the current option (so that you don't accidentally overwrite the fit result), but it would make exploring the result way more inconvenient. Instead of just `model.eval(params=fit_result.params, b=0, x=...)`, I would have to write

```python
p_copy = fit_result.params.copy()
p_copy["b"].value = 0
model.eval(p_copy, x=...)
```

which is cumbersome and creates unnecessary temporary variables.
You are right. To me it would feel natural that the passed-in params would override the attributes of the model, and that the kwargs would in turn override the attributes of the params. This is going from "most stable" to "least stable", in the sense that the model is modified rarely (basically only when creating it), the params are modified ~once per fit (from the user's POV) but possibly several times for the same model, and the kwargs would be passed for ephemeral quick evaluations of the model function.
I agree that it is a bit scary to modify, and this should be announced as a breaking change. Still, as I said, all tests pass without the mutation, so this case has apparently not been considered much.
I would say that they should be evaluated with the kwargs version of `b`. Basically I would expect the behavior to be roughly along the lines of:

```python
def eval(self, params, **kwargs):
    param_values = {
        k: v.value for k, v in self.make_params().items()
        if v.value is not None
    }
    param_values.update(params.valuesdict())
    for k, v in kwargs.items():
        if params[k].expr is not None:
            # Here, we would evaluate the Parameters._asteval
            # with param_values as the symbol table.
            param_values[k] = evaluate_expr(...)
        else:
            param_values[k] = v
    return self.func(**param_values)
```

Of course this is glossing over lots of details (like the prefix stuff I see in `make_funcargs`). It also seems like this isn't easy to do: apparently `asteval.Interpreter` is also stateful, so it can't be used to evaluate expressions with a different symbol table without making a copy of the whole interpreter.
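For reference, a minimal sketch of what a hypothetical `evaluate_expr` helper could look like with asteval (not lmfit's internal code; it just seeds a fresh interpreter's symbol table):

```python
from asteval import Interpreter

def evaluate_expr(expr, symbols):
    # Evaluate a constraint expression against an explicit symbol table.
    # Building a new Interpreter per call sidesteps the shared state,
    # at some cost in speed.
    aeval = Interpreter()
    aeval.symtable.update(symbols)
    return aeval(expr)

print(evaluate_expr("2*a + b", {"a": 3.0, "b": 0.0}))  # 6.0
```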
Good question, I'm not entirely sure. It could be useful to explore how the model behaves when going outside the boundaries, so I think it should at most issue a warning. Looks like the current behavior with bounds is also a bit weird: when evaluating the model, the bound is ignored, but the parameter value is left at the boundary:

```python
import lmfit

def func(x, a, b):
    return a * x + b

model = lmfit.Model(func)
model.set_param_hint("b", min=0)
p = model.make_params(a=3, b=2)
print(model.eval(params=p, b=-100, x=1))  # prints -97
print(p["b"].value)  # prints 0
```
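(As far as I can tell, the clipping happens whenever a value is assigned to a bounded Parameter, even without `eval`; a quick check, independent of the model above:)

```python
import lmfit

p = lmfit.Parameters()
p.add("b", value=2, min=0)
p["b"].value = -100
print(p["b"].value)  # 0: the assigned value is clipped to the lower bound
```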
print(p["b"].value) # prints 0 I looked at the documentation a bit more carefully, and it briefly mentions that initial values for the parameters can be supplied with |
-
On Tue, Feb 14, 2023 at 4:33 AM mgunyho ***@***.***> wrote:

> > How would we know that you are calling Model.eval after a fit? ...
> > model.eval(params=my_params, b=0, x=x) is really odd usage
>
> I would assume that this is one of the main points of the library: fit a
> model curve to some data, extract the model parameters, and then make
> predictions using the model.

Yep. But we still do not know when you call `Model.eval()` whether that is
before, during, or after a fit, or maybe "between fits".

> Maybe my use case of "evaluate the model, but with this parameter set to
> some other value" is slightly unusual.

It's not. But your usage of mixing a Parameters instance and individual
values in `Model.eval()` probably is unusual. At least, I hope it is
unusual. That ambiguously supplies two values for the same parameter, and
we have to decide what to do with that ambiguity. I would say the
ambiguity in your usage is *intentional*.

> But it's something that I do constantly during exploratory data analysis,
> when trying to figure out if the model I'm using is even right for the
> data, or if my model has a bug.
>
> > raise an exception if keyword arguments to Model.eval() clash with params
>
> I think that would be safer than the current option (so that you don't
> accidentally overwrite the fit result), but it would make exploring the
> result way more inconvenient. Instead of just model.eval(params=fit_result.params,
> b=0, x=...), I would have to write
>
>     p_copy = fit_result.params.copy()
>     p_copy["b"].value = 0
>     model.eval(p_copy, x=...)

Yep, creating, modifying, copying, and updating Parameters objects seems
like a perfectly normal thing to do. Parameters objects are kind of the
key component of lmfit. In addition to changing the value, you could also
change bounds, expressions, and whether certain parameters are fixed. This
is very useful for exploratory data analysis, as you can redo fits with
different parameter values and settings.

> which is cumbersome and creates unnecessary temporary variables.

I think it is actually what you want. That implies (or, really, says
explicitly) that you expect some aspects of the Model and of those
Parameters to be used.

> You are right. To me it would feel natural that the passed-in params would
> override the attributes of the model, and that the kwargs would in turn
> override the attributes of the params.

Hmm, I think that "override the attributes of the params" is what we are
currently doing.

> This is going from "most stable" to "least stable", in the sense that the
> model is modified rarely (basically only when creating it), the params are
> modified ~once per fit (from the user's POV) but possibly several times for
> the same model, and the kwargs would be passed for ephemeral quick
> evaluations of the model function.

Parameter values may be altered once per step/iteration of the fit --
specifically **for each evaluation of the fitting function**. That is, for
each call of `Model.eval()`.

> > we do use Model.eval() internally in many places, so I would want to be
> > very careful about changing it
>
> I agree that it is a bit scary to modify, and this should be announced as
> a breaking change. Still, as I said, all tests pass without the mutation,
> so this case has apparently not been considered much.
>
> > what values should constraint expressions (say, c=2*a+b) take?
>
> I would say that they should be evaluated with the kwargs version of b.
> Basically I would expect the behavior to be roughly along the lines of
>
>     def eval(self, params, **kwargs):
>         param_values = {
>             k: v.value for k, v in self.make_params().items()
>             if v.value is not None
>         }
>         param_values.update(params.valuesdict())
>         for k, v in kwargs.items():
>             if params[k].expr is not None:
>                 # Here, we would evaluate the Parameters._asteval
>                 # with param_values as the symbol table.
>                 param_values[k] = evaluate_expr(...)
>             else:
>                 param_values[k] = v
>         return self.func(**param_values)
>
> Of course this is glossing over lots of details (like the prefix stuff I
> see in make_funcargs). It also seems like this isn't easy to do:
> apparently asteval.Interpreter is also stateful, so it can't be used to
> evaluate expressions with a different symbol table without making a copy
> of the whole interpreter.

What we would actually need to do is make a copy of the parameters and
update the values with your passed-in-by-argument values. That would
"create a temporary set of Parameters", but one that the user would never
see. We do not want to do that at every function evaluation during a fit.
Minimize() (and so also Model()) does, in fact, make a copy of the input
Parameters at the start of each fit, and then updates that working copy
during the fit. That is once per fit, not once per function call.

> > Should bounds be applied to that value of b?
>
> Good question, I'm not entirely sure.

Using a Parameters instance would give a clear answer (Yes, bounds should
be obeyed). If you want to change the boundaries, you can do that too.

> It could be useful to explore how the model behaves when going outside the
> boundaries, so I think it should at most issue a warning. Looks like the
> current behavior with bounds is also a bit weird: when evaluating the
> model, the bound is ignored, but the parameter value is left at the
> boundary:
>
>     import lmfit
>
>     def func(x, a, b):
>         return a * x + b
>
>     model = lmfit.Model(func)
>     model.set_param_hint("b", min=0)
>     p = model.make_params(a=3, b=2)
>     print(model.eval(params=p, b=-100, x=1))  # prints -97
>     print(p["b"].value)  # prints 0
>
> I looked at the documentation a bit more carefully, and it briefly
> mentions
> <https://lmfit.github.io/lmfit-py/model.html#initializing-model-parameters>
> that initial values for the parameters can be supplied with eval(),
> although there is no discussion of combining params and kwargs. I view
> this as more evidence that using an actual Parameters instance is the
> better option. (The docs also talk about using set_param_hint as one of
> the options for setting initial values, so maybe that's where I got that
> idea from.)

Yeah, I think `set_param_hint` should be more-or-less a protected method,
used within subclasses, but not part of the public API.
-
Just as a datapoint from a small-time contributor: even if it is not the recommended way, I often use set_param_hint the same way as @mgunyho. In many simple cases, I do not need the explicit parameters as their own object, e.g.:
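(A sketch of this pattern, with a made-up model and data:)

```python
import numpy as np
import lmfit

def func(x, a, b):
    return a * x + b

model = lmfit.Model(func)
# Initial values via hints; no explicit Parameters object in sight:
model.set_param_hint("a", value=1.0)
model.set_param_hint("b", value=0.0)

x = np.linspace(0, 10, 101)
y = 3 * x + 2 + np.random.normal(scale=0.1, size=x.size)
result = model.fit(y, x=x)  # params are created internally from the hints
```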
In general, I save a reference to the fit results object anyway, hence having an explicit reference to the parameters afterwards is often not that helpful. Also, models are rather cheap, so making a new one is not that bad outside of tight loops.
-
@mgunyho I very much doubt that we are not going to alter the passed-in `Parameters` under every possible usage.

As I am sure you know, Python does not use "pure functions". Values are passed into methods by reference and may be updated by the method. Calling an object's method (here, "evaluate the Model with these Parameters") may well update state. But, it turns out that `Model.eval()` does currently update the passed-in `Parameters` when keyword values are also supplied.

Your usage of a `Parameters` object alone, or keyword values alone, is completely acceptable and clear. But what you have done is mix the two, roughly as in the calls below.
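Roughly, using the call forms mentioned earlier in this thread:

```python
# Acceptable and clear: a Parameters object alone, or keyword values alone.
model.eval(params=my_params, x=x)
model.eval(a=3, b=0, x=x)

# Ambiguous: my_params supplies a value for b, and so does the keyword.
model.eval(params=my_params, b=0, x=x)
```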
That is ambiguous. We do not promise in the API or the documentation what we do in such usage. The fact that we do not already raise an exception is what surprises me. It is perfectly reasonable to expect that a model will not normally update its input parameters, but we do not promise that this will never happen. For your ambiguous usage, "normal" may not apply.

I write apps with lmfit to allow users to do exploratory fitting, change values, evaluate models with updated values, change bounds and constraints, compare fits, view fit histories, and so forth. I'm not saying that this could not be smoother (see earlier suggestion), but I do not really see making copies of parameters as "cumbersome" or "inconvenient".
-
On Thu, Feb 16, 2023, 1:12 AM mgunyho ***@***.***> wrote:

> When you talk about ambiguity, are you specifically referring to
> expression evaluation and boundary checks? Is that what you mean by
> consistency? Would you agree that eval(params, b=0), *if there are no
> expressions depending on b, and it is within bounds*, along with the
> docstring "if a parameter appears both in params and kwargs, use the value
> from the kwargs", is unambiguous? Because this is what I had in mind. Maybe
> we have misunderstood each other here.

If `params` has a `b`, then there is an ambiguity of which value to use.
That is the ambiguity.

If we do not update `params`, then we are left with potential problems of
consistency. We have worked out that `Parameters` do return consistent
values, and we are not going to hack parts of that into `Model` when it
creates values for function arguments.

> I agree that if the kwargs violate the constraints/expressions of the
> params, it is not obvious what should happen. Raising an exception in that
> case could be reasonable, but it might also be useful to be able to explore
> what the model produces even if the constraints are violated. Of course in
> practice it's not trivial to check whether any of the expressions need to
> be re-evaluated.

If the user wants a constraint or bounds violated, they need to lift those.

> Just to clarify, if option c) would be implemented, would the recommended
> way to "evaluate the model with a from the fit result but with b set to
> 0" be to do p = fit_result.params.copy(); p["b"].value = 0; model.eval(p)?
> Or is there a more concise way to do it? Basically I'm looking for the
> easiest way to do this.

Yep, that would be the recommendation. Claims that it is hard are just not
going to persuade me.

Again, you could call the model function with values directly. You can
also call Model.eval with values only. It is the mixing of Parameters and
individual values that should be avoided, and should raise an exception.
-
@mgunyho OK, there definitely was a severe problem with supporting mixed parameters and keywords. A perfectly valid model (and one much like those I really must work with) would be a composite model, with parameters from one model used to constrain parameters in another model. The problem is that evaluating such a model with mixed params and keywords currently fails: the value of the constrained parameter is not recalculated from the keyword value.

If params and keywords are mixed, this will be a slight performance hit to give correct results. But fitting always updates the working `Parameters` directly, so this will not slow down fits.
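(A sketch of the kind of composite model described, with hypothetical component functions and a cross-model constraint:)

```python
import numpy as np
import lmfit

def line(x, slope, intercept):
    return slope * x + intercept

def gauss(x, amp, cen, wid):
    return amp * np.exp(-(x - cen)**2 / (2 * wid**2))

model = lmfit.Model(line, prefix="l_") + lmfit.Model(gauss, prefix="g_")
params = model.make_params(l_slope=1, l_intercept=0.5,
                           g_amp=5, g_cen=3, g_wid=0.6)
# Constrain one component's parameter with another component's parameter:
params["l_intercept"].set(expr="0.1 * g_amp")

x = np.linspace(0, 10, 101)
# Mixing params with a keyword override of g_amp: the constraint on
# l_intercept must be re-evaluated with the new g_amp for a correct result.
y = model.eval(params, x=x, g_amp=10)
```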
-
@mgunyho Sorry, trying to be helpful and positive, but also confident that I do have some understanding of this code ;)
The ambiguity is in the code. It permits the user to supply two values for a parameter. The code has to decide what to do in the face of this ambiguity. The ambiguity exists. The documentation may describe how such ambiguity is handled. The documentation does not remove the ambiguity. The documentation might be wrong. The documentation might be changed.
Yes. If the parameter values do not obey the constraints, they are wrong. That is a very serious error.
Nope. In the example I posted, the constrained value would simply be wrong.
The function must be called with values that consider any bounds and constraints. Violating bounds is a serious error and cannot be allowed.
No.
Sorry, I am not even slightly persuaded here.
That's OK. This fails in the current master branch. The good news is that I have a fix for it. If we did not have a fix for it, we would deprecate mixing keywords and parameters.
It is definitely what we want to do.
Well, ...
No. Checking constraints and evaluating a set of expressions does and must have state. The expression itself is state.
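(To illustrate that statefulness with asteval, which lmfit uses for constraint expressions:)

```python
from asteval import Interpreter

aeval = Interpreter()
aeval("b = 0")           # an assignment persists in the interpreter...
print(aeval("2*3 + b"))  # 6: ...and later evaluations see it
```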
With ..., then return the value of ...
-
Hello,

I'm new to lmfit, and I noticed the following behavior: calling `Model.eval` with a `Parameters` object and kwargs can modify the `Parameters`. Here is a simple example that demonstrates this:
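```python
# (Reconstructed sketch of the demonstration; the parameter names and
# values follow those referenced throughout this thread.)
import lmfit

def func(x, a, b):
    return a * x + b

model = lmfit.Model(func)
p = model.make_params(a=3, b=2)

model.eval(params=p, b=0, x=1)
print(p["b"].value)  # prints 0: eval has modified the Parameters object
```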
This is surprising to me. If the model is complicated and the fitting takes a long time, it's easy to accidentally overwrite my fit result. Is this intended behavior? Should I be doing something differently? A workaround I found is to use `eval(params=fit_result.params.copy())`, but that's easy to forget to do.

`Model.eval` calls `Model.make_funcargs`, where the kwargs are used to modify the Parameters object here. IMO this should not happen; I would expect that evaluating the model has no side effects. The parameters being modified is also not documented anywhere, only as a comment in the source of `make_funcargs`.