sensitivities of free variables wrt constants ('the Jacobian') #1141
@dkhall and I played around with this, and we're pretty sure that 1) you can't, because those sensitivities are infinite at the margin (verified numerically; we didn't hammer out all the theory, but it seems legit), and 2) you can fix the objective at value*(1+eps) as a substitution (without loss of generality, consider a single-variable objective), make a new variable the cost, and get these sensitivities.
Just to provide some context, here's an example model that mimics the case where we ran into this and illustrates some of the issues with trying to get the "secondary" sensitivity with a second model solve:

```python
from gpkit import Model, Variable, units
from gpkit.constraints.tight import Tight

class Generator(Model):
    def setup(self):
        pic = Variable("p_3/p_2", 30, "-", "compressor pressure ratio")
        tht = Variable("T_4/T_2", 4, "-", "turbine inlet temperature ratio")
        P = Variable("P", 1e6, "W", "power")
        m = Variable("m", "kg", "mass")
        mdot = Variable("\\dot{m}", "kg/s", "mass flow")
        Psp = Variable("P_{\\rm net}/(\\dot{m}c_pT_2)", "-", "specific power")
        eta = Variable("\\eta_{\\rm th}", "-", "thermal efficiency")
        tauc = Variable("T_3/T_2", "-", "compressor temperature ratio")
        Km = Variable("K_m", 10.0, "-", "mass scaling coefficient")
        return [tauc == pic**(0.4/1.4),
                Tight([1 >= eta + tauc**-1,
                       tht >= tauc + Psp/eta]),
                P == Psp*mdot*1.006*units("kJ/kg/K")*288.15*units("K"),
                m == (Km*pic*(mdot/units("kg/s"))**1.5)*units("kg")]
```

The model is solved by minimizing mass:

```python
g = Generator()
m1 = Model(g["m"], [g])
s1 = m1.solve("mosek")
```

I'm interested in the sensitivity of one of the free variables:

```python
mdes = s1["variables"]["m"]
delta = 0
m2 = Model(1/g["\\eta_{\\rm th}"], [g, g["m"] <= mdes*(1 + delta)])
s2 = m2.solve("mosek")
print(s2.summary())
```

The output sensitivities are as follows:

[solver summary not preserved]

which is incorrect. It's worth noting in this case that relaxing the mass constraint more, to …
All interesting, but @bqpd, you guys talked about a different type of sensitivity. This ticket is about sensitivities that predict how free variables will change when a constraint (or constant) is changed slightly, with no change in the objective function. That's very different from anything that would involve changing the objective and re-solving. I believe that it's possible to get the type of sensitivities I'm describing in this ticket from a single point solution with some simple linear algebra and no additional GP solves. I could be wrong, but let's at least try; it would be super, super useful. I'll write out the math next time I have a few hours.
@whoburg if you have the time to take a look at this, that would be great! Closing for now because I don't see a way to get this via linear algebra (on the dual solution, presumably), but I would absolutely love to bring this feature in :D
I was reading a paper just now that made me realize that this is perhaps possible. GPkit gives us the total derivatives (DF/Dx in the context of the paper), and what we need is Dy/Dx, i.e. the total derivative of a variable w.r.t. a parameter. In the paper this is given as:

    Dy/Dx = -[dR/dy]^-1 [dR/dx]
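For concreteness, here's a minimal numpy sketch of that formula (a sketch only: the matrices below are made-up placeholders standing in for the partials of the active-constraint residuals, not anything GPkit currently outputs):

```python
import numpy as np

# Hypothetical partials of the residual system R(y, x) = 0 at the optimum:
# rows are active constraints, columns are free variables / parameters.
dR_dy = np.array([[1.0, 0.5],
                  [0.0, 2.0]])   # dR/dy, shape (n_constraints, n_variables)
dR_dx = np.array([[0.3],
                  [1.2]])        # dR/dx, shape (n_constraints, n_parameters)

# Dy/Dx = -[dR/dy]^-1 [dR/dx]; np.linalg.solve avoids forming the inverse.
Dy_Dx = -np.linalg.solve(dR_dy, dR_dx)
print(Dy_Dx)  # predicted change in each free variable per unit change in x
```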
I think we can easily calculate the partial derivatives by taking each constraint and doing a post-process that perturbs it with respect to each of the parameters and variables. This is somewhat cumbersome, but if done through finite differences it involves simple arithmetic and no GP solves are required. It could also be done analytically even more efficiently, though that would take slightly more effort. One thing I'm confused by is the presence of the inverse, though... if dR/dy is zero, do we have problems? I am reading up a little more.
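Something like this sketch, maybe (the `residual` callable and the relative step are my assumptions about how the post-process could look, not existing GPkit machinery):

```python
def fd_partials(residual, point, h=1e-6):
    """Central-difference partials of residual(point) w.r.t. each key,
    using relative steps since GP variables are strictly positive."""
    partials = {}
    for name, val in point.items():
        up, dn = dict(point), dict(point)
        up[name] = val * (1 + h)
        dn[name] = val * (1 - h)
        partials[name] = (residual(up) - residual(dn)) / (2 * val * h)
    return partials

# toy residual mimicking the "tauc == pic**(0.4/1.4)" constraint above:
res = lambda p: p["tauc"] - p["pic"]**(0.4/1.4)
print(fd_partials(res, {"tauc": 30**(0.4/1.4), "pic": 30.0}))
```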
oooh excellent.
Reading the paper, does this only apply to equality constraints?
I am still trying to figure it out. I was interpreting it as applying to active constraints, since small perturbations would mean those constraints stay tight.
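To spell that interpretation out (my reading, not a quote from the paper): at the optimum the active inequalities hold with equality, so as long as the active set doesn't change under a small perturbation they define an implicit residual system:

```latex
R_i(y, x) = \log g_i(y, x) = 0 \quad \text{for each active } g_i(y, x) \le 1,
\qquad\Rightarrow\qquad
\frac{Dy}{Dx} = -\left[\frac{\partial R}{\partial y}\right]^{-1}
                \frac{\partial R}{\partial x}.
```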
Taking another look now, it seems like we will have to calculate the partial derivatives of the objective w.r.t. all parameters and variables, which makes the cost of the Jacobian calculation n+m GP solves. This matches up with the 'one linear solve per objective and constraint variable' statement from the paper. But now I see how they do it; shouldn't be too hard.
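If I'm reading that right, the expensive part is factoring dR/dy once; each additional parameter is then just another right-hand side. A scipy sketch of that reuse (placeholder matrices again, assuming the partials are already in hand):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

dR_dy = np.array([[1.0, 0.5],
                  [0.0, 2.0]])    # placeholder dR/dy
dR_dx = np.array([[0.3, 0.1],
                  [1.2, 0.4]])    # two parameters -> two columns

lu_piv = lu_factor(dR_dy)         # factor once...
Dy_Dx = -lu_solve(lu_piv, dR_dx)  # ...then one cheap solve per parameter
print(Dy_Dx)
```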
Inspired by some thoughts that have been in the back of my mind as well as some requests from the learn3 team:
I ought to be able to get the post-optimization sensitivity of a free variable with respect to a change in a constant. In words, what I mean is "if I changed constant c by 0.1% and reoptimized, how much would I expect free variable A to change by?"
Here's an example of the sensitivities we want, computed grossly via finite differencing:
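Along these lines (a sketch under assumptions: the perturbed constant K_m, the 0.1% step, and the use of Model.substitutions are my choices for illustration, not the original snippet), reusing the Generator model from the example above:

```python
# Finite-difference sensitivities: nudge a constant by 0.1%, re-solve,
# and report the fractional change in each free variable per fractional
# change in the constant (roughly d log y / d log c).
g = Generator()
model = Model(g["m"], [g])
base = model.solve(verbosity=0)

eps = 1e-3                                     # the 0.1% perturbation
model.substitutions["K_m"] = 10.0 * (1 + eps)  # assumes substitutions accepts
perturbed = model.solve(verbosity=0)           # the variable-name string

for var in ["m", "\\dot{m}", "\\eta_{\\rm th}"]:
    sens = (perturbed["variables"][var] / base["variables"][var] - 1) / eps
    print(var, "w.r.t. K_m:", sens)
```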
I suspect that these sensitivities could be extracted from the A matrix and (nu, lambda) elegantly via some simple linear algebra.
I know that @chriscourtin and @dkhall are pretty interested in this.