
Diagnostics tool for ill-posed constraints #1454

Open · wants to merge 29 commits into base: main

Conversation

andrewlee94
Member

Fixes None

Summary/Motivation:

As part of working on the new scaling tools, I realised there are some simple checks we can do to detect poorly-posed constraints that could cause scaling issues. This PR adds a new expression walker that looks for the following signs of poor scaling in constraints (both patterns are sketched below):

  • sum expressions whose terms differ significantly in magnitude.
  • sum expressions where terms cancel out (with a catch for cases of constant == sum()).
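
For illustration, a minimal sketch of both patterns in plain Pyomo (the constraint bodies and magnitudes are invented for the example):

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.F = pyo.Var(initialize=1e6)   # large term
m.x = pyo.Var(initialize=1e-6)  # very small term
m.a = pyo.Var(initialize=1e7)
m.b = pyo.Var(initialize=1e7)

# (1) mismatched magnitudes: terms differ by ~12 orders of magnitude
m.mismatched = pyo.Constraint(expr=m.F + m.x == 1e6)

# (2) cancellation: two large terms that nearly cancel to a small residual
m.cancelling = pyo.Constraint(expr=m.a - m.b == 100)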

Changes proposed in this PR:

Legal Acknowledgement

By contributing to this software project, I agree to the following terms and conditions for my contribution:

  1. I agree my contributions are submitted under the license terms described in the LICENSE.txt file at the top level of this directory.
  2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

@andrewlee94 andrewlee94 self-assigned this Jul 22, 2024
@andrewlee94 andrewlee94 added enhancement, Priority:Normal, and diagnostics labels Jul 22, 2024
@codecov-commenter

codecov-commenter commented Jul 22, 2024

Codecov Report

Attention: Patch coverage is 84.81013% with 24 lines in your changes missing coverage. Please review.

Project coverage is 76.38%. Comparing base (2b41748) to head (1048b32).

Files Patch % Lines
idaes/core/util/model_diagnostics.py 84.81% 15 Missing and 9 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1454      +/-   ##
==========================================
+ Coverage   76.37%   76.38%   +0.01%     
==========================================
  Files         393      393              
  Lines       65085    65242     +157     
  Branches    14426    14466      +40     
==========================================
+ Hits        49708    49838     +130     
- Misses      12815    12834      +19     
- Partials     2562     2570       +8     


@dallan-keylogic
Contributor

@Robbybp, is there any possibility of combining a tool like this with block triangularization? My intuition is that a constraint in which catastrophic cancellation occurs (taking the difference between two large numbers to get a small number) might show up differently in the block structure than one in which it does not (adding a small number to a large number to get another large number), even though the individual constraints might be identical.

@Robbybp
Member

Robbybp commented Jul 23, 2024

@dallan-keylogic My only initial thought is that a "catastrophic cancellation" has different implications depending on where it appears in the block triangular decomposition. If it is in a diagonal block, it could be a problem (as we're relying on that term for nonsingularity). Otherwise, it is fine, as the entry could be zero and the matrix's singularity would not change.

@dallan-keylogic
Contributor

Okay, a very basic example to see the issues at play. We're not concerned when we add a small number to a big number to get another big number:
[image: first system of equations]
This system has the matrix representation
[image: matrix for the first system]
However, the situation is different when we take the difference of two big numbers to get a small number:
[image: second system of equations]
This system has the matrix representation
[image: matrix for the second system]

Looking at these two systems, there doesn't seem to be any structural difference between them. The variable b is chosen to be a pivot in both cases. However, when we define a scaled variable
[image: scaled variable definition]
we get a different picture.

The first system then becomes
[image: rescaled first system]
with a matrix representation
[image: matrix for the rescaled first system]
whose system matrix is well-conditioned with determinant -1.

The second system, on the other hand, becomes
[image: rescaled second system]
with a matrix representation
[image: matrix for the rescaled second system]
whose system matrix is ill-conditioned with determinant -epsilon. It can be shown that the condition number blows up in the limit as epsilon goes to zero.
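
A numerical sketch of the same phenomenon (illustrative 2x2 matrices, chosen only to reproduce the determinant -1 versus -epsilon behaviour described above, not the systems in the images):

import numpy as np

eps = 1e-10

# benign case: a small entry added to a well-conditioned system; det ~ -1
A_good = np.array([[1.0, eps], [1.0, -1.0]])

# cancellation case: two nearly identical rows; det = -eps
A_bad = np.array([[1.0, -1.0], [1.0, -(1.0 + eps)]])

print(np.linalg.det(A_good), np.linalg.cond(A_good))  # ~ -1,     ~ 2.6
print(np.linalg.det(A_bad), np.linalg.cond(A_bad))    # ~ -1e-10, ~ 4e10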

@Robbybp
Member

Robbybp commented Jul 23, 2024

@dallan-keylogic I would say that scaling b by 1/eps is not the right choice here. A block triangularization on the scaled system could definitely identify that. We could potentially identify that b, as a column-singleton, should not be scaled up.

@dallan-keylogic
Contributor

@Robbybp Whether or not we should scale by 1/eps requires a judgement call about what precision we need for b. If having b to the same absolute precision as a and c is sufficient, we should not scale it. If we need b to the same relative precision as a and c, we must scale it. We should scale it when, for example, we're then going to divide it by a small number as part of a finite difference approximation to a derivative (note, @andrewlee94, this sort of action is taken in the ControlVolume1D).
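
A quick numeric illustration of that distinction (all values invented for the example):

# a finite-difference quotient amplifies any absolute error in b by 1/dx
b, abs_err, dx = 1e-6, 1e-8, 1e-4
deriv_scale = b / dx       # ~1e-2: the size of the quotient itself
deriv_err = abs_err / dx   # ~1e-4: ~1% relative error in the quotient
# if b were instead held to the same relative precision as a and c
# (absolute error ~1e-8 * b = 1e-14), the quotient's error would be ~1e-10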

@andrewlee94 andrewlee94 marked this pull request as ready for review August 9, 2024 19:17
Contributor

@jsiirola jsiirola left a comment

Overall looks pretty reasonable. Some questions and comments (that should probably be addressed)

Comment on lines 1091 to 1096
for i in mm:
    mismatch.append(f"{c.name}: {i}")
for i in cc:
    cancellation.append(f"{c.name}: {i}")
if k:
    constant.append(c.name)
Contributor

I would be happier if the "collect" routine didn't do formatting (conversion to a string).

Member Author

This one would take some more work and require changing other collect methods as well. I think that might be better as a separate issue/PR.

Comment on lines +4523 to +4526
if (
    hasattr(node, "is_named_expression_type")
    and node.is_named_expression_type()
):
Contributor

I have stumbled across a syntax that is a bit more concise. Not sure if you want to use it, though:

if getattr(node, "is_named_expression_type", bool)():
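
This works because getattr falls back to the bool type itself when the attribute is missing, and calling bool with no arguments returns False. A minimal illustration (Plain is a hypothetical class):

class Plain:
    pass

assert getattr(Plain(), "is_named_expression_type", bool)() is False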

Member Author

I think I'll keep the current form as it is a little easier to understand what it is doing.

Comment on lines 4429 to 4434
# We will check for cancelling terms here, rather than the sum itself, to handle special cases
# We want to look for cases where a sum term results in a value much smaller
# than the terms of the sum
sums = self._sum_combinations(d[0])
if any(i <= self._sum_tol * max(d[0]) for i in sums):
    cancelling.append(str(node))
Contributor

I think I understand what you are doing here, but wouldn't this complain loudly about the objective for every parameter estimation problem (MSE) if the problem actually solved?

Contributor

One of my main concerns with this tool is its inability to distinguish between true cases of catastrophic cancellation and benign situations where, for example, we have a heat of adsorption term in a column which will be close to zero except near the breakthrough point.

Member Author

Is there an easy way to test for that however? Note that this tool is only used for Cautions in the main toolbox, with the implication that these might be issues (but not guaranteeing that).

I.e., this is intended to be a simple check to find potential issues, but the user will have to look into them all further to decide if they are critical or not.

Contributor

I'm not sure there's an easy way to do it without assuming a well-scaled system. In a well-scaled system, it would show up in the SVD with extremely large or extremely small singular values.

However, once you get to a high enough noise/signal ratio, you start to question whether including the tool in report_numerical_issues is worthwhile or whether it is distracting the user from more fruitful avenues of diagnostics. I suppose we can just pull it from report_numerical_issues without a deprecation cycle if it proves not to be useful, so we can try it out and see how users find it.

Member Author

@jsiirola This walker was written specifically with Constraints in mind, so I had not considered that. The check for mismatched terms would make sense for an Objective however, so this could be extended to handle them as well. However, how would the expression walker know if it was dealing with an Objective or a Constraint (the input argument is the expression, not the component)?

Comment on lines 4444 to 4446
# (In)equality expressions are a special case of sum expressions
# We can start by just calling the method to check the sum expression
vals, mismatch, cancelling, const = self._check_sum_expression(node, child_data)
Contributor

Don't you need to negate the values for the second argument in the relational expression before treating it like a sum?

Member Author

I had forgotten to make that correction - thank you.
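
In other words, the fix amounts to negating the values collected from the right-hand child before running the sum checks, roughly (hypothetical variable names):

# lhs == rhs is analysed as the sum lhs + (-1)*rhs
vals = lhs_vals + [-v for v in rhs_vals]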

node_type_method_map = {
    EXPR.EqualityExpression: _check_equality_expression,
    EXPR.InequalityExpression: _check_equality_expression,
    EXPR.RangedExpression: _check_sum_expression,
Contributor

Don't you need special handling for Ranged (like Equality / Inequality)? I would probably define a

def _check_ranged(self, node, child_data):
    lhs_vals, lhs_mismatch, lhs_cancelling, lhs_const = self._check_equality(node, child_data[:2])
    rhs_vals, rhs_mismatch, rhs_cancelling, rhs_const = self._check_equality(node, child_data[1:])
    # Then merge the results and return

Member Author

I think I've fixed this - it would be good if you could double check my logic however.

Comment on lines 4345 to 4352
def _check_product(self, node, child_data):
    mismatch, cancelling, const = self._perform_checks(node, child_data)

    val = self._get_value_for_sum_subexpression(
        child_data[0]
    ) * self._get_value_for_sum_subexpression(child_data[1])

    return [val], mismatch, cancelling, const
Contributor

Almost all of your nonlinear handlers could be replaced by a single callback:

def _check_general_expr(self, node, child_data):
    mismatch, cancelling, const = self._perform_checks(node, child_data)
    val = node._apply_operation(list(map(self._get_value_for_sum_subexpression, child_data)))
    return [val], mismatch, cancelling, const

Member Author

Thank you - I did not know I could do that.

Contributor

@dallan-keylogic dallan-keylogic left a comment

Something that occurred to me is that it might be useful for the user to call this not on the entire model, but on a subset of constraints (perhaps identified by the SVD toolbox, perhaps identified some other way). That probably doesn't need to be present during the initial rollout, but is something to keep in mind.

Comment on lines +1029 to +1031
m.c1 = Constraint(expr=m.v1 == m.v2)
m.v1.fix()
m.v2.fix()
Contributor

Do we have a test case for a standard linking constraint like this with the variables unfixed? We want to make sure that arc constraints aren't going to get lumped in here.

Member Author

As it stands, Arc constraints will get lumped in here, and in a strict sense they should be (as they can be trivially eliminated). From a structural standpoint, however, we probably want to ignore these.

What do we want to allow? One option would be the special case of a=b where a and b are leaf nodes (i.e. not expressions themselves).

Member Author

@dallan-keylogic I believe I have fixed this - the cancellation checks are only run if at least one of the sides in an (in)equality is a sum expression. This means that constraints with the form a == b and a == f(b) will not be flagged as cancelling.

Member Author

Also, regarding your initial comment, a tool to do that could be written fairly easily, as it would just iterate over a ComponentSet containing the Constraints of interest and call this walker for each Constraint. This walker is not part of the DiagnosticsToolbox class, so it can easily be called from other places too (such as the SVDToolbox) if we can come up with a way to identify a set of Constraints of interest.

@ksbeattie ksbeattie requested review from mrmundt and Robbybp and removed request for Robbybp, bknueven and lbianchi-lbl September 5, 2024 18:37
Contributor

@mrmundt mrmundt left a comment

I did not review the test cases yet because I do have some suggestions for changes in the main file that may require changes in test cases.

def _sum_combinations(self, values_list):
    sums = []
    for i in chain.from_iterable(
        combinations(values_list, r) for r in range(2, len(values_list) + 1)
Contributor

2 is a magic number! Why 2?

Member Author

We are looking for any combination of terms which cancel, thus the minimum number of terms to consider is 2 (a single term cannot cancel with itself). I can add a comment.

Contributor

What this block of code does is go through combinations of terms in an expression, first in groups of 2, then in groups of 3, all the way up to groups of len(values_list). So if we have the sum expression a+b+c+d, we first iterate through (a, b), (a, c), (a, d), (b, c), (b, d), (c, d), then iterate through (a, b, c), (a, b, d), (a, c, d), (b, c, d), then (a, b, c, d).

However, the number of combinations you're checking grows exponentially with expression size. In particular, if len(values_list) == m, you'll be checking 2**m - m - 1 combinations. I expect that this sort of check will take an extremely long time on any model with Expressions of any significant length, much less an extreme chonker like eNRTL.

Member Author

We can probably make this more efficient by (see the sketch after this list):

  1. Stripping any term with a value of 0
  2. Breaking the loop at the first failure - we do not count how many cancellations there are, just whether there is at least one.
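
A minimal sketch of both ideas combined (a hypothetical standalone helper, not the PR's implementation):

from itertools import chain, combinations

def has_cancellation(values, tol):
    values = [v for v in values if v != 0]  # (1) strip zero-valued terms
    if len(values) < 2:
        return False
    cutoff = tol * max(abs(v) for v in values)
    for combo in chain.from_iterable(
        combinations(values, r) for r in range(2, len(values) + 1)
    ):
        if abs(sum(combo)) <= cutoff:
            return True  # (2) stop at the first cancellation found
    return False

Both changes help the common case, though the worst case remains exponential in the number of terms.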

idaes/core/util/model_diagnostics.py Outdated Show resolved Hide resolved
idaes/core/util/model_diagnostics.py Outdated Show resolved Hide resolved
idaes/core/util/model_diagnostics.py Show resolved Hide resolved
@andrewlee94
Member Author

@mrmundt I think I have addressed all of your comments. If you have time to do another review, it would be appreciated.

@andrewlee94
Member Author

@mrmundt Thank you - I fixed the leftover print statements.
