Fix piecewise/Heaviside handling #2234

Merged (14 commits, Dec 14, 2023)
245 changes: 0 additions & 245 deletions python/examples/example_errors.ipynb
@@ -840,251 +840,6 @@
")\n",
"assert all(rdata.status == amici.AMICI_SUCCESS for rdata in res[RDATAS])"
]
},
{
"cell_type": "markdown",
"id": "4b977c0b",
"metadata": {},
"source": [
"## `Steady state computation failed`\n",
"\n",
"Let's run a simulation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "97f797dd",
"metadata": {},
"outputs": [],
"source": [
"petab_problem = benchmark_models_petab.get_problem(\"Brannmark_JBC2010\")\n",
"amici_model = import_petab_problem(\n",
" petab_problem,\n",
" verbose=False,\n",
")\n",
"\n",
"amici_solver = amici_model.getSolver()\n",
"\n",
"\n",
"np.random.seed(1851)\n",
"problem_parameters = dict(\n",
" zip(\n",
" petab_problem.x_free_ids,\n",
" petab_problem.sample_parameter_startpoints(n_starts=1)[0],\n",
" )\n",
")\n",
"res = simulate_petab(\n",
" petab_problem=petab_problem,\n",
" amici_model=amici_model,\n",
" problem_parameters=problem_parameters,\n",
" scaled_parameters=True,\n",
" solver=amici_solver,\n",
")\n",
"\n",
"print(\n",
" \"Status:\",\n",
" [amici.simulation_status_to_str(rdata.status) for rdata in res[RDATAS]],\n",
")\n",
"\n",
"# hard to reproduce on GHA\n",
"if os.getenv(\"GITHUB_ACTIONS\") is None:\n",
" assert [\n",
" amici.simulation_status_to_str(rdata.status) for rdata in res[RDATAS]\n",
" ] == [\n",
" \"AMICI_ERROR\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" ]"
]
},
{
"cell_type": "markdown",
"id": "0713b830",
"metadata": {},
"source": [
"**What happened?**\n",
"\n",
"All given experimental conditions require pre-equilibration, i.e., finding a steady state. AMICI first tries to find a steady state using the Newton solver, if that fails, it tries simulating until steady state, if that also fails, it tries the Newton solver from the end of the simulation. In this case, all three failed. Neither Newton's method nor simulation yielded a steady state satisfying the required tolerances.\n",
"\n",
"This can also be seen in `ReturnDataView.preeq_status` (the three statuses corresponds to Newton \\#1, Simulation, Newton \\#2):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ffdc8e82",
"metadata": {},
"outputs": [],
"source": [
"rdata = res[RDATAS][0]\n",
"list(map(amici.SteadyStateStatus, rdata.preeq_status.flatten()))"
]
},
{
"cell_type": "markdown",
"id": "189dd964",
"metadata": {},
"source": [
"**How to address?**\n",
"\n",
"There are several ways to address that:\n",
"\n",
"1. Stricter integration tolerances (preferred if affordable - higher accuracy, but generally slower)\n",
"\n",
"2. Looser steady-state tolerances (lower accuracy, generally faster)\n",
"\n",
"3. Increase the number of allowed steps for Newton's method\n",
"\n",
"Let's try all of them:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "28fada9f",
"metadata": {},
"outputs": [],
"source": [
"# Reduce relative tolerance for integration\n",
"amici_solver = amici_model.getSolver()\n",
"amici_solver.setRelativeTolerance(\n",
" 1 / 100 * amici_solver.getRelativeTolerance()\n",
")\n",
"\n",
"res = simulate_petab(\n",
" petab_problem=petab_problem,\n",
" amici_model=amici_model,\n",
" problem_parameters=problem_parameters,\n",
" scaled_parameters=True,\n",
" solver=amici_solver,\n",
")\n",
"print(\n",
" \"status:\",\n",
" [amici.simulation_status_to_str(rdata.status) for rdata in res[RDATAS]],\n",
")\n",
"\n",
"rdata = res[RDATAS][0]\n",
"print(\n",
" f\"preeq_status={list(map(amici.SteadyStateStatus, rdata.preeq_status.flatten()))}\"\n",
")\n",
"print(f\"{rdata.preeq_numsteps=}\")\n",
"\n",
"# hard to reproduce on GHA\n",
"if os.getenv(\"GITHUB_ACTIONS\") is None:\n",
" assert all(rdata.status == amici.AMICI_SUCCESS for rdata in res[RDATAS])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a6d34467",
"metadata": {},
"outputs": [],
"source": [
"# Increase relative steady state tolerance\n",
"for log10_relaxation_factor in range(1, 10):\n",
" print(f\"Relaxing tolerances by factor {10 ** log10_relaxation_factor}\")\n",
" amici_solver = amici_model.getSolver()\n",
" amici_solver.setRelativeToleranceSteadyState(\n",
" amici_solver.getRelativeToleranceSteadyState()\n",
" * 10**log10_relaxation_factor\n",
" )\n",
"\n",
" res = simulate_petab(\n",
" petab_problem=petab_problem,\n",
" amici_model=amici_model,\n",
" problem_parameters=problem_parameters,\n",
" scaled_parameters=True,\n",
" solver=amici_solver,\n",
" )\n",
" if all(rdata.status == amici.AMICI_SUCCESS for rdata in res[RDATAS]):\n",
" print(\n",
" f\"-> Succeeded with relative steady state tolerance {amici_solver.getRelativeToleranceSteadyState()}\\n\"\n",
" )\n",
" break\n",
" else:\n",
" print(\"-> Failed.\\n\")\n",
"\n",
"print(\n",
" \"status:\",\n",
" [amici.simulation_status_to_str(rdata.status) for rdata in res[RDATAS]],\n",
")\n",
"\n",
"rdata = res[RDATAS][0]\n",
"print(\n",
" f\"preeq_status={list(map(amici.SteadyStateStatus, rdata.preeq_status.flatten()))}\"\n",
")\n",
"print(f\"{rdata.preeq_numsteps=}\")\n",
"assert all(rdata.status == amici.AMICI_SUCCESS for rdata in res[RDATAS])"
]
},
{
"cell_type": "markdown",
"id": "6d1b1835",
"metadata": {},
"source": [
"That fixed the error, and took only a quarter of the number steps as the previous run, but at the cost of much lower accuracy."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "df1ee3fc",
"metadata": {},
"outputs": [],
"source": [
"# Let's try increasing the number of Newton steps\n",
"# (this is 0 by default, so the Newton solver wasn't used before,\n",
"# as can be seen from the 0 in `rdata.preeq_numsteps[0]`)\n",
"amici_solver = amici_model.getSolver()\n",
"amici_solver.setNewtonMaxSteps(10**4)\n",
"\n",
"res = simulate_petab(\n",
" petab_problem=petab_problem,\n",
" amici_model=amici_model,\n",
" problem_parameters=problem_parameters,\n",
" scaled_parameters=True,\n",
" solver=amici_solver,\n",
")\n",
"print(\n",
" \"status:\",\n",
" [amici.simulation_status_to_str(rdata.status) for rdata in res[RDATAS]],\n",
")\n",
"\n",
"rdata = res[RDATAS][0]\n",
"print(\n",
" f\"preeq_status={list(map(amici.SteadyStateStatus, rdata.preeq_status.flatten()))}\"\n",
")\n",
"print(f\"{rdata.preeq_numsteps=}\")\n",
"# hard to reproduce on GHA\n",
"if os.getenv(\"GITHUB_ACTIONS\") is None:\n",
" assert [\n",
" amici.simulation_status_to_str(rdata.status) for rdata in res[RDATAS]\n",
" ] == [\n",
" \"AMICI_ERROR\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" \"AMICI_NOT_RUN\",\n",
" ]"
]
},
{
"cell_type": "markdown",
"id": "5c746982",
"metadata": {},
"source": [
"Increasing the maximum number of Newton steps doesn't seem to help here. The Jacobian was numerically singular and its factorization failed. This can be a result of conserved quantities in the model. Section [Steady state sensitivity computation failed due to unsuccessful factorization of RHS Jacobian](#unsuccessful_factorization) shows how to address that."
]
}
],
"metadata": {
10 changes: 6 additions & 4 deletions python/sdist/amici/de_export.py
@@ -58,8 +58,8 @@
generate_flux_symbol,
smart_subs_dict,
strip_pysb,
-symbol_with_assumptions,
toposort_symbols,
+unique_preserve_order,
)
from .logging import get_logger, log_execution_time, set_log_level

@@ -2754,7 +2754,7 @@
heavisides = []
# run through the expression tree and get the roots
tmp_roots_old = self._collect_heaviside_roots(dt_expanded.args)
-for tmp_old in tmp_roots_old:
+for tmp_old in unique_preserve_order(tmp_roots_old):
# we want unique identifiers for the roots
tmp_new = self._get_unique_root(tmp_old, roots)
# `tmp_new` is None if the root is not time-dependent.
@@ -2767,9 +2767,11 @@
if heavisides:
# only apply subs if necessary
for heaviside_sympy, heaviside_amici in heavisides:
-dxdt = dxdt.subs(heaviside_sympy, heaviside_amici)
+dt_expanded = dt_expanded.subs(
+    heaviside_sympy, heaviside_amici
+)

[Codecov / codecov/patch warning: added line python/sdist/amici/de_export.py#L2770 was not covered by tests.]
Member:
was the issue that expand led to manipulation of the trigger function which was then missed during this substitution?

Not super happy that we manipulate the rhs by applying expand here as I think we discussed at some point we want to keep simplification optional. I don't see why the expand is necessary here in the first place. Looks like this was introduced in #1352

Member Author:
> was the issue that expand led to manipulation of the trigger function which was then missed during this substitution?

Correct.

> Not super happy that we manipulate the rhs by applying expand here as I think we discussed at some point we want to keep simplification optional. I don't see why the expand is necessary here in the first place. Looks like this was introduced in #1352

        # expanding the rhs will in general help to collect the same
        # heaviside function

So I guess, not expanding the expression may lead to more root functions to be tracked than strictly necessary in certain cases. I don't know how common this issue really is, and how much of a performance impact each trigger function has.

I am open to dropping expand.

Member Author:
> Looks like this was introduced in #1352

Specifically in f27004f. Doesn't really provide more context, though.

Member:
> I am open to dropping expand.

But we are anyways calling sp.simplify in _get_unique_root. If expand really makes a difference, we could do sp.simplify(root_found.expand() - root.get_val()), but I would be surprised if that's really the case. We can map multiple Heaviside functions to the same root finding function, so I would prefer keeping all symbolic manipulations in _get_unique_root.

Member Author:
> If expand really makes a difference

I don't really have any evidence.

I find it hard to imagine though that we'd get a relevant performance hit from tracking e.g. 3 equivalent root functions instead of just one.

I'm fine with either. Just dropping the expand requires fewest changes.

Member Author (@dweindl, Dec 13, 2023):
Well, I don't think the expand ever really improved anything in terms of reducing the number of root functions. Either it would simply have had no effect at all (i.e. the same roots would have been extracted with or without expand), or it would have prevented replacing the Heavisides because the to-be-substituted subexpressions weren't found in the original expression. So I'd say it's safe to drop it completely.
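To make the failure mode concrete, here is a toy sympy sketch (symbols chosen for illustration only, not taken from the AMICI sources): expand() rewrites the Heaviside argument, so a substitution derived from the expanded expression no longer matches the unexpanded original.

    import sympy as sp

    a, b, x = sp.symbols("a b x")
    w = sp.Symbol("w")  # stand-in for an AMICI root-function symbol

    orig = sp.Heaviside(x * (a + b))
    expanded = sp.expand(orig)  # Heaviside(a*x + b*x): argument rewritten

    # Substituting the expanded Heaviside misses the factored original ...
    assert orig.subs(expanded, w) == orig
    # ... but matches the expanded expression, as in the fix above.
    assert expanded.subs(expanded, w) == w

    # Equivalence of two root candidates can still be detected symbolically,
    # along the lines suggested for _get_unique_root:
    assert sp.simplify(sp.expand(x * (a + b)) - (a * x + b * x)) == 0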

-return dxdt
+return dt_expanded


class DEExporter:
15 changes: 15 additions & 0 deletions python/sdist/amici/import_utils.py
@@ -734,5 +734,20 @@ def strip_pysb(symbol: sp.Basic) -> sp.Basic:
return symbol


def unique_preserve_order(seq: Sequence) -> list:
    """Return a list of unique elements in Sequence, keeping only the first
    occurrence of each element.

    Parameters:
        seq: Sequence to prune

    Returns:
        List of unique elements in ``seq``
    """
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]


sbml_time_symbol = symbol_with_assumptions("time")
amici_time_symbol = symbol_with_assumptions("t")
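For illustration, a minimal sketch of the new helper's behavior (hypothetical snippet; the import path assumes the module layout shown in this diff):

    from amici.import_utils import unique_preserve_order

    # Unlike set(), the first-seen order of the elements is kept.
    assert unique_preserve_order([3, 1, 3, 2, 1]) == [3, 1, 2]

Binding seen.add to a local name before the comprehension is a common CPython micro-optimization that avoids a repeated attribute lookup on each iteration.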
2 changes: 1 addition & 1 deletion python/sdist/amici/sbml_import.py
@@ -38,7 +38,6 @@
DEModel,
_default_simplify,
smart_is_zero_matrix,
-symbol_with_assumptions,
)
from .import_utils import (
RESERVED_SYMBOLS,
@@ -54,6 +53,7 @@
sbml_time_symbol,
smart_subs,
smart_subs_dict,
+symbol_with_assumptions,
toposort_symbols,
)
from .logging import get_logger, log_execution_time, set_log_level
2 changes: 2 additions & 0 deletions tests/benchmark-models/test_petab_benchmark.py
(file mode changed: 100755 → 100644)
@@ -38,6 +38,8 @@
# excluded due to excessive numerical failures
"Crauste_CellSystems2017",
"Fujita_SciSignal2010",
+# fails for unclear reasons on GHA
+"Giordano_Nature2020",
)
]

Expand Down