
Reduce memory usage in reinforcement learning #485

Merged — 10 commits merged into master on Nov 5, 2024

Conversation

harisorgn
Member

Uses the `saveat` keyword in `run_experiment!` in order to reduce memory usage. For large systems of stiff equations, my 16 GB of RAM maxes out even with a moderate `tspan` (~100).

Also reorganizes the `run_experiment!` dispatches to call `run_trial!` internally.

We could reduce memory even further by also using `save_idxs`, since we do not currently need all states saved. However, this does not seem to work together with time interpolation and indexing of observed states; I will investigate further in another PR.
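The memory effect of `saveat` can be sketched with a plain fixed-step Euler loop (no packages; `euler_solve` and its arguments are a hypothetical stand-in for an ODE solver, not the actual `run_experiment!` or DifferentialEquations.jl API): the integrator still takes every internal step, but only the states at the requested save times are stored, so the saved trajectory stays small regardless of step count.

```julia
# Sketch of why `saveat` reduces memory: with no `saveat`, one state copy is
# stored per internal step; with `saveat`, only states at the requested times
# are kept. `euler_solve` is illustrative only, not the package API.
function euler_solve(f, u0, tspan; dt=1e-3, saveat=nothing)
    t, u = tspan[1], copy(u0)
    ts, us = [t], [copy(u)]
    save_times = saveat === nothing ? Float64[] : collect(Float64, saveat)
    while t < tspan[2]
        u .+= dt .* f(u, t)                      # one explicit Euler step
        t += dt
        if saveat === nothing
            push!(ts, t); push!(us, copy(u))     # dense output: every step stored
        elseif !isempty(save_times) && t >= first(save_times)
            popfirst!(save_times)
            push!(ts, t); push!(us, copy(u))     # sparse output: only at `saveat` times
        end
    end
    return ts, us
end

decay(u, t) = -u                                 # trivial test system u' = -u
ts_dense,  us_dense  = euler_solve(decay, [1.0], (0.0, 10.0))
ts_sparse, us_sparse = euler_solve(decay, [1.0], (0.0, 10.0); saveat=1.0:1.0:10.0)
# Dense saving stores thousands of state copies; `saveat` keeps only ~11.
```

For a large stiff system each stored state copy is the full state vector, so saving only at the times the learning rules and actions actually need cuts the solution's memory footprint by orders of magnitude.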

@harisorgn harisorgn merged commit a77cb63 into master Nov 5, 2024
6 checks passed
@harisorgn harisorgn deleted the ho/run_experiment_memory_fix branch November 5, 2024 18:51
david-hofmann pushed a commit that referenced this pull request Nov 11, 2024
* move `t_warmup` to kwargs

* more RL save test

* add `run_warmup` function

* update `run_trial!` function

* add getter functions for states and times when actions and learning rules are evaluated

* update `run_experiment!` dispatches

* move RL tests in a single file

* remove `save_idxs` kwarg

* import `CSV.write` for RL save