
Add action, reward and obs wrappers #311

Draft · wants to merge 4 commits into main
Conversation

belerico (Member)
Summary

This PR adds various env wrappers.

Type of Change

Please select the relevant option below:

  • Bug fix (non-breaking change that solves an issue)
  • New feature (non-breaking change that adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Other (please describe):

Checklist

Please confirm that the following tasks have been completed:

  • I have tested my changes locally and they work as expected. (Please describe the tests you performed.)
  • I have added unit tests for my changes, or updated existing tests if necessary.
  • I have updated the documentation, if applicable.
  • I have installed pre-commit and run it locally on my code changes.

Screenshots or Visuals (Optional)

If applicable, please provide screenshots, diagrams, graphs, or videos of the changes, features, or errors.

Additional Information (Optional)

Please provide any additional information that may be useful for the reviewer, such as:

  • Any potential risks or challenges associated with the changes.
  • Any instructions for testing or running the code.
  • Any other relevant information.

Thank you for your contribution! Once you have filled out this template, please ensure that you have assigned the appropriate reviewers and that all tests have passed.

@belerico (Member, Author)

@michele-milesi One problem is the observation-normalization statistics: if one wants to test an algorithm trained with normalized observations, then the same statistics presumably need to be applied to the test env as well. A simple solution would be to pass the same env used for training to the test function, but this does not solve the offline-test case. To solve that, we should maybe also save the RunningMeanStd. What do you think?
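A minimal sketch of that idea (my assumption, not the PR's actual code): keep the RunningMeanStd statistics in a small class, save them to disk at the end of training, and restore them before building the offline test env so evaluation uses the same mean/std seen during training.

```python
import numpy as np

class RunningMeanStd:
    """Running mean/variance of observations (Chan's parallel update)."""

    def __init__(self, shape=(), epsilon=1e-4):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = epsilon

    def update(self, batch):
        batch = np.asarray(batch, dtype=np.float64)
        b_mean, b_var, b_count = batch.mean(0), batch.var(0), batch.shape[0]
        delta = b_mean - self.mean
        total = self.count + b_count
        self.mean = self.mean + delta * b_count / total
        m2 = self.var * self.count + b_var * b_count \
            + delta**2 * self.count * b_count / total
        self.var, self.count = m2 / total, total

# Save the statistics at the end of training...
rms = RunningMeanStd(shape=(4,))
rms.update(np.random.default_rng(0).normal(2.0, 3.0, size=(1000, 4)))
np.savez("obs_rms.npz", mean=rms.mean, var=rms.var, count=rms.count)

# ...and restore them when building the offline test env.
ckpt = np.load("obs_rms.npz")
test_rms = RunningMeanStd(shape=(4,))
test_rms.mean = ckpt["mean"]
test_rms.var = ckpt["var"]
test_rms.count = float(ckpt["count"])
```

For reference, Gymnasium's `NormalizeObservation` wrapper keeps exactly these statistics in its `obs_rms` attribute, so the same save/restore could target that object directly.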

@belerico (Member, Author)

Another issue: the obs and reward normalization is done per-env, since the wrappers are created inside the make_env method and then called in the agent code by SyncVectorEnv or AsyncVectorEnv. Do we want to keep the normalization independent per env, or apply it to the overall vector env?
For reference
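A toy numpy illustration (my own sketch, not the PR's code) of why the choice matters: with per-env statistics each environment is normalized against its own distribution, while a single normalizer applied to the batch returned by the vector env pools all environments together.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two envs with very different observation scales, 500 steps each:
# env 0 ~ N(0, 1), env 1 ~ N(10, 5).
obs = np.stack(
    [rng.normal(0.0, 1.0, 500), rng.normal(10.0, 5.0, 500)], axis=1
)  # shape (T, num_envs)

# Per-env statistics: each env is normalized with its own mean/std.
per_env_norm = (obs - obs.mean(axis=0)) / obs.std(axis=0)

# Global statistics: one mean/std pooled over the whole batch, as if a
# single normalizer wrapped the vector env's step() output.
global_mean, global_std = obs.mean(), obs.std()
global_norm = (obs - global_mean) / global_std
```

Per-env statistics drive every env's normalized observations toward zero mean, while the pooled normalizer leaves heterogeneous envs off-center; which behaviour is preferable depends on whether the vectorized envs share one observation distribution.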

@belerico belerico marked this pull request as draft July 10, 2024 20:25
@michele-milesi (Member)

I was thinking of creating custom normalizers with a standard format (that works with numpy arrays), e.g., a class that must define 3 methods:

  • __call__() or normalize() to apply the normalization.
  • state_dict() to save the state of the normalizer.
  • load_state_dict() to load the state of the normalizer (as for torch modules).

I propose to normalize the observations returned by the env.step() function; this way we do not have a normalizer for each environment, but one global normalizer.

@belerico what do you think?
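The proposed interface might look roughly like this (a sketch under my own assumptions about names and details): a numpy-based normalizer exposing `__call__`, `state_dict`, and `load_state_dict`, applied once to the batched observations returned by the vector env's `step()`, so a single global set of statistics is shared by all environments.

```python
import numpy as np

class ObsNormalizer:
    """Global observation normalizer for a vectorized env."""

    def __init__(self, shape, epsilon=1e-8):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = 0
        self.epsilon = epsilon

    def __call__(self, obs, update=True):
        # obs has shape (num_envs, *shape): the stacked batch from
        # vector_env.step(), so all envs share one set of statistics.
        obs = np.asarray(obs, dtype=np.float64)
        if update:
            self._update(obs)
        return (obs - self.mean) / np.sqrt(self.var + self.epsilon)

    def _update(self, batch):
        # Batched running-moments update over the leading (env) axis.
        b_mean, b_var, n = batch.mean(axis=0), batch.var(axis=0), batch.shape[0]
        delta = b_mean - self.mean
        total = self.count + n
        self.mean = self.mean + delta * n / total
        m2 = self.var * self.count + b_var * n \
            + delta**2 * self.count * n / total
        self.var = m2 / total
        self.count = total

    def state_dict(self):
        return {"mean": self.mean, "var": self.var, "count": self.count}

    def load_state_dict(self, state):
        self.mean = np.asarray(state["mean"], dtype=np.float64)
        self.var = np.asarray(state["var"], dtype=np.float64)
        self.count = int(state["count"])
```

In the agent loop this would be something like `obs, reward, done, info = envs.step(actions)` followed by `obs = normalizer(obs)`; at evaluation time, `load_state_dict` restores the training statistics and the normalizer is called with `update=False`.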
