
RLlib: env must be one of following supported types issue #10

Open
kirmiziorman opened this issue Feb 20, 2023 · 2 comments

@kirmiziorman

Hello @maxpumperla ,

Thank you for the amazing book, it's been a real pleasure working through it so far!

I am, however, having a problem with evaluating the maze game using RLlib in chapter 4. When I get to the command

rllib evaluate ~/ray_results/maze_env/ --algo DQN --env maze_gym_env.Environment --steps 100

I get the following error:

ValueError: Env must be of one of the following supported types: BaseEnv, gym.Env, MultiAgentEnv, VectorEnv, 
RemoteBaseEnv, ExternalMultiAgentEnv, ExternalEnv, but instead is of type <class 'maze_gym_env.Environment'>.

The above error has been found in your environment! We've added a module for checking your custom environments. It may 
cause your experiment to fail if your environment is not set up correctly. You can disable this behavior by setting 
'disable_env_checking=True' in your environment config dictionary. You can run the environment checking module 
standalone by calling ray.rllib.utils.check_env([env]).

I don't understand why I am getting this error, as I followed your instructions to the letter. It's also not clear where the config dictionary is supposed to be stored.

I was wondering if you could help me resolve this issue?

Thank you!

@kirmiziorman
Author

I have just checked, and the problem also appears when running the notebook in Colab.

@maxpumperla
Owner

@kirmiziorman thanks so much for the feedback and for spotting this mistake. I can't look into it today, but from the warning/error message it seems this can at least be turned off by setting disable_env_checking=True.
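
In case it helps, here is a minimal sketch of where that flag could go, assuming the Ray 2.x `AlgorithmConfig` API; with the older plain config dict it would be a top-level `"disable_env_checking": True` entry instead. I haven't tested this against the book's exact setup:

```python
from ray.rllib.algorithms.dqn import DQNConfig

from maze_gym_env import Environment  # the custom maze env from the chapter's code

# Sketch only: turn off RLlib's environment pre-checking for this experiment.
config = (
    DQNConfig()
    .environment(env=Environment, disable_env_checking=True)
)
algo = config.build()
```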

Likely the best way is to make it a gym.Env and sidestep this problem, which I thought we were already doing. Let me check this for you. We've had a lot of moving parts in RLlib lately, so I apologise for the inconvenience here.
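
In the meantime, this is roughly what the fix should look like. It's a placeholder sketch only; the spaces and step logic below are dummies, not the actual maze code from the chapter:

```python
import gym  # newer Ray/RLlib versions expect `import gymnasium as gym` and the 5-tuple step API
from gym.spaces import Discrete

class Environment(gym.Env):
    """Placeholder: the real maze logic lives in maze_gym_env.py; the key point
    is that the class inherits from gym.Env so RLlib recognises it."""

    def __init__(self, env_config=None):
        self.action_space = Discrete(4)           # up, down, left, right
        self.observation_space = Discrete(5 * 5)  # 5x5 grid, encoded as a single index
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Dummy transition and reward; replace with the maze movement/goal logic.
        self.state = (self.state + 1) % self.observation_space.n
        done = self.state == self.observation_space.n - 1
        return self.state, 1.0 if done else 0.0, done, {}
```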
