
Cannot get the paper result #3

Open · softmicro929 opened this issue Jan 20, 2019 · 12 comments
@softmicro929

Has anyone gotten the Fig. 2 result from the paper? My model doesn't converge.

@etleader

> Has anyone gotten the Fig. 2 result from the paper? My model doesn't converge.

I have the same question. What is your specific setup? When I train the model, the reward barely changes, and when I test on the TMs, it is as if the model never learned anything.

@softmicro929 (Author)

> I have the same question. What is your specific setup? When I train the model, the reward barely changes, and when I test on the TMs, it is as if the model never learned anything.

Yes, it learns nothing. Also, you should fix the TM (traffic matrix) when testing during the training stage.

@etleader

> Yes, it learns nothing. Also, you should fix the TM (traffic matrix) when testing during the training stage.

Sorry to bother you, but I don't really understand what you mean. How do I fix the TMs?

@softmicro929 (Author)

> Sorry to bother you, but I don't really understand what you mean. How do I fix the TMs?

The author's code doesn't do any testing, so you have to write the test code yourself to reproduce the figure in the paper.
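
For anyone stuck at this step, here is a minimal sketch of what such a test script could look like. None of it comes from this repo — `agent`, `env`, `sample_tm`, `reset(tm=...)`, and `act(..., explore=False)` are all assumed names — the only point is that one TM is drawn once and held fixed while the trained policy runs greedily:

```python
import numpy as np

def evaluate_on_fixed_tm(agent, env, episodes=10, seed=0):
    """Hypothetical test loop: evaluate a trained agent on ONE fixed TM."""
    rng = np.random.RandomState(seed)
    fixed_tm = env.sample_tm(rng)           # draw a single traffic matrix once
    returns = []
    for _ in range(episodes):
        state = env.reset(tm=fixed_tm)      # always reset to the SAME TM
        done, total = False, 0.0
        while not done:
            action = agent.act(state, explore=False)  # greedy, no exploration
            state, reward, done = env.step(action)
            total += reward
        returns.append(total)
    return float(np.mean(returns)), float(np.std(returns))
```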

@Lui-Chiho

> Has anyone gotten the Fig. 2 result from the paper? My model doesn't converge.

Sorry to bother you!
Have you gotten the Fig. 1 result? I still can't understand how to use the TMs mentioned in the paper to train this DRL agent. Can you explain the whole training process? In the given code I can't find any correlation between the previous state and the new state; they all seem to be randomly generated using np.random.
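
As a point of reference, a typical DRL-over-TMs training loop is sketched below. This is a generic outline under assumed names (`agent.act`, `agent.store`, `env.step`, and so on), not the repo's actual training code:

```python
def train(agent, env, num_steps=100_000):
    """Generic DRL training outline; every name here is a placeholder."""
    state = env.reset()                 # state encodes the current TM
    for _ in range(num_steps):
        action = agent.act(state)       # e.g. per-flow split ratios
        next_state, reward, done = env.step(action)
        agent.store(state, action, reward, next_state, done)
        agent.update()                  # one gradient step on stored samples
        state = env.reset() if done else next_state
```

The property the commenters are pointing at is the `env.step` line: in standard RL, `next_state` should be a function of `state` and `action`.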

@wqhcug

wqhcug commented Apr 26, 2019

> I still can't understand how to use the TMs mentioned in the paper to train this DRL agent. […] In the given code I can't find any correlation between the previous state and the new state; they all seem to be randomly generated using np.random.

Excuse me, I have a similar question: I can't understand why state (a TM) and new_state (a TM) are both randomly generated in the step function in Environment.py. That doesn't fit the logic of DRL.

@FaisalNaeem1990

Has anyone gotten the same results as in the paper? The model is not converging.

@CZMG

CZMG commented May 29, 2019

I have the same question. I don't understand why the old state and the new state are randomly generated in Environment.py.

@FaisalNaeem1990

FaisalNaeem1990 commented May 29, 2019 via email

@wqhcug

wqhcug commented May 29, 2019

> I have the same question. I don't understand why the old state and the new state are randomly generated in Environment.py.
> Did you run the whole simulation or not?

Excuse me, I did run the whole simulation. But in my studies so far, the STATE in reinforcement learning is changed by the ACTION, while in this paper's code (the step function in Environment.py) the NEW STATE and the OLD STATE are both randomly generated, which does not seem to follow the logic of reinforcement learning. Could someone clear up my confusion? Thank you very much.
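
To make the objection concrete, here is a toy side-by-side sketch. It is NOT the repo's Environment.py — the class, its fields, and the placeholder reward are invented for illustration only:

```python
import numpy as np

class ToyTrafficEnv:
    """Hypothetical illustration of the two transition styles."""

    def __init__(self, num_flows=10, seed=0):
        self.num_flows = num_flows
        self.rng = np.random.RandomState(seed)
        self.state = self.rng.uniform(size=num_flows)  # current TM (flattened)

    def step_as_described(self, action):
        # Pattern the thread reports seeing: the next state is drawn
        # at random, independent of both the current state and the action.
        new_state = self.rng.uniform(size=self.num_flows)
        reward = -float(np.max(new_state))   # toy reward: minimize max load
        self.state = new_state
        return new_state, reward

    def step_conventional(self, action):
        # Conventional RL: the next state is a function of the current
        # state and the chosen action (toy linear dynamics here).
        new_state = np.clip(self.state + 0.1 * np.asarray(action), 0.0, 1.0)
        reward = -float(np.max(new_state))
        self.state = new_state
        return new_state, reward
```

One possible reading is that sampling each TM independently is deliberate — the task then becomes a contextual one where the agent learns a per-TM routing decision rather than a dynamics model — but that is exactly the clarification the thread is asking the author for.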

@ljh14

ljh14 commented Dec 5, 2019

> Excuse me, I did run the whole simulation. But in my studies so far, the STATE in reinforcement learning is changed by the ACTION, while in this paper's code (the step function in Environment.py) the NEW STATE and the OLD STATE are both randomly generated, which does not seem to follow the logic of reinforcement learning.

I've run into this question too. I think the author needs to give some explanation; it disobeys the basic logic of reinforcement learning. @gissimo

@slblbwl

slblbwl commented Nov 2, 2023

Hello, may I ask how to run the whole simulation? Could you tell me the approximate steps? Thank you very much!
