
How does it work at inference time? #1

Open
msinto93 opened this issue Jun 28, 2019 · 2 comments


@msinto93

In order to predict multiple time steps ahead for the training/validation/testing where you already have access to the future intermediate representations you simply combine the prediction at the previous time step with the next incoming intermediate representation and use this as the next time step's input. But how do you predict multiple time steps ahead live at inference time when you don't have access to the future intermediate representations? I couldn't see any code for this in the repo.
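For reference, here is a minimal sketch of the kind of closed-loop rollout I would expect at inference time, where the model's own prediction is fed back in and the remaining channels are simply held at their last observed values. The DummyPredictor stub and the channel layout (prediction in channel 0, static context in the other channels) are my assumptions for illustration, not code from this repo:

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for the repo's predictor: maps a
    # (B, C, H, W) input frame to a (B, H, W) occupancy prediction.
    class DummyPredictor(nn.Module):
        def __init__(self, in_channels=5):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

        def forward(self, x):
            return self.conv(x).squeeze(1)

    def closed_loop_rollout(model, inp, steps):
        """Roll the model forward `steps` frames using only its own output.

        `inp` is the last fully observed frame (1, C, H, W). Channel 0 is
        assumed to hold the dynamic (vehicles) layer; the remaining channels
        are treated as static context and held fixed, because no future
        observations exist at inference time.
        """
        preds = []
        with torch.no_grad():
            for _ in range(steps):
                out = model(inp)                     # (1, H, W) prediction
                mn, mx = torch.min(out), torch.max(out)
                out = (out - mn) / (mx - mn + 1e-8)  # min-max normalize to [0, 1]
                nxt = inp.clone()
                nxt[0, 0] = out[0]                   # feed prediction back into channel 0
                inp = nxt                            # other channels stay frozen
                preds.append(out)
        return torch.stack(preds, dim=1)             # (1, steps, H, W)

    model = DummyPredictor()
    frame = torch.rand(1, 5, 32, 32)
    future = closed_loop_rollout(model, frame, steps=4)
    print(future.shape)  # torch.Size([1, 4, 32, 32])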

@DKandrew

The statement in your first sentence is correct, and I think they did implement it. See lines 167-173 in infer-main.ipynb, copied below:

    if offset >= 20:
        new_inp = inp.clone().squeeze(0)
        # min-max normalize the previous prediction to [0, 1]
        mn, mx = torch.min(prevOut), torch.max(prevOut)
        prevOut = (prevOut - mn) / (mx - mn)
        # feed the normalized prediction back in as the vehicles layer
        new_inp[0] = prevOut
        # channel 4 is still taken from the recorded frame
        new_inp[4] = prevChannels[0, 4, :, :]
        inp = new_inp.unsqueeze(0).cuda()
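So the previous prediction is min-max normalized and written back into channel 0 (the vehicles layer), while channel 4 is still copied from prevChannels, i.e. from the recorded sequence.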

@starcosmos1225

starcosmos1225 commented Nov 24, 2020

I also have questions about this inference process. In the following code, only the vehicles layer is updated to the current state: the h_cur output of the previous ConvLSTM step is assigned to the next inp[0].

    if offset >= 20:
        new_inp = inp.clone().squeeze(0)
        mn, mx = torch.min(prevOut), torch.max(prevOut)
        prevOut = (prevOut - mn) / (mx - mn)
        new_inp[0] = prevOut
        new_inp[4] = prevChannels[0, 4, :, :]
        inp = new_inp.unsqueeze(0).cuda()

But at inference time, this environmental information should not be available, because it comes from the future. I modified the code as follows and tested it, and found that the results were worse:

    if offset >= 20:
        new_inp = inp.clone().squeeze(0)
        mn, mx = torch.min(prevOut), torch.max(prevOut)
        prevOut = (prevOut - mn) / (mx - mn)
        # rebuild the input from the previous frame's channels only
        # (note: this overwrites the clone created above)
        new_inp = prevChannels[0, :, :, :]
        new_inp[0] = prevOut
        inp = new_inp.unsqueeze(0).cuda()
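In this version the next input is built entirely from the previous frame's channels plus the model's own normalized prediction, so no channels from the future frame are used.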

Before modifying the code, the results are:
1s: 2.803831390248086, 2s: 4.294110099108489, 3s: 5.880810761892322, 4s: 7.932508908912126
After modifying the code, the results are:
1s: 3.0671675635446505, 2s: 5.282467410635003, 3s: 7.603446948335296, 4s: 10.115045174018357

How could the future environmental information legitimately be used in the inference process?
