About max_val #2

Open
thbupt opened this issue Jun 11, 2018 · 2 comments

Comments

thbupt commented Jun 11, 2018

For stability, you compute max_val = torch.max((exp_term_tmp).clamp(min=0), dim=1, keepdim=True)[0]. To me this looks like it takes the max value of only the first sample (max_val would have size 1 because of the [0] indexing). Shouldn't the max value be computed for each sample separately, without the [0] operation, so that max_val has size N*1? Is that right?

@sandonair007

The size of max_val is actually N*1, even with the [0] indexing, because

torch.max(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)
Returns the maximum value of each row of the input tensor in the given dimension dim. The second return value is the index location of each maximum value found (argmax).

Link: https://pytorch.org/docs/stable/torch.html?highlight=max#torch.max
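A minimal sketch of this behavior (the tensor name and shape here are made up for illustration; the repo's actual exp_term_tmp will differ):

import torch

# Hypothetical N x K tensor standing in for exp_term_tmp (N = 4 samples, K = 3 terms each).
exp_term_tmp = torch.randn(4, 3)

# torch.max along dim=1 returns a (values, indices) tuple;
# [0] selects the per-row max values, not the first sample.
max_val = torch.max(exp_term_tmp.clamp(min=0), dim=1, keepdim=True)[0]

print(max_val.shape)  # torch.Size([4, 1]) -> one max per sample, i.e. N x 1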

danieltan07 (Owner) commented Jun 15, 2018

@thbupt sorry for the late reply. @muchuanyun is correct. torch.max returns a tuple whose first element contains the max values and whose second element contains the indices.
