levhaikin changed the title from "attention weights should be reordered as well as the outputs in case inputs were sorted and packed" to "attention weights should be reordered as well as the outputs in case inputs were sorted and packed when doing batch inference" on Nov 6, 2019
During inference in batches, if `reorder_output` becomes `True` (which happens when `input_seqs` are not a `PackedSequence`), then the outputs are correctly reordered to match the input order.

However, if `return_attention` is requested (set to `True`), the attention weights are returned in an order that does not match the inputs; that is the bug.

A possible fix could be made in the following section of the code:
torchMoji/torchmoji/model_def.py, line 243 in 198f7d4
Similarly to how the outputs are reordered, we can add the attention-reordering code under that same `if` statement. Does that make sense? Am I missing anything?
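The proposed fix can be sketched end to end. This is an illustrative mock, not torchMoji's actual code: plain lists stand in for tensors, and names such as `perm_idx` and `att_weights` are assumptions about the surrounding `forward()` code, not the repo's real variable names.

```python
def reorder_to_input_order(values, perm_idx):
    """Scatter rows produced in length-sorted order back to the original
    input order: perm_idx[i] is the original position of sorted row i."""
    restored = [None] * len(values)
    for i, orig_pos in enumerate(perm_idx):
        restored[orig_pos] = values[i]
    return restored

# Packing requires sorting sequences by length (descending), which is what
# makes reordering necessary afterwards:
seqs = ["ab", "abcd", "a"]                                   # original order
perm_idx = sorted(range(len(seqs)), key=lambda i: len(seqs[i]), reverse=True)
sorted_seqs = [seqs[i] for i in perm_idx]                    # fed to the RNN

# Stand-ins for what the model produces, still in sorted order:
outputs = [s.upper() for s in sorted_seqs]
att_weights = ["att(%s)" % s for s in sorted_seqs]

# The existing `if reorder_output:` branch already does this for outputs...
outputs = reorder_to_input_order(outputs, perm_idx)
assert outputs == ["AB", "ABCD", "A"]

# ...and the proposed fix applies the same scatter to the attention weights:
att_weights = reorder_to_input_order(att_weights, perm_idx)
assert att_weights == ["att(ab)", "att(abcd)", "att(a)"]
```

Since both `outputs` and the attention weights are produced in the same sorted order, applying the same permutation to both under the same `if` branch keeps them aligned with the original inputs.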