
v1.1.0

@desh2608 released this 18 Dec 22:29

The following changes have been made:

  1. The semi-orthogonal loss is now computed as the Frobenius norm of P, where P = torch.mm(M, M.T), instead of the Frobenius norm of (P - \alpha^2 I). This makes it consistent with the loss reporting in Kaldi.

  2. The forward() function in the TDNNF class now takes a semi_ortho_step argument instead of training. This lets the calling function decide whether or not to take the step towards semi-orthogonality on a given forward pass.

  3. The initialization of the TDNN layer now takes a bias argument, which specifies whether or not to use bias in the Conv1D layer. When the TDNN is used inside the SemiOrthogonalConv class for TDNNF, we set bias = False, so that the matrix factorization checks out correctly.
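For change 1, the two loss variants can be sketched as below. This is a minimal illustration in NumPy (the library itself uses torch, e.g. torch.mm); the function name and signature are hypothetical, not the library's API.

```python
import numpy as np

def semi_ortho_loss(M, alpha=None):
    """Sketch of the two semi-orthogonality loss variants.

    M: factor matrix of shape (rows, cols); the constraint drives
    P = M M^T toward a scaled identity.
    """
    P = M @ M.T
    if alpha is None:
        # v1.1.0 behaviour described above: Frobenius norm of P itself,
        # matching the loss value that Kaldi reports.
        return np.linalg.norm(P, ord="fro")
    # Pre-v1.1.0 behaviour: Frobenius norm of the deviation from
    # the scaled identity alpha^2 I.
    I = np.eye(P.shape[0])
    return np.linalg.norm(P - (alpha ** 2) * I, ord="fro")
```

For an orthonormal M (e.g. the identity), the old loss with alpha = 1 is exactly zero, while the new one reports the norm of P itself.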
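For change 2, the new calling convention can be sketched as follows. The class, its fields, and the every-4-updates schedule are all hypothetical stand-ins for illustration; only the semi_ortho_step argument itself comes from the release notes.

```python
import numpy as np

class TDNNFSketch:
    """Toy stand-in for the TDNNF layer (not the library's class)."""

    def __init__(self, dim):
        self.M = np.random.randn(dim, dim)

    def step_semi_orthogonal(self):
        # Placeholder for the constrained update that nudges M
        # toward semi-orthogonality (M M^T ~ alpha^2 I).
        pass

    def forward(self, x, semi_ortho_step=False):
        # The layer no longer inspects a `training` flag; the caller
        # passes semi_ortho_step explicitly.
        if semi_ortho_step:
            self.step_semi_orthogonal()
        return x @ self.M.T

layer = TDNNFSketch(4)
x = np.ones((2, 4))
for step in range(8):
    # The caller owns the schedule, e.g. one step every 4 updates.
    y = layer.forward(x, semi_ortho_step=(step % 4 == 0))
```

Moving the decision out of the layer keeps the layer stateless with respect to the training schedule, so callers can use whatever interval or probability they prefer.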
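Change 3 reflects a basic linear-algebra fact: two bias-free linear maps compose into a single matrix product, but a bias term makes the composition affine, so the factorization no longer holds exactly. A small numerical check (dimensions are arbitrary, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal((4, 8))   # batch of inputs
A = rng.standard_normal((3, 8))   # bottleneck factor (kept semi-orthogonal)
B = rng.standard_normal((8, 3))   # second factor
M = B @ A                         # the factorized weight: M = B A

# With bias=False, applying A then B is exactly multiplication by M:
# (x A^T) B^T = x (B A)^T = x M^T.
y_factored = (x @ A.T) @ B.T
y_direct = x @ M.T
assert np.allclose(y_factored, y_direct)

# With a bias b in the first stage, the result would instead be
# x M^T + b B^T -- an affine map, not the pure factorization.
```

This is why the SemiOrthogonalConv path constructs its TDNN with bias = False.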