Commit

oov penalty
hieuhoang committed Jun 10, 2016
1 parent 58ce86b commit 0e41d81
Showing 5 changed files with 1 addition and 1 deletion.
Binary file modified acl.2016/acl2016.pdf
2 changes: 1 addition & 1 deletion acl.2016/acl2016.tex
@@ -212,7 +212,7 @@ \subsection{Lexicalized Re-ordering Model Optimizations}

Similar to the phrase-table, the lexicalized re-ordering model is trained on parallel data. The resultant model file is then queried during decoding. The need for random lookups during querying inevitably slows decoding. Previous work such as~\newcite{junczys_tsd_2012b} improves querying speed with more efficient data structures.

However, the model's query keys are the source and target phrase of each translation rule. Rather than storing the lexicalized re-ordering model separately, we shall integrate it into the translation model, eliminating the need to query a separate file. The lexicalized reordering model retains its own weights in the log-linear framework. The total weighted score is expected to be identical to the Moses baseline apart from edge-case training configurations, for example, where the lexicalized reordering model is trained on a different dataset than the translation model.
However, the model's query keys are the source and target phrase of each translation rule. Rather than storing the lexicalized re-ordering model separately, we shall integrate it into the translation model, eliminating the need to query a separate file. The model itself remains unchanged under the log-linear framework, retaining its own weights. %The search algorithm will produce identical results to the Moses baseline apart from edge case training configurations, for example, where the lexicalized reordering model is trained on a different dataset than the translation model.

This optimization has precedent in~\newcite{peitz2012jane}, but the effect on decoding speed was not published. In this paper we compare results against using a separate model.

Binary file modified icon.2014/basics.pdf
Binary file modified iit-bombay/basics.pdf
Binary file modified jnu.2014/basics.pdf
