A quick note, if you could find an example that shows exactly what happens to an input sentence by this method, it would be best.
Sure, I'll make an update.
thangk changed the title from "A Framework for Neural Machine Translation by Fuzzy Analogies (2023)" to "2023-IARML@IJCAI 2023-A Framework for Neural Machine Translation by Fuzzy Analogies" on Jun 25, 2024
Link: Semantic Scholar
Main problem
Modern NMT systems are typically trained on large collections of example sentence pairs, which are often unavailable for low-resource languages (LRLs). The author proposes a "fuzzy analogies" method that relaxes the strict analogy requirement between related sentences and instead captures approximate conformity between translation matches.
Proposed method
Predictions are generated with the author's proposed "fuzzy analogies" method, which handles partial analogies that capture approximate conformity between sentence transformations.
My Summary
This research was conducted on only one language pair (English-Japanese); the method has not been tested on other pairs, so it remains unclear how well it would generalize. In the paper, the model reaches a BLEU score of up to 6.0, versus 2.9 for the NMT baseline used for comparison. However, the model does not always outperform the baseline (in 415 out of 2000 cases it performed worse). Future work includes ablating the model to find the most effective configuration and testing on other language pairs. Until then, the claimed improvements over the NMT baseline could be an artifact of the selected language pair, since not every LRL has a direct translation path to another LRL or to a high-resource language (HRL).
Datasets
English-Japanese (limited dataset size)