This repository contains the source code for HIT (Hierarchical Transformer), which uses a Fused Attention Mechanism (FAME) to learn representations from code-mixed texts. We evaluate HIT on code-mixed sequence classification, token classification, and generative tasks.
We also publish the (publicly available) datasets and the experimental setup used for each task.
```shell
$ pip install -r requirements.txt
```

```shell
$ cd experiments && python experiments_hindi_sentiment.py \
    --train_data ../data/hindi_sentiment/IIITH_Codemixed.txt \
    --model_save_path ../models/model_hindi_sentiment/
```

```shell
$ cd experiments && python experiments_hindi_POS.py \
    --train_data '../data/POS Hindi English Code Mixed Tweets/POS Hindi English Code Mixed Tweets.tsv' \
    --model_save_path ../models/model_hindi_pos/
```

```shell
$ cd experiments && python experiments_hindi_NER.py \
    --train_data '../data/NER/NER Hindi English Code Mixed Tweets.tsv' \
    --model_save_path ../models/model_hindi_NER/
```

```shell
$ cd experiments && python nmt.py \
    --data_path '../data/IITPatna-CodeMixedMT' \
    --model_save_path ../models/model_hindi_NMT/
```
For the sentiment, PoS, and NER classification tasks, we use macro precision, recall, and F1 score to evaluate the models. For the machine translation task, we use BLEU, ROUGE-L, and METEOR scores. To accommodate class imbalance, we use weighted precision for the Hindi sentiment classification task.
$\text{macro-precision} = \frac{1}{C}\sum_{i=1}^{C}pr_{i}$

$\text{macro-recall} = \frac{1}{C}\sum_{i=1}^{C}re_{i}$

$\text{macro-F1} = \frac{1}{C}\sum_{i=1}^{C}\frac{2 \cdot pr_{i} \cdot re_{i}}{pr_{i} + re_{i}}$

where $C$ is the number of classes, and $pr_{i}$ and $re_{i}$ are the precision and recall for class $i$.
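As a minimal illustration (not taken from the repository code), the macro-averaged metrics defined above can be computed from gold and predicted label sequences as follows:

```python
# Sketch of macro-averaged precision/recall/F1 over per-class scores.
# This is a plain-Python illustration of the formulas above, not the
# evaluation code shipped with this repository.
from collections import Counter


def macro_scores(y_true, y_pred):
    """Return (macro-precision, macro-recall, macro-F1) for label lists."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # p predicted but gold is t
            fn[t] += 1          # t missed
    prs, res, f1s = [], [], []
    for c in labels:
        pr = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        re = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1 = 2 * pr * re / (pr + re) if pr + re else 0.0
        prs.append(pr)
        res.append(re)
        f1s.append(f1)
    n = len(labels)  # C in the formulas: number of classes
    return sum(prs) / n, sum(res) / n, sum(f1s) / n
```

Each per-class score is averaged with equal weight, so rare classes count as much as frequent ones; the weighted variant used for Hindi sentiment instead weights each class by its support.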
The table below can be reproduced using the macro-averaged scores.
Model | Macro-Precision | Macro-Recall | Macro-F1 |
---|---|---|---|
BiLSTM | 0.894 | 0.901 | 0.909 |
HAN | 0.889 | 0.906 | 0.905 |
CS-ELMO | 0.901 | 0.903 | 0.909 |
ML-BERT | 0.917 | 0.914 | 0.909 |
HIT | 0.926 | 0.914 | 0.915 |
If you find this repo useful, please cite our paper:
```bibtex
@inproceedings{sengupta-etal-2021-hit,
  author    = {Ayan Sengupta and
               Sourabh Kumar Bhattacharjee and
               Tanmoy Chakraborty and
               Md. Shad Akhtar},
  title     = {{HIT}: A Hierarchically Fused Deep Attention Network for Robust Code-mixed Language Representation},
  booktitle = {Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021},
  publisher = {Association for Computational Linguistics},
  year      = {2021},
  url       = {https://aclanthology.org/2021.findings-acl.407},
  doi       = {10.18653/v1/2021.findings-acl.407},
}
```