MVIFSA: Enhancing Relation Detection in Knowledge Base Question Answering through Multi-View Information Fusion and Self-Attention
MVIFSA comprises five network layers: a multi-view embedding layer, an information fusion layer, a complex information representation layer, a residual learning layer, and a self-attention layer. The model architecture is shown in the diagram below.
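To make the self-attention layer concrete, here is a minimal NumPy sketch of scaled dot-product self-attention over a token sequence. This is an illustrative sketch only; the dimensions, projection matrices, and exact attention variant used in `MVIFSA.py` may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence.

    x: (seq_len, d_model) input token representations.
    w_q, w_k, w_v: (d_model, d_k) projection matrices.
    Returns: (seq_len, d_k) attended representations.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])  # pairwise attention scores
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ v

# Toy example: 5 tokens with d_model = 8, projected to d_k = 4.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))
w_q, w_k, w_v = (rng.standard_normal((8, 4)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (5, 4)
```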
You can download the pretrained word embedding files required for the experiments from GloVe.
The dependencies required to run the code are listed in the requirements file.
- Configure the dataset and the maximum sequence length in `config.ini`.
- Run `preprocess.py` to preprocess the dataset and generate the training and test sets used for model training.
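Settings in `config.ini` can be read with Python's standard `configparser` module. The section and key names below are hypothetical examples for illustration; the actual keys in this repository's `config.ini` may differ.

```python
import configparser

# Hypothetical config.ini contents; the real file's sections
# and keys may be named differently.
sample = """
[data]
train_path = data/train.txt
test_path = data/test.txt
max_length = 60
"""

config = configparser.ConfigParser()
config.read_string(sample)  # use config.read("config.ini") for the real file

max_length = config.getint("data", "max_length")
print(config.get("data", "train_path"), max_length)
```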
Train the model.
`MVIFSA.py` contains the model-building and training code. After data preprocessing, you can run this file directly to train the model and obtain the training results.
Once you have saved the trained model, run `MVIFSA_eval.py` to evaluate it.
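Relation detection is commonly evaluated by the fraction of questions whose top-ranked relation matches the gold relation. A minimal sketch of that accuracy computation, with hypothetical relation labels (the metric actually reported by `MVIFSA_eval.py` may differ):

```python
def relation_accuracy(predicted, gold):
    """Fraction of questions whose predicted relation matches the gold relation.

    predicted, gold: equal-length lists of relation labels.
    """
    if not gold:
        return 0.0
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

# Toy example with hypothetical relation labels.
pred = ["place_of_birth", "profession", "spouse"]
gold = ["place_of_birth", "nationality", "spouse"]
print(relation_accuracy(pred, gold))  # 2 correct out of 3
```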