Our approach ultimately relies on fine-tuning an LLM for a binary classification task while also incorporating information from the Wikidata graph domain into the LLM pipeline. The representation used for target prediction on a question-answer pair is obtained from the model's last hidden layer.
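To make this concrete, below is a minimal sketch of how a correctness score for a question-answer pair can be computed from the encoder's last hidden layer; the backbone name, the mean pooling, and the single-logit head are illustrative assumptions rather than the exact setup used here.

```python
import torch.nn as nn
from transformers import AutoModel

class BinaryQAScorer(nn.Module):
    """Encoder with a binary classification head on top of the last hidden layer (sketch)."""

    def __init__(self, backbone_name: str = "sentence-transformers/all-mpnet-base-v2"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        # Last hidden layer of the encoder: (batch, seq_len, hidden_size)
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Mean-pool over non-padding tokens (the pooling choice is an assumption)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        # One logit per question-answer pair
        return self.head(pooled).squeeze(-1)
```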
By the nature of the task, exactly one of the candidate answers to a question is correct; however, the number of candidate answers per question is not known beforehand. During inference we exploit the fact that only one candidate is correct and select the candidate with the highest model score. This naturally allows a model trained with a classification objective to be used for ranking the top-1 candidate answer.
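As an illustration, the following hypothetical helper selects the highest-scoring candidate per question; the function name, scores, and question ids are made up for the example.

```python
def rank_candidates(scores, question_ids):
    """Pick the single highest-scoring candidate per question (one-correct-answer assumption)."""
    best = {}
    for idx, (score, qid) in enumerate(zip(scores, question_ids)):
        if qid not in best or score > best[qid][0]:
            best[qid] = (score, idx)
    # Return the index of the predicted correct answer for each question
    return {qid: idx for qid, (_, idx) in best.items()}

# Example: two questions with differing numbers of candidates
scores = [0.1, 0.7, 0.2, 0.9, 0.3]
question_ids = ["q1", "q1", "q1", "q2", "q2"]
print(rank_candidates(scores, question_ids))  # {'q1': 1, 'q2': 3}
```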
For our research, we use the TextGraphs17 shared task dataset, consisting of 37,672 question-answer pairs annotated with Wikidata entities. The dataset includes 10 different data fields, notably the Wikidata entities mentioned in the question and in the candidate answer, as well as a shortest-path graph for each question-answer pair.
During training and evaluation of our models, we use the same metrics as those on the workshop leaderboard.
We propose using the all-mpnet-base model, training it on question-answer pairs with the linearized knowledge graph incorporated into the input. Additionally, we use the LoRA implementation from the \texttt{peft} library and apply oversampling to address class imbalance in the training dataset.
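A minimal sketch of this setup is shown below, assuming \texttt{peft}'s LoRA adapters on a sequence-classification head; the rank, target modules, and oversampling ratio are placeholder values, not the ones used in our experiments.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Wrap the backbone with LoRA adapters via peft; hyper-parameters are illustrative guesses
base = AutoModelForSequenceClassification.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2", num_labels=2
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                        # assumed rank, not the value reported here
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q", "v"],  # attention projections; names depend on the backbone
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()

# Naive oversampling of the minority (correct-answer) class
def oversample(examples, ratio: int = 5):
    positives = [e for e in examples if e["label"] == 1]
    return examples + positives * (ratio - 1)
```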
Thus, the main contributions of this work are the following:
- We propose a method of combining textual and graph information. Inserting the linearized sub-graph directly into the main question prompt with additional separator tokens improves performance over models working with each modality separately (see the sketch after this list).
- We conduct a thorough study of LLM backbones and a wide hyper-parameter search. For efficient training, we fine-tune with LoRA.
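The prompt construction from the first contribution can be sketched as follows; the separator tokens and the triple linearization format are illustrative assumptions, not the exact template used in our pipeline.

```python
def build_input(question: str, answer: str, triples: list) -> str:
    """Concatenate the question, the candidate answer, and the linearized sub-graph.

    The [ANSWER]/[GRAPH] separators and the "head -> relation -> tail" linearization
    are illustrative placeholders.
    """
    linearized_graph = " ; ".join(f"{h} -> {r} -> {t}" for h, r, t in triples)
    return f"{question} [ANSWER] {answer} [GRAPH] {linearized_graph}"

example = build_input(
    "Who wrote War and Peace?",
    "Leo Tolstoy",
    [("Leo Tolstoy", "notable work", "War and Peace")],
)
print(example)
```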