
Dual Attention Networks for Visual Question Answering

This is a PyTorch implementation of Dual Attention Networks for Multimodal Reasoning and Matching. The code is forked from Cyanogenoid's pytorch-vqa, with the model replaced by my implementation of Dual Attention Networks, since redoing all of the data preprocessing and loading would be tedious. Please see pytorch-vqa for details on how the data was preprocessed and extracted.

Differences between paper and this model

  • Learning rate decay: the original paper halved the learning rate after 30 epochs and trained for another 30 epochs. We kept the forked code's optimization settings and halved the learning rate after 50k iterations (sketched below).
  • Answer scoring: the original paper used a single linear layer to score answers from the memory vector. Our implementation uses a 2-layer network (sketched below).
  • Pretrained word embeddings: the original paper used a word embedding dimension of 512. For the graph below, we used 300 and loaded pretrained GloVe vectors (sketched below).
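
As a rough sketch of the decay schedule, assuming an Adam optimizer (the actual optimizer settings come from the forked pytorch-vqa code), the learning rate can be halved every 50k iterations with a step scheduler:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)  # hypothetical stand-in for the DAN model
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# Halve the learning rate every 50k iterations. Note that scheduler.step()
# is called once per training iteration here, not once per epoch.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50000, gamma=0.5)

for iteration in range(100000):  # illustrative training loop
    loss = model(torch.randn(32, 10)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```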
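
The 2-layer answer scorer might look like the following minimal sketch; the dimensions (512-d memory vector, 1024 hidden units, 3000 candidate answers) are illustrative, not taken from the repo:

```python
import torch
import torch.nn as nn

class AnswerScorer(nn.Module):
    """Hypothetical 2-layer scorer over the DAN memory vector;
    the original paper uses a single linear layer instead."""

    def __init__(self, memory_dim=512, hidden_dim=1024, num_answers=3000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(memory_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, memory):
        return self.net(memory)  # raw logits over candidate answers

scorer = AnswerScorer()
logits = scorer(torch.randn(8, 512))  # batch of 8 memory vectors -> (8, 3000)
```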
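
Loading 300-d GloVe vectors into the embedding layer can be done with torchtext (already in the requirements below); the vocabulary here is a hypothetical placeholder for the one built during preprocessing:

```python
import torch
import torch.nn as nn
from torchtext.vocab import GloVe

# Hypothetical vocabulary; the real one is built during preprocessing.
vocab = ["what", "color", "is", "the", "dog"]

glove = GloVe(name="6B", dim=300)  # downloads the vectors on first use
weights = torch.zeros(len(vocab), 300)
for i, token in enumerate(vocab):
    if token in glove.stoi:
        weights[i] = glove[token]  # out-of-vocabulary rows stay zero

# freeze=False lets the pretrained embeddings be fine-tuned during training
embedding = nn.Embedding.from_pretrained(weights, freeze=False)
```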

Our implementation reaches around 61% validation accuracy after 20 epochs.

[Learning graph]

Requirements

Python 3

  • h5py
  • torch
  • torchvision
  • tqdm
  • torchtext

Plotting

  • numpy
  • matplotlib
