The ability to generate natural language from source code is an open research topic that has gained increasing popularity in recent years. As with any open research topic, there is no silver bullet for solving this problem, and several promising approaches are being explored by the research community.
This work is heavily inspired by "code2seq: Generating Sequences from Structured Representations of Code" by Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav [PDF], and partly relies on an unofficial implementation available on GitHub [Repository].
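To make the cited approach concrete: code2seq represents a code snippet as a set of paths between terminal nodes of its abstract syntax tree, each path running up from one leaf to the lowest common ancestor and down to the other leaf. The sketch below illustrates only this path extraction step; the helper names (`leaf_paths`, `leaf_to_leaf_paths`) and the use of Python's built-in `ast` module are assumptions made for illustration and are not part of this project's actual pipeline.

```python
# Illustrative sketch of the code2seq input representation:
# leaf-to-leaf paths of AST node types. Not the project's real extractor.
import ast
import itertools


def leaf_paths(tree):
    """Collect the root-to-leaf sequence of AST node types for every terminal."""
    paths = []

    def walk(node, prefix):
        prefix = prefix + [type(node).__name__]
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append(prefix)
        for child in children:
            walk(child, prefix)

    walk(tree, [])
    return paths


def leaf_to_leaf_paths(tree):
    """Join pairs of terminals through their lowest common ancestor."""
    paths = leaf_paths(tree)
    for left, right in itertools.combinations(paths, 2):
        if left == right:
            continue  # identical type paths; a real extractor tracks node identity
        shared = 0  # length of the common prefix, i.e. the shared ancestry
        for a, b in zip(left, right):
            if a != b:
                break
            shared += 1
        # Walk up from one leaf to the common ancestor, then down to the other.
        yield list(reversed(left[shared - 1:])) + right[shared:]


snippet = "def add(a, b):\n    return a + b\n"
for path in itertools.islice(leaf_to_leaf_paths(ast.parse(snippet)), 4):
    print(" -> ".join(path))
```

In the full model described in the paper, each such path, together with the subtokens of its two terminals, is embedded and encoded by a bi-directional LSTM; an attention-based decoder then generates the output sequence (for example, a method name or description) one word at a time.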
Documentation plays an important role in the process of software development. It helps developers better understand a software's source code and enables them to build on each other's ideas.
The aim of this project is to improve the experience of developers working with poorly documented code, especially when reaching out to the original authors is not an option.
The quickstart notebook is a good starting point to get an overview of what this project is about.
If you want to train and evaluate the model yourself, you can find more information about the project's structure, as well as a training and evaluation guide, in the wiki.