An open source visualization tool for Temporal Action Localization in constrained and unconstrained videos.
TALVT is an HTML- and JavaScript-based web application.
Features include:
- Visualize the temporal segments of any action instance in a video
- Supports up to 20 action class instances
- Visualizes the output of both supervised and weakly supervised models
- Includes scripts to convert the output of two standard Temporal Action Localization codebases into a visualizable format
TALVT fits at the end of a machine learning pipeline, visualizing the predictions a trained model produces.
Nothing to install! Simply double-click "src/index.html" and the web application will open in your browser. Happy visualizing!
TALVT web application preview: a simple tool for visualizing temporal action localization output, the first of its kind available on GitHub.
Interface Design:
- Select Video: choose the video whose temporal segments you want to visualize
- Predicted File: a script provided in this repository converts the PyTorch model output into a readable format (.txt)
- GT Annotation File: sample scripts producing the GT data format for THUMOS14 and ActivityNet are provided separately in .txt format
- Choose Method: supports two modes, "Supervised" and "Weakly Supervised"; select the one matching the supervision used by your PyTorch code
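As a rough sketch of the conversion step behind the "Predicted File" field, the snippet below turns raw localization output into TALVT's text rows. The tuple layout `(start, end, label, score)` and the function name are assumptions for illustration, not the actual output of any specific repository.

```python
# Hypothetical conversion sketch: the (start, end, label, score) tuple
# layout is an assumed model-output shape, not a real repository's format.

def to_talvt_lines(detections, score_threshold=0.5):
    """Keep detections above the threshold and format them as TALVT rows."""
    lines = []
    for start, end, label, score in detections:
        if score >= score_threshold:
            # "[Start Time] [End Time] [Action Class]", times in seconds
            lines.append(f"{start:.2f} {end:.2f} {label}")
    return lines

if __name__ == "__main__":
    raw = [(1.2, 4.8, "HighJump", 0.91), (10.0, 12.5, "LongJump", 0.30)]
    for line in to_talvt_lines(raw):
        print(line)  # only the high-confidence detection survives
```

Writing these rows to a .txt file yields something the "Predicted File" field can load directly.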
The data format used for visualizing temporal segments is:
[Start Time] [End Time] [Action Class]
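Reading this format back is a one-liner per row; a minimal sketch, assuming whitespace-separated fields with the action class last (the function name is ours, not part of TALVT):

```python
# Minimal parser for the "[Start Time] [End Time] [Action Class]" format.
# maxsplit=2 keeps multi-word class names (e.g. "Long Jump") intact.

def parse_segments(text):
    segments = []
    for line in text.strip().splitlines():
        start, end, label = line.split(maxsplit=2)
        segments.append((float(start), float(end), label))
    return segments
```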
Planned additions:
- Sample scripts for generating the data format from a few state-of-the-art PyTorch codebases
- Sample scripts for generating the GT data format for any dataset
- A more powerful UI built as a Flask-based Python application
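To illustrate what a GT conversion script like those mentioned above might look like, here is a hedged sketch that regroups THUMOS14-style per-class annotation rows ("video_name start end", one file per class) into per-video TALVT rows. The input layout is an assumption about the dataset's plain-text annotations, and the function name is ours.

```python
# Hypothetical GT conversion: assumes THUMOS14-style rows of the form
# "video_name start end", grouped by action class. Not an official script.
from collections import defaultdict

def thumos_to_talvt(rows_by_class):
    """Map {class_name: ["video start end", ...]} to {video: [TALVT rows]}."""
    per_video = defaultdict(list)
    for action, rows in rows_by_class.items():
        for row in rows:
            video, start, end = row.split()
            per_video[video].append(f"{float(start):.2f} {float(end):.2f} {action}")
    return dict(per_video)
```

Each per-video list can then be written to its own .txt file for the "GT Annotation File" field.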