Please note: We provide the pre-trained model here: https://github.com/gyglim/video2gif_code
================================================================================
Video2GIF dataset, version 0.9
The Video2GIF dataset contains over 100,000 pairs of GIFs and their source
videos. The GIFs were collected from two popular GIF websites (makeagif.com,
gifsoup.com) and the corresponding source videos were collected from YouTube in
Summer 2015. We provide IDs and URLs of the GIFs and the videos, along with
temporal alignment of GIF segments to their source videos. The dataset is
intended for evaluating GIF creation and video highlight techniques.
In addition to the 100K GIF-video pairs, the dataset contains 357 pairs of GIFs
and their source videos as the test set. The 357 videos come with a Creative
Commons CC-BY license, which allows us to redistribute the material with
appropriate credit. We provide this test set to make the results reproducible
even when some of the videos become unavailable.
If you use the dataset, please cite the following paper:
Michael Gygli, Yale Song, Liangliang Cao
"Video2GIF: Automatic Generation of Animated GIFs from Video,"
IEEE CVPR 2016
If you have any questions regarding the dataset, please contact:
Michael Gygli <[email protected]>
License: This dataset is licensed under BSD, see LICENSE file
================================================================================
Full description:
This repo has the following content:
(a) "metadata.txt"
This file contains information about GIFs, their corresponding source video
ID, and temporal alignment of the GIF to the video. It contains the following
fields:
youtube_id, is_creative_commons, gif_id, gif_url, gif_start_frame,
gif_end_frame, gif_start_sec, gif_end_sec, gif_title, gif_views, gif_age,
video_title, video_views, video_publish_date, video_likes, video_duration,
video_frame_count, video_category, video_rating, video_description,
video_retrieved_date
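A minimal sketch of reading one metadata.txt record in Python, mapping values onto the field names listed above. The delimiter is an assumption (tab-separated is assumed here) and the sample values are fabricated for illustration; check the actual file before relying on this.

```python
# Field names as listed in the README (21 fields per record).
METADATA_FIELDS = [
    "youtube_id", "is_creative_commons", "gif_id", "gif_url",
    "gif_start_frame", "gif_end_frame", "gif_start_sec", "gif_end_sec",
    "gif_title", "gif_views", "gif_age", "video_title", "video_views",
    "video_publish_date", "video_likes", "video_duration",
    "video_frame_count", "video_category", "video_rating",
    "video_description", "video_retrieved_date",
]

def parse_metadata_line(line):
    """Split one record (tab-separated is assumed) and map values to field names."""
    values = line.rstrip("\n").split("\t")
    return dict(zip(METADATA_FIELDS, values))

# Fabricated record, only the first few fields populated:
sample = "dQw4w9WgXcQ\t0\t12345\thttp://example.com/a.gif\t100\t250"
record = parse_metadata_line(sample)
print(record["youtube_id"])       # dQw4w9WgXcQ
print(record["gif_start_frame"])  # 100
```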
(b) "video_tags.txt"
This file contains a list of video tags for each video.
Format: YoutubeID;\t tag1; tag2; ...; tagN
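The format above can be parsed with a short Python helper; the sample line is fabricated for illustration.

```python
def parse_tags_line(line):
    """Return (youtube_id, tags) for one 'YoutubeID;\\t tag1; tag2; ...' record."""
    parts = [p.strip() for p in line.rstrip("\n").split(";")]
    youtube_id = parts[0]
    tags = [t for t in parts[1:] if t]  # drop empty fragments
    return youtube_id, tags

yid, tags = parse_tags_line("dQw4w9WgXcQ;\tmusic; pop; 1980s")
print(yid, tags)  # dQw4w9WgXcQ ['music', 'pop', '1980s']
```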
(c) "testset.txt"
This file contains YouTube IDs of the videos used for evaluation. We
provide these videos on:
https://data.vision.ee.ethz.ch/cvl/video2gif/YouTubeID.mp4
(d) "./v2g_evaluation/"
This directory contains evaluation code for nMSD and Average Precision used
in our CVPR 2016 paper. It requires Python packages 'numpy', 'scikit-learn'
and 'pandas' which can be installed using pip.
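For orientation, here is a minimal pure-Python sketch of Average Precision over ranked segment scores, following its standard definition (mean of precision@k taken at each positive). The actual v2g_evaluation API and its nMSD metric may differ; see example.py for real usage.

```python
def average_precision(labels, scores):
    """AP: average precision@k over the positions of the positive labels,
    with items ranked by descending score."""
    ranked = sorted(zip(scores, labels), key=lambda x: -x[0])
    hits, precisions = 0, []
    for k, (_, label) in enumerate(ranked, start=1):
        if label:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / max(hits, 1)

ap = average_precision([1, 0, 1], [0.9, 0.8, 0.7])
print(ap)  # (1/1 + 2/3) / 2 = 0.8333...
```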
(e) "example.py"
This script shows how to evaluate the predictions of a model.
(f) "setup.py"
Script to install the evaluation package. Run `python setup.py install --user`
Last edit: August 10, 2017