<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script>
<style type="text/css">
body {
font-family: "Titillium Web", "HelveticaNeue-Light", "Helvetica Neue Light", "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif;
font-weight: 300;
font-size: 17px;
margin-left: auto;
margin-right: auto;
width: 980px;
}
h1 {
font-weight:300;
line-height: 1.15em;
}
h2 {
font-size: 1.75em;
}
a:link,a:visited {
color: #1367a7;
text-decoration: none;
}
a:hover {
color: #208799;
}
h1, h2, h3 {
text-align: center;
}
h1 {
font-size: 40px;
font-weight: 500;
}
h2 {
font-weight: 400;
margin: 16px 0px 4px 0px;
}
.paper-title {
padding: 16px 0px 16px 0px;
}
section {
margin: 32px 0px 32px 0px;
text-align: justify;
clear: both;
}
.col-6 {
width: 16.6%;
float: left;
}
.col-5 {
width: 20%;
float: left;
}
.col-4 {
width: 25%;
float: left;
}
.col-3 {
width: 33%;
float: left;
}
.col-2 {
width: 50%;
float: left;
}
.row, .author-row, .affil-row {
overflow: auto;
}
.author-row, .affil-row {
font-size: 20px;
}
.row {
margin: 16px 0px 16px 0px;
}
.authors {
font-size: 18px;
}
.affil-row {
margin-top: 16px;
}
.teaser {
max-width: 100%;
}
.text-center {
text-align: center;
}
.screenshot {
width: 256px;
border: 1px solid #ddd;
}
.screenshot-el {
margin-bottom: 16px;
}
hr {
height: 1px;
border: 0;
border-top: 1px solid #ddd;
margin: 0;
}
.material-icons {
vertical-align: -6px;
}
p {
line-height: 1.25em;
}
.caption {
font-size: 16px;
/*font-style: italic;*/
color: #666;
text-align: left;
margin-top: 8px;
margin-bottom: 8px;
}
video {
display: block;
margin: auto;
}
figure {
display: block;
margin: auto;
margin-top: 10px;
margin-bottom: 10px;
}
#bibtex pre {
font-size: 14px;
background-color: #eee;
padding: 16px;
}
.blue {
color: #2c82c9;
font-weight: bold;
}
.orange {
color: #d35400;
font-weight: bold;
}
.flex-row {
display: flex;
flex-flow: row wrap;
justify-content: space-around;
padding: 0;
margin: 0;
list-style: none;
}
.paper-btn {
position: relative;
text-align: center;
display: inline-block;
margin: 8px;
padding: 8px 8px;
border-width: 0;
outline: none;
border-radius: 2px;
background-color: #1367a7;
color: #ecf0f1 !important;
font-size: 20px;
width: 100px;
font-weight: 600;
}
.supp-btn {
position: relative;
text-align: center;
display: inline-block;
margin: 8px;
padding: 8px 8px;
border-width: 0;
outline: none;
border-radius: 2px;
background-color: #1367a7;
color: #ecf0f1 !important;
font-size: 20px;
width: 150px;
font-weight: 600;
}
.paper-btn-parent {
display: flex;
justify-content: center;
margin: 16px 0px;
}
.paper-btn:hover {
opacity: 0.85;
}
.container {
margin-left: auto;
margin-right: auto;
padding-left: 16px;
padding-right: 16px;
}
.venue {
color: #1367a7;
}
.topnav {
overflow: hidden;
background-color: #EEEEEE;
}
.topnav a {
float: left;
color: black;
text-align: center;
padding: 14px 16px;
text-decoration: none;
font-size: 16px;
}
</style>
<div class="topnav" id="myTopnav">
</div>
<!-- End : Google Analytics Code -->
<script type="text/javascript" src="../js/hidebib.js"></script>
<link href='https://fonts.googleapis.com/css?family=Titillium+Web:400,600,400italic,600italic,300,300italic' rel='stylesheet' type='text/css'>
<head>
<title>MVOC: a training-free multiple video object composition method with diffusion models</title>
<meta property="og:description" content="MVOC: a training-free multiple video object composition method with diffusion models"/>
<link href="https://fonts.googleapis.com/css2?family=Material+Icons" rel="stylesheet">
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-6HHDEXF452"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-6HHDEXF452');
</script>
</head>
<body>
<div class="topnav" id="myTopnav"></div>
<div class="container">
<div class="paper-title">
<h1>MVOC: a training-free multiple video object composition method with diffusion models</h1>
</div>
<div id="authors">
<div class="author-row">
<div class="col-5 text-center"><a href="https://scholar.google.com.hk/citations?user=tfJVFEcAAAAJ&hl=zh-CN">Wei Wang*</a><sup>1</sup></div>
<div class="col-5 text-center"><a href="https://scholar.google.com.hk/citations?user=HvWZhM4AAAAJ&hl=zh-CN">Yaosen Chen*#</a><sup>1,2</sup></div>
<div class="col-5 text-center">Yuegen Liu</a><sup>1</sup></div>
<div class="col-5 text-center">Qi Yuan</a><sup>1</sup></div>
<div class="col-5 text-center">Shubin Yang</a><sup>1,2</sup></div>
<div class="col-5 text-center">Yanru Zhang</a><sup>2</sup></div>
</div>
<div class="affil-row">
<div class="col-8 text-center"><sup>1</sup>Sobey Media Intelligence Laboratory</a></div>
<div class="col-8 text-center"><sup>2</sup>University of Electronic Science and Technology of China</a></div>
</div>
<p class="caption">
*Equal Contribution. #Corresponding Author.
</p>
<div style="clear: both">
<div class="paper-btn-parent">
<a class="supp-btn" href="https://arxiv.org/abs/2406.15829">
<span class="material-icons"> description </span>
Paper
</a>
<a class="supp-btn" href="https://github.com/SobeyMIL/MVOC">
<span class="material-icons"> description </span>
Code
</a>
</div></div>
</div>
<section id="paper">
<h2>Paper</h2>
<hr>
<div class="flex-row">
<div style="box-sizing: border-box; padding: 16px; margin: auto;">
<a href="assets/upstnerf.pdf"><img class="screenshot" src="paper.png"></a>
</div>
<div style="width: 50%">
<p><b>MVOC: a training-free multiple video object composition method with diffusion models</b></p>
<p>Wei Wang, Yaosen Chen, Yuegen Liu, Qi Yuan, Shubin Yang, Yanru Zhang</p>
<div><span class="material-icons"> description </span><a href="https://arxiv.org/abs/2406.15829"> arXiv version</a></div>
</div>
</div>
</section>
<section id="teaser" class="flex-row">
<a href="mvoc_intro.png" style="text-align: center;">
<img width="100%" src="mvoc_intro.png" >
</a>
<p class="caption">
Given multiple video objects (e.g., a background, Object 1, and Object 2), our method renders the interaction effects between the objects while maintaining each object's motion and identity consistency in the composited video.
</p>
</section>
<section id="abstract"/>
<h2>Abstract</h2>
<hr>
<p>Video composition is a core task in video editing. Although diffusion-based image composition has been highly successful, extending that success to video object composition is not straightforward: the composited video must not only exhibit the appropriate interaction effects between objects but also preserve each object's motion and identity, which is necessary for a physically harmonious result. To address this challenge, we propose a Multiple Video Object Composition (MVOC) method based on diffusion models. Specifically, we first perform DDIM inversion on each video object to obtain its noise features. Second, we combine and edit the objects with image editing methods to obtain the first frame of the composited video. Finally, we use an image-to-video generation model to composite the video with feature and attention injections in the Video Object Dependence Module, a training-free conditional guidance operation for video generation that coordinates the features and attention maps of the various, possibly non-independent objects in the composited video. The resulting generative model not only constrains the objects in the generated video to be consistent with the original objects' motions and identities, but also introduces interaction effects between objects. Extensive experiments demonstrate that the proposed method outperforms existing state-of-the-art approaches.
</p>
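<p>The pipeline below is a minimal pseudocode sketch of the three steps described above, not the released implementation; helper names such as <code>ddim_invert</code>, <code>edit_image</code>, <code>paste</code>, and <code>i2v_generate</code> are hypothetical stand-ins for a DDIM inversion routine, an image editing model, a compositing helper, and an image-to-video diffusion model.</p>
<pre><code>
# Hedged sketch of the MVOC pipeline (illustrative only; every helper passed
# in below is a hypothetical stand-in, not the authors' released code).

def compose_videos(background, objects, masks,
                   ddim_invert, paste, edit_image, i2v_generate):
    # Step 1: DDIM inversion of each video object yields its noise features,
    # which later serve as training-free guidance signals.
    noise_feats = [ddim_invert(video) for video in (background, *objects)]

    # Step 2: paste the first frame of every object onto the background and
    # harmonize the result with an image editing model; this edited image
    # becomes the first frame of the composited video.
    first_frame = paste(background[0], [obj[0] for obj in objects], masks)
    first_frame = edit_image(first_frame)

    # Step 3: image-to-video generation conditioned on the first frame, with
    # feature and attention injection from the inverted noise features (the
    # Video Object Dependence Module) applied at every denoising step.
    return i2v_generate(first_frame, guidance=noise_feats, masks=masks)
</code></pre>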
</section>
<section id="method"/>
<h2>Approach</h2>
<hr>
<section id="teaser" class="flex-row">
<a href="mvoc_framework.png" style="text-align: center;">
<img width="90%" src="mvoc_framework.png" >
</a>
<p class="caption">
<strong>Multiple video object composition framework.</strong> Our method is a two-stage approach: video object preprocessing and generative video editing. In the preprocessing stage, we perform DDIM inversion, object extraction and pasting, mask extraction, and first-frame editing. In the editing stage, we edit the first frame with an image editing model, then apply the Video Object Dependence Module for conditionally guided video generation (see the sketch below).
</p>
</section>
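<p>The sketch below illustrates, in hedged pseudocode, how the training-free guidance of the editing stage might look inside a denoising loop; <code>unet</code>, <code>scheduler</code>, the <code>inject</code> hook, and the per-object feature/attention records are assumed interfaces, not the paper's actual code.</p>
<pre><code>
# Hedged sketch of feature and attention injection during sampling, in the
# spirit of the Video Object Dependence Module. All interfaces are assumptions.

def guided_denoise(latents, unet, scheduler, object_records, masks):
    for t in scheduler.timesteps:
        # Inject the features and attention maps recorded during each object's
        # DDIM inversion into the region given by that object's mask, steering
        # the denoiser toward the original motion and identity.
        for record, mask in zip(object_records, masks):
            unet.inject(features=record[t].features,
                        attention=record[t].attention,
                        region=mask)
        noise_pred = unet(latents, t)
        latents = scheduler.step(noise_pred, t, latents)
    return latents
</code></pre>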
</section>
<section id="method"/>
<h2>Comparison</h2>
<hr>
<section id="BoatSurf" class="flex-row">
<a href="BoatSurf.gif" style="text-align: center;">
<img width="90%" src="BoatSurf.gif" >
</a>
<p class="caption">
<strong>Comparison on BoatSurf.</strong>
</p>
</section>
<section id="BirdSeal" class="flex-row">
<a href="BirdSeal.gif" style="text-align: center;">
<img width="90%" src="BirdSeal.gif" >
</a>
<p class="caption">
<strong>Comparison on BirdSeal.</strong>
</p>
</section>
<section id="MonkeySwan" class="flex-row">
<a href="MonkeySwan.gif" style="text-align: center;">
<img width="90%" src="MonkeySwan.gif" >
</a>
<p class="caption">
<strong>Comparison on MonkeySwan.</strong>
</p>
</section>
<section id="DuckCrane" class="flex-row">
<a href="DuckCrane.gif" style="text-align: center;">
<img width="90%" src="DuckCrane.gif" >
</a>
<p class="caption">
<strong>Comparison on DuckCrane.</strong>
</p>
</section>
<section id="CraneSeal" class="flex-row">
<a href="CraneSeal.gif" style="text-align: center;">
<img width="90%" src="CraneSeal.gif" >
</a>
<p class="caption">
<strong>Comparison on CraneSeal.</strong>
</p>
</section>
<section id="RiderDeer" class="flex-row">
<a href="RiderDeer.gif" style="text-align: center;">
<img width="90%" src="RiderDeer.gif" >
</a>
<p class="caption">
<strong>Comparison on RiderDeer.</strong>
</p>
</section>
<section id="RobotCat" class="flex-row">
<a href="RobotCat.gif" style="text-align: center;">
<img width="90%" src="RobotCat.gif" >
</a>
<p class="caption">
<strong>Comparison on RobotCat.</strong>
</p>
</section>
</section>
<hr>
<section id="Consistency">
<h2> Quantitative Comparison</h2>
<section id="teaser" class="flex-row">
<a href="QuantitativeComparison.png" style="text-align: center;">
<img width="60%" src="QuantitativeComparison.png" >
</a>
<p class="caption">
<strong>Comparisons of short- and long-range consistency are shown in Table 1 and Table 2, respectively.</strong> CutPaste, Poisson, and Harmonizer are non-generative methods; they inherently have better temporal consistency, but they cannot produce interaction effects and their results are not harmonious. The other methods, including ours, are generative, and their composited videos are more harmonious. Nonetheless, our average temporal-consistency metrics are still superior to those of all compared methods.
</p>
</section>
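<p>The metric code is not published on this page; as one plausible illustration, the sketch below scores temporal consistency as the mean cosine similarity between CLIP embeddings of frames a fixed stride apart (stride 1 for short-range, larger strides for long-range), using the Hugging Face <code>transformers</code> CLIP API. It is an assumed stand-in, not the authors' evaluation script.</p>
<pre><code>
# Illustrative temporal-consistency score, not the authors' evaluation script.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def frame_consistency(frames, stride=1):
    """Mean cosine similarity between CLIP embeddings of frames `stride` apart.

    `frames` is a list of PIL images; stride=1 probes short-range consistency,
    while larger strides probe long-range consistency.
    """
    inputs = processor(images=frames, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_image_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)       # unit-normalize
    sims = (emb[:-stride] * emb[stride:]).sum(dim=-1)
    return sims.mean().item()
</code></pre>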
</section>
<br />
<section id="bibtex">
<h2>Citation</h2>
<hr>
<pre><code>
@article{wang2024mvoc,
  title   = {MVOC: a training-free multiple video object composition method with diffusion models},
  author  = {Wei Wang and Yaosen Chen and Yuegen Liu and Qi Yuan and Shubin Yang and Yanru Zhang},
  journal = {arXiv preprint arXiv:2406.15829},
  year    = {2024}
}
</code></pre>
</section>
</div>
</body>
</html>