<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="UTF-8">
<!-- Begin Jekyll SEO tag v2.5.0 -->
<title>A Bayesian Odyssey in Uncertainty: from Theoretical
Foundations to Real-World Applications</title>
<meta name="generator" content="Jekyll v3.8.5" />
<meta property="og:title" content="Uncertainty Quantification Tutorial" />
<meta property="og:locale" content="en_US" />
<meta name="description" content="ECCV 2024 Tutorial" />
<meta property="og:description" content="ECCV 2024 Tutorial" />
<link rel="canonical" href="https://ensta-u2is-ai.github.io/uqt/" />
<meta property="og:url" content="https://ensta-u2is-ai.github.io/uqt/" />
<meta property="og:site_name" content="Uncertainty Quantification Tutorial" />
<script type="application/ld+json">
{"description":"ECCV 2024 Tutorial","@type":"WebSite","url":"https://ensta-u2is-ai.github.io/uqt/","name":"Uncertainty Quantification Tutorial","headline":"A Bayesian Odyssey in Uncertainty: from Theoretical
Foundations to Real-World Applications"}</script>
<!-- End Jekyll SEO tag -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="theme-color" content="#157878">
<link rel="stylesheet" href="https://ensta-u2is-ai.github.io/uqt/assets/css/style.css">
<link rel="stylesheet" href="https://ensta-u2is-ai.github.io/uqt/assets/mystyle.css">
</head>
<body>
<section class="page-header">
<h1 class="project-name">A Bayesian Odyssey in Uncertainty: from Theoretical
Foundations to Real-World Applications</h1>
<h2 class="project-tagline">ECCV 2024 - Room: <strong>Suite 7</strong><br><strong>30 Sept</strong> - 8:30 AM to
12:30 PM<br>A recording of the tutorial will be
<strong>available online</strong>.
</h2>
</section>
<section class="main-content" style="max-width:100%">
<div class="container">
<h2>Organizers</h2>
<div>
<div class="instructor">
<a href="http://u2is.ensta-paris.fr/members/franchi/index.php?lang=fr" target="_blank">
<div class="instructorphoto"><img src="assets/franchi.jpg"> </div>
<div>Gianni Franchi<br><small>ENSTA Paris</small></div>
</a>
</div>
<div class="instructor">
<a href="https://github.com/alafage" target="_blank">
<div class="instructorphoto"><img src="assets/alafage.jpg"> </div>
<div>Adrien Lafage<br><small>ENSTA Paris & Renault</small></div>
</a>
</div>
<div class="instructor">
<a href="https://github.com/o-laurent" target="_blank">
<div class="instructorphoto"><img src="https://avatars.githubusercontent.com/u/62881275?v=4"> </div>
<div>Olivier Laurent<br><small>ENSTA Paris & Paris-Saclay</small></div>
</a>
</div>
<div class="instructor">
<a href="https://aleximmer.github.io/" target="_blank">
<div class="instructorphoto"><img src="https://aleximmer.github.io/assets/images/profile.jpg"> </div>
<div>Alexander Immer<br><small>ETH Zürich</small></div>
</a>
</div>
<div class="instructor">
<a href="https://izmailovpavel.github.io/" target="_blank">
<div class="instructorphoto"><img src="https://izmailovpavel.github.io/imgs/pavel-alaska.jpeg"> </div>
<div>Pavel Izmailov<br><small>Anthropic & NYU</small></div>
</a>
</div>
<div class="instructor">
<a href="https://abursuc.github.io/" target="_blank">
<div class="instructorphoto"><img src="https://abursuc.github.io/img/abursuc.jpg">
</div>
<div>Andrei Bursuc<br><small>valeo.ai</small></div>
</a>
</div>
<br>
<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Overview</h2>
<p>
This tutorial aims to help researchers understand and handle uncertainty in their models,
making them more reliable using Bayesian methods. We will start by discussing different Bayesian
approaches and then focus on Bayesian Neural Networks and how to approximate them efficiently for
computer vision tasks. We will also use real-world examples and practical methods to show how to
put these ideas into practice.
</p>
</div>
<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Schedule</h2>
<ul>
<li>
<b>8:45 - 9:15</b>: Opening - Andrei
</li>
<li>
<b>9:15 - 10:05</b>: Uncertainty quantification: from maximum a posteriori to BNNs - Pavel
</li>
<li>
<b>10:05 - 10:30</b>: Computationally-efficient BNNs for computer vision - Gianni
</li>
<li>
<b>10:35 - 11:00</b>: ☕ Coffee ☕
</li>
<li>
<b>11:00 - 11:50</b>: Convert your DNN into a BNN - Alexander
</li>
<li>
<b>11:50 - 12:20</b>: Quality of estimated uncertainty and practical examples - Adrien & Gianni
</li>
<li>
<b>12:20 - 12:40</b>: Closing remarks + Q&A - Andrei, Alex, Pavel & Gianni
</li>
</ul>
<br>
<div style="text-align:center">
<a href="https://drive.google.com/file/d/1304WSYTKX5Q5vpNd5ykoB3yuh1sokLsv">
<p style="text-align:center;font-size:36px;color:#088F8F"> 💻 Link to the practical session 💻 </p>
</a>
</div>
<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Outline</h2>
<h3 style="text-align: left">Introduction: Why & where is UQ helpful?</h3>
<p>
Initial exploration into the critical role of uncertainty quantification (UQ) within the realm
of computer vision (CV): participants will gain an understanding of why it’s essential to consider
uncertainty in CV, especially concerning decision-making in complex
environments. We will introduce real-world scenarios where uncertainty can profoundly
impact model performance and safety, setting the stage for deeper exploration throughout the tutorial.
</p>
<h3 style="text-align: left">From maximum a posteriori to BNNs.</h3>
<p>
In this part, we will journey through the evolution of UQ techniques, starting
from classic approaches such as maximum a posteriori estimation to the more elaborate Bayesian Neural
Networks. The participants will grasp the conceptual foundations
of UQ, laying the groundwork for the subsequent discussions of Bayesian methods.
</p>
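<p>As a concrete illustration of this starting point, here is a minimal sketch (our own toy helper, not part of the tutorial material) of the MAP estimate of a Gaussian mean under a Gaussian prior, which reduces to a precision-weighted average of the sample mean and the prior mean:</p>

```python
def map_gaussian_mean(x, sigma=1.0, prior_mu=0.0, prior_tau=1.0):
    """MAP estimate of a Gaussian mean under a Gaussian prior.

    Maximizing log p(mu | x) = log p(x | mu) + log p(mu) + const
    yields a precision-weighted average of the sample mean and the
    prior mean (the closed form computed below).
    """
    n = len(x)
    lik_prec = n / sigma ** 2          # precision contributed by the likelihood
    prior_prec = 1.0 / prior_tau ** 2  # precision of the prior
    xbar = sum(x) / n
    return (lik_prec * xbar + prior_prec * prior_mu) / (lik_prec + prior_prec)

# Three observations with mean 2.0 are shrunk toward the prior mean 0.0
print(map_gaussian_mean([1.0, 2.0, 3.0]))  # 1.5
```

<p>With a very diffuse prior (large <code>prior_tau</code>), the MAP estimate approaches the maximum likelihood estimate, which is the regularization view of MAP that the session builds on.</p>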
<h3 style="text-align: left">Strategies for BNN posterior inference.</h3>
<p>
This core section dives into the process of estimating the posterior distribution of BNNs.
Participants
will gain insights into the computational complexities involved in modeling uncertainty
through a comprehensive overview of techniques such as Variational Inference (VI),
Hamiltonian Monte Carlo (HMC), and Langevin Dynamics. Moreover, we will explore
the characteristics and visual representation of posterior distributions, providing a better
understanding of Bayesian inference.
</p>
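<p>To make the sampling-based strategies more tangible, here is a minimal, hypothetical sketch of (full-gradient) Langevin dynamics on a toy one-dimensional posterior; the helper names are our own and the example is illustrative only:</p>

```python
import math
import random

def langevin_samples(grad_log_post, init, step=0.01, n_steps=5000, burn_in=1000, seed=0):
    """Langevin dynamics: a gradient ascent step on the log posterior
    plus Gaussian noise of matching scale,
        theta <- theta + (step / 2) * grad log p(theta | D) + N(0, step).
    After burn-in, the iterates are approximate posterior samples."""
    rng = random.Random(seed)
    theta = init
    samples = []
    for t in range(n_steps):
        theta += 0.5 * step * grad_log_post(theta) + math.sqrt(step) * rng.gauss(0.0, 1.0)
        if t >= burn_in:
            samples.append(theta)
    return samples

# Toy posterior N(2, 0.5^2): grad log p(theta) = -(theta - 2) / 0.25
samples = langevin_samples(lambda th: -(th - 2.0) / 0.25, init=0.0)
print(sum(samples) / len(samples))  # close to the posterior mean 2.0
```

<p>SGLD replaces the full gradient above with a minibatch estimate, which is what makes this family practical for neural networks.</p>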
<h3 style="text-align: left">Computationally-efficient BNNs for CV.</h3>
<p>
Here, we will present recent techniques to improve the computational efficiency of BNNs for computer vision tasks.
We will present different ways of obtaining BNNs from intermediate checkpoints,
weight trajectories during a training run, different types of variational subnetworks,
etc., along with their main strengths and limitations.
</p>
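<p>One of the cheap approximations mentioned above, ensembling checkpoints collected along a single training run, can be sketched as follows (hypothetical helper names; any model whose predictions are probability vectors would do):</p>

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def ensemble_predict(checkpoints, predict_fn, x):
    """Approximate the posterior predictive by averaging the predictive
    distributions of several models saved along training
    (snapshot / checkpoint ensembling)."""
    probs = [predict_fn(w, x) for w in checkpoints]
    k = len(probs)
    return [sum(p[i] for p in probs) / k for i in range(len(probs[0]))]

# Toy "checkpoints": each one is just a softmax temperature for fixed logits
def toy_predict(temperature, logits):
    return softmax([l / temperature for l in logits])

avg = ensemble_predict([1.0, 2.0, 4.0], toy_predict, [2.0, 0.0])
print(avg)  # the average is less confident than the sharpest member
</```

<p>Averaging probabilities (rather than weights) is what gives such ensembles their calibration benefit: disagreement between members shows up directly as predictive uncertainty.</p>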
<h3 style="text-align: left">Convert your DNN into a BNN: post-hoc BNN inference.</h3>
<p>
This segment covers post-hoc inference techniques, with an emphasis on the Laplace approximation.
Participants will learn how the Laplace approximation serves as a computationally efficient method for
approximating the posterior distribution of Bayesian Neural Networks.
</p>
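<p>The core idea can be sketched in one dimension (a hypothetical toy, not the tutorial's actual material): fit a Gaussian whose variance is the inverse curvature of the negative log posterior at the MAP estimate.</p>

```python
def laplace_1d(neg_log_post, theta_map, eps=1e-4):
    """Laplace approximation in one dimension: approximate the posterior by
    N(theta_map, 1 / h), where h is the second derivative (curvature) of the
    negative log posterior at the MAP, estimated here by finite differences."""
    f = neg_log_post
    h = (f(theta_map + eps) - 2.0 * f(theta_map) + f(theta_map - eps)) / eps ** 2
    return theta_map, 1.0 / h  # mean and variance of the Gaussian fit

# Toy check: for an exactly Gaussian posterior N(1, 0.25),
# the Laplace approximation recovers mean 1.0 and variance 0.25
mean, var = laplace_1d(lambda t: (t - 1.0) ** 2 / (2.0 * 0.25), 1.0)
print(mean, var)
```

<p>For a neural network the curvature is a Hessian over millions of weights, which is why practical methods rely on diagonal, Kronecker-factored, or last-layer approximations of it.</p>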
<h3 style="text-align: left">Quality of estimated uncertainty and practical examples.</h3>
<p>
In the final session, participants will learn how to evaluate the quality of UQ in practical
settings. We will develop multiple approaches to assess the reliability and calibration
of uncertainty estimates, equipping participants with the tools to gauge the robustness
of their models. Additionally, we will dive into real-world examples and applications,
showcasing how UQ can enhance the reliability
and performance of computer vision systems in diverse scenarios. Through interactive
discussions and case studies, participants will gain practical insights into deploying
uncertainty-aware models in real-world applications.
</p>
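<p>Among the standard evaluation tools for calibration, the Expected Calibration Error (ECE) is a common choice; a minimal sketch (our own helper, not a reference implementation) looks like this:</p>

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap between
    each bin's accuracy and its mean confidence, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # confidence 1.0 -> last bin
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece

# Four predictions: one overconfident mistake inflates the error
print(expected_calibration_error([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1]))  # ≈ 0.35
```

<p>A perfectly calibrated model (accuracy matching confidence in every bin) scores zero; an overconfident one scores close to its average confidence on its mistakes.</p>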
<h3 style="text-align: left">Uncertainty Quantification Framework.</h3>
<p>
This tutorial will also briefly introduce the <a
href="https://github.com/ensta-u2is-ai/torch-uncertainty">TorchUncertainty
library</a>, an uncertainty-aware open-source framework for training models in PyTorch.
</p>
</div>
<a href="https://torch-uncertainty.github.io/" target="_blank">
<center>
<div><img src="assets/logoTU_full.png" width="40%" hspace="2%"> </div>
</center>
</a>
<br>
<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Relation to prior tutorials and short courses</h2>
<p> This tutorial is affiliated with the <a href="https://uncv2023.github.io/">UNCV Workshop</a>,
which had its inaugural edition at ECCV 2022, a subsequent one at ICCV, and is back at ECCV this year.
In contrast to the workshop, the tutorial puts its primary emphasis on the theoretical facets. </p>
<p> UQ has received some attention
in recent times, as evidenced by its inclusion in
the tutorial <a href="https://abursuc.github.io/many-faces-reliability/">'Many Faces of Reliability of
Deep
Learning for Real-World Deployment'</a>. While that tutorial explored various applications associated with
uncertainty, it did not place a specific emphasis on probabilistic models and Bayesian Neural Networks. Our tutorial aims
to provide a more in-depth exploration of uncertainty theory, accompanied by the introduction of practical
applications, including the presentation of the library, <a
href="https://github.com/ensta-u2is-ai/torch-uncertainty">TorchUncertainty</a>.</p>
</div>
<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Selected References</h2>
<ol>
<li><b>Immer, A.</b>, Palumbo, E., Marx, A., & Vogt, J. E. <a
href="https://proceedings.neurips.cc/paper_files/paper/2023/file/a901d5540789a086ee0881a82211b63d-Paper-Conference.pdf">
Effective Bayesian Heteroscedastic Regression with Deep Neural Networks</a>. In NeurIPS, 2023.</li>
<li><b>Franchi, G., Bursuc, A.,</b> Aldea, E., Dubuisson, S.,
& Bloch, I. <a href="https://arxiv.org/pdf/2012.02818">Encoding the latent posterior of
Bayesian Neural Networks for uncertainty quantification</a>. IEEE TPAMI, 2023.</li>
<li><b>Franchi, G.</b>, Yu, X., <b>Bursuc, A.</b>, Aldea, E., Dubuisson,
S., & Filliat, D. <a href="https://arxiv.org/pdf/2207.10130">Latent Discriminant
deterministic Uncertainty</a>. In ECCV 2022.</li>
<li><b>Laurent, O.</b>, <b>Lafage, A.</b>, Tartaglione, E., Daniel, G.,
Martinez, J. M., <b>Bursuc, A.</b>, & <b>Franchi, G.</b>
<a href="https://arxiv.org/pdf/2210.09184">Packed-Ensembles for Efficient Uncertainty Estimation</a>. In
ICLR 2023.
</li>
<li><b>Izmailov, P.</b>, Vikram, S., Hoffman, M. D., & Wilson, A. G. <a
href="https://arxiv.org/pdf/2104.14421">What are Bayesian neural network
posteriors really like?</a> In ICML, 2021.</li>
<li><b>Izmailov, P.</b>, Maddox, W. J., Kirichenko, P., Garipov, T., Vetrov, D., & Wilson, A. G. <a
href="https://arxiv.org/pdf/1907.07504">Subspace inference for Bayesian deep learning</a>. In UAI,
2020.
</li>
<li><b>Franchi, G.</b>, <b>Bursuc, A.</b>, Aldea, E., Dubuisson, S., &
Bloch, I. <a href="https://arxiv.org/pdf/1912.11316">TRADI: Tracking deep neural
network weight distributions</a>. In ECCV 2020.</li>
<li>Wilson, A. G., & <b>Izmailov, P</b>. <a href="https://arxiv.org/pdf/2002.08791">Bayesian deep
learning and a probabilistic perspective of generalization</a>. In NeurIPS, 2020.</li>
<li>Hendrycks, D., Dietterich, T. <a href="https://arxiv.org/pdf/1903.12261">Benchmarking Neural Network
Robustness to Common Corruptions and
Perturbations</a>. In ICLR 2019.</li>
<li><b> Izmailov, P.</b>, Podoprikhin, D., Garipov, T., Vetrov, D., & Wilson, A. G. <a
href="https://arxiv.org/pdf/1803.05407">Averaging weights
leads to wider optima and better generalization</a>. In UAI, 2018. </li>
</ol>
You will find more references in the <a
href="https://github.com/ensta-u2is-ai/awesome-uncertainty-deeplearning">Awesome Uncertainty in Deep
Learning</a> repository.
</div>
<br>
<div class="containertext">
<h3 style="text-align: center">Andrei Bursuc is supported by ELSA:</h3>
<center>
<a href="https://elsa-ai.eu/" target="_blank"><img src="assets/elsa_logo.png" width="10%" hspace="2%" />
</center>
</a>
</div>
</div>
</div>
</section>
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"
integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo"
crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"
integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1"
crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"
integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM"
crossorigin="anonymous"></script>
</body>
</html>