
Commit

Add CaloChallenge, run plot and recent
claudius-krause committed Nov 11, 2024
1 parent 026def8 commit 17cf040
Showing 7 changed files with 121 additions and 24 deletions.
12 changes: 12 additions & 0 deletions HEPML.bib
@@ -168,6 +168,18 @@ @article{Algren:2024bqw
year = "2024"
}

% October 29, 2024
@article{Krause:2024avx,
author = "Krause, Claudius and others",
title = "{CaloChallenge 2022: A Community Challenge for Fast Calorimeter Simulation}",
eprint = "2410.21611",
archivePrefix = "arXiv",
primaryClass = "cs.LG",
reportNumber = "HEPHY-ML-24-05, FERMILAB-PUB-24-0728-CMS, TTK-24-43",
month = "10",
year = "2024"
}

% October 29, 2024
@article{Das:2024fwo,
author = "Das, Ranit and Shih, David",
12 changes: 6 additions & 6 deletions HEPML.tex
@@ -43,9 +43,9 @@
\\\textit{Below are links to many (static) general and specialized reviews. The third bullet contains links to classic papers that applied shallow learning methods many decades before the deep learning revolution.}
\begin{itemize}
\item Modern reviews~\cite{Shanahan:2022ifi,Boehnlein:2021eym,Karagiorgi:2021ngt,Schwartz:2021ftp,Bourilkov:2019yoi,Carleo:2019ptp,Radovic:2018dip,Albertsson:2018maf,Guest:2018yhq,Larkoski:2017jix}
\item Specialized reviews~\cite{Malara:2024zsj,Duarte:2024lsg,Sahu:2024fzi,Halverson:2024hax,Larkoski:2024uoc,Barman:2024wfx,Ahmad:2024dql,Huetsch:2024quz,Mondal:2024nsa,Bardhan:2024zla,Kheddar:2024osf,Gooding:2024wpi,Araz:2023mda,Belis:2023mqs,Hashemi:2023rgo,Allaire:2023fgp,Du:2023qst,DeZoort:2023vrm,Zhou:2023pti,Huber:2022lpm,Huerta:2022kgj,Cheng:2022idp,Plehn:2022ftl,Chen:2022pzc,Benelli:2022sqn,Coadou:2022nsh,Harris:2022qtm,Thais:2022iok,Adelmann:2022ozp,Dvorkin:2022pwo,Butter:2022rso,Bogatskiy:2022hub,Viren:2022qon,Baldi:2022okj,Alanazi:2021grv,deLima:2021fwm,Guan:2020bdl,Kagan:2020yrm,Rousseau:2020rnz,Cranmer:2019eaq,Vlimant:2020enz,Duarte:2020ngm,Nachman:2020ccu,Brehmer:2020cvb,Forte:2020yip,Butter:2020tvl,Psihas:2020pby,Shlomi:2020gdn,1807719,Kasieczka:2019dbj}
\item Specialized reviews~\cite{Krause:2024avx,Malara:2024zsj,Duarte:2024lsg,Sahu:2024fzi,Halverson:2024hax,Larkoski:2024uoc,Barman:2024wfx,Ahmad:2024dql,Huetsch:2024quz,Mondal:2024nsa,Bardhan:2024zla,Kheddar:2024osf,Gooding:2024wpi,Araz:2023mda,Belis:2023mqs,Hashemi:2023rgo,Allaire:2023fgp,Du:2023qst,DeZoort:2023vrm,Zhou:2023pti,Huber:2022lpm,Huerta:2022kgj,Cheng:2022idp,Plehn:2022ftl,Chen:2022pzc,Benelli:2022sqn,Coadou:2022nsh,Harris:2022qtm,Thais:2022iok,Adelmann:2022ozp,Dvorkin:2022pwo,Butter:2022rso,Bogatskiy:2022hub,Viren:2022qon,Baldi:2022okj,Alanazi:2021grv,deLima:2021fwm,Guan:2020bdl,Kagan:2020yrm,Rousseau:2020rnz,Cranmer:2019eaq,Vlimant:2020enz,Duarte:2020ngm,Nachman:2020ccu,Brehmer:2020cvb,Forte:2020yip,Butter:2020tvl,Psihas:2020pby,Shlomi:2020gdn,1807719,Kasieczka:2019dbj}
\item Classical papers~\cite{Lonnblad:1990bi,Denby:1987rk}
\item Datasets~\cite{Bhimji:2024bcd,Zoch:2024eyp,Rusack:2023pob,Eller:2023myr,Qu:2022mxj,Chen:2021euv,Govorkova:2021hqu,Benato:2021olt,Aarrestad:2021oeb,Kasieczka:2021xcg}
\item Datasets~\cite{Krause:2024avx,Bhimji:2024bcd,Zoch:2024eyp,Rusack:2023pob,Eller:2023myr,Qu:2022mxj,Chen:2021euv,Govorkova:2021hqu,Benato:2021olt,Aarrestad:2021oeb,Kasieczka:2021xcg}
\end{itemize}
\item \textbf{Classification}
\\\textit{Given a feature space $x\in\mathbb{R}^n$, a binary classifier is a function $f:\mathbb{R}^n\rightarrow [0,1]$, where $0$ corresponds to features that are more characteristic of the zeroth class (e.g. background) and $1$ corresponds to features that are more characteristic of the first class (e.g. signal). Typically, $f$ will be a function specified by some parameters $w$ (e.g. weights and biases of a neural network) that are determined by minimizing a loss of the form $L[f]=\sum_{i}\ell(f(x_i),y_i)$, where $y_i\in\{0,1\}$ are labels. The function $\ell$ is smaller when $f(x_i)$ and $y_i$ are closer. Two common loss functions are the mean squared error $\ell(x,y)=(x-y)^2$ and the binary cross entropy $\ell(x,y)=-y\log(x)-(1-y)\log(1-x)$. Exactly what `more characteristic of' means depends on the loss function used to determine $f$. It is also possible to make a multi-class classifier. A common strategy for the multi-class case is to represent each class as a different basis vector in $\mathbb{R}^{n_\text{classes}}$ and then $f(x)\in[0,1]^{n_\text{classes}}$. In this case, $f(x)$ is usually restricted to have its $n_\text{classes}$ components sum to one and the loss function is typically the cross entropy $\ell(x,y)=-\sum_\text{classes $i$} y_i\log(x_i)$.}
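The classification definition above is compact enough to demo in a few lines. Below is a minimal NumPy sketch (not part of this commit or the review's sources) of the binary case: a logistic model $f(x)$ fit by plain gradient descent on the binary cross entropy. The toy dataset, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2))                    # features x in R^n with n = 2
y = (x[:, 0] + x[:, 1] > 0).astype(float)         # labels y_i in {0, 1}

def f(x, w, b):
    """Classifier f: R^n -> [0, 1], here a logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def bce(p, y, eps=1e-12):
    """ell(x, y) = -y log(x) - (1 - y) log(1 - x), averaged over the dataset."""
    p = np.clip(p, eps, 1 - eps)
    return np.mean(-y * np.log(p) - (1 - y) * np.log(1 - p))

w, b = np.zeros(2), 0.0
for _ in range(500):                              # plain gradient descent on L[f]
    p = f(x, w, b)
    grad = p - y                                  # d(BCE)/d(logit) for each example
    w -= 0.1 * x.T @ grad / len(y)
    b -= 0.1 * grad.mean()

print(f"loss after training: {bce(f(x, w, b), y):.3f}")
```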
@@ -167,13 +167,13 @@
\item \textbf{Generative models / density estimation}
\\\textit{The goal of generative modeling is to learn (explicitly or implicitly) a probability density $p(x)$ for the features $x\in\mathbb{R}^n$. This task is usually unsupervised (no labels).}
\begin{itemize}
\item \textbf{GANs}~\cite{Kach:2024yxi,Wojnar:2024cbn,Dooney:2024pvt,Chan:2023icm,Scham:2023usu,Scham:2023cwn,FaucciGiannelli:2023fow,Erdmann:2023ngr,Barbetti:2023bvi,Alghamdi:2023emm,Dubinski:2023fsy,Chan:2023ume,Diefenbacher:2023prl,EXO:2023pkl,Hashemi:2023ruu,Yue:2023uva,Buhmann:2023pmh,Anderlini:2022hgm,ATLAS:2022jhk,Rogachev:2022hjg,Ratnikov:2022hge,Anderlini:2022ckd,Ghosh:2022zdz,Bieringer:2022cbs,Buhmann:2021caf,Desai:2021wbb,Chisholm:2021pdn,Anderlini:2021qpm,Bravo-Prieto:2021ehz,Li:2021cbp,Mu:2021nno,Khattak:2021ndw,NEURIPS2020_a878dbeb,Kansal:2021cqp,Winterhalder:2021ave,Lebese:2021foi,Rehm:2021qwm,Carrazza:2021hny,Rehm:2021zoz,Rehm:2021zow,Choi:2021sku,Lai:2020byl,Maevskiy:2020ank,Kansal:2020svm,2008.06545,Diefenbacher:2020rna,Alanazi:2020jod,buhmann2020getting,Wang:2020tap,Belayneh:2019vyx,Hooberman:DLPS2017,Farrell:2019fsm,deOliveira:2017rwa,Oliveira:DLPS2017,Urban:2018tqv,Erdmann:2018jxd,Erbin:2018csv,Derkach:2019qfk,Deja:2019vcv,Erdmann:2018kuh,Musella:2018rdi,Datta:2018mwd,Vallecorsa:2018zco,Carminati:2018khv,Zhou:2018ill,ATL-SOFT-PUB-2018-001,Chekalina:2018hxi,Hashemi:2019fkn,DiSipio:2019imz,Lin:2019htn,Butter:2019cae,Carrazza:2019cnt,SHiP:2019gcl,Vallecorsa:2019ked,Bellagente:2019uyp,Martinez:2019jlu,Butter:2019eyo,Alonso-Monsalve:2018aqs,Paganini:2017dwg,Paganini:2017hrr,deOliveira:2017pjk}
\item \textbf{GANs}~\cite{Krause:2024avx,Kach:2024yxi,Wojnar:2024cbn,Dooney:2024pvt,Chan:2023icm,Scham:2023usu,Scham:2023cwn,FaucciGiannelli:2023fow,Erdmann:2023ngr,Barbetti:2023bvi,Alghamdi:2023emm,Dubinski:2023fsy,Chan:2023ume,Diefenbacher:2023prl,EXO:2023pkl,Hashemi:2023ruu,Yue:2023uva,Buhmann:2023pmh,Anderlini:2022hgm,ATLAS:2022jhk,Rogachev:2022hjg,Ratnikov:2022hge,Anderlini:2022ckd,Ghosh:2022zdz,Bieringer:2022cbs,Buhmann:2021caf,Desai:2021wbb,Chisholm:2021pdn,Anderlini:2021qpm,Bravo-Prieto:2021ehz,Li:2021cbp,Mu:2021nno,Khattak:2021ndw,NEURIPS2020_a878dbeb,Kansal:2021cqp,Winterhalder:2021ave,Lebese:2021foi,Rehm:2021qwm,Carrazza:2021hny,Rehm:2021zoz,Rehm:2021zow,Choi:2021sku,Lai:2020byl,Maevskiy:2020ank,Kansal:2020svm,2008.06545,Diefenbacher:2020rna,Alanazi:2020jod,buhmann2020getting,Wang:2020tap,Belayneh:2019vyx,Hooberman:DLPS2017,Farrell:2019fsm,deOliveira:2017rwa,Oliveira:DLPS2017,Urban:2018tqv,Erdmann:2018jxd,Erbin:2018csv,Derkach:2019qfk,Deja:2019vcv,Erdmann:2018kuh,Musella:2018rdi,Datta:2018mwd,Vallecorsa:2018zco,Carminati:2018khv,Zhou:2018ill,ATL-SOFT-PUB-2018-001,Chekalina:2018hxi,Hashemi:2019fkn,DiSipio:2019imz,Lin:2019htn,Butter:2019cae,Carrazza:2019cnt,SHiP:2019gcl,Vallecorsa:2019ked,Bellagente:2019uyp,Martinez:2019jlu,Butter:2019eyo,Alonso-Monsalve:2018aqs,Paganini:2017dwg,Paganini:2017hrr,deOliveira:2017pjk}
\\\textit{Generative Adversarial Networks~\cite{Goodfellow:2014upx} learn $p(x)$ implicitly through the minimax optimization of two networks: one that maps noise to structure, $G(z)$, and a classifier (called the discriminator) that learns to distinguish examples generated by $G(z)$ from those drawn from the target process. When the discriminator is maximally `confused', the generator is effectively mimicking $p(x)$.}
\item \textbf{(Variational) Autoencoders}~\cite{Liu:2024kvv,Kuh:2024lgx,Hoque:2023zjt,Zhang:2023khv,Chekanov:2023uot,Lasseri:2023dhi,Anzalone:2023ugq,Roche:2023int,Cresswell:2022tof,AbhishekAbhishek:2022wby,Collins:2022qpr,Ilten:2022jfm,Touranakou:2022qrp,Buhmann:2021caf,Tsan:2021brw,Jawahar:2021vyu,Orzari:2021suh,Collins:2021pld,Fanelli:2019qaq,Hariri:2021clz,deja2020endtoend,Bortolato:2021zic,Buhmann:2021lxj,Howard:2021pos,1816035,Cheng:2020dal,ATL-SOFT-PUB-2018-001,Monk:2018zsb}
\item \textbf{(Variational) Autoencoders}~\cite{Krause:2024avx,Liu:2024kvv,Kuh:2024lgx,Hoque:2023zjt,Zhang:2023khv,Chekanov:2023uot,Lasseri:2023dhi,Anzalone:2023ugq,Roche:2023int,Cresswell:2022tof,AbhishekAbhishek:2022wby,Collins:2022qpr,Ilten:2022jfm,Touranakou:2022qrp,Buhmann:2021caf,Tsan:2021brw,Jawahar:2021vyu,Orzari:2021suh,Collins:2021pld,Fanelli:2019qaq,Hariri:2021clz,deja2020endtoend,Bortolato:2021zic,Buhmann:2021lxj,Howard:2021pos,1816035,Cheng:2020dal,ATL-SOFT-PUB-2018-001,Monk:2018zsb}
\\\textit{An autoencoder consists of two functions: one that maps $x$ into a latent space $z$ (encoder) and a second one that maps the latent space back into the original space (decoder). The encoder and decoder are simultaneously trained so that their composition is nearly the identity. When the latent space has a well-defined probability density (as in variational autoencoders), one can sample from the autoencoder by applying the decoder to a randomly chosen element of the latent space.}
\item \textbf{(Continuous) Normalizing flows}~\cite{Saito:2024fmr,Bodendorfer:2024egw,Heimel:2024wph,Quetant:2024ftg,Dreyer:2024bhs,Buss:2024orz,Favaro:2024rle,Du:2024gbp,Bai:2024pii,Abbott:2024knk,Daumann:2024kfd,Schnake:2024mip,Kelleher:2024jsh,Vaselli:2024vrx,Kelleher:2024rmb,Deutschmann:2024lml,Kanwar:2024ujc,Krause:2023uww,Ernst:2023qvn,ElBaz:2023ijr,Bierlich:2023zzd,Heimel:2023ngj,Gavranovic:2023oam,Pham:2023bnl,Albandea:2023ais,Bright-Thonney:2023sqf,Finke:2023ltw,Bickendorf:2023nej,Reyes-Gonzalez:2023oei,Golling:2023mqx,Pang:2023wfx,Buckley:2023rez,Singha:2023xxq,Xu:2023xdc,Wen:2023oju,Golling:2023yjq,Raine:2023fko,Nachman:2023clf,R:2023dcr,Nicoli:2023qsl,Diefenbacher:2023vsw,Rousselot:2023pcj,Albandea:2023wgd,Heimel:2022wyj,Backes:2022vmn,Dolan:2022ikg,Kach:2022uzq,Kach:2022qnf,Cresswell:2022tof,Krause:2022jna,Albandea:2022fky,Chen:2022ytr,Leigh:2022lpn,Verheyen:2022tov,Butter:2022lkf,Winterhalder:2021ngy,Butter:2021csz,Krause:2021wez,Bister:2021arb,Jawahar:2021vyu,Vandegar:2020yvw,NEURIPS2020_a878dbeb,Hallin:2021wme,Menary:2021tjg,Hackett:2021idh,Krause:2021ilc,Winterhalder:2021ave,Hollingsworth:2021sii,Bieringer:2020tnw,Lu:2020npg,Choi:2020bnf,Nachman:2020lpy,Gao:2020vdv,Gao:2020zvv,Bothmann:2020ywa,Brehmer:2020vwc,Kanwar:2003.06413,1800956,Albergo:2019eim}
\item \textbf{(Continuous) Normalizing flows}~\cite{Krause:2024avx,Saito:2024fmr,Bodendorfer:2024egw,Heimel:2024wph,Quetant:2024ftg,Dreyer:2024bhs,Buss:2024orz,Favaro:2024rle,Du:2024gbp,Bai:2024pii,Abbott:2024knk,Daumann:2024kfd,Schnake:2024mip,Kelleher:2024jsh,Vaselli:2024vrx,Kelleher:2024rmb,Deutschmann:2024lml,Kanwar:2024ujc,Krause:2023uww,Ernst:2023qvn,ElBaz:2023ijr,Bierlich:2023zzd,Heimel:2023ngj,Gavranovic:2023oam,Pham:2023bnl,Albandea:2023ais,Bright-Thonney:2023sqf,Finke:2023ltw,Bickendorf:2023nej,Reyes-Gonzalez:2023oei,Golling:2023mqx,Pang:2023wfx,Buckley:2023rez,Singha:2023xxq,Xu:2023xdc,Wen:2023oju,Golling:2023yjq,Raine:2023fko,Nachman:2023clf,R:2023dcr,Nicoli:2023qsl,Diefenbacher:2023vsw,Rousselot:2023pcj,Albandea:2023wgd,Heimel:2022wyj,Backes:2022vmn,Dolan:2022ikg,Kach:2022uzq,Kach:2022qnf,Cresswell:2022tof,Krause:2022jna,Albandea:2022fky,Chen:2022ytr,Leigh:2022lpn,Verheyen:2022tov,Butter:2022lkf,Winterhalder:2021ngy,Butter:2021csz,Krause:2021wez,Bister:2021arb,Jawahar:2021vyu,Vandegar:2020yvw,NEURIPS2020_a878dbeb,Hallin:2021wme,Menary:2021tjg,Hackett:2021idh,Krause:2021ilc,Winterhalder:2021ave,Hollingsworth:2021sii,Bieringer:2020tnw,Lu:2020npg,Choi:2020bnf,Nachman:2020lpy,Gao:2020vdv,Gao:2020zvv,Bothmann:2020ywa,Brehmer:2020vwc,Kanwar:2003.06413,1800956,Albergo:2019eim}
\\\textit{Normalizing flows~\cite{pmlr-v37-rezende15} learn $p(x)$ explicitly by starting with a simple probability density and then applying a series of bijective transformations with tractable Jacobians.}
\item \textbf{Diffusion Models}~\cite{Araz:2024bom,Algren:2024bqw,Aarts:2024rsl,Zhu:2024kiu,Wojnar:2024cbn,Quetant:2024ftg,Kita:2024nnw,Favaro:2024rle,Kobylianskii:2024sup,Jiang:2024bwr,Vaselli:2024vrx,Kobylianskii:2024ijw,Jiang:2024ohg,Sengupta:2023vtm,Butter:2023ira,Wang:2023sry,Heimel:2023ngj,Devlin:2023jzp,Buhmann:2023acn,Buhmann:2023zgc,Buhmann:2023kdg,Hunt-Smith:2023ccp,Mikuni:2023tqg,Diefenbacher:2023wec,Cotler:2023lem,Diefenbacher:2023flw,Amram:2023onf,Imani:2023blb,Leigh:2023zle,Acosta:2023zik,Mikuni:2023tok,Butter:2023fov,Buhmann:2023bwk,Shmakov:2023kjj,Mikuni:2023dvk,Leigh:2023toe,Mikuni:2022xry}
\item \textbf{Diffusion Models}~\cite{Krause:2024avx,Araz:2024bom,Algren:2024bqw,Aarts:2024rsl,Zhu:2024kiu,Wojnar:2024cbn,Quetant:2024ftg,Kita:2024nnw,Favaro:2024rle,Kobylianskii:2024sup,Jiang:2024bwr,Vaselli:2024vrx,Kobylianskii:2024ijw,Jiang:2024ohg,Sengupta:2023vtm,Butter:2023ira,Wang:2023sry,Heimel:2023ngj,Devlin:2023jzp,Buhmann:2023acn,Buhmann:2023zgc,Buhmann:2023kdg,Hunt-Smith:2023ccp,Mikuni:2023tqg,Diefenbacher:2023wec,Cotler:2023lem,Diefenbacher:2023flw,Amram:2023onf,Imani:2023blb,Leigh:2023zle,Acosta:2023zik,Mikuni:2023tok,Butter:2023fov,Buhmann:2023bwk,Shmakov:2023kjj,Mikuni:2023dvk,Leigh:2023toe,Mikuni:2022xry}
\\\textit{These approaches learn the gradient of the log density (the score) instead of the density directly.}
\item \textbf{Transformer Models}~\cite{Brehmer:2024yqw,Quetant:2024ftg,Spinner:2024hjm,Paeng:2024ary,Li:2023xhj,Tomiya:2023jdy,Raine:2023fko,Butter:2023fov,Finke:2023veq}
\\\textit{These approaches learn the density or perform generative modeling using transformer-based networks.}
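To make the generative-model definitions in the hunk above concrete, the next few sketches (not part of this commit) give toy PyTorch implementations; in each one the 1D/2D target distributions, network sizes, and optimizer settings are illustrative assumptions and are unrelated to any CaloChallenge submission. First, the GAN minimax game: the discriminator is trained to separate real samples from $G(z)$, while the generator is trained to fool it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator G(z)
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = 0.5 * torch.randn(64, 1) + 2.0         # samples from the target process p(x)
    fake = G(torch.randn(64, 1))                  # generator maps noise z to structure

    # discriminator update: push D(real) -> 1 and D(G(z)) -> 0
    opt_d.zero_grad()
    (bce(D(real), ones) + bce(D(fake.detach()), zeros)).backward()
    opt_d.step()

    # generator update: fool the discriminator, i.e. push D(G(z)) -> 1
    opt_g.zero_grad()
    bce(D(G(torch.randn(64, 1))), ones).backward()
    opt_g.step()

# the generated distribution should drift toward the target mean of 2.0
print(G(torch.randn(10000, 1)).mean().item())
```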
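A matching sketch of a variational autoencoder: the encoder and decoder are trained so that decoding an encoded point nearly reproduces it, the reparameterization trick keeps the latent space close to a unit Gaussian, and new samples are drawn by applying the decoder to latent points drawn from that prior. Again, all sizes and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(5000, 2) * torch.tensor([0.5, 1.5]) + torch.tensor([1.0, -1.0])  # toy data

enc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 4))   # outputs (mu, log_var) of z in R^2
dec = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))   # maps latent z back to x-space
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for step in range(3000):
    mu, log_var = enc(x).chunk(2, dim=1)
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)          # reparameterization trick
    recon = ((dec(z) - x) ** 2).mean()                                # dec(enc(x)) should be close to x
    kl = 0.5 * (mu**2 + torch.exp(log_var) - log_var - 1).mean()      # keeps the latent space near N(0, 1)
    loss = recon + kl
    opt.zero_grad()
    loss.backward()
    opt.step()

# sampling: draw z from the unit-Gaussian prior and apply the decoder
print(dec(torch.randn(5, 2)))
```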
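A normalizing-flow sketch with a single affine bijection: the data are mapped to a standard-normal base density and the log-likelihood picks up the log-Jacobian of the transformation, so $p(x)$ is known explicitly and can be maximized directly. The toy target and optimizer settings are again assumptions.

```python
import math
import torch

torch.manual_seed(0)
x = 0.7 * torch.randn(5000) + 3.0                  # target samples with unknown density p(x)

# a single affine bijection x = mu + exp(log_s) * z, i.e. z = (x - mu) * exp(-log_s)
mu = torch.zeros(1, requires_grad=True)
log_s = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_s], lr=1e-2)

for step in range(2000):
    z = (x - mu) * torch.exp(-log_s)               # map data to the simple base density N(0, 1)
    base_logp = -0.5 * z**2 - 0.5 * math.log(2 * math.pi)
    logp = base_logp - log_s                       # change of variables adds log|dz/dx| = -log_s
    loss = -logp.mean()                            # maximize the explicit likelihood of the data
    opt.zero_grad()
    loss.backward()
    opt.step()

print(mu.item(), torch.exp(log_s).item())          # approaches the data mean 3.0 and width 0.7
```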
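Finally, a denoising-score-matching sketch for the diffusion entry: a small network is trained to predict the score $\nabla_x \log p_\sigma(x)$ of the noised data at a single fixed noise level; a full diffusion model repeats this over a schedule of noise levels and then integrates the learned score to generate samples. The noise level, network, and toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = 0.5 * torch.randn(5000, 1) + 1.0               # target samples
sigma = 0.2                                        # one fixed noise level, for simplicity

score = nn.Sequential(nn.Linear(1, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(score.parameters(), lr=1e-3)

for step in range(3000):
    x_noisy = x + sigma * torch.randn_like(x)
    # denoising score matching: the regression target -(x_noisy - x)/sigma^2 makes the
    # optimal network output equal to grad_x log p_sigma(x_noisy)
    target = -(x_noisy - x) / sigma**2
    loss = ((score(x_noisy) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# exact score of the noised Gaussian N(1.0, 0.5^2 + sigma^2): -(x - 1.0) / (0.5^2 + sigma^2)
probe = torch.tensor([[0.0], [1.0], [2.0]])
print(score(probe).squeeze().tolist())
```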
6 changes: 6 additions & 0 deletions README.md
@@ -27,6 +27,7 @@ This review was built with the help of the HEP-ML community, the [INSPIRE REST A

### Specialized reviews

* [CaloChallenge 2022: A Community Challenge for Fast Calorimeter Simulation](https://arxiv.org/abs/2410.21611) (2024)
* [Exploring jets: substructure and flavour tagging in CMS and ATLAS](https://arxiv.org/abs/2410.14330) (2024)
* [Novel machine learning applications at the LHC](https://arxiv.org/abs/2409.20413) (2024)
* [Unveiling the Secrets of New Physics Through Top Quark Tagging](https://arxiv.org/abs/2409.12085) (2024)
@@ -85,6 +86,7 @@ This review was built with the help of the HEP-ML community, the [INSPIRE REST A

### Datasets

* [CaloChallenge 2022: A Community Challenge for Fast Calorimeter Simulation](https://arxiv.org/abs/2410.21611) (2024)
* [FAIR Universe HiggsML Uncertainty Challenge Competition](https://arxiv.org/abs/2410.02867) (2024)
* [RODEM Jet Datasets](https://arxiv.org/abs/2408.11616) (2024)
* [Electron Energy Regression in the CMS High-Granularity Calorimeter Prototype](https://arxiv.org/abs/2309.06582) (2023)
@@ -1377,6 +1379,7 @@ This review was built with the help of the HEP-ML community, the [INSPIRE REST A
## Generative models / density estimation
### GANs

* [CaloChallenge 2022: A Community Challenge for Fast Calorimeter Simulation](https://arxiv.org/abs/2410.21611) (2024)
* [Pay Attention To Mean Fields For Point Cloud Generation](https://arxiv.org/abs/2408.04997) (2024)
* [Applying generative neural networks for fast simulations of the ALICE (CERN) experiment](https://arxiv.org/abs/2407.16704) (2024)
* [cDVGAN: One Flexible Model for Multi-class Gravitational Wave Signal and Glitch Generation](https://arxiv.org/abs/2401.16356) [[DOI](https://doi.org/10.1103/PhysRevD.110.022004)] (2024)
@@ -1461,6 +1464,7 @@ This review was built with the help of the HEP-ML community, the [INSPIRE REST A

### (Variational) Autoencoders

* [CaloChallenge 2022: A Community Challenge for Fast Calorimeter Simulation](https://arxiv.org/abs/2410.21611) (2024)
* [Calo-VQ: Vector-Quantized Two-Stage Generative Model in Calorimeter Simulation](https://arxiv.org/abs/2405.06605) (2024)
* [Deep Generative Models for Ultra-High Granularity Particle Physics Detector Simulation: A Voyage From Emulation to Extrapolation](https://arxiv.org/abs/2403.13825) (2024)
* [CaloQVAE : Simulating high-energy particle-calorimeter interactions using hybrid quantum-classical generative models](https://arxiv.org/abs/2312.03179) (2023)
@@ -1492,6 +1496,7 @@ This review was built with the help of the HEP-ML community, the [INSPIRE REST A

### (Continuous) Normalizing flows

* [CaloChallenge 2022: A Community Challenge for Fast Calorimeter Simulation](https://arxiv.org/abs/2410.21611) (2024)
* [Signal model parameter scan using Normalizing Flow](https://arxiv.org/abs/2409.13201) [[DOI](https://doi.org/10.22323/1.458.0017)] (2024)
* [Variational Monte Carlo with Neural Network Quantum States for Yang-Mills Matrix Model](https://arxiv.org/abs/2409.00398) (2024)
* [Differentiable MadNIS-Lite](https://arxiv.org/abs/2408.01486) (2024)
@@ -1574,6 +1579,7 @@ This review was built with the help of the HEP-ML community, the [INSPIRE REST A

### Diffusion Models

* [CaloChallenge 2022: A Community Challenge for Fast Calorimeter Simulation](https://arxiv.org/abs/2410.21611) (2024)
* [Point cloud-based diffusion models for the Electron-Ion Collider](https://arxiv.org/abs/2410.22421) (2024)
* [Variational inference for pile-up removal at hadron colliders with diffusion models](https://arxiv.org/abs/2410.22074) (2024)
* [On learning higher-order cumulants in diffusion models](https://arxiv.org/abs/2410.21212) (2024)
Binary file modified docs/assets/dark_per_year.png
Binary file modified docs/assets/per_year.png

