
Commit d450d52

Update HEPML.tex
ramonpeter authored Nov 27, 2024
1 parent 1743aa3 commit d450d52
Showing 1 changed file with 1 addition and 1 deletion.
HEPML.tex
@@ -167,7 +167,7 @@
\item \textbf{Generative models / density estimation}
\\\textit{The goal of generative modeling is to learn (explicitly or implicitly) a probability density $p(x)$ for the features $x\in\mathbb{R}^n$. This task is usually unsupervised (no labels).}
\begin{itemize}
-\item \textbf{GANs}~\cite{Simsek:2024zhj,Krause:2024avx,Kach:2024yxi,Wojnar:2024cbn,Dooney:2024pvt,Chan:2023icm,Scham:2023usu,Scham:2023cwn,FaucciGiannelli:2023fow,Erdmann:2023ngr,Barbetti:2023bvi,Alghamdi:2023emm,Dubinski:2023fsy,Chan:2023ume,Diefenbacher:2023prl,EXO:2023pkl,Hashemi:2023ruu,Yue:2023uva,Buhmann:2023pmh,Anderlini:2022hgm,ATLAS:2022jhk,Rogachev:2022hjg,Ratnikov:2022hge,Anderlini:2022ckd,Ghosh:2022zdz,Bieringer:2022cbs,Buhmann:2021caf,Desai:2021wbb,Chisholm:2021pdn,Anderlini:2021qpm,Bravo-Prieto:2021ehz,Li:2021cbp,Mu:2021nno,Khattak:2021ndw,NEURIPS2020_a878dbeb,Kansal:2021cqp,Winterhalder:2021ave,Lebese:2021foi,Rehm:2021qwm,Carrazza:2021hny,Rehm:2021zoz,Rehm:2021zow,Choi:2021sku,Lai:2020byl,Maevskiy:2020ank,Kansal:2020svm,2008.06545,Diefenbacher:2020rna,Alanazi:2020jod,buhmann2020getting,Wang:2020tap,Belayneh:2019vyx,Hooberman:DLPS2017,Farrell:2019fsm,deOliveira:2017rwa,Oliveira:DLPS2017,Urban:2018tqv,Erdmann:2018jxd,Erbin:2018csv,Derkach:2019qfk,Deja:2019vcv,Erdmann:2018kuh,Musella:2018rdi,Datta:2018mwd,Vallecorsa:2018zco,Carminati:2018khv,Zhou:2018ill,ATL-SOFT-PUB-2018-001,Chekalina:2018hxi,Hashemi:2019fkn,DiSipio:2019imz,Lin:2019htn,Butter:2019cae,Carrazza:2019cnt,SHiP:2019gcl,Vallecorsa:2019ked,Bellagente:2019uyp,Martinez:2019jlu,Butter:2019eyo,Alonso-Monsalve:2018aqs,Paganini:2017dwg,Paganini:2017hrr,deOliveira:2017pjk}
+\item \textbf{GANs}~\cite{Krause:2024avx,Kach:2024yxi,Simsek:2024zhj,Wojnar:2024cbn,Dooney:2024pvt,Chan:2023icm,Scham:2023usu,Scham:2023cwn,FaucciGiannelli:2023fow,Erdmann:2023ngr,Barbetti:2023bvi,Alghamdi:2023emm,Dubinski:2023fsy,Chan:2023ume,Diefenbacher:2023prl,EXO:2023pkl,Hashemi:2023ruu,Yue:2023uva,Buhmann:2023pmh,Anderlini:2022hgm,ATLAS:2022jhk,Rogachev:2022hjg,Ratnikov:2022hge,Anderlini:2022ckd,Ghosh:2022zdz,Bieringer:2022cbs,Buhmann:2021caf,Desai:2021wbb,Chisholm:2021pdn,Anderlini:2021qpm,Bravo-Prieto:2021ehz,Li:2021cbp,Mu:2021nno,Khattak:2021ndw,NEURIPS2020_a878dbeb,Kansal:2021cqp,Winterhalder:2021ave,Lebese:2021foi,Rehm:2021qwm,Carrazza:2021hny,Rehm:2021zoz,Rehm:2021zow,Choi:2021sku,Lai:2020byl,Maevskiy:2020ank,Kansal:2020svm,2008.06545,Diefenbacher:2020rna,Alanazi:2020jod,buhmann2020getting,Wang:2020tap,Belayneh:2019vyx,Hooberman:DLPS2017,Farrell:2019fsm,deOliveira:2017rwa,Oliveira:DLPS2017,Urban:2018tqv,Erdmann:2018jxd,Erbin:2018csv,Derkach:2019qfk,Deja:2019vcv,Erdmann:2018kuh,Musella:2018rdi,Datta:2018mwd,Vallecorsa:2018zco,Carminati:2018khv,Zhou:2018ill,ATL-SOFT-PUB-2018-001,Chekalina:2018hxi,Hashemi:2019fkn,DiSipio:2019imz,Lin:2019htn,Butter:2019cae,Carrazza:2019cnt,SHiP:2019gcl,Vallecorsa:2019ked,Bellagente:2019uyp,Martinez:2019jlu,Butter:2019eyo,Alonso-Monsalve:2018aqs,Paganini:2017dwg,Paganini:2017hrr,deOliveira:2017pjk}
\\\textit{Generative Adversarial Networks~\cite{Goodfellow:2014upx} learn $p(x)$ implicitly through the minimax optimization of two networks: a generator $G(z)$ that maps noise $z$ to structure, and a classifier (called the discriminator) that learns to distinguish examples produced by $G(z)$ from examples drawn from the target process. When the discriminator is maximally `confused', the generator is effectively mimicking $p(x)$ (the value function of this game is written out after this excerpt).}
\item \textbf{(Variational) Autoencoders}~\cite{Smith:2024lxz,Krause:2024avx,Liu:2024kvv,Kuh:2024lgx,Hoque:2023zjt,Zhang:2023khv,Chekanov:2023uot,Lasseri:2023dhi,Anzalone:2023ugq,Roche:2023int,Cresswell:2022tof,AbhishekAbhishek:2022wby,Collins:2022qpr,Ilten:2022jfm,Touranakou:2022qrp,Buhmann:2021caf,Tsan:2021brw,Jawahar:2021vyu,Orzari:2021suh,Collins:2021pld,Fanelli:2019qaq,Hariri:2021clz,deja2020endtoend,Bortolato:2021zic,Buhmann:2021lxj,Howard:2021pos,1816035,Cheng:2020dal,ATL-SOFT-PUB-2018-001,Monk:2018zsb}
\\\textit{An autoencoder consists of two functions: an encoder that maps $x$ into a latent space $z$, and a decoder that maps the latent space back into the original space. The encoder and decoder are trained simultaneously so that their composition is nearly the identity. When the latent space has a well-defined probability density (as in variational autoencoders), one can sample from the autoencoder by applying the decoder to a randomly chosen element of the latent space (the corresponding training objective is also sketched after this excerpt).}
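
For reference, the minimax game in the GANs item above is the value function introduced in Ref.~\cite{Goodfellow:2014upx}; a minimal statement, assuming a noise prior $p(z)$ (notation not fixed in the excerpt itself), is

\begin{equation*}
\min_G \max_D \;\; \mathbb{E}_{x\sim p(x)}\!\left[\log D(x)\right] + \mathbb{E}_{z\sim p(z)}\!\left[\log\left(1 - D\!\left(G(z)\right)\right)\right].
\end{equation*}

At the inner optimum $D^*(x) = p(x)/\left(p(x) + p_G(x)\right)$, so the outer minimization drives the generator density $p_G$ toward $p$; this is the sense in which a maximally `confused' discriminator implies $G$ mimics $p(x)$.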
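
Likewise, for the variational autoencoder the standard training objective is the evidence lower bound; a minimal sketch, assuming an encoder density $q_\phi(z|x)$, a decoder density $p_\theta(x|z)$, and a latent prior $p(z)$ (symbols not defined in the excerpt itself), is

\begin{equation*}
\log p_\theta(x) \;\geq\; \mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right] - D_{\mathrm{KL}}\!\left(q_\phi(z|x)\,\middle\|\,p(z)\right).
\end{equation*}

Maximizing the right-hand side trains encoder and decoder jointly while regularizing $q_\phi(z|x)$ toward the prior $p(z)$, which is what makes sampling through the decoder well defined.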
