
  • The Llama 3 Herd of Models, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone et al. (432 additional authors not shown) [Paper]

  • ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools, Team GLM (arXiv:2406.12793) [Paper]

  • SaulLM-7B: A pioneering Large Language Model for Law, Pierre Colombo, Telmo Pessoa Pires, Malik Boudiaf, Dominic Culver, Rui Melo, Caio Corro, Andre F. T. Martins, Fabrizio Esposito, Vera Lúcia Raposo, Sofia Morgado, Michael Desa [Paper]

  • [Time Series] UniTS: Building a Unified Time Series Model, Shanghua Gao, Teddy Koker, Owen Queen, Thomas Hartvigsen, Theodoros Tsiligkaridis, Marinka Zitnik [Paper] [Code]

  • ChemLLM: A Chemical Large Language Model, Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Dongzhan Zhou, Shufei Zhang, Mao Su, Hansen Zhong, Yuqiang Li, Wanli Ouyang [Paper] [Code]

  • InfMAE: A Foundation Model in Infrared Modality, Fangcen Liu, Chenqiang Gao, Yaming Zhang, Junjie Guo, Jinhao Wang, Deyu Meng [Paper] [Code]

  • Quality-aware Pre-trained Models for Blind Image Quality Assessment, Kai Zhao, Kun Yuan, Ming Sun, Mading Li, Xing Wen [Paper]

  • A Unified Visual Information Preservation Framework for Self-supervised Pre-training in Medical Image Analysis, Hong-Yu Zhou, Chixiang Lu, Chaoqi Chen, Sibei Yang, Yizhou Yu, IEEE TPAMI'23 [Paper] [Code]

  • Ponder: Point Cloud Pre-training via Neural Rendering, Di Huang, Sida Peng, Tong He, Xiaowei Zhou, Wanli Ouyang [Paper] [Code]

  • Masked Event Modeling: Self-Supervised Pretraining for Event Cameras, Simon Klenk, David Bonello, Lukas Koestler, Daniel Cremers [Paper]

  • MAR: Masked Autoencoders for Efficient Action Recognition, Zhiwu Qing, Shiwei Zhang, Ziyuan Huang, Xiang Wang, Yuehuan Wang, Yiliang Lv, Changxin Gao, Nong Sang [Paper] [Code]