2021-2022 Academic Year Group Meeting Schedule
Every Saturday, 19:00. Recurring meeting ID: 892 4985 0596
Every Saturday, 10:00 AM (weekly reports). Meeting ID: 538 6991 5988
2022.6.18 (Saturday) 19:00
金一:ViSTA: Vision and Scene Text Aggregation for Cross-Modal Retrieval
鑫炎:Align Representations with Base: A New Approach to Self-Supervised Learning
2022.6.11 (Saturday) 19:00
高亦:Continual learning: A comparative study on how to defy forgetting in classification tasks
文涓:Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
舟阳:Multimodal Machine Learning: A Survey and Taxonomy
2022.6.4 (Saturday) 19:00
文涓:Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
中添:Perturb, Predict & Paraphrase: Semi-Supervised Learning using Noisy Student for Image Captioning
2022.5.28 (Saturday) 19:00
宇萱:Graph Attention Multi-Layer Perceptron
伯元:UNITER: UNiversal Image-TExt Representation Learning
2022.5.21 (Saturday) 19:00
守威:Deep Learning Through the Lens of Example Difficulty
禹睿:Long Short View Feature Decomposition via Contrastive Video Representation Learning
2022.5.14 (Saturday) 19:00
红陈:On the Integration of Self-Attention and Convolution
2022.5.7 (Saturday) 19:00
文涓:Learning Placeholders for Open-Set Recognition
中添:SIMVLM: SIMPLE VISUAL LANGUAGE MODEL PRE-TRAINING WITH WEAK SUPERVISION
金一:Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
2022.4.30 (Saturday) 19:00
鑫炎:ReSSL: Relational Self-Supervised Learning with Weak Augmentation
志浩:Learning Dual Semantic Relations with Graph Attention for Image-Text Matching
2022.4.23 (Saturday) 19:00
伯元:Similarity Reasoning and Filtration for Image-Text Matching
宇萱:SIMPLE SPECTRAL GRAPH CONVOLUTION
志浩:Multi-Modality Cross Attention Network for Image and Sentence Matching
2022.4.16 (Saturday) 19:00
守威:Task-Agnostic Undesirable Feature Deactivation Using Out-of-Distribution Data
禹睿:Temporal-attentive Covariance Pooling Networks for Video Recognition
2022.4.9 (Saturday) 19:00
初兵:NOT ALL PATCHES ARE WHAT YOU NEED: EXPEDITING VISION TRANSFORMERS VIA TOKEN REORGANIZATIONS
2022.3.26 (Saturday) 19:00
文涓:OPEN-SET RECOGNITION: A GOOD CLOSED-SET CLASSIFIER IS ALL YOU NEED
鑫炎:Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts
2022.3.19 (Saturday) 19:00
宇萱:Are we really making much progress? Revisiting, benchmarking, and refining heterogeneous graph neural networks
伯元:BOUNDARY-AWARE SELF-SUPERVISED LEARNING FOR VIDEO SCENE SEGMENTATION
2022.3.12 (Saturday) 19:00
守威:Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training
初兵:Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers
红陈:Going deeper with Image Transformers
2022.3.5 (Saturday) 19:00
中添:BART: Denoising Seq2Seq Pre-training for Natural Language Generation, Translation, and Comprehension
海亮:MoCo v3: An Empirical Study of Training Self-Supervised Vision Transformers
文涓:VLN_BERT: A Recurrent Vision-and-Language BERT for Navigation
2022.2.26 (Saturday) 19:00
金一:ELECTRA: PRE-TRAINING TEXT ENCODERS AS DISCRIMINATORS RATHER THAN GENERATORS
禹睿:Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
2022.2.19 (Saturday) 19:00
宇萱:DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
伯元:Big Bird: Transformers for Longer Sequences
鑫炎:ERNIE-ViL: Knowledge Enhanced Vision-Language Representations through Scene Graphs
文涓:What Makes Training Multi-modal Classification Networks Hard?
2022.2.12 (Saturday) 19:00
兵:Large-Scale Adversarial Training for Vision-and-Language Representation Learning
守威:History Aware Multimodal Transformer for Vision-and-Language Navigation
2022.1.29 (Saturday) 19:00
金一:RoBERTa: A Robustly Optimized BERT Pretraining Approach
中添:Improving Language Understanding by Generative Pre-Training
禹睿:AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE
海亮:Training data-efficient image transformers & distillation through attention
2022.1.22 (Saturday) 19:00
鑫炎:ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision
文涓:Unicoder-VL: A Universal Encoder for Vision and Language by Cross-Modal Pre-Training
2022.1.15 (Saturday) 19:00
初兵:Attention is All you Need
伯元:BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
2022.1.8 (Saturday) 19:00
红陈:Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
2022.1.1 (Saturday) 19:00
守威:Polyhedral Conic Classifiers for Visual Object Detection and Classification
2021.12.26 (Saturday) 19:00
金一:Probabilistic Embeddings for Cross-Modal Retrieval
2021.12.19 (Saturday) 19:00
守威:Polyhedral Conic Classifiers for Visual Object Detection and Classification
2021.12.11 (Saturday) 19:00
初兵:Deep Co-Attention Network for Multi-View Subspace Learning
伯元:VX2TEXT: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs
宇萱:Scalable Heterogeneous Graph Neural Networks for Predicting High-potential Early-stage Startups
2021.12.4 (Saturday) 19:00
鑫炎:Few-shot Network Anomaly Detection via Cross-network Meta-learning
海亮:Learning Cross-Modal Retrieval with Noisy Labels
2021.11.27 (Saturday) 19:00
文涓:Few-Shot Incremental Learning with Continually Evolved Classifiers
禹睿:Semi-Supervised Action Recognition with Temporal Contrastive Learning
金一:Unsupervised Model Adaptation for Continual Semantic Segmentation
2021.11.20 (Saturday) 19:00
红陈:Multimodal Few-Shot Learning with Frozen Language Models
Unifying Vision-and-Language Tasks via Text Generation
中添:Jigsaw Clustering for Unsupervised Visual Representation Learning
鹏辉:DEEP GRAPH MATCHING CONSENSUS
Meeting questions: none
2021.11.13 (Saturday) 19:00
宇萱:struc2vec: Learning Node Representations from Structural Identity
初兵:Self-Tuning for Data-Efficient Deep Learning
守威:Towards Open World Recognition
Polyhedral Conic Classifiers for Visual Object Detection and Classification
Toward Open Set Recognition
1. Explanation of the openness formula (the relationship among the variables in the definition of openness)
Openness is expressed using only the testing classes and the training classes. Known classes all appear among the training classes, and unknown classes all appear among the testing classes, so the training classes are necessarily a subset of the testing classes.
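For reference, the simplified openness formula the discussion above describes (written here in the standard form from the open-set recognition literature, under the assumption that C_train and C_test denote the sets of training and testing classes) is:

    Openness = 1 − √( 2·|C_train| / ( |C_train| + |C_test| ) )

Since C_train ⊆ C_test, openness is 0 in the closed-set case (C_train = C_test) and approaches 1 as the number of unseen classes at test time grows.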
2. Explanation of the shell-model-risk formula (the paper introduces a new definition of open-space risk called "shell model risk"; during simplification of the model's formula, the subtraction in the numerator at the second step produces 0, so the formula as written cannot be justified)
The open-space risk can be expressed as the proportion that the unknown data space occupies within the whole data space, so we make a simple modification to the formula: in the revised version, the unknown space is represented directly as the spherical space containing all data minus the spherical space containing the training data (the known classes). Under the stated constraint, the simplified result approaches 1.
3. Explanation of the open-space-risk formula in the 1-vs-Set Machine (the 1-vs-Set Machine gives a concretely computable definition of open space; the paper only gives an abstract description of each symbol, which takes real effort to follow, and the formula is hard to understand from the symbol descriptions alone)
One term is the distance between the far hyperplane and the near hyperplane computed by the base linear 1-vs-Set Machine, which gives the width of the region occupied by the positive class. Another is the distance from the far hyperplane to the near hyperplane after generalization, or alternatively after specialization. The ratio of these can then express the risk of over-generalization (underfitting), and its counterpart the risk of over-specialization (overfitting). Two further symbols denote the normal vectors of the far and near hyperplanes, and two more are user-defined parameters. Summing these terms yields the concretely computable open-space risk. (The individual symbols were rendered as images in the original notes and are missing here.)
2021.11.06 (Saturday) 19:00
鲍然:TRUSTED MULTI-VIEW CLASSIFICATION
鑫炎:Typing Errors in Factual Knowledge Graphs: Severity and Possible Ways Out
Meeting questions: none
2021.10.30 (Saturday) 19:00
金一:Learning from the Master: Distilling Cross-modal Advanced Knowledge for Lip Reading
初兵:DiffMG: Differentiable Meta Graph Search for Heterogeneous Graph Neural Networks
Meeting questions: none
2021.10.24 (Sunday) 19:00
红陈:Multimodal Few-Shot Learning with Frozen Language Models
Unifying Vision-and-Language Tasks via Text Generation
中添:Instance-Conditioned GAN
Meeting questions: none. Note: 魏红陈 to re-present
2021.10.16 (Saturday) 19:00
禹睿:A Local-to-Global Approach to Multi-modal Movie Scene Segmentation
宇萱:Scalable Heterogeneous Graph Neural Networks for Predicting High-potential Early-stage Startups
Meeting questions: none. Note: 张宇萱 to re-present
2021.10.9 (Saturday) 19:00
鑫炎:Relational Message Passing for Knowledge Graph Completion
1. What kinds of models does this method apply to?
Because it works by aggregating information, it is suited to GNN-based (message-passing) models and data.
2. How is each path encoded?
A bag-of-words model is used: first, all path combinations that occur are listed, and each combination of edges is assigned an embedding of length len(n_relation) (the number of relation types), initialized by counting relation occurrences one-hot style. Taking the WN18RR dataset as an example, there are 11 relation types, so the embedding length is 11; for the path combination (1, 2, 3, 1), the initial embedding is (0, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0). This vector is then mapped to the relation embedding dimension by a fully connected layer, and during training only the fully connected layer is updated.
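The counting scheme above can be sketched as follows (encode_path is an illustrative helper, not code from the paper):

```python
from collections import Counter

def encode_path(path, n_relations):
    # Bag-of-relations initialization: slot r holds how many times
    # relation r occurs along the path.
    counts = Counter(path)
    return [counts.get(r, 0) for r in range(n_relations)]

# WN18RR has 11 relation types, so the embedding length is 11.
# In the path (1, 2, 3, 1), relation 1 occurs twice, relations 2 and 3 once.
print(encode_path((1, 2, 3, 1), 11))  # → [0, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0]
```

In the method described above, this count vector is only the fixed initialization; a trainable fully connected layer then projects it to the relation embedding dimension.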
鲍然:Scalable Heterogeneous Graph Neural Networks for Predicting High-potential Early-stage Startups
2021.10.3 (Sunday) 19:00
禹睿:Incorporating Domain Knowledge To Improve Topic Segmentation Of Long MOOC Lecture Videos
红陈:RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words
UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning
Meeting questions: none
2021.9.25 (Saturday) 19:00
金一:Learning from Imbalanced and Incomplete Supervision with Its Application to Ride-Sharing Liability Judgment
中添:Deep Clustering based Fair Outlier Detection
Q: What form does the sensitive attribute si take?
A: si has shape (N, 1), where N is the number of data samples. Each dataset uses one fixed sensitive attribute during training; if the raw samples carry multiple sensitive attributes, only one is kept during preprocessing to construct the training data.
S is the samples' sensitive attribute, and si is the value of the i-th sample on S. During adversarial training, g tries to predict the sensitive attribute's value as accurately as possible, while f tries to mask the sensitive attribute's features during encoding; the two models update their losses in opposite directions, forming an adversarial game. The goal of fairness adversarial training is to obtain a sensitive-attribute predictor g and a feature extractor f such that g cannot predict the sensitive attribute's value and f does not extract sensitive-attribute features, making the model insensitive to the sensitive attribute.
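A minimal sketch of this opposite-direction update (the toy data, shapes, and linear f/g below are illustrative assumptions, not the paper's architecture): g takes a gradient-descent step on the shared prediction loss, while f takes a gradient-ascent step on the same loss.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, H = 100, 4, 3                      # toy sizes (assumption)
X = rng.normal(size=(N, D))
s = (X[:, 0] > 0).astype(float)          # sensitive attribute S, one value per sample

Wf = rng.normal(scale=0.5, size=(D, H))  # linear feature extractor f (illustrative)
wg = rng.normal(scale=0.5, size=H)       # linear adversary g (illustrative)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(Wf_, wg_):
    # g's loss for predicting s from the features f(x) = x @ Wf_
    p = sigmoid((X @ Wf_) @ wg_)
    return -np.mean(s * np.log(p + 1e-12) + (1 - s) * np.log(1 - p + 1e-12))

# Gradients of the SHARED loss with respect to g's and f's parameters.
Z = X @ Wf
err = (sigmoid(Z @ wg) - s) / N          # d(BCE)/d(logit)
grad_g = Z.T @ err                       # dL/d(wg)
grad_f = X.T @ (err[:, None] * wg)       # dL/d(Wf)

lr = 0.01
loss0 = bce(Wf, wg)
loss_g_step = bce(Wf, wg - lr * grad_g)  # g DESCENDS: predict s better
loss_f_step = bce(Wf + lr * grad_f, wg)  # f ASCENDS: hide s from g
print(loss_g_step < loss0 < loss_f_step)
```

The key point is that both updates use the same loss: g steps down it and f steps up it, which in practice is often implemented with a gradient reversal layer between f and g.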
Meeting questions: none
2021.9.18 (Saturday) 19:00
宇萱:Learning to Walk across Time for Interpretable Temporal Knowledge Graph Completion
初兵:Purify and Generate: Learning Faithful Item-to-Item Graph from Noisy User-Item Interaction Behaviors
鑫炎:NRGNN: Learning a Label Noise-Resistant Graph Neural Network on Sparsely and Noisily Labeled Graphs
Meeting questions: none
2021.9.11 (Saturday) 19:00
鲍然:Model-Agnostic Counterfactual Reasoning for Eliminating Popularity Bias in Recommender System
Meeting questions: none
2021.9.4 (Saturday) 19:00
禹睿:Mixup for Node and Graph Classification
Meeting questions: none