diff --git a/CV_YipingWang_phd.pdf b/CV_YipingWang_phd.pdf
index 00f6540..58aa9ba 100644
Binary files a/CV_YipingWang_phd.pdf and b/CV_YipingWang_phd.pdf differ
diff --git a/Research/DataSelection/VAS/README.md b/Research/DataSelection/VAS/README.md
new file mode 100644
index 0000000..e84d02d
--- /dev/null
+++ b/Research/DataSelection/VAS/README.md
@@ -0,0 +1,16 @@
+# Nerfies
+
+This is the repository that contains source code for the [Nerfies website](https://nerfies.github.io).
+
+If you find Nerfies useful for your work please cite:
+```
+@article{park2021nerfies,
+ author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
+ title = {Nerfies: Deformable Neural Radiance Fields},
+ journal = {ICCV},
+ year = {2021},
+}
+```
+
+# Website License
+
+This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
diff --git a/Research/DataSelection/VAS/index.html b/Research/DataSelection/VAS/index.html
new file mode 100644
index 0000000..4bcc026
--- /dev/null
+++ b/Research/DataSelection/VAS/index.html
@@ -0,0 +1,396 @@
+
+
+
+ Off-policy dynamic programming (DP) techniques such as Q-learning are not guaranteed to converge
+ in the presence of function approximation, often diverging due to the absence of
+ Bellman-completeness in the function classes considered.
+
+ Return-conditioned supervised learning (RCSL) is an alternative off-policy framework, which learns
+ a return-conditioned distribution of actions in each state
+ by directly applying supervised learning to the trajectories in the dataset,
+ achieving satisfactory performance
+ by simply conditioning the policy on high desired returns.
+
+ RCSL is able to circumvent these challenges of Bellman completeness,
+ converging under significantly more relaxed assumptions, inherited from supervised learning,
+ than DP-based methods require.
+ We prove there exists a natural environment in which, if one uses a two-layer multilayer perceptron
+ as the function approximator, the layer width needs to grow linearly with the state space size to
+ satisfy Bellman-completeness, while a constant layer width is enough for RCSL.
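As a rough illustration of the RCSL recipe described above, here is a minimal sketch only: a return-conditioned policy is a plain network that takes the state together with a desired return-to-go and is fit to the dataset's actions by ordinary supervised learning. The class and function names below (ReturnConditionedPolicy, train_rcsl), the discrete action space, and the two-layer MLP width are assumptions made for the example, not the paper's implementation.

```python
# A minimal sketch of return-conditioned supervised learning, assuming a
# PyTorch-style setup with a discrete action space; names are illustrative.
import torch
import torch.nn as nn

class ReturnConditionedPolicy(nn.Module):
    """Two-layer MLP mapping (state, desired return-to-go) to action logits."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden),  # +1 for the scalar return-to-go
            nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor, return_to_go: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, return_to_go], dim=-1))

def train_rcsl(policy, dataloader, epochs: int = 10, lr: float = 1e-3):
    """Ordinary supervised learning on (state, return-to-go, action) triples
    taken from the offline trajectories; no Bellman backups anywhere."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for state, rtg, action in dataloader:  # rtg: return-to-go along the logged trajectory
            loss = loss_fn(policy(state, rtg), action)
            opt.zero_grad()
            loss.backward()
            opt.step()

# At test time, condition on a high desired return:
#   logits = policy(state, torch.full((state.shape[0], 1), target_return))
#   action = logits.argmax(dim=-1)
```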
+ For either special case of RCSL:
+ MBRCSL (model-based return-conditioned supervised learning) leverages learned dynamics models
+ and forward sampling to accomplish trajectory stitching, bringing the benefits of dynamic
+ programming to RCSL without ever having to do dynamic programming backups.
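The forward-sampling step can be pictured with a short hedged sketch: roll out a learned dynamics model from dataset states, keep the highest-return model rollouts (which can stitch together behaviors that only ever appear in separate dataset trajectories), and hand them to RCSL as training data. All names here (dynamics, rollout_policy, sample_stitched_rollouts) are hypothetical placeholders rather than the authors' code.

```python
# A hedged sketch of forward sampling with a learned dynamics model for
# trajectory stitching; every callable below is an assumed placeholder.
import random

def sample_stitched_rollouts(dynamics, rollout_policy, init_states,
                             horizon, num_rollouts, keep_frac=0.1):
    """Forward-sample trajectories with a learned dynamics model and keep the
    highest-return ones; no dynamic programming backups are involved."""
    rollouts = []
    for _ in range(num_rollouts):
        state = random.choice(init_states)
        traj, ret = [], 0.0
        for _ in range(horizon):
            action = rollout_policy(state)                 # e.g. a behavior-cloned policy
            next_state, reward = dynamics(state, action)   # learned model, not the real env
            traj.append((state, action, reward))
            ret += reward
            state = next_state
        rollouts.append((traj, ret))
    rollouts.sort(key=lambda r: r[1], reverse=True)
    # The retained high-return rollouts become the training set for the RCSL policy.
    return rollouts[: max(1, int(num_rollouts * keep_frac))]
```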
+ We evaluate MBRCSL on a custom Point Maze environment built on top of D4RL Maze.
+ We construct an offline dataset consisting of two kinds of suboptimal trajectories in equal
+ numbers, such that the optimal trajectory must stitch together the two kinds of trajectories
+ in the dataset. MBRCSL outperforms all baselines.
+
+ We also evaluate MBRCSL on three simulated robotic tasks from COG. In each task, a robot arm
+ is required to complete a goal by accomplishing two phases of actions. The dataset for every
+ task consists of two kinds of trajectories, with each kind completing exactly one phase of
+ actions at a certain success rate.
+
+ MBRCSL outperforms the baselines in all three tasks.
+@article{wang2024variance,
+ title={Variance Alignment Score: A Simple But Tough-to-Beat Data Selection Method for Multimodal Contrastive Learning},
+ author={Wang, Yiping and Chen, Yifang and Yan, Wendan and Jamieson, Kevin and Du, Simon Shaolei},
+ journal={arXiv preprint arXiv:2402.02055},
+ year={2024}
+}
+
 I'm a first-year Ph.D. student in Paul G. Allen School of Computer Science & Engineering from University of Washington.
-I feel very fortunate to have worked under the guidance of Prof. Simon Shaolei Du since 2022.
+I feel very fortunate to have worked under the guidance of Prof. Simon Shaolei Du since the summer of 2022.
 My main research interest lies in machine learning theory, especially the foundations of deep learning and representation learning.
-I am also keen on developing practical machine learning algorithms with strong theoretical guarantees, and currently I'm working on optimizing data selection methods for training foundational models.
+I am also keen on developing practical machine learning algorithms with strong theoretical guarantees, and currently I'm working on designing data selection methods for training foundation models.
 Furthermore, I always hold a strong enthusiasm for understanding the essence of intelligence and exploring the cross-cutting areas of mathematics, physics, and AI.
 Previously, I got my bachelor's degree in Computer Science & Technology from Zhejiang University in 2023, with an honor degree from Chu Kochen Honors College. I also minored in Mathematics at Zhejiang University.
@@ -40,6 +40,10 @@
*: indicating equal contribution or alphabetic ordering.
+Variance Alignment Score: A Simple But Tough-to-Beat Data Selection Method for Multimodal Contrastive Learning [Code]
+*Yiping Wang, *Yifang Chen, Wendan Yan, Kevin Jamieson, Simon Du.
+Preprint.
JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention
Yuandong Tian, Yiping Wang, Zhenyu Zhang, Beidi Chen, Simon Du.
International Conference on Learning Representations (ICLR) 2024