Exploring Attacking Neural Networks and How to Defend Them with Mixture of Experts
Project Leader: Moucheng Xu (email: [email protected])
Introduction
This is an open-ended research project. Recently, many temporal and spatial ensemble methods have been shown to be effective because the loss surfaces around their local minima are flat, which leads to better generalisation. So what happens to model generalisation if we simply self-ensemble? To measure progress on this open question, we use model robustness as a surrogate for model generalisation; more specifically, we consider gradient-based adversarial attacks and out-of-distribution test samples, both of which are ubiquitous concerns in medical imaging. We focus on a synthetic segmentation task built on MNIST.
Wanted:
Passion and interest in machine learning.
Experience in Python, PyTorch, or both helps.
Resources:
Raw MNIST data (https://www.kaggle.com/datasets/oddrationale/mnist-in-csv)
Preprocessed data (https://drive.google.com/drive/folders/1kKKxA0F8Vcm3042TB-Qc_oF-RifNaYoT?usp=share_link)
Some code snippets (https://github.com/moucheng2017/mnist_robust)
A paper with a similar idea (https://openreview.net/forum?id=tuC6teLFZD)
Another one (https://arxiv.org/pdf/2210.10253.pdf)
Tasks (very negotiable)
Research/implementation of synthetic, realistic out-of-distribution test samples in medical imaging
Research/implementation of gated mixture-of-experts models (see the sketch after this list)
Research/implementation of stochastic mixture-of-experts models
Research/implementation of probabilistic mixture-of-experts models
Goal
This could lead to a nice MICCAI / MIDL / ICCV / NeurIPS submission.