Exploring Attacking and Defending Neural Networks with Mixture of Experts #14

Exploring Attacking Neural Networks and How to Defend Them with Mixture of Experts

Project Leader: Moucheng Xu (email: [email protected])

Introduction

This is an open-ended research project. Recently, many temporal and spatial ensemble methods have been shown to be effective because of the flat curvature around the local minima of their loss surfaces, which leads to better generalisation. So what happens to model generalisation if we simply self-ensemble? To measure progress on this open question, we use model robustness as a surrogate for generalisation; specifically, we consider gradient-based adversarial attacks and out-of-distribution test samples, both of which are ubiquitous concerns in medical imaging. We work on a synthetic segmentation task built on MNIST.
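
As a concrete starting point for the gradient-based attacks mentioned above, here is a minimal PyTorch sketch of the Fast Gradient Sign Method (FGSM) applied to a per-pixel segmentation model; the model interface, tensor shapes, and epsilon value are illustrative assumptions rather than part of the project code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, targets, epsilon=0.1):
    """Fast Gradient Sign Method: perturb inputs along the sign of the
    loss gradient to probe segmentation robustness.

    Assumes `model(images)` returns per-pixel logits of shape (B, C, H, W)
    and `targets` is a (B, H, W) tensor of class indices.
    """
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)
    loss = F.cross_entropy(logits, targets)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()
```

Evaluating the same model on clean and FGSM-perturbed inputs over a range of epsilon values gives a simple robustness curve to compare single models against self-ensembles.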

Wanted:

Passion for and interest in machine learning.
Experience in Python, PyTorch, or both will help.

Resources:

Raw MNIST data (https://www.kaggle.com/datasets/oddrationale/mnist-in-csv)
Preprocessed data (https://drive.google.com/drive/folders/1kKKxA0F8Vcm3042TB-Qc_oF-RifNaYoT?usp=share_link)
Some code snippets (https://github.com/moucheng2017/mnist_robust)
A paper with a similar idea (https://openreview.net/forum?id=tuC6teLFZD)
Another related paper (https://arxiv.org/pdf/2210.10253.pdf)

Tasks (very negotiable)

  • Research/implementation of synthetic but realistic out-of-distribution test samples in medical imaging
  • Research/implementation of gated mixture of experts models (see the sketch after this list)
  • Research/implementation of stochastic mixture of experts models
  • Research/implementation of probabilistic mixture of experts models
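
As a starting point for the gated variant, here is a minimal PyTorch sketch of a soft-gated mixture of experts for per-pixel segmentation; the expert and gate architectures, channel counts, and number of classes are illustrative assumptions, not a prescribed design.

```python
import torch
import torch.nn as nn

class GatedMoE(nn.Module):
    """Soft-gated mixture of experts for segmentation.

    Each expert is a small convolutional head; a gating network predicts
    per-sample expert weights, and the output is the weighted sum of the
    experts' per-pixel logits.
    """
    def __init__(self, in_channels=1, num_classes=2, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(16, num_classes, kernel_size=1),
            )
            for _ in range(num_experts)
        ])
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # (B, in_channels, 1, 1)
            nn.Flatten(),                     # (B, in_channels)
            nn.Linear(in_channels, num_experts),
            nn.Softmax(dim=-1),               # (B, num_experts)
        )

    def forward(self, x):
        weights = self.gate(x)                                     # (B, E)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, C, H, W)
        return (weights[:, :, None, None, None] * outputs).sum(dim=1)
```

Stochastic or probabilistic variants could, for example, sample the gate weights (e.g. with Gumbel-Softmax) or place a distribution over them instead of using the deterministic softmax above.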

Goal

  • This can lead to a strong MICCAI / MIDL / ICCV / NeurIPS submission