# nnQC

A comprehensive toolkit for training, evaluating, and improving medical image segmentation models using diffusion-based approaches with cross-attention mechanisms.
## Features

nnQC provides tools for:
- Training autoencoder models for medical image latent representations
- Training diffusion models with cross-attention for segmentation refinement
- Evaluating model performance with comprehensive metrics
- Generating synthetic segmentation masks for quality control
## Architecture

The system consists of three main components:
- Autoencoder: Encodes medical images and segmentation masks into latent space
- Diffusion Model: Generates refined segmentation masks using cross-attention with CLIP features
- Evaluation Pipeline: Comprehensive metrics computation including DSC, HD95, and correlation analysis
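The cross-attention conditioning used by the diffusion model can be illustrated with a minimal single-head sketch, where the context rows stand in for CLIP features. All names and shapes here (`Wq`, `Wk`, `Wv`, single head, no output projection) are illustrative assumptions, not nnQC's actual implementation:

```python
import numpy as np

def cross_attention(x, context, Wq, Wk, Wv):
    """Single-head cross-attention: latent tokens x attend to context tokens
    (e.g. CLIP features). Shapes: x (n, d), context (m, d), W* (d, d)."""
    q = x @ Wq                      # queries from the latent tokens
    k = context @ Wk                # keys from the conditioning features
    v = context @ Wv                # values from the conditioning features
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # row-wise softmax over the context tokens
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v                    # (n, d): context-conditioned latents
```

Each latent token thus becomes a weighted mixture of the conditioning features, which is how the diffusion model injects image-level context into mask generation.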
## Installation

```bash
git clone https://github.com/yourusername/nnQC.git
cd nnQC
pip install -e .
```

### Requirements

- Python >= 3.8
- PyTorch >= 1.12.0
- MONAI >= 1.2.0
- CUDA-capable GPU (recommended)

See requirements.txt for the complete dependency list.
## Quick Start

### Python API

```python
import nnqc

# Train autoencoder
nnqc.train_autoencoder()

# Train diffusion model
nnqc.train_diffusion()

# Run evaluation
results = nnqc.evaluate_validation_set(args)
```
### Command-Line Tools

```bash
# Train autoencoder
nnqc-train-ae -c config/config_train_32g.json -g 2

# Train diffusion model
nnqc-train-diffusion -c config/config_train_32g.json -g 2

# Run inference/evaluation
nnqc-inference -c config/config_train_32g.json

# Evaluate validation set
nnqc-evaluate -c config/config_train_32g.json
```
### Running as Python Modules

```bash
# Train autoencoder
python -m nnqc.training.train_autoencoder -c config/config_train_32g.json

# Train diffusion
python -m nnqc.training.train_diffusion -c config/config_train_32g.json

# Run inference
python -m nnqc.inference.inference -c config/config_train_32g.json
```
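The evaluation pipeline reports DSC and HD95. For reference, here is a minimal NumPy/SciPy sketch of what these two metrics compute for a pair of binary masks; it is an illustration of the definitions, not nnQC's actual evaluation code (which uses MONAI's metric implementations):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def dice_score(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def hd95(pred, gt):
    """95th-percentile Hausdorff distance between the mask surfaces,
    in voxel units (symmetric: max over both directions)."""
    def surface(mask):
        # surface voxels = mask minus its erosion
        return np.argwhere(mask & ~binary_erosion(mask))
    ps, gs = surface(pred), surface(gt)
    # pairwise Euclidean distances between surface voxels
    d = np.linalg.norm(ps[:, None, :] - gs[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

DSC rewards overlap (1.0 is perfect), while HD95 penalizes boundary outliers but, unlike the plain Hausdorff distance, ignores the worst 5% of surface mismatches.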