This repository provides a complete pipeline for generating, training, and deploying automated visual anomaly detection in real-time 3D applications (e.g., video games). It covers both texture and mesh anomaly detection through two-stage deep learning architectures, as well as Unity-based data generation tools and deployment artifacts.
It is a continuation of two papers:
- EASE 2024: "Automated evaluation of game content display using deep learning", https://dl.acm.org/doi/10.1145/3661167.3661184
- EASE 2025 (to be presented): https://conf.researchr.org/details/ease-2025/ease-2025-industry-papers/6/Hierarchical-deep-learning-framework-for-continuous-state-aware-visual-glitch-detect
## Repository Structure

- `DataGenerator/`: Unity C# scripts to introduce controlled visual anomalies and capture screenshots plus metadata.
- `Pipelines/`: Python implementations of the two-stage detection pipelines and orchestration scripts.
- `Deployments/`: Artifacts for containerization and service deployment: Docker, Kubernetes, gRPC server code, scripts, configs, and docs in subfolders (see the Deployments README).
## Data Generation

- Open the Unity project in `DataGenerator/`.
- Configure and run the dataset generator.
- See the Dataset Generator Documentation for details.
## Training & Inference

- Install Python dependencies: `pip install torch torchvision tqdm pyyaml`
- Follow each pipeline's user guide.
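The two-stage idea behind the pipelines can be sketched in PyTorch as follows: a coarse stage flags a whole frame as anomalous, and a fine stage classifies the anomaly type on a cropped region. The tiny networks, the 0.5 threshold, and the fixed crop are illustrative stand-ins, not the repository's actual models:

```python
import torch
import torch.nn as nn

class StageOne(nn.Module):
    """Coarse frame-level classifier: normal vs. anomalous (illustrative)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class StageTwo(nn.Module):
    """Fine-grained classifier over flagged regions (illustrative)."""
    def __init__(self, num_anomaly_types=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, num_anomaly_types)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

frame = torch.rand(1, 3, 224, 224)  # stand-in for a captured screenshot
stage1, stage2 = StageOne().eval(), StageTwo().eval()
with torch.no_grad():
    coarse = stage1(frame).softmax(dim=1)      # P(normal), P(anomalous)
    if coarse[0, 1] > 0.5:                     # frame flagged as anomalous
        crop = frame[:, :, :112, :112]         # placeholder region proposal
        fine = stage2(crop)                    # per-anomaly-type logits
```

Running stage two only on flagged frames keeps per-frame cost low, which matters for real-time 3D applications.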
## Deployment

- Follow the instructions in the Deployments README to build and run the service.

For detailed information on each component, follow the links above.