Members:
- Abhilash Kurapati (Email: [email protected], Banner ID: 001298038)
- Navya Srujana Cherukuneedi (Email: [email protected], Banner ID: 001269524)
- Shruthi Nandakumar (Email: [email protected], Banner ID: 001290810)
- Ashwin Pawar (Email: [email protected], Banner ID: 001296012)
Advisor: Dr. Hadi Ali Akbarpour
Team Lead: Param Sangani
Institution: Saint Louis University
Event cameras, known for their high temporal resolution, minimal motion blur, and low latency, are well suited to fields such as robotics, autonomous navigation, and augmented reality. However, existing feature tracking methods struggle under the complex, dynamic conditions found in aerial imagery.
This project aims to develop a data-driven feature tracker for aerial imagery by integrating innovative techniques to enhance model robustness, adaptability, and stability in challenging environments.
High-altitude platforms (e.g., drones and satellites) demand advanced imaging systems for applications such as environmental monitoring, disaster management, and surveillance. Current feature tracking methods, though effective, lack robustness to feature drift, widely varying object sizes, and dynamic aerial scenes.
This research will address these issues by exploring enhancements in adaptive attention, multi-scale tracking, and self-supervised learning techniques.
Adaptive Attention Mechanism:
- Improve frame attention modules to prioritize features based on tracking difficulty.
- Concept: Dynamically adjust attention weights based on feature drift and reliability scores (a code sketch follows this list).
- Benefit: Enhanced robustness and reduced feature drift in complex scenes.

Multi-Scale Tracking Architecture:
- Introduce multi-resolution layers to handle objects of varying sizes and altitudes.
- Concept: Use feature pyramid networks to handle diverse object sizes (a code sketch follows this list).
- Benefit: Improved tracking across objects at varying altitudes and scales.

Self-Supervised Online Adaptation:
- Enable real-time model adaptation to new aerial environments without labeled data.
- Concept: Implement real-time fine-tuning using self-supervised learning objectives (a code sketch follows this list).
- Benefit: Continuous performance improvements without labeled data.
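
Sketch: adaptive attention. Below is a minimal, illustrative PyTorch sketch of how reliability-weighted frame attention could work. The module name `ReliabilityWeightedAttention`, the scalar drift input, and the gating scheme are assumptions made for illustration, not the project's final design.

```python
import torch
import torch.nn as nn


class ReliabilityWeightedAttention(nn.Module):
    """Self-attention over per-feature descriptors, biased by a reliability score.

    Features estimated to be drifting (low reliability) fall back to their own
    descriptor instead of the attended mixture, limiting drift propagation.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Small MLP mapping a scalar drift estimate to a reliability score in (0, 1).
        self.reliability_head = nn.Sequential(
            nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
        )

    def forward(self, feats: torch.Tensor, drift: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim) descriptors for N tracked features in one frame
        # drift: (B, N, 1) per-feature drift estimate (e.g., recent residual motion)
        reliability = self.reliability_head(drift)      # (B, N, 1)
        attended, _ = self.attn(feats, feats, feats)    # share context across tracks
        return reliability * attended + (1.0 - reliability) * feats


if __name__ == "__main__":
    module = ReliabilityWeightedAttention(dim=64)
    feats = torch.randn(2, 32, 64)      # 2 frames, 32 tracked features each
    drift = torch.rand(2, 32, 1)        # dummy drift estimates
    print(module(feats, drift).shape)   # torch.Size([2, 32, 64])
```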
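
Sketch: multi-scale tracking. A minimal sketch of a small feature-pyramid backbone, assuming single-channel event frames as input. The channel widths, pyramid depth, and the name `SmallFPN` are illustrative choices rather than the finalized architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallFPN(nn.Module):
    """Three-level feature pyramid: coarse levels cover large / low-altitude
    objects, fine levels cover small / high-altitude objects."""

    def __init__(self, in_ch: int = 1, out_ch: int = 64):
        super().__init__()
        # Bottom-up pathway (progressively lower resolution, wider features).
        self.c1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, stride=1, padding=1), nn.ReLU())
        self.c2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.c3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        # 1x1 lateral convs project every level to a common descriptor width.
        self.l1 = nn.Conv2d(32, out_ch, 1)
        self.l2 = nn.Conv2d(64, out_ch, 1)
        self.l3 = nn.Conv2d(128, out_ch, 1)

    def forward(self, x):
        f1 = self.c1(x)        # full resolution
        f2 = self.c2(f1)       # 1/2 resolution
        f3 = self.c3(f2)       # 1/4 resolution
        # Top-down pathway: upsample coarse maps and fuse them into finer ones.
        p3 = self.l3(f3)
        p2 = self.l2(f2) + F.interpolate(p3, scale_factor=2, mode="nearest")
        p1 = self.l1(f1) + F.interpolate(p2, scale_factor=2, mode="nearest")
        return p1, p2, p3      # multi-scale maps for patch sampling around features


if __name__ == "__main__":
    fpn = SmallFPN()
    event_frame = torch.randn(1, 1, 64, 64)   # e.g., a 64x64 event-count image
    for p in fpn(event_frame):
        print(p.shape)
```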
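
Sketch: self-supervised online adaptation. One possible label-free objective is forward-backward (cycle) consistency, fine-tuned on live frame pairs. The tracker interface, the 5-pixel clamp, and the `DummyTracker` used in the demo are assumptions for illustration only.

```python
import torch


def online_adaptation_step(tracker, optimizer, frame_t, frame_t1, points_t):
    """One label-free fine-tuning step on a live pair of frames.

    Assumed interface: tracker(frame_a, frame_b, pts) -> predicted locations of
    pts in frame_b. points_t is an (N, 2) tensor of feature locations in frame_t.
    """
    tracker.train()
    optimizer.zero_grad()

    # Track forward, then backward; a reliable tracker should return to the start.
    points_t1 = tracker(frame_t, frame_t1, points_t)
    points_back = tracker(frame_t1, frame_t, points_t1)

    cycle_error = (points_back - points_t).norm(dim=-1)  # (N,) pixel error per track
    # Clamp gross failures so outliers do not dominate the update.
    loss = cycle_error.clamp(max=5.0).mean()

    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # DummyTracker: a tiny stand-in model, purely to make the sketch runnable.
    class DummyTracker(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Linear(3, 2)

        def forward(self, frame_a, frame_b, pts):
            ctx = frame_b.mean() * torch.ones(pts.shape[0], 1)
            return pts + self.net(torch.cat([pts, ctx], dim=-1))

    tracker = DummyTracker()
    optimizer = torch.optim.Adam(tracker.parameters(), lr=1e-4)
    frame_t, frame_t1 = torch.rand(1, 64, 64), torch.rand(1, 64, 64)
    points_t = torch.rand(16, 2) * 64
    print(online_adaptation_step(tracker, optimizer, frame_t, frame_t1, points_t))
```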
- Literature Review: Analyze methods in adaptive attention, multi-scale tracking, and self-supervised learning for aerial imagery.
- Model Development: Integrate proposed mechanisms into a unified framework.
- Training:
  - Train on synthetic aerial data (a training-schedule sketch follows this list).
  - Use self-supervised methods for real-world datasets.
- Evaluation: Validate using diverse aerial datasets for robustness and adaptability (a metrics sketch follows this list).
- Optimization: Apply model compression for real-time deployment (a compression sketch follows this list).
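
Training-schedule sketch. A minimal outline of the two-phase schedule above: supervised pre-training on synthetic data with ground-truth tracks, followed by self-supervised fine-tuning on unlabeled real sequences. The loader and tracker interfaces, loss choices, and epoch counts are assumed for illustration.

```python
import torch
import torch.nn.functional as F


def train_two_phase(tracker, synthetic_loader, real_loader,
                    epochs_syn=10, epochs_real=5, lr=1e-4):
    """Supervised pre-training on synthetic data, then self-supervised fine-tuning."""
    optimizer = torch.optim.Adam(tracker.parameters(), lr=lr)

    # Phase 1: regress against ground-truth tracks available for synthetic data.
    for _ in range(epochs_syn):
        for frame_t, frame_t1, pts_t, pts_t1_gt in synthetic_loader:
            optimizer.zero_grad()
            pts_t1 = tracker(frame_t, frame_t1, pts_t)
            F.l1_loss(pts_t1, pts_t1_gt).backward()
            optimizer.step()

    # Phase 2: label-free fine-tuning on real sequences via cycle consistency.
    for _ in range(epochs_real):
        for frame_t, frame_t1, pts_t in real_loader:
            optimizer.zero_grad()
            pts_t1 = tracker(frame_t, frame_t1, pts_t)
            pts_back = tracker(frame_t1, frame_t, pts_t1)
            (pts_back - pts_t).norm(dim=-1).clamp(max=5.0).mean().backward()
            optimizer.step()

    return tracker
```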
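
Metrics sketch. One common way to quantify tracking robustness is average endpoint error together with an inlier rate at a pixel threshold; the specific metrics and the 5-pixel threshold here are assumptions, not the project's fixed evaluation protocol.

```python
import torch


def track_metrics(pred: torch.Tensor, gt: torch.Tensor, thresh: float = 5.0):
    """pred, gt: (T, N, 2) feature locations over T frames for N tracks."""
    err = (pred - gt).norm(dim=-1)  # (T, N) per-frame pixel error
    return {
        "endpoint_error_px": err.mean().item(),               # average localisation error
        "inlier_rate": (err < thresh).float().mean().item(),  # fraction within threshold
    }


if __name__ == "__main__":
    gt = torch.rand(100, 32, 2) * 256     # reference tracks (demo data)
    pred = gt + torch.randn_like(gt)      # noisy predictions (demo data)
    print(track_metrics(pred, gt))
```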
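
Compression sketch. Post-training dynamic INT8 quantization of linear layers is shown as one option for real-time deployment; the stand-in model is illustrative, and the project may instead use pruning or distillation.

```python
import torch
import torch.nn as nn

# Stand-in for a trained tracker head (illustrative architecture only).
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))

# Quantize the weights of all Linear layers to INT8; activations stay in float.
compressed = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 64)
print(compressed(x).shape)  # same interface as before, smaller and faster on CPU
```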
- Advanced feature tracking with adaptive attention for enhanced stability.
- Multi-scale architecture for consistent performance across object sizes.
- Real-time, self-supervised adaptation ensuring robustness in dynamic scenarios.
For more information or queries, please contact the team members or advisor listed above.