AI Researcher | Machine Learning Engineer | Human-AI Interaction Enthusiast
🚀 Passionate about AI-driven behavioral modeling, multimodal learning, and human-computer interaction, I specialize in machine learning, reinforcement learning, and AI applications in AR/VR. My work focuses on building AI-powered tools for behavioral analysis, human-agent collaboration, and interactive experiences.
- AI-driven AR/VR simulations and player behavior analysis
- AI for emotion-aware computing and multimodal learning (audio, video, text)
- Generative AI & LLMs for code analysis and autonomous agents
- Reinforcement learning applied to human-agent collaboration
🔹 Developed an AI model for recognizing human emotions from physiological and behavioral signals in VR environments.
🔹 Used EEG, eye-tracking, and reinforcement learning to adapt virtual scenarios based on user state.
🔹 Tech: Unity, Python, PyTorch, TensorFlow
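The state-driven adaptation loop described above can be sketched in a few lines. This is a toy illustration, not project code: `estimate_arousal` and `adapt_intensity` are hypothetical names, and the simple proportional update stands in for the actual reinforcement-learning policy.

```python
# Toy sketch: adapt a VR scenario's intensity from a user-state estimate
# (e.g., arousal inferred from EEG / eye-tracking features).
# All names and gains here are illustrative assumptions.

def estimate_arousal(features):
    """Toy arousal score in [0, 1]: clamped mean of normalized features."""
    return max(0.0, min(1.0, sum(features) / len(features)))

def adapt_intensity(current, arousal, target=0.5, gain=0.3):
    """Nudge scenario intensity toward keeping arousal near the target."""
    return max(0.0, min(1.0, current + gain * (target - arousal)))

intensity = 0.5
arousal = estimate_arousal([0.9, 0.8, 0.7])   # over-aroused user
intensity = adapt_intensity(intensity, arousal)  # intensity is eased down
```

In the real system this mapping would be learned rather than hand-tuned; the sketch only shows the closed loop from sensed state to scenario adjustment.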
- Granger Leadership in a Novel Dyadic Search Paradigm
- Using Multi-Modal Physiological Markers and Latent States to Understand Team Performance and Collaboration
- Latent State Synchronization in Dyadic Partners using EEG
- Decoding neural activity to assess individual latent state in ecologically valid contexts
🔹 Created machine learning models to analyze team dynamics, integrating speech, video, and physiological data for real-time insights.
🔹 Used deep learning, NLP, and computer vision to study human interactions.
🔹 Tech: Python, Hugging Face, TensorFlow, Pandas, SciPy
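The multimodal integration step can be sketched as simple late fusion: per-modality feature vectors (speech, video, physiology) are normalized and concatenated before a downstream model. A minimal sketch, with illustrative names and toy feature values:

```python
import numpy as np

# Late-fusion sketch for multimodal team-dynamics features.
# Each modality's features are z-scored, then concatenated.

def zscore(x):
    """Standardize a feature vector; constant vectors map to zeros."""
    x = np.asarray(x, dtype=float)
    std = x.std()
    return (x - x.mean()) / std if std > 0 else x - x.mean()

def fuse(speech, video, physio):
    """Concatenate normalized per-modality feature vectors."""
    return np.concatenate([zscore(speech), zscore(video), zscore(physio)])

# Toy inputs: 2 speech features, 3 video features, 2 physiological features
features = fuse([0.2, 0.8], [1.0, 3.0, 5.0], [60.0, 90.0])
```

Per-modality normalization keeps signals with different units (e.g., pitch vs. heart rate) on a comparable scale before fusion.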
- Hierarchical Multi-Agent Reinforcement Learning with Explainable Decision Support for Human-Robot Teams
🔹 AI-powered chatbot that analyzes codebases, extracts key functions, and generates insightful recommendations.
🔹 Uses LLMs, embeddings, and FAISS-based retrieval for efficient search.
🔹 Tech: Python, Flask, FAISS, PyTorch, OpenAI API, Ollama
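The embedding-retrieval step works roughly as follows: code snippets are embedded as vectors, indexed, and the query's nearest neighbors are returned. A minimal sketch; plain NumPy cosine similarity stands in here for the FAISS index, and `embed` is a toy deterministic stand-in for a real embedding model:

```python
import numpy as np

# Retrieval sketch: toy embeddings + cosine-similarity nearest neighbors.
# In the actual project this step is backed by FAISS and a real embedding
# model; everything below is an illustrative stand-in.

def embed(text, dim=8):
    """Toy deterministic 'embedding' from character codes, L2-normalized."""
    v = np.zeros(dim)
    for i, ch in enumerate(text):
        v[i % dim] += ord(ch)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

docs = ["parse the config file", "train the model", "open a socket"]
index = np.stack([embed(d) for d in docs])  # one row per document

def search(query, k=1):
    """Return indices of the k most similar docs by cosine similarity."""
    sims = index @ embed(query)   # dot product of unit vectors = cosine
    return np.argsort(-sims)[:k].tolist()

top = search("train the model")
```

Swapping the NumPy index for `faiss.IndexFlatL2` (or an approximate index) changes the scale this handles, not the retrieval logic.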
🔹 Developed neuromodulation techniques to restore motor function, using dorsal root ganglia (DRG) microstimulation to evoke postural responses and closed-loop neuromuscular stimulation for grasp force control.
🔹 Investigated somatosensory feedback restoration through microstimulation and designed a wearable textile-based electrode system to regulate precise finger movements in individuals with quadriplegia.
🔹 Tech: Python, MATLAB, Signal Processing, Neural Interfaces, Closed-Loop Control
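The feedforward-feedback control scheme behind the grasp-force work can be illustrated with a toy proportional controller. The gains, the linear force-to-stimulation map, and the simulated plant below are all illustrative assumptions, not published parameters:

```python
# Toy feedforward-feedback control sketch for grasp-force regulation:
# a feedforward term maps the desired force to a stimulation level, and
# a proportional feedback term corrects the residual error.

def feedforward(target_force, gain_ff=2.0):
    """Assumed linear force-to-stimulation map (illustrative)."""
    return gain_ff * target_force

def control_step(target_force, measured_force, gain_fb=0.5):
    """One control update: feedforward plus proportional feedback."""
    error = target_force - measured_force
    return feedforward(target_force) + gain_fb * error

# Simulated plant: stimulation produces force with a fixed gain of 0.5.
force = 0.0
for _ in range(50):
    stim = control_step(target_force=1.0, measured_force=force)
    force = 0.5 * stim   # converges toward the 1.0 N target
```

The feedforward term does the bulk of the work when the plant model is accurate; the feedback term mops up model error and disturbances, which is why the combination is robust.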
- Closed-loop neuromuscular electrical stimulation using feedforward-feedback control and textile electrodes to regulate grasp force in quadriplegia
- DRG microstimulation evokes postural responses in awake, standing felines
💻 Programming: Python, C++, C#, MATLAB, R, Linux, Git
📊 Machine Learning: PyTorch, TensorFlow, Scikit-learn, OpenCV, Hugging Face
🕶️ AR/VR & Simulation: Unity, Unreal Engine, Reinforcement Learning
🎧 Multimodal AI: Speech & Audio Processing, Emotion Recognition, Video Analysis
If you're interested in AI, reinforcement learning, multimodal ML, or AR/VR research, let's connect!
🌐 LinkedIn | 📫 [email protected]