Quiet Interaction In Mixed Reality - Designing an Accessible Home Environment for DHH Individuals through AR, AI, and IoT Technologies
Link to Full Video: YouTube Link
Link to System Interaction Tutorial: YouTube Link
Test the APK (basic interaction without Raspberry Pi integration): Sample APK
How might the integration of AR, AI, and IoT technologies on HMD platforms open up new avenues for improving the home living experience of Deaf and Hard of Hearing (DHH) individuals?
As technology rapidly evolves, voice-command-based smart assistants are becoming integral to our daily lives. However, this advancement overlooks the needs of the Deaf and Hard of Hearing (DHH) community, creating a technological gap in current systems. To address this technological oversight, this study develops a Mixed-Reality (MR) application that integrates Augmented Reality (AR), Artificial Intelligence (AI), and the Internet of Things (IoT) to fill the gaps in safety, communication, and accessibility for DHH individuals at home.
- Literature Review: Gain insights into the daily challenges that DHH individuals encounter at home, as well as the advantages and limitations of current solutions.
- Online Survey with DHH Individuals: Gain a more detailed understanding of their specific challenges and needs.
- FlatKit (only used for the outline effects; feel free to purchase it.)
- 3D WebView for Android (Web Browser) by Vuplex. Alternatively, UnityOculusAndroidVRBrowser by IanPhilips [https://github.com/IanPhilips/UnityOculusAndroidVRBrowser] should also work and is free, so you might want to give it a try.
- Meta Quest 2+. (I used the Quest 3 for testing; however, the Quest 2 and Quest Pro should also work.)
- Zigbee devices: List of supported Zigbee devices
- ReSpeaker 4-Mic Array for Raspberry Pi module. (This could be any microphone, as long as it is capable of capturing audio for analysis by the Raspberry Pi.)
- SONOFF Zigbee USB Dongle and Raspberry Pi 4B for smart-device communication: SONOFF Zigbee dongle
- Surveillance System: a webcam with a microphone + Raspberry Pi 4B + PWM Servo Driver
- Microsoft Azure for Real-Time Speech Diarization (note that Python is not currently supported; however, refer to the official documentation for any updates), Speech-To-Text, and Text-To-Speech (a minimal Python sketch of the speech services follows this list)
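If you are new to the Azure Speech SDK, the following minimal Python sketch shows the kind of speech-to-text and text-to-speech calls involved. It is only an illustration of the SDK, not the exact code in Livecaption.py or Translation.py, and the environment-variable names are placeholders for your own key and region.

```python
import os
import azure.cognitiveservices.speech as speechsdk

# Placeholder environment variables; substitute your own Azure Speech key and region.
speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["AZURE_SPEECH_KEY"],
    region=os.environ["AZURE_SPEECH_REGION"],
)

# Speech-to-text: recognize a single utterance from the default microphone
# (e.g. the ReSpeaker 4-Mic Array attached to the Raspberry Pi).
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)

# Text-to-speech: speak a short notification through the default audio output.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Someone is at the front door.").get()
```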
Example of MQTT workflow
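As a concrete illustration of this MQTT workflow, here is a minimal Python sketch of a Raspberry Pi script that publishes captions for the headset and listens to Zigbee2MQTT device updates. The broker address and topic names are placeholders rather than the exact ones used by the scripts in this repo, and the snippet assumes a Mosquitto broker and the paho-mqtt client.

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.1.50"  # placeholder: your Raspberry Pi's IP address
BROKER_PORT = 1883            # default Mosquitto port; adjust if you changed it

# paho-mqtt 1.x style; paho-mqtt 2.x additionally expects a CallbackAPIVersion argument.
client = mqtt.Client()

# Print every device state change reported by Zigbee2MQTT (e.g. a water-leak sensor).
def on_message(_client, _userdata, msg):
    print(msg.topic, msg.payload.decode())

client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT)
client.subscribe("zigbee2mqtt/#")

# Publish a caption so the Quest app (subscribed to the same topic) can display it.
caption = {"speaker": "Guest 1", "text": "Dinner is ready."}
client.publish("home/captions", json.dumps(caption))

client.loop_forever()
```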
As this project involves communication between hardware and APIs, it requires a somewhat complex setup and a basic understanding of Raspberry Pi. If you're interested in experimenting with the application, please ensure you have the following setup:
- Set up your Zigbee Dongle and MQTT broker (you can refer to this tutorial) on your Raspberry Pi, and enter your API keys for Azure Speech Service (a free tier is available).
- Run the Python scripts on the Raspberry Pi; remember to change the IP address and note the port numbers (feel free to change them) used in these scripts. For the speech-services script, you can choose between Livecaption.py and Translation.py; there is no need to run both.
- Import FlatKit & 3D WebView for Android
- Enter your Azure Speech key and Live Caption port in the Inspector:
- Configure MQTT settings in the Inspector:
- Surveillance System: (a minimal pan/tilt servo sketch is shown below)
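If the PWM servo driver is used to pan and tilt the webcam, the sketch below shows the basic idea. It assumes a 16-channel PCA9685-style driver and the Adafruit ServoKit library; the channel numbers and angles are placeholders, and in the actual system the target angles would come from the headset (e.g. over MQTT) rather than a fixed sweep.

```python
import time
from adafruit_servokit import ServoKit

# Assumes a 16-channel PCA9685-based PWM servo driver on the Pi's I2C bus.
kit = ServoKit(channels=16)

PAN_CHANNEL = 0   # placeholder: channel the pan servo is wired to
TILT_CHANNEL = 1  # placeholder: channel the tilt servo is wired to

def point_camera(pan_deg, tilt_deg):
    """Move the webcam mount to the requested pan/tilt angles (0-180 degrees)."""
    kit.servo[PAN_CHANNEL].angle = max(0, min(180, pan_deg))
    kit.servo[TILT_CHANNEL].angle = max(0, min(180, tilt_deg))

# Simple sweep to verify the wiring before hooking this up to headset commands.
for pan in (45, 90, 135, 90):
    point_camera(pan, 90)
    time.sleep(0.5)
```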
Given the complexity of the setup, if you encounter any issues during setup or testing, please do not hesitate to contact me. Let's work together to build a more accessible home living environment for the DHH community.
- User Menu Mechanism
- 3D Representative User Home Model and Room Selection
- Tutorial for first-time users
- Lamp Control
- Blinds Control
- Example 1: Microwave Status
- Water Leaking
- Fridge Door Opened
- Active Keyboard
- Speech-to-Text and Text-to-Speech
(Note that the participant’s face is blurred to protect privacy and maintain anonymity.)