The idea of aframe-ai is to lower the barrier to entry for designers, students, and the general public to explore ideas and create prototypes powered by AI.
A-Frame can act as an AI simulator like this:
- Capture input:
  - capture images from the current VR scene
  - capture images from the webcam looking at the user
  - capture audio from the user
  - capture audio from the current VR scene
- Send the input to a machine-learning service
- Use the result to:
  - control non-human players / robots
  - simulate assistive technology in the browser
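The loop above can be sketched in a few functions. This is a minimal illustration, not code from this repo: the `/api/describe` endpoint, its JSON shape, and the movement labels are all assumptions; only the canvas capture and the Web Speech API are standard browser features.

```typescript
// Browser globals (document, fetch, speechSynthesis) accessed loosely so the
// sketch stays environment-agnostic.
const g = globalThis as any;

// 1. Capture input: grab the current frame of the A-Frame scene,
//    which renders into a <canvas> element.
function captureSceneFrame(): string {
  const canvas = g.document.querySelector("a-scene canvas");
  return canvas.toDataURL("image/png"); // base64-encoded PNG
}

// 2. Send the input to a machine-learning service
//    (the endpoint and response shape are hypothetical).
async function describeScene(): Promise<string> {
  const res = await g.fetch("/api/describe", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: captureSceneFrame() }),
  });
  return (await res.json()).description;
}

// 3a. Use the result: speak the description aloud via the Web Speech API,
//     in the spirit of the Assistive-Aframe example.
async function announceScene(): Promise<void> {
  const utterance = new g.SpeechSynthesisUtterance(await describeScene());
  g.speechSynthesis.speak(utterance);
}

// 3b. Use the result: map an ML label to a movement command for a
//     non-human player (labels and step size are illustrative).
type Command = { axis: "x" | "z"; delta: number };
function labelToCommand(label: string): Command | null {
  switch (label) {
    case "forward": return { axis: "z", delta: -0.1 };
    case "back":    return { axis: "z", delta: 0.1 };
    case "left":    return { axis: "x", delta: -0.1 };
    case "right":   return { axis: "x", delta: 0.1 };
    default:        return null;
  }
}
```

The same skeleton covers all the examples below: only the capture source (scene canvas, webcam, microphone) and the "use the result" step change.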
This repo showcases some examples of this approach:
- Assistive-Aframe — AI describing the virtual world to vision-impaired users
- Voice-Control — Use your voice to navigate a virtual world
- Head-Control — Move your head to navigate a virtual world