How can you implement a context-sensitive UI assistant using AWS services such as Alexa, Lex, or Rekognition? For a new service, the context comes from your BMW: the driver's user profile, facial recognition, vehicle data, images of the vehicle's surroundings, and more. How can we use all of this to enrich the driver's experience? BMW will provide the car and the API to connect to it (Git repo). Bring these data together with third-party skills (Spotify, TripAdvisor, you name it!) and surprise us with your emotion-recognition assistant of the future.
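As a starting point for the emotion-recognition angle, here is a minimal sketch that reads the driver's dominant emotion from a cabin-camera frame with Amazon Rekognition. It assumes AWS credentials are configured and that a JPEG frame is available; `get_driver_frame()` is a hypothetical stand-in for whatever the BMW car API actually exposes.

```python
# Minimal sketch: detect the driver's dominant emotion from a camera frame
# using Amazon Rekognition. AWS credentials are assumed to be configured;
# get_driver_frame() is a hypothetical stand-in for the BMW car API.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")

def dominant_emotion(frame_bytes: bytes) -> str:
    """Return the most confident emotion label for the first detected face."""
    response = rekognition.detect_faces(
        Image={"Bytes": frame_bytes},
        Attributes=["ALL"],  # "ALL" includes emotions, not just bounding boxes
    )
    faces = response["FaceDetails"]
    if not faces:
        return "UNKNOWN"
    emotions = faces[0]["Emotions"]  # list of {"Type": ..., "Confidence": ...}
    return max(emotions, key=lambda e: e["Confidence"])["Type"]

# Example: adapt the assistant's tone to the detected emotion.
# frame = get_driver_frame()        # hypothetical BMW API call
# print(dominant_emotion(frame))    # e.g. "HAPPY", "CALM", "ANGRY"
```

The returned label (e.g. "ANGRY" in heavy traffic) could then drive what the assistant says or which third-party skill it triggers.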
What should the Digital Companion of the future be able to do? If you are into communicating with machines and love NLP, we look forward to your use cases for a personal in-car assistant or a smart automation system. BMW will provide you with the BMW Car API, and you will use Amazon services such as Alexa, Lex, and AWS IoT, linked with the in-car information provided to you: user profile, vehicle data, and third-party skills. Develop your idea of a companion and implement it on site.
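To show how the pieces could fit together, here is a minimal sketch that forwards a driver utterance to an Amazon Lex bot and passes vehicle data along as session attributes, so the bot can answer context-aware questions. The bot name, alias, and the vehicle payload are assumptions, not part of the provided setup.

```python
# Minimal sketch: send one driver utterance to an Amazon Lex bot, carrying
# vehicle data as session attributes. Bot name/alias and the vehicle payload
# are hypothetical; real values come from your own Lex bot and the BMW Car API.
import json
import boto3

lex = boto3.client("lex-runtime", region_name="eu-west-1")

def ask_companion(user_id: str, utterance: str, vehicle_data: dict) -> str:
    """Send an utterance to the Lex bot and return its reply text."""
    response = lex.post_text(
        botName="InCarCompanion",   # hypothetical bot name
        botAlias="prod",            # hypothetical alias
        userId=user_id,
        sessionAttributes={"vehicle": json.dumps(vehicle_data)},
        inputText=utterance,
    )
    return response.get("message", "")

# Example usage with made-up vehicle data from the BMW Car API:
# reply = ask_companion("driver-42", "Do I need fuel before this trip?",
#                       {"fuelLevelPercent": 18, "rangeKm": 95})
# print(reply)
```

The same pattern extends naturally: an AWS IoT rule could push vehicle telemetry into the session, and the Lex fulfillment Lambda could call out to third-party skills such as Spotify or TripAdvisor.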