Three of the Most Innovative Automotive AI Applications at AutoSens Detroit

Published on June 12, 2019

Appen recently exhibited at AutoSens Detroit, where we shared our approach to data collection and annotation for in-car navigation, infotainment, and monitoring, as well as autonomous vehicle solutions. Here are some of the coolest innovations in automotive AI, as spotted by our team:

Mercedes-Benz’s Intelligent Interior

Volker Entenmann, Senior Manager of UI Functions at Daimler AG, shared how Mercedes-Benz is automating driving tasks and creating a more intuitive, connected user experience for both drivers and passengers. After releasing MBUX (Mercedes-Benz User Experience) in 2018, which included touch input on the steering wheel, a touchpad on the center console, and cutting-edge natural language recognition, Mercedes wanted to assist drivers further by automating both exterior and interior functions.

Photo: Mercedes-Benz via mercedes-benz.com/a-class/com/en/mbux/

To establish an all-new category of seamless, intuitive user experience, the company recently launched MBUX Interior Assistant, now available in GLE and CLA models. Interior Assistant is a camera-based system that reads the body language of drivers and passengers, automating and providing direct access to functions like music, climate control, lighting, and seating. It provides proximity detection for both the touchscreen and the center console: when you move your hand close to the screen, it immediately responds by highlighting icons on the home screen, showing the navigation bar on demand, and activating the radio and media cover flow. The system also recognizes gestures and can distinguish between the driver and different passengers.
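
To make the proximity-detection idea concrete, here is a minimal, hypothetical sketch of how a head unit might react as a hand approaches the screen. The distance threshold, the field names, and the idea of using the hand’s horizontal position to tell driver from passenger are illustrative assumptions, not Mercedes-Benz’s implementation.

```python
# Hypothetical sketch of proximity-triggered UI behavior. Thresholds and the
# driver/passenger heuristic are invented for illustration.
from dataclasses import dataclass

@dataclass
class HandObservation:
    distance_cm: float   # estimated hand-to-screen distance from the cabin camera
    x_norm: float        # horizontal position: 0.0 = far left, 1.0 = far right

def ui_response(hand: HandObservation, proximity_cm: float = 10.0) -> dict:
    """Decide how the head unit reacts as a hand approaches the touchscreen."""
    near = hand.distance_cm <= proximity_cm
    # Left-hand-drive assumption: hands entering from the left belong to the driver.
    occupant = "driver" if hand.x_norm < 0.5 else "passenger"
    return {
        "highlight_icons": near,
        "show_nav_bar": near,
        "activate_media_coverflow": near,
        "occupant": occupant,
    }

print(ui_response(HandObservation(distance_cm=6.0, x_norm=0.3)))
# {'highlight_icons': True, 'show_nav_bar': True,
#  'activate_media_coverflow': True, 'occupant': 'driver'}
```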

Parallel Domain’s Synthetic World-Building

While autonomous vehicles need to be trained on millions of data points to learn tasks like object detection, collecting that data through real-world driving is a massive undertaking. Parallel Domain’s technology automatically generates virtual environments for testing autonomous vehicles, based on real-world OpenStreetMap data.

While there is not yet consensus that synthetic data alone is viable for autonomous vehicle training, Parallel Domain’s CEO and Founder Kevin McNamara explained that simulated data removes some of the challenges of collecting data through real-world driving; namely, that real-world driving is dangerous, expensive, slow, and repetitive. McNamara believes that companies can reduce both the danger and the development time of autonomous systems by multiplying the amount of simulation performed today.

Procedural content generation is useful for producing large datasets. Parallel Domain fills in gaps like parked cars and trees where it’s logical, generates a best-guess HD map based on the center lines of lanes, and then builds virtual worlds with different lighting variants, pulling in distant satellite imagery like background mountains. These aren’t just video simulations: they contain rich metadata, with buffers for semantic segmentation and instant object tracking built in.
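
The procedural-generation idea can be sketched in a few lines. The toy example below takes lane center lines, fills plausible roadside gaps, and emits several lighting variants, each paired with label metadata. The function and field names are hypothetical; a real pipeline renders full 3D scenes, not dictionaries.

```python
# Toy sketch of procedural scene generation with "free" ground-truth labels.
# All names and structures here are illustrative assumptions.
import random

LIGHTING_VARIANTS = ["dawn", "noon", "overcast", "dusk", "night"]

def generate_scenes(lane_centerlines, seed=0):
    rng = random.Random(seed)
    # Best-guess HD map derived from the lane center lines (placeholder).
    hd_map = {"lanes": lane_centerlines}
    # Procedurally fill in roadside content where it is plausible.
    props = [
        {"type": rng.choice(["parked_car", "tree"]),
         "lane": lane,
         "offset_m": rng.uniform(2.0, 5.0)}
        for lane in range(len(lane_centerlines))
        for _ in range(rng.randint(1, 3))
    ]
    # Every lighting variant ships with label buffers attached.
    return [
        {
            "lighting": lighting,
            "hd_map": hd_map,
            "props": props,
            "labels": {"semantic_segmentation": "per-pixel class buffer",
                       "object_tracking": "per-object IDs across frames"},
        }
        for lighting in LIGHTING_VARIANTS
    ]

scenes = generate_scenes([[(0, 0), (0, 100)], [(3.5, 0), (3.5, 100)]])
print(len(scenes), scenes[0]["lighting"], len(scenes[0]["props"]))
```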

Guardian Optical Technologies’ Optical Cabin Control

Guy Raz, CTO of Guardian Optical Technologies, presented how his company is enhancing the safety of in-cabin experiences and paving the way for truly autonomous vehicles. Using multi-layered sensors that combine machine vision, depth perception, and micro-motion detection, along with 2D and 3D detection and motion classification, the system can identify and map the driver and every passenger in the cabin. By identifying the individuals in the automobile, the system is designed to protect them with features like drowsy-driver detection, seat belt reminders, micro-vibration detection that reminds parents of a sleeping child if they attempt to leave the vehicle, and smartphone detection to prevent hazardous distractions or illegal driving conditions.
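
A schematic sketch of this kind of multi-signal in-cabin monitoring appears below: each occupant carries readings from the vision, depth, and micro-motion channels, and simple rules map them to safety alerts. The signal names and thresholds are invented for illustration and are not Guardian’s actual algorithms.

```python
# Illustrative rules mapping fused cabin-sensor signals to safety alerts.
# Field names and thresholds are assumptions, not the vendor's design.
from dataclasses import dataclass

@dataclass
class Occupant:
    seat: str
    is_driver: bool
    seatbelt_on: bool
    eye_closure_ratio: float   # from machine vision: 0.0 (open) to 1.0 (closed)
    micro_motion_hz: float     # breathing-band vibration from micro-motion sensing
    phone_in_hand: bool

def cabin_alerts(occupants, vehicle_parked=False):
    alerts = []
    for o in occupants:
        if not o.seatbelt_on and not vehicle_parked:
            alerts.append(f"seatbelt reminder: {o.seat}")
        if o.is_driver and o.eye_closure_ratio > 0.7:
            alerts.append("drowsy driver warning")
        if o.is_driver and o.phone_in_hand and not vehicle_parked:
            alerts.append("smartphone distraction warning")
        # Micro-vibration indicates someone is still breathing in that seat.
        if vehicle_parked and not o.is_driver and o.micro_motion_hz > 0:
            alerts.append(f"occupant left behind: {o.seat}")
    return alerts

print(cabin_alerts(
    [Occupant("rear-left", False, True, 0.9, 0.3, False)],
    vehicle_parked=True,
))
# ['occupant left behind: rear-left']
```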

Collecting data that covers all the real-world driving scenarios, voices, languages, accents, and demographics an AV solution will encounter can be a daunting task. If you’re building autonomous vehicle solutions, improving in-car experiences, or solving other challenges in creating connected cars, don’t miss the Appen and Figure Eight teams at AutoSens Brussels this September. Stop by the Figure Eight booth to learn how we help leading automotive OEMs and Tier 1s collect and annotate the robust, high-quality training data they need to develop and optimize AV and in-car solutions.

At Appen, we’ve helped leaders in machine learning and AI scale their programs from proof of concept to production. Contact us to learn more.
