
Autonomous Vehicles: One of AI’s Most Challenging Tasks

Published on July 19, 2022

Where the Industry Stands Today and What it Still Needs to Achieve Success

Autonomous vehicles are said to be the holy grail of the automotive industry, set to revolutionize transportation. Just a few years ago, the hype for autonomous vehicles was as high as it could be, so what happened? Where's the driverless car revolution that so many companies promised we'd have by 2021? As it turns out, creating a self-driving car is much harder than expected.

Let's explore where we are with self-driving cars, why they're one of the most challenging tasks of our time, and what we can do about it.

Where are Autonomous Vehicles Now?

Autonomous vehicles hold much promise: they're set to transform our roadways and create a much safer driving experience. After all, statistics show that human error is to blame for more than 90% of road accidents. Back in 2015 and 2016, many automotive manufacturers announced major plans to get fully autonomous commercial vehicles on the road within a few years, but we've long surpassed their initial estimates. It was an exciting time for the industry, but the hype had moved far ahead of the reality. So, what progress has actually been made toward a fully self-driving car?

It helps to evaluate progress using SAE's widely accepted levels of driving automation. There are six levels, from Level 0 (no driving automation) to Level 5 (full driving automation):

  • Level 0: No autonomy (driver has full control of the vehicle)
  • Level 1: Driver assistance
  • Level 2: Partial automation
  • Level 3: Conditional automation
  • Level 4: High automation
  • Level 5: Full automation (self-driving car)

For now, most cars sold are at least at Level 1, where the car offers a few features that assist the driver, such as lane-keeping assist or adaptive cruise control. Tesla Autopilot is at Level 2, which means the car can manage steering and speed, but the driver still needs to pay close attention and be ready to take the wheel. In March 2021, Honda unveiled a Level 3 model, the Legend sedan, which lets the driver hand control to the vehicle, but only under very specific conditions.

As far as Level 4 goes, a few companies are making headway, notably GM, Daimler, and Google. Google's Waymo, for example, operates vehicles that are fully autonomous within a specific geofenced perimeter (namely, certain suburbs of Phoenix, Arizona, and a few other controlled locations). We expect to see this technology take hold in 2024 and 2025.

There are no Level 5 vehicles on the market currently, and companies have pushed back their deployment timelines after realizing the immense challenges inherent to full autonomy. A positive outcome of this more gradual progress is that moving up the levels of autonomy one step at a time, rather than all at once, helps build trust with customers. It's difficult to say when we'll experience our driverless car revolution; rather than make more projections that will likely not be met, it's more important to focus on addressing the challenges to getting there.


What Makes Building an Autonomous Vehicle So Challenging?

Ultimately, the problem comes down to this: creating a fully self-driving car that works in all conditions is extremely difficult. It's far more complicated than automotive experts realized when they made their original projections, which is why companies have had to delay timelines, sell off autonomous divisions, and revamp their approach. Let's look at what makes this type of project so difficult:

  • The World is Complicated. Autonomous vehicles must navigate a highly complex world of various roadways, street signs, pedestrians, other vehicles, buildings, and more.
  • Humans are Unpredictable. These vehicles need to not only understand their driver, but also predict the behavior of pedestrians and other drivers, which is notoriously hard to anticipate.
  • Tech is Expensive. Hardware installed in autonomous vehicles (think cameras, LiDAR systems, and RADAR) captures the external world and helps the vehicle make decisions. But this hardware still needs to improve significantly to provide the level of detail vehicles need, and it isn't yet very cost-effective.
  • Training Must be Thorough. Autonomous vehicles need to be trained for all possible conditions (for example, extreme weather like snow or fog); it’s extraordinarily difficult to predict all the conditions a vehicle might encounter.
  • There’s No Margin for Error. Autonomous vehicles are a life-or-death use case, as they directly impact driver and passenger safety. These systems must be perfectly accurate.

Data Holds the Key

Solving the above challenges means looking at where they come from, and to do that, we need to understand how a self-driving car works. These cars rely on AI, especially computer vision models, that give the vehicle the ability to “see” the world around it and then make decisions based on what it sees. Data is captured from hardware on the vehicle (as we mentioned: cameras, LiDAR, RADAR, and other sensors) and used as input for the models.

For a car to react to a pedestrian in the road, for example, it needs to have seen sensor data representing that condition before. In other words, it needs to be trained on data that represents all possible scenarios and conditions. If you think about your own experiences in a vehicle, you can understand that this adds up to a lot of conditions, and therefore a lot of training data.

Looking at our pedestrian example alone, we’d need to incorporate examples of children as well as adults, people in wheelchairs, babies in strollers, and other scenarios that may not immediately come to mind. Further, we’d want our model to differentiate an actual pedestrian from, say, a picture of a person’s face on a sign. What seems like a straightforward use case can get complicated fast.

Not only does the vehicle need a lot of training data, but that training data also needs to be accurately annotated. An AI model can’t just look at an image of a pedestrian and understand what it’s looking at; the image needs to include clear labels marking which part of it contains the pedestrian. As a result of this complexity, there are many different types of annotation used for autonomous vehicle AI models, including:

  • Point cloud labeling for LiDAR and RADAR data: identifies and tracks objects in a scene
  • 2D labeling including semantic segmentation for camera data: gives the model an understanding of which class each pixel belongs to
  • Video object and event tracking: helps the model understand how objects move through time
  • And more
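To make these annotation types concrete, here is a minimal sketch of what the underlying records might look like. The schema is purely illustrative: the class names, fields, and label values are assumptions for this example, not a standard industry format.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Illustrative annotation records; the field names and label values are
# assumptions for this sketch, not an industry-standard schema.

@dataclass
class PointCloudBox:
    """3D cuboid label for an object in a LiDAR or RADAR point cloud."""
    track_id: int                            # stable ID linking this object across frames/sensors
    label: str                               # e.g. "pedestrian", "vehicle", "cyclist"
    center_xyz: tuple[float, float, float]   # position in meters, sensor frame
    size_lwh: tuple[float, float, float]     # length, width, height in meters
    yaw: float                               # heading angle in radians

@dataclass
class SegmentationMask:
    """2D semantic segmentation label for one camera image."""
    width: int
    height: int
    class_ids: list[int]                     # one class ID per pixel, row-major
    class_names: dict[int, str]              # e.g. {0: "road", 1: "pedestrian"}

@dataclass
class VideoTrack:
    """One object's bounding boxes tracked across video frames."""
    track_id: int
    label: str
    # each entry: (frame_index, x_min, y_min, x_max, y_max)
    boxes: list[tuple[int, float, float, float, float]] = field(default_factory=list)

# The same pedestrian, labeled in two modalities and linked by track_id
lidar_label = PointCloudBox(track_id=7, label="pedestrian",
                            center_xyz=(12.4, -1.8, 0.9),
                            size_lwh=(0.6, 0.6, 1.7), yaw=1.57)
camera_track = VideoTrack(track_id=7, label="pedestrian",
                          boxes=[(0, 410.0, 220.0, 455.0, 380.0),
                                 (1, 402.0, 221.0, 448.0, 382.0)])
```

Note the shared track_id: linking labels from different sensors and frames to the same physical object is what lets a model learn how objects move through time.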

For more information on data annotation techniques, see our summary article.

There’s very little room for error in these annotations, and just as little room for missing key use cases. Ultimately, data collection and annotation for autonomous vehicles is a very time-consuming, resource-intensive process, something many companies don’t fully appreciate when they start the work. This is what leads to delayed timelines, frustration, and the lack of autonomous vehicles on our roadways today. Nonetheless, this is the critical piece of the puzzle that automotive manufacturers need to unlock in order to achieve success.
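Because there is so little room for annotation error, teams commonly measure agreement between independent annotators and route disagreements to expert review. The snippet below is a minimal, hypothetical version of such a check using intersection-over-union (IoU); the 0.9 threshold is an assumption for illustration, not a universal standard.

```python
def iou(a: tuple[float, float, float, float],
        b: tuple[float, float, float, float]) -> float:
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix_min, iy_min = max(a[0], b[0]), max(a[1], b[1])
    ix_max, iy_max = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Boxes drawn by two independent annotators for the same pedestrian.
annotator_1 = (410.0, 220.0, 455.0, 380.0)
annotator_2 = (405.0, 224.0, 451.0, 377.0)

# The 0.9 threshold is an illustrative assumption; real pipelines
# tune it per object class and task.
if iou(annotator_1, annotator_2) < 0.9:
    print("Disagreement: send this frame for expert review")
```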

Accuracy, Diversity, and Efficiency Are Critical to Ensuring Safety

To learn more about the critical considerations around data for autonomous vehicles, we tapped our very own Appen Data Scientist, Xiaorui Yang, who specializes in computer vision.

Accuracy

It’s crucial that an autonomous vehicle be able to perceive its surroundings precisely, so it can detect and avoid hazards and complete its transportation missions. The data must be accurate enough for AI models to learn from it; only precise inference on an obstacle’s location can lead to reasonable decisions. For instance, if the model cannot precisely detect the lateral position of a truck moving in the nearest lane, that will frequently trigger unnecessary braking and, therefore, a degraded user experience.

Diversity

Scenarios: The real environment is diverse in weather (rainy, snowy, foggy) and in light conditions (a sunny day, a dark night, heavy cloud cover before rain). An autonomous vehicle should be able to handle all of these, so training data should cover common conditions as well as rare ones.

Modalities: Sensors perform differently across environments. For example, LiDAR performance drops on rainy or snowy days due to its physical characteristics, and a camera intuitively cannot see as far at night as it can in daylight. That’s why most companies still use multiple types of sensors that compensate for one another in difficult perception cases.

Efficiency

When companies run experiments with their vehicles in a new country or city, the efficiency of data delivery is vital to the progress of the whole trial. If the annotated training data isn’t ready on time, the project is at higher risk of delay. With the help of state-of-the-art perception models, a good data partner should be able to deliver data ahead of deadline and save time for other time-consuming pipelines.
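As a concrete illustration of the scenario-diversity point, a team preparing a trial in a new city might audit how well its dataset covers each weather and lighting combination before training begins. The sketch below is a hypothetical coverage check; the condition tags and minimum-frame threshold are assumptions for this example, not a description of Appen’s actual pipeline.

```python
from collections import Counter
from itertools import product

# Hypothetical per-frame metadata tags; real datasets would derive these
# from sensor logs or human labeling.
frames = [
    {"weather": "clear", "light": "day"},
    {"weather": "clear", "light": "night"},
    {"weather": "rain",  "light": "day"},
    # ... thousands more frames in practice
]

WEATHER = ["clear", "rain", "snow", "fog"]
LIGHT = ["day", "night"]
MIN_FRAMES = 500  # illustrative threshold, tuned per project in practice

counts = Counter((f["weather"], f["light"]) for f in frames)

# Flag every weather/light combination that is under-represented,
# including combinations with zero frames.
for combo in product(WEATHER, LIGHT):
    if counts[combo] < MIN_FRAMES:
        print(f"Under-covered: {combo} has {counts[combo]} frames; collect more")
```

An audit like this makes gaps in rare conditions visible early, so data collection can be targeted before a missing scenario delays the trial.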

A Data Partner Can Help

Given the huge uplift automotive projects require, many companies have in the past leaned on multiple vendors to help them collect and prepare the data to train their models. In reality, it’s better to find one data partner with the expertise and tools to assist throughout the project. A reliable data partner will help you achieve the high levels of accuracy AI-powered autonomous vehicles require and scale projects to multiple geographic locations.

Your data partner should work with you to create a consistent source of new training data for your AI models that covers both common and rare use cases. Their platform should let you annotate that data accurately with the most advanced labeling tools available. Companies that find a trusted data partner to power their autonomous vehicles can gain a competitive edge in the autonomous vehicle space and, more broadly, in the automotive industry as a whole.

What We Can Do For You

Appen collects and labels images, text, speech, audio, video, and other data used to build and continuously improve the world’s most innovative artificial intelligence systems. Our expertise includes a global crowd of over 1 million skilled contractors who speak over 180 languages, and the industry’s most advanced AI-assisted data annotation platform.

With 25 years of experience, 15 of them in automotive, we offer a full suite of multimodal computer vision annotation tools as well as in-cabin vehicle data collection and NLP annotation services to help with your autonomous vehicle projects. Our experienced team, based in the heart of Motor City, Detroit, lends its expertise and resources on the ground to accelerate your product development and testing workflows.

Learn more about how we can help your automotive-centered AI initiatives, or contact us today to speak with someone directly.
