Expert analysis: Inside the technology of autonomous vehicles

Autonomous cars, or self-driving cars, are equipped with advanced sensors, cameras, LiDAR, and AI-driven software that allow them to navigate and operate without human intervention. Classified from Level 0 (no automation) to Level 5 (full automation), these cars can independently perceive their environment, plan routes, and execute driving tasks.

These vehicles promise to revolutionize transportation by enhancing safety, efficiency, and accessibility for diverse populations. But what are they exactly? Let’s take a look.

What is an autonomous vehicle?

An autonomous vehicle travels between locations without human input using sensors, cameras, radar, and artificial intelligence (AI).

“An autonomous vehicle is one that does not require, and in many cases does not allow for, input from any sort of human to safely get it from point A to point B. This describes an L4, L5 autonomous vehicle, which is the style that Waymo is building,” David Margines, Director of Product Management at Waymo, told IE.


Waymo, which began as Google’s self-driving car project, is a California-based autonomous driving technology company and a subsidiary of Alphabet.

“Autonomous vehicles are basically a big robot. The system starts by sensing. You have lots of different sensors. Camera, radar, lidar, the typical sensors on an autonomous vehicle,” autonomous vehicle expert David Silver told IE.

To qualify as fully autonomous, a vehicle must be able to navigate to a predetermined destination without human intervention over roads that have not been adapted for its use.

Several companies are involved in developing and testing autonomous cars. These companies include Audi, BMW, Ford, Google, General Motors, Tesla, Volkswagen, and Volvo. Google, for example, conducted tests on a fleet of self-driving cars, including a Toyota Prius and an Audi TT.

What are the levels of autonomy for autonomous vehicles?

The US National Highway Traffic Safety Administration (NHTSA) recognizes six levels of vehicle automation, originally defined by SAE International. The levels range from Level 0, where humans fully control driving, to fully autonomous cars at Level 5.

A Tesla vehicle in self-driving mode. Source: helivideo/iStock

The first level of driver assistance is Level 1. At this level, an Advanced Driver Assistance System (ADAS) assists the human driver with steering, braking, or accelerating, but not all simultaneously. ADAS features range from rearview cameras to warnings such as a vibrating seat alert when the driver drifts out of their lane.

At Level 2, the ADAS can control both steering and braking/acceleration simultaneously, but the human must remain fully engaged behind the wheel and continue to act as the driver.

To qualify for Level 3, an automated driving system (ADS) must be able to perform all driving tasks under certain conditions, such as driving in slow-moving highway traffic. In those conditions, the human driver must be ready to retake control when requested and remains the vehicle’s primary driver.

Level 4 autonomous vehicles’ ADS can perform all driving tasks and monitor the driving environment in certain circumstances. In those circumstances, the ADS is reliable enough that the human driver doesn’t need to pay attention.

At the top level, Level 5, the vehicle’s ADS acts as a virtual chauffeur and does all the driving in all circumstances. The human occupants are passengers and are never expected to drive the car.


At present, the most sophisticated autonomous vehicles on the road are Level 4. Level 5 vehicles will only become commonplace once the technology is mature and has been proven safe and reliable in virtually every driving condition.
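To make the taxonomy above concrete, here is a minimal Python sketch that encodes the six levels; the enum names and the helper function are our own illustrative shorthand, not an official NHTSA or SAE API.

```python
from enum import IntEnum


class AutomationLevel(IntEnum):
    """The six NHTSA/SAE driving automation levels described above."""
    NO_AUTOMATION = 0           # Human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # ADAS helps with steering OR speed, not both
    PARTIAL_AUTOMATION = 2      # ADAS handles steering AND speed; driver stays engaged
    CONDITIONAL_AUTOMATION = 3  # ADS drives in limited conditions; driver must take over on request
    HIGH_AUTOMATION = 4         # ADS drives and monitors the environment in limited conditions
    FULL_AUTOMATION = 5         # ADS drives everywhere, in all conditions


def human_attention_required(level: AutomationLevel) -> bool:
    """Levels 0 through 3 still rely on an attentive human driver; 4 and 5 do not."""
    return level <= AutomationLevel.CONDITIONAL_AUTOMATION


print(human_attention_required(AutomationLevel.HIGH_AUTOMATION))  # False
```

The important break in the scale falls between Levels 3 and 4: up to Level 3, a human must stay ready to take over, while from Level 4 upward the system handles everything within its operating conditions.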

How do autonomous vehicles work?

A powerful artificial intelligence (AI) system is at the core of any self-driving vehicle. Self-driving car developers use large amounts of data from image recognition systems, together with machine learning and neural networks, to build systems capable of autonomous driving.

In most cases, the data is fed to neural networks, which learn to identify patterns in it. That data includes images from cameras on self-driving cars, from which the network learns to recognize traffic lights, trees, curbs, pedestrians, street signs, and other parts of any given driving environment.

For instance, Waymo, which grew out of Google’s self-driving car project, uses a combination of lidar (light detection and ranging, a laser-based technology similar in principle to radar), radar, and cameras. It combines data from these systems to identify objects around the vehicle and predict what they might do next, all in a fraction of a second.
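As a rough illustration of the perception step, the sketch below runs a generic, off-the-shelf object detector over a single camera frame. This is not Waymo’s system: the COCO-pretrained model, the placeholder file name, and the 0.7 confidence threshold are assumptions made purely for demonstration.

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# A general-purpose, COCO-pretrained detector stands in for the proprietary
# perception models used by autonomous driving companies.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# "camera_frame.jpg" is a placeholder for one frame from a vehicle camera.
frame = to_tensor(Image.open("camera_frame.jpg").convert("RGB"))

with torch.no_grad():
    detections = model([frame])[0]  # dict with "boxes", "labels", "scores"

# Keep only confident detections; a real stack would also fuse lidar and radar data.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.7:
        print(f"class {label.item()} at {box.tolist()} (score {score.item():.2f})")
```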

In general, the idea is that the driver (or passenger) sets a destination in the vehicle. From that instruction, the autonomous vehicle plans and calculates the route, much like a GPS-enabled navigation system.
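The route-calculation step can be pictured as a shortest-path search over a road network. Below is a minimal sketch using Dijkstra’s algorithm on an invented toy graph; real navigation stacks use far richer map data and cost models.

```python
import heapq

# Toy road network: node -> list of (neighbor, travel_time_minutes).
# The places and weights are invented for illustration only.
road_graph = {
    "home":     [("main_st", 4), ("ring_rd", 5)],
    "main_st":  [("downtown", 6)],
    "ring_rd":  [("downtown", 3)],
    "downtown": [],
}


def plan_route(graph, start, goal):
    """Dijkstra's shortest-path search, the kind of approach a GPS-style planner might use."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None


print(plan_route(road_graph, "home", "downtown"))  # (8, ['home', 'ring_rd', 'downtown'])
```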

The vehicle’s “brain” then moves the car while constantly monitoring its sensors. Lidar, for example, builds a real-time, dynamic 3D map of the vehicle’s immediate surroundings.
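That “map” idea can be sketched in miniature as an occupancy grid: discretize the space around the car and mark the cells where lidar returns land. The grid size, resolution, and the 2D simplification below are assumptions for illustration only; real systems work with dense 3D point clouds.

```python
import numpy as np

# Toy 2D occupancy grid: a 20 m x 20 m area around the car, 0.5 m cells.
CELL_M = 0.5
grid = np.zeros((40, 40), dtype=bool)  # False = free, True = occupied


def mark_lidar_points(grid, points_xy):
    """Mark grid cells containing lidar returns (x, y in meters, car at the center)."""
    for x, y in points_xy:
        col = int((x + 10) / CELL_M)
        row = int((y + 10) / CELL_M)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] = True
    return grid


# Two example returns: one 3 m ahead of the car, one 7.5 m to its left.
mark_lidar_points(grid, [(0.0, 3.0), (-7.5, 0.0)])
print(int(grid.sum()), "occupied cells")  # 2 occupied cells
```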

Schematic of a self-driving car using its sensors. Source: Just_Super/iStock

Other sensors, such as wheel-mounted odometry sensors, tell the vehicle where it is at any given moment relative to its “map.” While in motion, the “brain” also monitors forward- and rear-facing radar and cameras to check for obstacles and pedestrians.

When an obstacle is detected, the “brain” reacts appropriately, altering course or applying the brakes if needed. Where no obstacles are detected, the vehicle simply follows the pre-calculated route, operating the propulsion and steering systems accordingly.
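A toy version of that decide-and-act step might look like the sketch below. The Obstacle class, the fixed 30 m braking envelope, and the two possible actions are simplifications for illustration; production systems weigh predicted trajectories, swerving options, passenger comfort, and many other factors.

```python
from dataclasses import dataclass


@dataclass
class Obstacle:
    distance_m: float         # distance ahead of the vehicle, in meters
    closing_speed_mps: float  # how fast the gap is shrinking, in m/s


def choose_action(obstacles, braking_envelope_m=30.0):
    """A highly simplified decision step: brake if anything is inside the
    safe braking envelope and closing in, otherwise keep following the route."""
    for obstacle in obstacles:
        if obstacle.distance_m <= braking_envelope_m and obstacle.closing_speed_mps > 0:
            return "brake"
    return "follow_route"


# Example: a pedestrian detected 12 m ahead while the gap closes at 2 m/s.
print(choose_action([Obstacle(distance_m=12.0, closing_speed_mps=2.0)]))  # "brake"
```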

What are the challenges that autonomous vehicles must overcome?

Like a human learner driver, an autonomous vehicle must recognize countless objects in its path, such as branches, litter, animals, and people. Other challenges include tunnels that interfere with GPS, construction projects that force lane changes, and complex decisions such as where to stop to let emergency vehicles pass.

The systems must make instantaneous decisions about when to slow down, swerve, or continue accelerating normally. This is a continuing challenge for developers, and there are reports of self-driving cars hesitating and swerving unnecessarily when objects are detected in or near the roadway.

Another challenge, particularly in cities, is audio cues, which any Level 4 or Level 5 autonomous vehicle must be able to interpret. Emergency vehicles are unpredictable, and an autonomous vehicle must respond to them appropriately, much as a human driver would.

“On the audio side, [being able to] hear sirens and things like that is really important for navigating in urban environments and hearing kind of around the corner,” David Margines told IE.

While this is almost automatic for humans, engineers have had to find ways for an autonomous vehicle to readily handle audio and visual signals and react safely and appropriately.
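Purely as an illustration of how an audio cue might be picked out of a microphone feed, the sketch below measures how much of a signal’s energy falls in the frequency band where many sirens sweep. A real vehicle would rely on trained audio classifiers and microphone arrays rather than a heuristic like this; the sample rate and band limits here are assumptions.

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; assumed microphone sample rate


def siren_band_energy(audio: np.ndarray, low_hz=500, high_hz=1_800) -> float:
    """Fraction of spectral energy in the band where many sirens sweep.
    A crude heuristic for illustration, not a production siren detector."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / SAMPLE_RATE)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return float(spectrum[band].sum() / (spectrum.sum() + 1e-12))


# Synthetic one-second test tone that sweeps upward like a siren.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
sweep = np.sin(2 * np.pi * (700 + 500 * t) * t)
print(siren_band_energy(sweep) > 0.5)  # True: most energy sits in the siren band
```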

Concept art of a self-driving car. Source: gorodenkoff/iStock

This is challenging enough for humans; imagine the level of research and development needed to make computer-controlled vehicles do the same. To this end, it should come as no surprise that autonomous vehicles are not foolproof.

Are autonomous vehicles safe?

Sadly, as with any technology, accidents can and do happen, sometimes with tragic results.

One notable example occurred in March 2018, when an autonomous Uber test vehicle struck and killed a pedestrian in Tempe, Arizona. The company reported that the car’s software detected the pedestrian but dismissed the detection as a false positive and did not swerve to avoid the person.

As a result of this crash, Toyota temporarily stopped testing its self-driving cars on public roads, though it continued testing in other locations. The Toyota Research Institute is also building a closed-course test facility on a 60-acre site in Michigan to develop automated vehicle technology further.

This was far from the only incident in recent years, but carmakers are working hard to learn from such experiences. They will not be left to police themselves, either.

Artist’s impression of the inside of a self-driving car on a road. Source: Scharfsinn86/iStock

Before we see any mass rollout of these vehicles, carmakers will need to comply with national road safety regulations, such as the US Federal Motor Vehicle Safety Standards. These regulations will likely also set strict guidelines and requirements for carmakers and autonomous vehicle operators in the future.

A driverless future

As the autonomous vehicle industry continues to evolve, its impact on transportation, safety, and accessibility grows more profound. From enhancing motorsports through innovative AI applications to providing life-changing mobility solutions for people with disabilities, the potential of self-driving technology is vast.

With companies like Tesla and Waymo leading the charge and continuous advancements in sensor technology, AI, and vehicle-to-vehicle communication, the future of autonomous driving promises a new era of convenience and efficiency.

They will also prove life-changing for anyone who currently cannot drive due to mental or physical disabilities. “Autonomous driving capabilities offer a huge leap forward for people with disabilities,” Tom McCarthy told IE.

In the future, David Margines envisages that “riders [will have] absolutely no responsibility to pay attention to the driving task except to sit back, relax, and enjoy their favorite music.”
