The past decade has seen incremental progress in self-driving car technology, driven largely by advances in deep learning and artificial intelligence. In the not-too-distant future, self-driving cars will be the safest vehicles on the road. Although many vehicles today include driver assistance systems, they still need a human behind the wheel.
The automotive industry is combining richer sensor data with the ability to process it quickly in pursuit of a fully autonomous car. The arrival of self-driving cars in everyday life will reduce road accidents, ease traffic, and make commuting in crowded cities easier.
The role of deep learning in the development of driverless cars
Over the past decade, deep learning and artificial intelligence have become the core technologies behind many fields, including robotics, natural language processing, anti-fraud systems, and driverless cars.
In this regard, artificial intelligence, deep learning, and neural networks can be effective in these three areas:
- Using sensor data to perceive the scene
- Recognition of signs and driving rules
- Continuous learning to improve safety and performance
Currently, driver assistance systems handle driving functions such as navigation, lane keeping, collision avoidance, and parking, but they cannot drive without a human present. Artificial intelligence and deep learning, combined with advanced sensors and route mapping, can help cars drive completely autonomously, and more safely than human drivers.
The development path of self-driving cars has several levels:
- Level 0: The car is completely driven by humans.
- Level 1: A single function, such as steering or acceleration and braking, can be performed automatically by the car, but the driver remains in control and is always ready to take over.
- Level 2: The car can control both steering and speed simultaneously (for example, adaptive cruise control combined with lane centering), but the driver must stay alert to detect and react to hazards in case the system fails.
- Level 3: The driver can fully delegate the main driving functions to the vehicle when environmental and traffic conditions are suitable. Unlike the previous levels, constant supervision by the driver is not required.
- Level 4: The vehicle is fully autonomous, capable of performing all safety-critical driving functions and monitoring road conditions for an entire journey.
- Level 5: The vehicle operates fully autonomously in all conditions and drives at least as well as a human.
Today, most cars on the road are at Level 0, while many vehicles produced in the last few years offer Level 1 or 2 autonomy. The higher levels require artificial intelligence: Levels 4 and 5 will be built on advanced deep learning technologies.
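The six levels above can be sketched as a simple enumeration. This is a minimal illustration, not an official SAE API; the level names and the helper function are assumptions chosen for readability:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Driving automation levels as described above (illustrative names)."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # one automated function, driver in control
    PARTIAL_AUTOMATION = 2      # steering + speed combined, driver supervises
    CONDITIONAL_AUTOMATION = 3  # no constant supervision in suitable conditions
    HIGH_AUTOMATION = 4         # fully autonomous for a complete journey
    FULL_AUTOMATION = 5         # autonomous everywhere, no human needed

def requires_driver_supervision(level: AutonomyLevel) -> bool:
    """Levels 0-2 still require the human driver to monitor constantly."""
    return level <= AutonomyLevel.PARTIAL_AUTOMATION
```

The `IntEnum` base makes the levels comparable, which is all the supervision check needs.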
Functional requirements in driverless driving
Driverless driving requires a complex set of advanced functions: sensing what is happening around the vehicle, mapping the route, and forming driving policies to deal with both predictable and unpredictable situations.
Most smart cars use LiDAR (which measures distance with laser light), radar (which detects objects), and digital cameras to understand the driving environment. Together, these sensors examine and analyze:
- Static objects such as road boundaries, guard rails, and bike lanes
- Moving objects including other vehicles, pedestrians, and bicycles
- Data and signs such as lanes, parking areas, traffic signs, and lights
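These three kinds of scene elements can be sketched as categories a perception stack might tag its detections with. All names and fields below are hypothetical illustrations, not part of any real vehicle software:

```python
from dataclasses import dataclass
from enum import Enum

class SceneCategory(Enum):
    STATIC = "static"      # road boundaries, guard rails, bike lanes
    DYNAMIC = "dynamic"    # other vehicles, pedestrians, bicycles
    SEMANTIC = "semantic"  # lanes, parking areas, traffic signs and lights

@dataclass
class Detection:
    label: str
    category: SceneCategory
    distance_m: float  # e.g. a LiDAR range measurement

def dynamic_obstacles(detections):
    """Keep only the objects whose motion a planner must predict."""
    return [d for d in detections if d.category is SceneCategory.DYNAMIC]

detections = [
    Detection("pedestrian", SceneCategory.DYNAMIC, 12.0),
    Detection("guard rail", SceneCategory.STATIC, 3.5),
    Detection("stop sign", SceneCategory.SEMANTIC, 20.0),
]
moving = dynamic_obstacles(detections)
```

Separating static, dynamic, and semantic elements matters because each feeds a different downstream task: mapping, motion prediction, and rule compliance respectively.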
Accurate measurement in driverless cars
Multiple sensors enable autonomous vehicles to accurately detect both moving and stationary objects. These sensors track and classify the scene around the entire perimeter of the vehicle several times per second.
Smart cars use GPS data to get from point A to point B. However, they still need to account for driver preferences to make route planning as efficient as possible.
Automated systems need to know when to change lanes or change speed. Human drivers create a set of policies tailored to their driving style and driving conditions. Driverless cars also require a comprehensive set of policies to make safe decisions.
The operating systems of self-driving cars must:
- Run continuously
- Operate safely in difficult conditions (bad weather, heavy traffic) and at night
- React to the unpredictable behavior of other vehicles, pedestrians, road works, and more, with virtually no margin for error
Each of these needs poses several technological challenges. One of the most important requirements, and one that deep learning covers well, is the ability to perceive the whole picture in an instant by fusing the input of several sensors.
Neural networks draw the scene
Tesla's sensor hardware, for example, includes 8 surround cameras and 12 ultrasonic sensors plus a forward-facing radar. All of these sensors collect data several times per second.
If we think of the sensors as a vehicle's eyes, artificial neural networks act as its cerebral cortex, transforming sensor data into a usable picture of the road. Neural networks paint the scene around the moving car, read the posted speed limit and obey it, recognize stop signs and green lights, and detect people, businesses, and even debris on the road.
Car hazard detection
Cars with the ability to transmit safety warnings can inform the cars behind them of the presence of obstacles ahead to prevent accidents.
Conventional software engineering and rules-based tools are not powerful enough to solve problems as complex as sensor data interpretation and autonomous driving. There are too many variables, and too many unforeseen situations to anticipate and hand-code.
The most fundamental deep learning technologies used in driverless cars are Convolutional Neural Networks, Recurrent Neural Networks, and Deep Reinforcement Learning.
Convolutional Neural Networks (CNN)
Convolutional neural networks are mainly used to process spatial information such as images, and they can serve as image feature extractors. Before the advent of deep learning, computer vision systems relied on hand-crafted features. Convolutional neural networks can be roughly compared to parts of the visual cortex in mammals.
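A minimal NumPy sketch of the core CNN operation: a 2D convolution slides a small kernel over an image to produce a feature map. The hand-written Sobel kernel here stands in for the hand-crafted features of pre-deep-learning vision; a trained CNN learns kernels like this from data instead.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution (cross-correlation), the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic vertical-edge (Sobel) kernel, written by hand.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy "image": dark left half, bright right half -> one strong vertical edge.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
features = conv2d(img, sobel_x)  # responses peak at the dark/bright boundary
```

Real CNNs stack many such learned kernels with nonlinearities and pooling, but every layer is built on this sliding-window operation.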
Recurrent Neural Networks (RNN)
Among deep learning methods, recurrent neural networks perform well on sequential data such as text or video streams. Unlike convolutional neural networks, they include a time-dependent feedback loop in their memory cell.
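That time-dependent feedback loop can be sketched in a few lines of NumPy: the hidden state `h` is both the output of the current step and an input to the next step, which is what lets the network carry context across video frames. The dimensions and random weights below are arbitrary assumptions for illustration:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: the new hidden state depends on the
    current input AND the previous hidden state (the feedback loop)."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights
b_h = np.zeros(hidden_dim)

# A short "video stream": 5 frames, each summarized as a 4-d feature vector.
sequence = rng.normal(size=(5, input_dim))
h = np.zeros(hidden_dim)
for x_t in sequence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # h carries context across frames
```

Practical systems use gated variants (LSTM, GRU) of this cell to keep long-range context stable, but the feedback structure is the same.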
Deep Reinforcement Learning (DRL)
In deep reinforcement learning, an agent learns in an interactive environment through trial and error, using its own experience. In driverless driving, the main task of this method is to learn an optimal driving policy for getting from point A to point B.
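A toy sketch of the trial-and-error idea, assuming tabular Q-learning rather than a deep network: an agent on a tiny one-dimensional "road" learns, purely from its own experience, that moving right is the optimal policy for reaching its destination. The environment, reward, and learning rate are all invented for illustration:

```python
import numpy as np

N_STATES, GOAL = 5, 4
Q = np.zeros((N_STATES, 2))  # Q[state, action]; action 0 = left, 1 = right
rng = np.random.default_rng(0)

def step(s, a):
    """Deterministic toy environment: reward 1 only on reaching the goal."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(500):                # episodes of trial and error
    s, done = 0, False
    while not done:
        a = int(rng.integers(2))    # explore randomly; Q-learning is off-policy
        s2, r, done = step(s, a)
        # Bootstrap from the best action available in the next state
        Q[s, a] += 0.5 * (r + 0.9 * np.max(Q[s2]) - Q[s, a])
        s = s2

policy = np.argmax(Q, axis=1)       # greedy policy after learning
```

In DRL proper, the Q-table is replaced by a deep network so the agent can generalize over high-dimensional sensor input, but the learn-from-experience loop is the same.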
Deep learning and neural networks are the most important enablers of the future of driverless driving, allowing continuous learning of new situations and conditions in a changing driving environment.