AI and Neural Networks
There’s no doubt that cars are becoming smarter, but if the automotive industry is going to achieve its goal of fully autonomous vehicles, it still has a long way to go, and it will need AI and neural networks to get there. In this article, Imagination Technologies’ Bryce Johnstone explains how GPUs and neural network accelerator chips provide an intelligent edge for automotive applications.
Automotive is one of the key sectors driving developments in artificial intelligence (AI) due to the focus on autonomous vehicles and the spin-off benefits of advanced driver assistance systems (ADAS). Cars are becoming intelligent, but the road to fully autonomous vehicles is a long one. While there are discussions about the ideal mix of technologies that will be needed to achieve full autonomy, what is clear is that AI and, in particular, neural networks, will play a key role.
NEURAL NETWORKS
The purpose of a neural network is to carry out tasks that are challenging for traditional vision or pattern-recognition systems. Because each neural network is designed and trained for a specific task, it can carry out that task more efficiently and with higher precision.
Neural networks are organized in layers that process the data over and over. Rather than running an operation once with a single set of parameters, the network passes the data through 10 or 20 layers, each looking for different patterns in its input. With all these different paths, more and more information is extracted, so that by the time a decision needs to be made, the network has pulled everything it needs from the inputs.
In road-sign recognition, for example, the first layer could be looking for the corners of a sign, the next for its color and so on, until the network can say with a high degree of certainty that this is a sign and what it says. The beauty of this is that there’s no need to program each of these steps. A neural network works it out itself and learns over time. The algorithm knows what it needs to find and will try different methods until it achieves its goal, learning as it goes along. Once it is trained, it can be used in a real application. This means engineers do not have to spend hours fine-tuning complicated algorithms; they just show the neural network what it needs to find and let it teach itself.
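As a rough illustration of how such layers fit together, the Python sketch below runs a tiny two-layer classifier over a feature vector. The class names, array shapes and random weights are placeholders; in a real system the weights would come from training, not from code like this.

import numpy as np

CLASSES = ["speed_limit", "stop", "yield", "no_entry"]   # example sign classes

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify_sign(features, w1, b1, w2, b2):
    # First layer: combine low-level patterns (edges, corners, colors).
    hidden = relu(features @ w1 + b1)
    # Second layer: turn those patterns into scores for each sign class.
    scores = softmax(hidden @ w2 + b2)
    return CLASSES[int(np.argmax(scores))], float(scores.max())

# Toy usage with random (untrained) parameters, just to show the data flow.
rng = np.random.default_rng(0)
features = rng.random(64)                        # e.g. a flattened image patch
w1, b1 = rng.standard_normal((64, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, len(CLASSES))), np.zeros(len(CLASSES))
label, confidence = classify_sign(features, w1, b1, w2, b2)
print(label, round(confidence, 3))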
These technologies are already becoming prevalent in vehicles for object detection, classification and analysis, while driver monitoring, access control, and voice and gesture recognition can also take advantage of different types of neural networks. In addition, AI approaches that combine classical vision with neural networks for use cases such as pedestrian path analysis and surround-view will rely on both graphics processing units (GPUs) and neural network accelerators (NNAs) (Figure 1).
Figure 1: AI approaches that combine classical vision with neural networks for use cases such as pedestrian path analysis and surround-view will rely on both graphics processing units (GPUs) and neural network accelerators (NNAs).
The use of neural networks can be seen throughout the sensor-to-electronic control unit (ECU) chain, with pre-processing, intermediate processing and post-processing all using techniques that bring AI into the mix. In addition, vehicle-to-everything (V2X) technologies are in development that will essentially use autonomous vehicles as sensing agents, providing data and information for various smart city and smart transport scenarios. Again, these advances will rely on approaches to AI that employ both GPUs and NNAs to support a wide variety of analyses and computations from an ever-larger set of inputs.
SENSOR FUSION
Autonomous and highly automated vehicles will rely heavily on sensors of various types, including cameras, thermal imaging, radar, LiDAR and so on. The signals from all these sensors need to be interpreted and fused to give an overall view of what is happening outside and inside the vehicle. Sensor fusion will be essential for autonomous driving and will involve a combination of GPUs and neural networks along with machine learning and AI.
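A minimal sketch of the fusion idea, in Python: camera and radar detections of the same object are matched by position and merged. The Detection fields, distance threshold and matching rule are illustrative assumptions, not a production fusion algorithm, which would use calibrated coordinate transforms and tracking filters.

from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # lateral position, meters
    y: float          # longitudinal position, meters
    confidence: float
    source: str

def fuse(camera_dets, radar_dets, max_dist=2.0):
    fused = []
    for cam in camera_dets:
        # Find the closest radar return to this camera detection.
        best = min(radar_dets,
                   key=lambda r: (r.x - cam.x) ** 2 + (r.y - cam.y) ** 2,
                   default=None)
        if best and (best.x - cam.x) ** 2 + (best.y - cam.y) ** 2 <= max_dist ** 2:
            fused.append(Detection(
                x=(cam.x + best.x) / 2, y=(cam.y + best.y) / 2,
                confidence=max(cam.confidence, best.confidence),
                source="camera+radar"))
        else:
            fused.append(cam)   # keep unmatched camera detections as-is
    return fused

print(fuse([Detection(1.0, 20.0, 0.8, "camera")],
           [Detection(1.2, 20.5, 0.9, "radar")]))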
A good example of sensor fusion inside the vehicle is driver monitoring. In today’s vehicles, various sensors are able to detect if a driver is losing concentration. A neural network can analyze the images taken of the driver to work out if he or she is sleeping, tired, not paying attention or even talking or messaging on a mobile device. This is essential information for early-stage autonomous vehicles that may require the driver to retake control at certain points, as the car needs to know if the driver is in a fit state to do so.
How does driver monitoring work? Cameras pointed at the driver’s face provide inputs for algorithms that analyze facial elements, particularly the eyes. Are they open or closed? If closed, how long have they been closed? Are they fluttering? Where is the driver looking?
Studying the whole face can determine if the driver is angry or sad. If angry, the system could advise drivers to pull over and calm down before continuing. All this is based on building a picture of the face, extracting key points and using neural networks to extract emotions, gaze time and so on to judge the driver’s state of mind.
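As a simple illustration of one part of that judgment, the sketch below applies an eyes-closed-too-long rule to per-frame eye-openness scores. The scores are assumed to come from a neural network watching the driver; the frame rate and thresholds are invented for the example.

def drowsiness_alert(eye_openness, fps=30, closed_thresh=0.2, max_closed_s=1.5):
    """eye_openness: per-frame scores in [0, 1], where 1 means fully open."""
    closed_frames = 0
    for score in eye_openness:
        closed_frames = closed_frames + 1 if score < closed_thresh else 0
        if closed_frames / fps >= max_closed_s:
            return True    # eyes closed too long; driver may be asleep
    return False

# 60 frames (about 2 s at 30 fps) of nearly closed eyes triggers the alert.
print(drowsiness_alert([0.05] * 60))       # True
print(drowsiness_alert([0.9, 0.1] * 30))   # False: eyes keep reopening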
Over the next two or three years, driver monitoring is likely to become a requirement for approval by the European New Car Assessment Programme (NCAP) and the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) and thus become necessary for car makers to implement, not just on high-end cars, but all vehicles.
LEVEL FIELDS
The Society of Automotive Engineers (SAE) and the NHTSA have classified six levels of capability for autonomous cars. Basically, Level 0 is no automation at all, while in Level 1 the car will offer some assistance to the driver. Level 2 has more driving assistance and can even execute some tasks autonomously such as automatic emergency braking to avoid a collision.
Level 3 is the tricky one, as the car is driving itself but the driver has to be ready to take over at any time. Driver monitoring will be key in Level 3 autonomous driving, because the driver has to be prepared to intervene, and to a certain degree, the car has responsibility for ensuring that the driver is prepared.
In Level 4, even though the driver can take over, the car, in theory, can handle all the situations into which it has been put. A Level 5 car will be fully automated; there will be no steering wheel or pedals (Figure 2).
Figure 2: The SAE and the NHTSA have classified six levels (0 to 5) of capability for autonomous cars. A Level 5 car will be fully automated; there will be no steering wheel or pedals.
Each step up through these levels requires roughly a 10-times increase in compute performance. This is why neural networks are important: they can deliver that performance at very low power.
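The arithmetic behind that rule of thumb is easy to check:

# Relative compute needed at each SAE level, taking Level 1 as the baseline
# and applying the "roughly 10x per level" estimate quoted above.
BASE_LEVEL = 1
for level in range(1, 6):
    factor = 10 ** (level - BASE_LEVEL)
    print(f"Level {level}: ~{factor:,}x the compute of Level {BASE_LEVEL}")
# Level 5 works out to ~10,000x, matching the estimate cited in the conclusion.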
Take, for example, a pedestrian. A car’s onboard cameras and sensors can record whether the pedestrian is walking or standing; the neural network can be used to plot the likely path the pedestrian might take and to calculate whether the vehicle needs to slow down or brake quickly. The neural network can also look at the same image and segment it, picking out other objects and applying object recognition to work out if they represent something the vehicle needs to be aware of.
All this has to be put into the context of where the vehicle is and where it wants to go. If it is reversing and it detects a child behind the car, that needs to be processed quickly and the brakes applied. Doing this requires AI and neural networks to see that an object is there, identify it as a child and send a signal to an actuator or the driver to do something about it.
This is made more complex because the camera will normally have some sort of fish-eye lens. This produces a warped picture that needs straightening and then interpreting. Inputs from this and other sensors need combining for split-second decision-making.
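The path-prediction step can be pictured with a simple constant-velocity extrapolation, as in the sketch below. Real systems use learned motion models and proper vehicle dynamics; the corridor width, time horizon and example numbers here are assumptions for illustration only.

def predict_conflict(ped_positions, car_speed_mps, lane_half_width=1.5,
                     horizon_s=3.0, dt=0.1):
    """ped_positions: last two (x, y) points in vehicle coordinates (meters),
    where x is lateral offset from the car's centerline and y is distance ahead."""
    (x0, y0), (x1, y1) = ped_positions[-2], ped_positions[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt    # pedestrian velocity estimate
    t = 0.0
    while t <= horizon_s:
        px, py = x1 + vx * t, y1 + vy * t      # predicted pedestrian position
        car_y = car_speed_mps * t              # car position along its lane
        if abs(px) < lane_half_width and 0 < py - car_y < 5.0:
            return True, t                     # paths likely to intersect
        t += dt
    return False, None

# Pedestrian 10 m ahead, stepping toward the lane while the car does 10 m/s.
print(predict_conflict([(2.0, 10.0), (1.9, 10.0)], car_speed_mps=10.0))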
DATA HANDLING
At the same time, other information is coming from all around the car—from all the sensors and information that’s being received wirelessly from other vehicles or the infrastructure. That is a massive amount of data, likely in the terabyte range.
ECUs will be all around the car, making decisions based on the data. This can involve 100 ECUs or more. Some approaches are looking at how this could be done with fewer ECUs and more computing power. An embedded AI right next to the camera or sensor can make some of the decisions so that less information needs passing around the vehicle.
This means different levels of processing are needed. Data can be pre-processed, such as straightening out a fish-eye image, at the point of capture. Intermediate processing could involve various planned tasks, object recognition, decision-making and so on. Post-processing happens afterward, when the information can be cleaned up and displayed on a screen so the driver knows what is happening or has happened.
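As an example of that pre-processing stage, the sketch below straightens a fish-eye frame using OpenCV’s fisheye camera model. The intrinsic matrix K and distortion coefficients D are placeholders; real values come from calibrating the actual camera.

import cv2
import numpy as np

def dewarp(frame, K, D):
    h, w = frame.shape[:2]
    # Build the remap tables once per camera, then apply them to each frame.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)

# Example with made-up calibration values and a synthetic image.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([[0.1], [0.01], [0.0], [0.0]])   # k1..k4, placeholder values
frame = np.zeros((480, 640, 3), dtype=np.uint8)
straightened = dewarp(frame, K, D)
print(straightened.shape)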
These data processing technologies are also being used to create applications, currently in development, that provide virtual see-through pillars inside the car. In this use case, the pillars (the struts that attach the roof of the car to the body) would be fitted with cameras to capture what is going on outside the car, and the inside of each pillar would carry a display showing what those cameras are capturing, giving the driver an uninterrupted field of vision.
This process is phenomenally difficult to achieve. The system has to understand what is on the other side of where the driver is looking. The picture will need to be de-warped and placed onto an uneven or curved surface, then re-warped to the contours of the pillar.
While this advancement is one for the future, some high-end cars already provide surround-view systems, and these will soon move into mid-range and entry-level cars. A GPU is used to analyze the images coming from the various cameras around the car—there are usually four or five cameras—and stitch the images together. From the stitched image, a neural network will carry out object detection and path prediction to see if the object is likely to intercept the path of the vehicle.
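A simplified view of that stitching step: each camera frame is warped into a common top-down plane with a pre-calibrated homography and the warped views are combined. The identity homographies and naive blend below are stand-ins for real extrinsic calibration and blending.

import cv2
import numpy as np

def stitch_surround(frames, homographies, canvas_size=(800, 800)):
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame, H, canvas_size)
        canvas = np.maximum(canvas, warped)   # naive blend: keep brightest pixel
    return canvas

frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(4)]
homographies = [np.eye(3) for _ in range(4)]      # placeholder calibrations
birdseye = stitch_surround(frames, homographies)
print(birdseye.shape)   # the stitched view a neural network would then analyze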
INFOTAINMENT AND NAVIGATION
When it comes to in-vehicle infotainment (IVI) and navigation, GPUs play a major role (Figure 3). They are also involved in voice control, which is likely to become a key interface between humans and vehicles. So, for a satellite navigation system, rather than having to enter a destination and fiddle with buttons and keypads, the driver could just say the zip code or street name and ask the system to plot the route.
Figure 3: For in-vehicle infotainment (IVI) and navigation, GPUs play a major role. They are also involved in voice control, which is likely to become a key interface between humans and vehicles.
The dashboard cluster will be linked to the external cameras for activities such as road-sign recognition. If the camera picks up, say, a sign showing a speed limit, that sign can be displayed in front of the driver while it is valid and a warning sounded if the car exceeds that limit.
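The display-side logic can be as simple as the following sketch, which holds the most recently recognized limit and warns when the car exceeds it; the tolerance value is an assumption for illustration.

def speed_warning(detected_limit_kph, current_speed_kph, tolerance_kph=2):
    if detected_limit_kph is None:
        return None        # no valid sign currently held
    if current_speed_kph > detected_limit_kph + tolerance_kph:
        return f"Over limit: {current_speed_kph} km/h in a {detected_limit_kph} km/h zone"
    return None

print(speed_warning(50, 57))   # warning string
print(speed_warning(50, 48))   # None, within the limit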
In fact, the whole instrument cluster will use GPUs to carry out image rendering and information prioritization (Figure 4). If the system determines that the driver needs to know some critical piece of information, the message could pop out of the cluster or even be projected onto the windshield. Images on the windshield could also be used as part of the navigation system, showing the driver the correct turning or illustrating which lane the car needs to be in for an upcoming junction.
Figure 4: The whole instrument cluster can use GPUs to carry out image rendering and information prioritization. If the system determines that the driver needs to know some critical piece of information, the message could pop out of the cluster or even be projected onto the windshield.
Mirror replacement is another major potential development. Cars are already being developed where the mirrors have been replaced with screens showing views from different cameras. As well as showing what is happening behind the car, as with traditional rear-view mirrors, they can also be used for blind-spot detection. Here, the neural network can issue a warning to the driver about a car that the driver cannot see and automatically prevent the car from changing lanes into the path of another vehicle.
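Once the neural network has done the detection, the blind-spot check reduces to a small piece of decision logic, along the lines of this sketch (the detection labels are illustrative):

def lane_change_allowed(blind_spot_detections, lane_change_requested):
    # blind_spot_detections: objects the neural network reports in the blind-spot zone.
    occupied = len(blind_spot_detections) > 0
    if occupied:
        print("Blind-spot warning: vehicle detected")
    return lane_change_requested and not occupied

print(lane_change_allowed(["car_left_rear"], lane_change_requested=True))   # False
print(lane_change_allowed([], lane_change_requested=True))                  # True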
SMART CITY
A long-term goal that local governments around the world are moving toward is the smart city, in which autonomous and highly automated vehicles are integrated into an intelligent transportation system covering a whole town or city.
The idea behind this is that all city services and planning efforts are coordinated and connected, giving citizens greater access to information and making living in a city more pleasant and, importantly, healthier. To achieve this, reducing pollution and traffic congestion is essential.
An intelligent transportation system would control the whole transport infrastructure of the city. This infrastructure would communicate with vehicles and traffic signals and vehicles would also communicate with each other and send collected data back. A practical example of this would be controlling the traffic lights so vehicles pass through an area at optimum speed without delays (Figure 5). If emergency vehicles need quick access, these same traffic signals could be used to stop other road users and create a safe path for them.
Figure 5: An intelligent transportation system would control the whole transport infrastructure of the city. This infrastructure would communicate with vehicles and traffic signals, and vehicles would also communicate with each other and send collected data back.
If there is a traffic jam, vehicles can relay this information to the infrastructure, which in turn can inform other vehicles to stay away from that zone, thus not adding to the problem so the jam can be cleared more quickly. This could even be used outside of cities, say on an entrance ramp to a freeway. If there is already a backup that the system has learned about from cars traveling in the opposite direction, it can warn drivers before they join the freeway, thus enabling them to consider other routes.
To achieve this goal, cities will need to have a central intelligent hub that can process the incoming information and calculate what data to send out to other vehicles or traffic signals. This can only be done with a combination of neural networks, AI, machine learning and advanced algorithms.
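One way to picture the hub side of that data flow is a simple aggregator that collects speed reports by zone and flags congestion, as in the toy sketch below; the message format and thresholds are invented purely for illustration.

from collections import defaultdict

def find_congested_zones(reports, slow_kph=15, min_reports=3):
    """reports: list of (zone_id, speed_kph) tuples sent in by vehicles."""
    slow_counts = defaultdict(int)
    for zone, speed in reports:
        if speed < slow_kph:
            slow_counts[zone] += 1
    # Zones with enough slow-moving vehicles get flagged for rerouting advice.
    return {zone for zone, n in slow_counts.items() if n >= min_reports}

reports = [("ramp_12", 5), ("ramp_12", 8), ("ramp_12", 10), ("main_st", 40)]
print(find_congested_zones(reports))   # {'ramp_12'}: advise drivers to reroute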
CONCLUSION
Highly automated vehicles will be safer than those driven by humans. Research by the NHTSA found that 94% of accidents were caused by human error [1]. AI-based technology is already better than a human driver in terms of responsiveness and picking up threats that need a fast response.
To achieve the processing power for such vehicles, GPUs working with NNAs will be required. As the industry moves to fully autonomous vehicles, NNAs will come into their own because of the massive increase in computing power required. Estimates suggest that a Level 5 autonomous vehicle will need 10,000 times the computing power of a Level 1 vehicle.
This is a huge increase in processing performance, but it also has to be delivered within a power budget. Already, an NNA offers 100 to 800 times the performance of a CPU in a package that is a fraction of the size. A vehicle could therefore have one large CPU supported by many NNAs carrying out tasks around the vehicle, at much lower power and higher performance than distributed CPUs could manage.
Imagination Technologies produces IP for both GPUs and NNAs. Its technologies are in more digital dashboards than those of any of its competitors, and the company is also spearheading advances in the ADAS and autonomous vehicle markets. All the elements needed to make autonomous vehicles practical will depend on these technologies, and it is only a matter of time before they become a reality.
RESOURCES
Imagination Technologies | www.imgtec.com
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • DECEMBER 2019 #353
Bryce Johnstone is responsible for relationships and marketing throughout the automotive value chain in support of defining IP requirements for the rapidly changing car market. Previously at Imagination, he was in charge of the Developer Ecosystem, working largely with mobile games companies throughout the world. Prior to joining Imagination in 2011, Mr. Johnstone spent 19 years with Texas Instruments, where he worked in several technical, managerial and business development roles before moving into the Wireless Terminal Business Unit to head up the OMAP Developer Network activity. He started his career as a Senior Design Engineer at STC Semiconductors. Mr. Johnstone holds a BSc in Electrical and Electronic Engineering from the University of Edinburgh and an MBA from the Open University.