Self-driving car hardware and software details

This article describes the hardware and software that make up a self-driving car, as well as the preparation needed to get started. Every developer, or anyone ready to devote themselves to the driverless field, should read it carefully.

Countless companies around the world are busy developing self-driving cars. Their products vary widely, but the basic ideas and core technologies are similar.

As is well known, an intelligent vehicle is an integrated system combining environmental perception, planning and decision making, and multi-level driver assistance. It draws on computing, modern sensing, information fusion, communications, artificial intelligence, automatic control, and other technologies, making it a typical high-tech complex.

The key technologies of automated driving can be divided into four parts: environmental perception, behavioral decision making, path planning, and motion control.

In theory automated driving sounds simple: just four key technologies. But how are they actually implemented? Google began working on self-driving cars in 2009, and after eight years of accumulated technology its system has still not reached production level, which shows that automated driving is not simple. It is a large and complex engineering project involving too many technologies to cover in detail, so here I discuss the technologies involved in self-driving cars from both the hardware and the software side.

Hardware

Talking about automated driving without talking about hardware is just hot air. The figure below covers essentially all of the hardware needed for automated driving research.

However, not all of these sensors necessarily appear on one car at the same time. Which sensors are present depends on what task the car needs to perform. If it only needs to drive itself on the freeway, like Tesla's AutoPilot, a laser sensor is not needed at all; if it needs to drive itself on urban roads, it is very hard to manage with vision alone and no laser sensor.

Automated-driving engineers select hardware and control costs according to the task, a bit like assembling a PC: give me the requirements and I will give you a parts list.

Car

Since we are building a self-driving car, a car is of course essential. Based on SAIC's experience with automated driving development, if you have the choice, do not choose a pure gasoline vehicle. On the one hand, the automated driving system as a whole consumes a great deal of electrical power, and hybrid and pure electric vehicles have an obvious advantage here. On the other hand, the low-level control algorithms of an engine are far more complex than those of an electric motor; rather than spending a great deal of time on low-level calibration and debugging, it is better to choose an electric vehicle directly and study the higher-level algorithms.

Some media in China have specifically investigated the choice of test vehicles, asking questions such as "Why did Google and Apple both choose the Lexus RX450h (a hybrid)?" and "When technology companies test their own autonomous driving technology, what matters in the choice of test vehicle?" Their conclusion was that "electric power" and "space" are crucial for converting a car into an unmanned vehicle, and that technical "familiarity" with the car is another factor, because a team that is not cooperating with the car manufacturer needs to "hack" into some of the control systems.

Controller

In the algorithm research phase, it is recommended to use an Industrial PC (IPC) as the most direct controller solution, because IPCs are more stable and reliable than embedded devices, and the community support and accompanying software are richer. Baidu's open-source Apollo recommends an industrial PC with a GPU, the Nuvo-5095GC, shown below.

GitHub: ApolloAuto

When the algorithm is relatively mature, an embedded system can be used as the controller. For example, the zFAS jointly developed by Audi and TTTech has already been applied to the latest Audi A8 production car.

CAN card

The IPC interacts with the vehicle chassis through a dedicated language: CAN. To obtain the current speed and steering wheel angle, the data the chassis sends onto the CAN bus must be parsed; after the IPC has computed the desired steering wheel angle and vehicle speed from the sensor information, the commands must in turn be encoded into messages sent to the chassis through the CAN card, and the chassis then responds.
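As a rough sketch of what this parsing and encoding can look like (the CAN ID, byte layout, and scale factors below are invented for illustration; every chassis defines its own, usually in a DBC file):

    #include <cstdint>

    // A raw CAN frame as it typically arrives from a CAN card driver.
    struct CanFrame {
        uint32_t id;       // message identifier
        uint8_t  len;      // number of valid payload bytes
        uint8_t  data[8];  // up to 8 payload bytes
    };

    // Hypothetical decoding: the chassis reports speed in 0.01 m/s units in bytes 0-1.
    double DecodeSpeed(const CanFrame& f) {
        uint16_t raw = static_cast<uint16_t>(f.data[0]) |
                       (static_cast<uint16_t>(f.data[1]) << 8);
        return raw * 0.01;  // m/s
    }

    // Hypothetical encoding: command a steering wheel angle in 0.1 degree units.
    CanFrame EncodeSteeringCommand(double angle_deg) {
        CanFrame f{};
        f.id  = 0x123;  // invented command ID
        f.len = 2;
        int16_t raw = static_cast<int16_t>(angle_deg * 10.0);
        f.data[0] = static_cast<uint8_t>(raw & 0xFF);
        f.data[1] = static_cast<uint8_t>((raw >> 8) & 0xFF);
        return f;
    }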

The CAN card can be installed directly in the industrial PC and then connected to the CAN bus through an external interface. The CAN card used by Apollo is the ESD CAN-PCIe/402, shown below.

Global Positioning System (GPS) + Inertial Measurement Unit (IMU)

When humans drive from A to B, they need a map from A to B and their current position on it, so that they know whether to turn right or go straight at the next intersection.

The same is true for an unmanned system. With GPS + IMU, it knows where it is (latitude and longitude) and which direction it is heading (heading angle). The IMU can also provide richer information such as yaw rate and angular acceleration, which helps with positioning and with decision and control.
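One common first step when using the GPS output is to convert latitude/longitude into a local metric frame around a reference point. A minimal sketch, using a simple equirectangular approximation (adequate over a few kilometers, not a replacement for a proper projection such as UTM):

    #include <cmath>

    struct LocalXY { double x; double y; };  // meters east / north of a reference point

    LocalXY GpsToLocal(double lat_deg, double lon_deg,
                       double ref_lat_deg, double ref_lon_deg) {
        const double kEarthRadius = 6378137.0;      // WGS-84 equatorial radius, meters
        const double kDegToRad    = M_PI / 180.0;
        double dlat = (lat_deg - ref_lat_deg) * kDegToRad;
        double dlon = (lon_deg - ref_lon_deg) * kDegToRad;
        LocalXY p;
        p.x = kEarthRadius * dlon * std::cos(ref_lat_deg * kDegToRad);  // east
        p.y = kEarthRadius * dlat;                                      // north
        return p;
    }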

Apollo uses the NovAtel GPS-703-GGG-HV as its GPS and the NovAtel SPAN-IGM-A1 as its IMU.

Sensors

I believe everyone is familiar with onboard sensors. They come in many types, including vision sensors, laser sensors, radar sensors, and so on. A vision sensor is a camera, and cameras are divided into monocular and binocular (stereo) vision. Well-known vision sensor suppliers include Mobileye of Israel, Point Grey of Canada, and Pike of Germany.

Laser sensors range from single-line up to 64-line (multi-line). Each additional line adds roughly 10,000 yuan to the cost, and the detection performance improves accordingly. Well-known laser sensor suppliers include Velodyne and Quanergy in the United States, Ibeo in Germany, and Sagitar in China.

Radar sensors are the strength of the automotive Tier 1 suppliers, because radar has long been widely used in cars. Well-known suppliers include Bosch, Delphi, and Denso.

Hardware summary

Assembling an automated driving system that can accomplish a given function requires rich experience and a good grasp of the limits of each sensor and of the controller's computing capability. An excellent systems engineer can keep costs to a minimum while meeting the functional requirements, which makes the system more likely to reach production and deployment.

Software

The software consists of four layers: perception, fusion, planning, and control.

Code must be written between the levels to convert the information; a more detailed breakdown is as follows.

Here is a slide publicly shared by a startup company.

To implement a smart driving system, there are several layers:

Perception Layer → Fusion Layer → Plan Layer → Control Layer

More specifically:

Sensor layer → Driver layer → Information fusion layer → Decision and planning layer → Low-level control layer

Code has to be written at every level to transform the information.

The most basic kinds of work are the following: data acquisition and preprocessing, coordinate transformation, and information fusion.

Data acquisition

Sensors communicate with our PC or embedded module in different ways.

For example, we collect image information from cameras, some of which communicate over a Gigabit Ethernet card and some directly over a video cable. Some millimeter-wave radars send their information downstream over the CAN bus, so we must write code to parse the CAN messages.

Different transmission media require different protocols to parse this information. This is the "driver layer" mentioned above. In plain terms, its job is to take all of the information the sensors collect and encode it into data the team can use.
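The exact interfaces differ from team to team, but a common pattern is for each driver to turn raw bytes from its transport into a timestamped, sensor-agnostic message. A minimal sketch of such an interface (all names and fields here are made up for illustration):

    #include <cstdint>
    #include <vector>

    // A unified message that downstream modules can consume regardless of sensor type.
    struct ObstacleMsg {
        int64_t timestamp_us;  // time of measurement
        double  x_m;           // position in the sensor frame, meters
        double  y_m;
    };

    // Each concrete driver (camera, radar, lidar, ...) implements this interface.
    class SensorDriver {
    public:
        virtual ~SensorDriver() = default;
        // Parse one raw frame from the transport into zero or more messages.
        virtual std::vector<ObstacleMsg> Parse(const std::vector<uint8_t>& raw_frame) = 0;
    };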

Preprocessing

When the sensor information is received, it turns out that not all of it is useful.

The sensor layer sends data downstream frame by frame at a fixed frequency, but the downstream cannot simply take each frame and use it directly for decision making or fusion. Why?

Because sensors are not 100% reliable. Judging whether there is an obstacle ahead on the basis of a single frame (the sensor may have misdetected) would be extremely irresponsible toward the downstream decision making. Therefore the upstream information needs to be preprocessed, so that an obstacle in front of the vehicle is present consistently over time rather than flashing in and out.

Here we use an algorithm very common in the smart driving field: Kalman filtering.
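As an illustration only, here is a minimal one-dimensional Kalman filter that tracks the range to an obstacle ahead under a constant-velocity assumption; real systems track richer, multi-dimensional states, but the predict/update structure is the same:

    // State: [range, range_rate]. All noise values below are illustrative.
    struct Kalman1D {
        double x[2]    = {0.0, 0.0};                     // state estimate
        double P[2][2] = {{100.0, 0.0}, {0.0, 100.0}};   // state covariance
        double q = 0.1;                                   // process noise
        double r = 1.0;                                   // measurement noise (range)

        void Predict(double dt) {
            // x <- F x with F = [[1, dt], [0, 1]]
            x[0] += dt * x[1];
            // P <- F P F^T + Q
            double p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q;
            double p01 = P[0][1] + dt * P[1][1];
            double p10 = P[1][0] + dt * P[1][1];
            double p11 = P[1][1] + q;
            P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
        }

        void Update(double z_range) {                     // measurement model H = [1, 0]
            double s  = P[0][0] + r;                      // innovation covariance
            double k0 = P[0][0] / s, k1 = P[1][0] / s;    // Kalman gain
            double innov = z_range - x[0];
            x[0] += k0 * innov;
            x[1] += k1 * innov;
            double p00 = (1 - k0) * P[0][0];
            double p01 = (1 - k0) * P[0][1];
            double p10 = P[1][0] - k1 * P[0][0];
            double p11 = P[1][1] - k1 * P[0][1];
            P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
        }
    };

    // Each sensor cycle: kf.Predict(dt); then, if a new measurement arrived: kf.Update(z);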

Coordinate transformation

Coordinate transformation is very important in the smart driving field.

The sensors are installed in different places. For example, the millimeter-wave radar (in the purple area in the figure above) is mounted at the front of the vehicle. When there is an obstacle ahead, 50 meters away from this millimeter-wave radar, should we consider the obstacle to be 50 meters from the car?

No! Because the decision and control layer plans the vehicle's motion in the vehicle body coordinate system (whose origin is generally at the center of the rear axle), the 50 meters detected by the millimeter-wave radar must be converted into the vehicle's own coordinate system, which means adding the distance from the sensor to the rear axle.

In the end, all sensor information needs to be transferred into the vehicle's own coordinate system, so that all of it can be used in a unified way for planning and decision making.

Similarly, the camera is generally installed under the front windshield, and the data it produces is in the camera coordinate system; before being passed downstream it also needs to be converted into the vehicle coordinate system.
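A minimal sketch of such a conversion in 2D: rotate the detection by the sensor's mounting yaw, then translate by its mounting position (the radar offset below is hypothetical):

    #include <cmath>

    struct Point2D { double x; double y; };  // meters; x forward, y left in the vehicle frame

    // Where a sensor is mounted relative to the rear-axle center, and how it is rotated.
    struct SensorMount { double x; double y; double yaw; };

    Point2D SensorToVehicle(const Point2D& p_sensor, const SensorMount& m) {
        Point2D p;
        p.x = std::cos(m.yaw) * p_sensor.x - std::sin(m.yaw) * p_sensor.y + m.x;
        p.y = std::sin(m.yaw) * p_sensor.x + std::cos(m.yaw) * p_sensor.y + m.y;
        return p;
    }

    // Example: a radar mounted 3.6 m ahead of the rear axle sees an obstacle 50 m
    // straight ahead; in the vehicle frame the obstacle is 53.6 m from the rear axle.
    // SensorMount radar{3.6, 0.0, 0.0};
    // Point2D obstacle = SensorToVehicle({50.0, 0.0}, radar);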

Vehicle coordinate system: hold out your right hand and, in the order thumb → index finger → middle finger, read off X, Y, and Z. Then hold your hand in the following shape:

Place the intersection of the three axes (the base of the index finger) at the center of the car's rear axle, with the Z axis pointing toward the roof and the X axis pointing in the direction the vehicle is heading.

Different teams may define the coordinate axes differently; it only matters that the convention is consistent within the development team.

Information fusion

Information fusion refers to combining multiple pieces of information about the same attribute into one.

For example, the camera detects an obstacle in front of the vehicle, the millimeter-wave radar also detects an obstacle in front, and the lidar detects one too, yet in reality there is only one obstacle ahead. What we have to do is fuse the multi-sensor information about this vehicle and tell the downstream modules that there is one car ahead, not three.
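A heavily simplified sketch of the idea: detections that are already in the vehicle frame and fall within a gating distance of each other are treated as the same obstacle and averaged. Real fusion modules use tracking and covariance-weighted estimates, but the intent is the same:

    #include <cmath>
    #include <vector>

    struct Detection { double x; double y; };  // vehicle-frame position, meters

    std::vector<Detection> FuseBySimpleGating(const std::vector<Detection>& dets,
                                              double gate_m) {
        std::vector<Detection> fused;
        std::vector<int> counts;
        for (const auto& d : dets) {
            bool merged = false;
            for (size_t i = 0; i < fused.size(); ++i) {
                if (std::hypot(fused[i].x - d.x, fused[i].y - d.y) < gate_m) {
                    // Running average of all detections assigned to this obstacle.
                    fused[i].x = (fused[i].x * counts[i] + d.x) / (counts[i] + 1);
                    fused[i].y = (fused[i].y * counts[i] + d.y) / (counts[i] + 1);
                    ++counts[i];
                    merged = true;
                    break;
                }
            }
            if (!merged) { fused.push_back(d); counts.push_back(1); }
        }
        return fused;  // camera + radar + lidar hits on one car collapse to one entry
    }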

Decision planning

The main concern at this level is how to plan correctly once the fused data is available. Planning includes longitudinal control and lateral control. Longitudinal control is speed control: when to accelerate and when to brake. Lateral control is behavioral control: when to change lanes and when to overtake.
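Purely as a generic textbook illustration of longitudinal control (not any particular team's scheme, and the gains are arbitrary), a minimal speed controller can be sketched as a PID loop:

    // Turns the speed error into a normalized command in [-1, 1]: >0 throttle, <0 brake.
    struct SpeedPid {
        double kp = 0.5, ki = 0.05, kd = 0.1;
        double integral = 0.0, prev_error = 0.0;

        double Step(double target_speed, double current_speed, double dt) {
            double error = target_speed - current_speed;
            integral += error * dt;
            double derivative = (error - prev_error) / dt;
            prev_error = error;
            double u = kp * error + ki * integral + kd * derivative;
            if (u > 1.0)  u = 1.0;
            if (u < -1.0) u = -1.0;
            return u;
        }
    };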

I am personally not very familiar with this area, so I dare not comment further.

What does the software look like?

Some of the software in an automated driving system looks like the following.

The name of each piece of software reflects its actual role:

App_driver_camera: camera driver

App_driver_hdmap: high-precision map driver

App_driver_ins: inertial navigation (INS) driver

App_driver_lidar: laser sensor (lidar) driver

App_driver_mwr: millimeter-wave radar driver

App_fusion_freespace: drivable free-space fusion

App_fusion_lane: lane line fusion

App_fusion_obstacle: obstacle fusion

App_planning&decision: planning and decision

In practice, however, the engineers will also write other software for their own debugging work, such as tools for recording and playing back data.

There are also visualization programs for displaying sensor information, similar to the effect in the figure below.

Having grasped the idea of the software, let us now look at what preparations you need to make.

Preparation

Operating system installation

Since we are writing software, we first need an operating system. Common operating systems include Windows, Linux, and Mac (I have not used the others, so I will not comment...). Considering community support and development efficiency, Linux is recommended as the operating system for driverless research.

Most unmanned-vehicle teams use Linux; following the trend saves a lot of trouble.

Linux comes in many distributions, of which the Ubuntu series is the most commonly used and most popular. Although Ubuntu has been updated to 17.04, for stability it is recommended to install version 14.04.

It is recommended to install Linux on a dedicated SSD, or to install it in a virtual machine; a dual-boot setup is not recommended (it is not very stable). A Linux Ubuntu 14.04 installation package plus virtual machine installation instructions are provided here. (Link: http://pan.baidu.com/s/1jIJNIPg Password: 147y.)

Basic Linux commands

The command line is the core of Linux; working from the command line not only helps with development but is also a great way to show off. Another advantage is the apt-get install command, with which you can quickly and easily install a great deal of software, without hunting around the Internet for a suitable installation package as you would on Windows. There are many Linux commands and they can be complicated; they simply require practice, and more practice.

Development environment installation

The development environment involves many libraries that are used in real work. Different programmers dealing with the same problem may use different libraries. Below, by walking through the libraries I frequently use in my own work and study, I will give developers an introduction to the field.

Installation packages needed to set up the environment:

(link: http://pan.baidu.com/s/1sllta5v password: eyc8)

Attachment: Introduction to the development environment

Integrated Development Environment (IDE)

Earlier we installed the open-source IDE Qt; Qt's status on Linux is like that of Visual Studio on Windows. Unless they use some more advanced IDE, most teams developing on Linux choose to develop with Qt.

Qt's main role here is to build interactive interfaces, for example to display all the information currently collected by the sensors. Interface interaction can significantly speed up debugging and parameter calibration.
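A minimal sketch of such a display, assuming Qt 5 widgets (the speed value shown is a stand-in for real sensor data):

    #include <QApplication>
    #include <QLabel>
    #include <QTimer>

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        QLabel label("waiting for sensor data...");
        label.resize(400, 100);
        label.show();
        // In a real tool this would be driven by incoming sensor messages.
        QTimer timer;
        QObject::connect(&timer, &QTimer::timeout, [&label]() {
            static int frame = 0;
            label.setText(QString("speed: %1 m/s   frame: %2").arg(12.3).arg(++frame));
        });
        timer.start(100);  // refresh at 10 Hz
        return app.exec();
    }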

Tips:

You can find Qt tutorials online, but I recommend learning it systematically, for example by buying a Qt book.

Whether you buy books or borrow them from the library, pay attention to the publication date: the newer the better, since very old books correspond to very old versions.

OpenCV

OpenCV is a very powerful library that wraps a large number of functions applicable to driverless research, including various filtering algorithms, feature point extraction, matrix operations, projection and coordinate transformations, machine learning algorithms, and more.

Most important, of course, is its influence in the field of computer vision: the interfaces for camera calibration and for target detection, recognition, and tracking are all very convenient to use. With OpenCV you can achieve the effect shown in this picture.
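As a small taste of the API, here is a minimal sketch, assuming OpenCV 3.x, that loads an image, detects ORB feature points, and displays them (the file name is only a placeholder):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat img = cv::imread("frame.png");       // placeholder image path
        if (img.empty()) return 1;
        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        auto orb = cv::ORB::create(500);             // detect up to 500 feature points
        std::vector<cv::KeyPoint> keypoints;
        orb->detect(gray, keypoints);
        cv::Mat out;
        cv::drawKeypoints(img, keypoints, out);
        cv::imshow("keypoints", out);
        cv::waitKey(0);
        return 0;
    }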

Tips:

Please buy a tutorial covering at least version 2.4 to learn OpenCV. However, the Chinese-language OpenCV tutorials currently on the market are all too shallow; even the classic Kalman filter is not covered. I recommend learning directly from the English edition of Learning OpenCV 3.

An electronic copy is provided; the explanations are very detailed, and you can print each chapter and work through it step by step.

(link: http://pan.baidu.com/s/1dE5eom9 password: n2dn)

libQGLViewer

libQGLViewer is a library that adapts the well-known OpenGL to Qt; its programming interface and methods are similar to OpenGL's. The environment perception displays we often see in the publicity material of the major driverless companies can be produced entirely with QGLViewer.
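A minimal sketch of how the library is typically used, assuming libQGLViewer with Qt: subclass QGLViewer, override draw(), and issue ordinary OpenGL calls (the two points below stand in for real obstacle positions):

    #include <QApplication>
    #include <QGLViewer/qglviewer.h>

    class PointCloudViewer : public QGLViewer {
    protected:
        void init() override { setSceneRadius(50.0); }   // scene extends ~50 m
        void draw() override {
            glPointSize(4.0f);
            glBegin(GL_POINTS);
            glColor3f(1.0f, 0.0f, 0.0f);
            glVertex3f(10.0f, 2.0f, 0.0f);    // hypothetical obstacle 1
            glVertex3f(20.0f, -1.5f, 0.0f);   // hypothetical obstacle 2
            glEnd();
        }
    };

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        PointCloudViewer viewer;
        viewer.show();
        return app.exec();
    }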

Tips:

Learning libQGLViewer does not require buying any textbook. The examples on the official website and in the source archive are the best teacher: follow the official tutorial and implement each example once, and you will have the basics.

Official website link: libQGLViewer Home Page

Boost

The Boost library is known as the "C++ quasi-standard library." It contains a great many "wheels" that C++ developers can simply call directly, avoiding reinventing them.

Tips:

Boost is built on standard C++ and uses exhaustive, sophisticated techniques. Do not rush to study it deeply; find a book on the Boost library (electronic or paper), read the table of contents, and learn what functionality is available, then dig into specific parts when the need arises.
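As one small example of such a "wheel", boost::circular_buffer keeps only the most recent N items, which is handy for buffering sensor frames; a minimal sketch:

    #include <boost/circular_buffer.hpp>
    #include <iostream>

    int main() {
        boost::circular_buffer<double> speeds(5);  // keep only the latest 5 readings
        for (int i = 0; i < 8; ++i)
            speeds.push_back(10.0 + i);            // older values are dropped automatically
        for (double v : speeds)
            std::cout << v << " ";                 // prints: 13 14 15 16 17
        std::cout << std::endl;
        return 0;
    }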

QCustomPlot

Besides the libQGLViewer mentioned above, the information from onboard sensors can also be displayed as a 2D plan view. Since Qt only provides basic drawing tools such as lines and circles, which are not very convenient, QCustomPlot was born. Simply call the API, pass in the data you want to display as parameters, and you can draw excellent graphics like those below, which are also easy to drag and zoom.
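A minimal sketch of that workflow, assuming QCustomPlot's standard Qt widget API (the speed data below is made up):

    #include <QApplication>
    #include "qcustomplot.h"  // add the library's .cpp and .h files to the project

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        QCustomPlot plot;
        QVector<double> t(11), v(11);
        for (int i = 0; i <= 10; ++i) { t[i] = 0.1 * i; v[i] = 10.0 + 0.5 * i; }
        plot.addGraph();
        plot.graph(0)->setData(t, v);
        plot.xAxis->setLabel("time (s)");
        plot.yAxis->setLabel("speed (m/s)");
        plot.xAxis->setRange(0.0, 1.0);
        plot.yAxis->setRange(9.0, 16.0);
        plot.setInteractions(QCP::iRangeDrag | QCP::iRangeZoom);  // drag and zoom
        plot.resize(500, 300);
        plot.show();
        return app.exec();
    }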

Here are some of the sensor information displays I produced with QCustomPlot during actual development.

Tips:

The official website provides the library's source code for download; you only need to import the .cpp and .h files into your project. Follow the tutorials on the official website and you can learn it quickly. By writing code against the provided examples, you can quickly turn your own data into visual plots.

LCM (Lightweight Communications and Marshalling)

Software developed by a team inevitably runs as multiple programs (processes) that must communicate. There are many inter-process communication methods, each with its own advantages and disadvantages. In December 2014, MIT released the LCM message-passing mechanism it used in the DARPA Robotics Challenge; the source is "MIT releases LCM driver for MultiSense SL."

LCM supports multiple languages such as Java and C++, and is aimed specifically at real-time systems that need high-bandwidth, low-latency message passing and data marshalling. It provides a publish/subscribe message model and automatic marshalling/unmarshalling code generation for many programming languages. This model is very similar to the communication between nodes in today's ROS.
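A minimal publisher sketch, assuming LCM's C++ API (the channel name and payload are invented; in real use you would define message types in .lcm files and generate bindings with lcm-gen, and a subscriber would call lcm.subscribe() and loop on lcm.handle()):

    #include <lcm/lcm-cpp.hpp>
    #include <string>

    int main() {
        lcm::LCM lcm;
        if (!lcm.good()) return 1;               // could not set up the multicast socket
        std::string payload = "obstacle ahead";  // stand-in for a real marshalled message
        lcm.publish("EXAMPLE_CHANNEL", payload.data(),
                    static_cast<unsigned int>(payload.size()));
        return 0;
    }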

Tips:

The official LCM website provides demo source code for communication between two processes. By following the tutorial on the official website you can quickly set up your own LCM communication.

Official website: LCM Project

Git & Github

Git is an indispensable version-control tool for team development. Everyone is bound to produce a new version every day, and if each version is not specifically labelled it will be forgotten. This is especially true when writing code.

Using Git can greatly improve the efficiency of multi-person development; with standardized version management, the code is very easy to trace.

GitHub has accumulated an enormous amount of software development work; when you need some piece of code, you can search for it there directly.

Tips:

The books currently on the market that introduce Git are very laborious; they go into so much minute detail that readers cannot get started quickly.

So I strongly recommend this Git tutorial: Liao Xuefeng's Git tutorial, which is easy to understand and comes with text plus video. A real public service.

After the basic introduction above, once you have mastered these things you will be an old hand in the driverless field.
