End-to-end Autonomous Driving Research: status quo of End-to-end (E2E) autonomous driving
1. Status quo of end-to-end solutions in China
An end-to-end autonomous driving system refers to the direct mapping from sensor data inputs (camera images, LiDAR point clouds, etc.) to control command outputs (steering, acceleration/deceleration, etc.). The concept first appeared in the ALVINN project in 1988, which used a camera and a laser rangefinder as inputs and a simple neural network to generate steering commands as output.
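The ALVINN-style direct mapping can be sketched as a single small network from pixel inputs to one steering command. The snippet below is a hypothetical minimal illustration of that idea (a single linear layer with tanh squashing standing in for the network), not ALVINN's actual architecture; in practice the weights are learned from human driving demonstrations.

```python
import math
import random

def e2e_steering_policy(pixels, weights, bias):
    """Map a flattened camera frame directly to a steering command.

    A single linear layer with tanh squashing stands in for the neural
    network: the output is a steering angle in [-1, 1] (full left to
    full right). Real systems learn the weights from driving data.
    """
    activation = sum(p * w for p, w in zip(pixels, weights)) + bias
    return math.tanh(activation)

random.seed(0)
frame = [random.random() for _ in range(30 * 32)]            # toy 30x32 grayscale frame
weights = [random.uniform(-0.01, 0.01) for _ in range(30 * 32)]
steer = e2e_steering_policy(frame, weights, bias=0.0)
print(steer)
```

The point of the sketch is the interface, not the model: sensor data goes in, a control command comes out, with no hand-written perception or planning modules in between.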
In early 2024, Tesla rolled out FSD V12.3, demonstrating a remarkable level of intelligent driving. End-to-end autonomous driving solutions have since garnered widespread attention from OEMs and autonomous driving solution companies in China.
Compared with conventional multi-module solutions, an end-to-end autonomous driving solution integrates perception, prediction and planning into a single model, simplifying the system architecture. It can mimic a human driver making driving decisions directly from visual inputs, cope more effectively with the long-tail scenarios that challenge modular solutions, and improve model training efficiency and performance.
![端到端 1_副本.png](/UpLoads/Article/2024/端到端%201_副本.png)
![端到端 2_副本.png](/UpLoads/Article/2024/端到端%202_副本.png)
![端到端 3_副本.png](/UpLoads/Article/2024/端到端%203_副本.png)
Li Auto's end-to-end solution
Li Auto believes that a complete end-to-end model should cover the whole process of perception, tracking, prediction, decision and planning, and that it is the optimal route to L3 autonomous driving. In 2023, Li Auto pushed AD Max 3.0, whose overall framework reflects the end-to-end concept but still falls short of a complete end-to-end solution. In 2024, Li Auto is expected to upgrade the system into a complete end-to-end solution.
Li Auto's autonomous driving framework is shown below, consisting of two systems:
Fast system: System 1, Li Auto's existing end-to-end solution, which executes driving actions directly after perceiving the surroundings.
Slow system: System 2, a multimodal large language model that reasons logically and explores unknown environments to solve problems in unknown L4 scenarios.
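The fast/slow division of labor can be sketched as a simple dispatcher: the fast end-to-end policy (System 1) acts directly when it is confident, and the slow reasoning model (System 2) is consulted otherwise. The function and policy names below are illustrative assumptions, not Li Auto's actual interfaces.

```python
def drive(scene, fast_policy, slow_policy, confidence_threshold=0.8):
    """Dual-system dispatch: System 1 handles familiar scenes directly;
    when its confidence drops below the threshold, the slower
    multimodal reasoner (System 2) takes over."""
    action, confidence = fast_policy(scene)
    if confidence >= confidence_threshold:
        return action, "system1"
    return slow_policy(scene), "system2"

# Hypothetical stand-in policies for illustration only.
def fast_policy(scene):
    # Pretend the end-to-end model is confident on highways, not elsewhere.
    confidence = 0.95 if scene == "highway" else 0.4
    return "keep_lane", confidence

def slow_policy(scene):
    return "slow_down_and_reason"

print(drive("highway", fast_policy, slow_policy))            # fast path
print(drive("construction_zone", fast_policy, slow_policy))  # slow path
```

The design choice mirrors the "System 1 / System 2" framing: the cheap policy covers the common case at low latency, while the expensive reasoner is reserved for out-of-distribution scenes.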
![端到端 4_副本.png](/UpLoads/Article/2024/端到端%204_副本.png)
In promoting the end-to-end solution, Li Auto plans to unify the planning/prediction model and the perception model, and to build the end-to-end Temporal Planner on this basis to integrate parking with driving.
2. Data becomes the key to the implementation of end-to-end solutions.
The implementation of an end-to-end solution requires processes covering R&D team building, hardware facilities, data collection and processing, algorithm training and strategy customization, verification and evaluation, and promotion and mass production. Some of the pain points in these scenarios are shown in the table:
![端到端 5_副本.png](/UpLoads/Article/2024/端到端%205_副本.png)
The integrated training of end-to-end autonomous driving solutions requires massive data, so one of the main difficulties lies in data collection and processing.
First, data collection takes a long time and many channels, covering driving data as well as scenario data such as roads, weather and traffic conditions. In actual driving, data within the driver's forward view is relatively easy to collect, but information about the surrounding environment is much harder to capture.
During data processing, it is necessary to design data extraction dimensions, extract effective features from massive video clips, and compute statistics on the data distribution to support large-scale training.
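The data-distribution step described above can be sketched as a simple tally over scenario tags attached to fleet video clips, so that rare long-tail scenarios can be identified and up-sampled for training. The clip schema (`id`, `tags`) is an illustrative assumption, not any vendor's actual pipeline.

```python
from collections import Counter

def scenario_distribution(clips):
    """Tally scenario tags across video clips and return each tag's
    share of the total, sorted from most to least common. Rare tags
    at the bottom flag long-tail scenarios that need more data."""
    counts = Counter(tag for clip in clips for tag in clip["tags"])
    total = sum(counts.values())
    return {tag: round(n / total, 3) for tag, n in counts.most_common()}

clips = [
    {"id": 1, "tags": ["sunny", "highway"]},
    {"id": 2, "tags": ["rain", "urban"]},
    {"id": 3, "tags": ["sunny", "urban"]},
]
print(scenario_distribution(clips))
```

In a real pipeline the tags would come from automatic labeling of the clips, and the distribution would drive both data collection priorities and sampling weights during training.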
DeepRoute
As of March 2024, DeepRoute.ai's end-to-end autonomous driving solution had won a designation from Great Wall Motor and entered a cooperation with NVIDIA; it is expected to be adapted to NVIDIA Thor in 2025. In DeepRoute.ai's plan, the transition from the conventional solution to the end-to-end autonomous driving solution will go through sensor pre-fusion, HD map removal, and the integration of perception, decision and control.
![端到端 6_副本.png](/UpLoads/Article/2024/端到端%206_副本.png)
GigaStudio
DriveDreamer, an autonomous driving model from GigaStudio, is capable of scenario generation, data generation, driving action prediction and so forth. Scenario/data generation involves two steps:
First, single-frame structural conditions guide DriveDreamer to generate driving scenario images, so that it easily learns structural traffic constraints.
Second, its understanding is extended to video generation: conditioned on continuous traffic structure, DriveDreamer outputs driving scene videos, further enhancing its understanding of motion transformation.
![端到端 7_副本.png](/UpLoads/Article/2024/端到端%207_副本.png)
3. End-to-end solutions accelerate the application of embodied robots.
In addition to autonomous vehicles, embodied robots are another mainstream application scenario for end-to-end solutions. Moving from end-to-end autonomous driving to robots requires building a more universal world model that adapts to more complex and diverse real-world scenarios. The development framework of mainstream AGI (Artificial General Intelligence) is divided into two stages:
Stage 1: the understanding and generation capabilities of basic foundation models are unified, and further combined with embodied artificial intelligence (embodied AI) to form a unified world model;
Stage 2: the world model's capabilities are combined with complex task planning and control and abstract concept induction, gradually evolving into the era of interactive AGI 1.0.
In the implementation of the world model, building an end-to-end VLA (Vision-Language-Action) autonomous system has become a crucial link. VLA, as the basic foundation model of embodied AI, can seamlessly link 3D perception, reasoning and action to form a generative world model. It is built on a 3D-based large language model (LLM) and introduces a set of interaction tokens to interact with the environment.
![端到端 8_副本.png](/UpLoads/Article/2024/端到端%208_副本.png)
As of April 2024, some manufacturers of humanoid robots adopting end-to-end solutions are as follows:
![端到端 9_副本.png](/UpLoads/Article/2024/端到端%209_副本.png)
For example, Udeer·AI's Large Physical Language Model (LPLM) is an end-to-end embodied AI solution. It uses a self-labeling mechanism to improve the efficiency and quality of learning from unlabeled data, thereby deepening the model's understanding of the world and enhancing the robot's generalization and environmental adaptability in cross-modal, cross-scene and cross-industry settings.
![端到端 10_副本.png](/UpLoads/Article/2024/端到端%2010_副本.png)
LPLM abstracts the physical world and ensures that this information is aligned with the level of abstraction of features in the LLM. It explicitly models each entity in the physical world as a token that encodes geometric, semantic, kinematic and intentional information.
In addition, LPLM adds 3D grounding to the encoding of natural language instructions, improving their accuracy to some extent. Its decoder learns by continuously predicting the future, strengthening the model's ability to learn from massive unlabeled data.
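The per-entity token described above can be sketched as a small record bundling the four kinds of information. The field names and layout below are illustrative assumptions in the spirit of that description, not LPLM's actual schema.

```python
from dataclasses import dataclass

@dataclass
class EntityToken:
    """One physical-world entity modeled as a token: geometric,
    semantic, kinematic and intentional attributes bundled together
    (field names are hypothetical, not LPLM's real schema)."""
    position: tuple   # geometric: (x, y, z) in meters
    size: tuple       # geometric: (length, width, height) in meters
    category: str     # semantic: e.g. "pedestrian", "vehicle"
    velocity: tuple   # kinematic: (vx, vy, vz) in m/s
    intent: str       # intentional: predicted behavior label

    def to_vector(self):
        """Flatten the numeric attributes into one feature vector."""
        return [*self.position, *self.size, *self.velocity]

cyclist = EntityToken(
    position=(12.0, -3.5, 0.0),
    size=(1.8, 0.6, 1.7),
    category="cyclist",
    velocity=(4.2, 0.1, 0.0),
    intent="crossing",
)
print(cyclist.to_vector())
```

Representing each entity as one token lets a transformer-style model attend over a scene as a set of entities, with the semantic and intent labels aligned to the language side of the model.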