Research Report on Automotive Memory Chip Industry and Its Impact on Foundation Models, 2025
Research on automotive memory chips: driven by foundation models, the performance requirements and costs of automotive memory chips are rising sharply.
From 2D+CNN small models to BEV+Transformer foundation models, the number of model parameters has soared, making memory a performance bottleneck.
The global automotive memory chip market is expected to exceed USD17 billion in 2030, up from about USD4.3 billion in 2023, a CAGR of up to 22% over the period. Automotive memory chips took an 8.2% share of automotive semiconductor value in 2023, a figure projected to rise to 17.4% in 2030, indicating a substantial increase in memory chip costs.
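A quick check of the growth rate implied by these two endpoints (a minimal sketch using the report's own figures):

```python
# Implied CAGR from the report's 2023 and 2030 market-size endpoints.
start, end = 4.3, 17.0          # USD billion, 2023 and 2030
years = 2030 - 2023             # 7-year compounding period

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~21.7%, matching "up to 22%"
```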
The main driver for the development of the automotive memory chip industry lies in the rapid rise of automotive LLMs. From the previous 2D+CNN small models to BEV+Transformer foundation models, the number of model parameters has significantly increased, leading to a surge in computing demands. CNN models typically have fewer than 10 million parameters, while foundation models (LLMs) generally range from 7 billion to 200 billion parameters. Even after distillation, automotive models can still have billions of parameters.
From a computing perspective, in BEV+Transformer foundation models, typically those built on LLaMA-style decoder architectures, the Softmax operator plays a core role. Because it parallelizes far less readily than traditional convolution operators, memory becomes the bottleneck. Memory-intensive models such as GPT place especially high demands on memory bandwidth, and common autonomous driving SoCs on the market often run into the "memory wall".
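To see why decoding hits the memory wall, a rough roofline estimate helps: generating one token requires reading every weight from DRAM roughly once, while performing only about two operations per weight. A minimal sketch, assuming INT8 weights and approximate Orin-class chip figures (our assumptions, not the report's):

```python
# Rough roofline estimate of why LLM decoding is memory-bound.
# Assumptions: INT8 weights (1 byte/param), ~2 ops per weight per token;
# ~250 TOPS INT8 and 204.5GB/s are approximate Orin-class figures.
params = 7e9                          # 7B-parameter model
bytes_per_token = params * 1          # each weight read once per token
ops_per_token = 2 * params            # one multiply-accumulate per weight

model_intensity = ops_per_token / bytes_per_token
chip_balance = 250e12 / 204.5e9       # ops the chip can do per byte moved

print(f"Model: {model_intensity:.0f} ops/byte")   # ~2 ops/byte
print(f"Chip:  {chip_balance:.0f} ops/byte")      # ~1200 ops/byte
# The model needs ~600x less compute per byte than the chip can supply,
# so the DRAM interface, not the ALUs, sets the token rate.
```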
An end-to-end model essentially embeds a small LLM. As the amount of training data grows, the parameter count of the foundation model will keep rising: the initial model size is around 10 billion parameters, and through continuous iteration it will eventually exceed 100 billion.
On April 15, 2025, at its AI sharing event, XPeng disclosed for the first time that it is developing the XPeng World Foundation Model, a 72-billion-parameter ultra-large autonomous driving model. XPeng's experiments show a clear scaling-law effect across models with 1 billion, 3 billion, 7 billion, and 72 billion parameters: the larger the parameter scale, the greater the model's capabilities, and for models of the same size, the more training data, the better the performance.
The main bottleneck in multimodal model training is not only GPUs but also the efficiency of data access. XPeng has independently developed its underlying data infrastructure (Data Infra), increasing data upload capacity 22-fold and training data bandwidth 15-fold. By optimizing both GPU/CPU utilization and network I/O, it has improved model training speed 5-fold. XPeng currently uses up to 20 million video clips to train its foundation model, a figure that will rise to 200 million this year.
In the future, XPeng will deploy the XPeng World Foundation Model to vehicles by distilling small models in the cloud. The parameter scale of automotive foundation models will only continue to grow, posing significant challenges to computing chips and memory. To address this, XPeng has developed its own Turing AI chip, which boasts compute utilization 20% higher than general-purpose high-performance automotive chips and can handle foundation models with up to 30B (30 billion) parameters. In contrast, Li Auto's current VLM (Vision-Language Model) has about 2.2 billion parameters.
More model parameters usually mean higher inference latency, so solving the latency problem is crucial. The Turing AI chip is expected to deliver major improvements in memory bandwidth through multi-channel design or advanced packaging technology, so as to support local operation of 30B-parameter foundation models.
Memory bandwidth determines the upper limit of inference computing speed. LPDDR5X is widely adopted but still falls short. GDDR7 and HBM may be put on the agenda.
Memory bandwidth determines the upper limit of inference computing speed. Assume a foundation model has 7 billion parameters; at the INT8 precision typical for automotive use, it occupies 7GB of storage. Tesla's first-generation FSD chip has memory bandwidth of 63.5GB/s, meaning it generates one token every 110 milliseconds, a rate below 10Hz, versus the typical 30Hz image frame rate in autonomous driving. Nvidia Orin, with memory bandwidth of 204.5GB/s, generates one token every 34 milliseconds (7GB ÷ 204.5GB/s = 0.0343s), barely reaching 30Hz (1 ÷ 0.0343s ≈ 29Hz). Note that this accounts only for data-transfer time and ignores the actual computation time, so real-world speeds will be considerably lower.
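The bandwidth-bound token time follows directly from model size divided by memory bandwidth. A minimal sketch reproducing the report's numbers and extrapolating to the 30B-parameter case mentioned above:

```python
# Bandwidth-bound token time: every INT8 weight must be read from DRAM
# once per generated token, so time-per-token >= model_bytes / bandwidth.
GB = 1e9

def token_time_s(params_billions: float, bandwidth_gbs: float) -> float:
    """Lower-bound seconds per token for an INT8 model (1 byte/param)."""
    return (params_billions * GB) / (bandwidth_gbs * GB)

for name, bw in [("Tesla FSD gen1", 63.5), ("Nvidia Orin", 204.5)]:
    t = token_time_s(7, bw)
    print(f"{name}: {t*1e3:.0f} ms/token, {1/t:.0f} tokens/s")
# Tesla FSD gen1: 110 ms/token, 9 tokens/s
# Nvidia Orin:     34 ms/token, 29 tokens/s

# Extrapolation: a 30B INT8 model (30GB) at 30 tokens/s would need
# ~900GB/s, which hints at why the Turing chip must push bandwidth hard.
print(f"30B @ 30 tok/s needs {30 * 30} GB/s")
```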

DRAM Selection Path (1): LPDDR5X will be widely adopted, and the LPDDR6 standard is still being formulated.
Apart from Tesla's, current automotive chips support at most LPDDR5. The industry's next step is to promote LPDDR5X. For example, Micron has launched an LPDDR5X + DLEP DRAM automotive solution, which has passed ISO 26262 ASIL-D certification and meets critical automotive functional safety (FuSa) requirements.
Nvidia Thor-X already supports automotive LPDDR5X, raising memory bandwidth to 273GB/s, and also supports the PCIe 5.0 interface. Thor-X-Super has an astonishing memory bandwidth of 546GB/s, using 512-bit-wide LPDDR5X to ensure extremely high data throughput. In reality, the Super version, much like Apple's dual-die chips, simply integrates two X chips into one package, and it is not expected to enter mass production in the short term.
Thor comes in multiple versions, five of which are currently known: ① Thor-Super, with 2000T computing power; ② Thor-X, with 1000T; ③ Thor-S, with 700T; ④ Thor-U, with 500T; ⑤ Thor-Z, with 300T. The world's first Thor central computing unit, from Lenovo, plans to adopt dual Thor-X chips.
Micron's 9600MT/s LPDDR5X is already sampling, targeting mobile devices, with no automotive-grade version available yet. Samsung's new LPDDR5X product, K3KL9L90DM-MHCU, serves high-performance applications from PCs and servers to vehicles and emerging on-device AI. It delivers 1.25 times the speed and 25% better power efficiency of the previous generation, with a maximum operating temperature of 105°C; mass production started in early 2025. A single K3KL9L90DM-MHCU package offers 8GB over a x32 bus, and eight packages total 64GB.
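Peak theoretical bandwidth for such parts follows from data rate times bus width. A minimal sketch using the figures above; treating the eight x32 packages as one ganged 256-bit interface is our assumption:

```python
# Peak theoretical DRAM bandwidth = data rate (MT/s) x bus width (bits) / 8.
def peak_bw_gbs(data_rate_mtps: float, bus_bits: int) -> float:
    return data_rate_mtps * 1e6 * bus_bits / 8 / 1e9

# One K3KL9L90DM-MHCU-class package: x32 bus at 9600MT/s.
print(f"Per package: {peak_bw_gbs(9600, 32):.1f} GB/s")      # 38.4 GB/s

# Eight packages ganged into an effective 256-bit interface (assumed).
print(f"Eight packages: {peak_bw_gbs(9600, 256):.1f} GB/s")  # 307.2 GB/s
```

By the same formula, Thor-X's 273GB/s figure is consistent with a 256-bit LPDDR5X interface at 8533MT/s, though that configuration is an inference rather than a vendor-stated spec.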
As LPDDR5X enters the era of 9600Mbps and even 10Gbps, JEDEC has started developing the next-generation LPDDR6 standard, targeting 6G communications, L4 autonomous driving, and immersive AR/VR scenarios. LPDDR6 is expected to run at over 10.7Gbps, possibly up to 14.4Gbps, with improvements in both bandwidth and energy efficiency of around 50% over current LPDDR5X. However, mass production of LPDDR6 may not come until 2026. Qualcomm's next-generation flagship chip, the Snapdragon 8 Elite Gen 2 (codenamed SM8850), will support LPDDR6; automotive LPDDR6 may take even longer to arrive.
DRAM Selection Path (2): GDDR6 is already installed in vehicles but faces cost and power consumption issues. A GDDR7+LPDDR5X hybrid memory architecture may be viable.
Aside from LPDDR5X, another path is GDDR6 or GDDR7. Tesla's second-gen FSD chip already supports first-gen GDDR6: HW4.0 uses 32GB of GDDR6 (model MT61M512M32KPA-14) running at 1750MHz (for comparison, even the minimum LPDDR5 frequency is above 3200MHz). As first-generation GDDR6, its speed is relatively low; even with GDDR6, smoothly running 10-billion-parameter foundation models remains unfeasible, though it is currently the best available option.
Tesla’s third-gen FSD chip is likely under development and may be completed in late 2025, with support for at least GDDR6X.
The next-generation GDDR7 standard was officially released in March 2024, though Samsung had already unveiled the world's first GDDR7 in July 2023, and both SK Hynix and Micron have since introduced GDDR7 products. GDDR requires a dedicated physical layer (PHY) and controller, which chips must integrate in order to use it; companies like Rambus and Synopsys sell the relevant IP.

Future autonomous driving chips may adopt a hybrid memory architecture, for example using GDDR7 for high-load AI tasks and LPDDR5X for low-power general computing, balancing performance and cost.
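A hybrid design implies that the runtime or memory controller must place each buffer in the appropriate pool. Purely as an illustration (the pool names, the 100GB/s threshold, and the routing rule below are hypothetical, not from any vendor), a placement policy might look like:

```python
# Hypothetical placement policy for a GDDR7 + LPDDR5X hybrid memory
# system: bandwidth-hungry AI buffers go to GDDR7, everything else to
# low-power LPDDR5X. Pool names and threshold are illustrative only.
from enum import Enum

class Pool(Enum):
    GDDR7 = "gddr7"       # high bandwidth, higher power and cost
    LPDDR5X = "lpddr5x"   # lower power, general-purpose

def place(workload: str, bandwidth_gbs: float) -> Pool:
    """Route a buffer by workload type and sustained bandwidth need."""
    if workload == "ai_inference" or bandwidth_gbs > 100:
        return Pool.GDDR7
    return Pool.LPDDR5X

print(place("ai_inference", 400))   # Pool.GDDR7 (weights, KV cache)
print(place("infotainment", 8))     # Pool.LPDDR5X (general computing)
```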
DRAM Selection Path (3): HBM2E is already deployed in L4 robotaxis but remains far from production passenger cars. Memory chip vendors are working to migrate HBM technology from data centers to edge devices.
High bandwidth memory (HBM) is primarily used in servers. Stacking DRAM dies with TSV technology increases not only the cost of the memory itself but also adds the cost of TSMC's CoWoS process, whose capacity is currently tight and expensive. HBM is far more expensive than the LPDDR5X, LPDDR5, and LPDDR4X commonly used in production passenger cars, and is not economical.
SK Hynix's HBM2E is used exclusively in Waymo's L4 robotaxis, offering 8GB of capacity, a per-pin transfer rate of 3.2Gbps, and bandwidth of 410GB/s, setting an industry benchmark.
SK Hynix is currently the only vendor capable of supplying HBM that meets stringent AEC-Q automotive standards. It is actively collaborating with autonomous driving giants such as NVIDIA and Tesla to expand HBM applications from AI data centers to intelligent vehicles.
Both SK Hynix and Samsung are working to migrate HBM from data centers to edge devices such as smartphones and cars. Adoption of HBM in mobile devices will focus on edge AI performance and low-power design, driven by technological innovation and industry-chain synergy. Cost and yield, which hinge mainly on improving the HBM production process, remain the primary short-term challenges.
Key Differences: Traditional data center HBM is a "high bandwidth, high power consumption" solution designed for high-performance computing, while on-device HBM is a "moderate bandwidth, low power consumption" solution tailored for mobile devices.
Technology Path: Traditional data center HBM relies on TSV and interposers, whereas on-device HBM achieves performance breakthroughs through packaging innovations (e.g., vertical wire bonding) and low-power DRAM technology.
For example, Samsung's LPW DRAM (Low-Power Wide I/O DRAM) uses a similar approach: it stacks LPDDR DRAM dies and greatly widens the I/O interface to raise performance and cut power at the same time. Samsung's disclosed figures range from 128GB/s of bandwidth at 1.2pJ/bit up to over 200GB/s (166% higher than LPDDR5X) at 1.9pJ/bit (54% lower than LPDDR5X), with low latency throughout. It is expected to enter mass production during 2025-2026.
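Energy-per-bit figures translate directly into interface power at a given bandwidth (watts = bits per second times joules per bit). A quick check against the numbers above:

```python
# DRAM interface power = energy per bit (J) x bits transferred per second.
def interface_power_w(bandwidth_gbs: float, pj_per_bit: float) -> float:
    bits_per_s = bandwidth_gbs * 1e9 * 8
    return bits_per_s * pj_per_bit * 1e-12

# At 200GB/s, 1.9pJ/bit costs ~3W; 1.2pJ/bit would cut that to ~1.9W.
print(f"{interface_power_w(200, 1.9):.1f} W")  # 3.0 W
print(f"{interface_power_w(200, 1.2):.1f} W")  # 1.9 W
```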

UFS 3.1 has already been widely adopted in vehicles and will gradually iterate to UFS 4.0 and UFS 5.0, while PCIe SSD will become the preferred choice for L3/L4 high-level autonomous vehicles.
At present, high-level autonomous vehicles generally adopt UFS 3.1 storage. As vehicle sensors and computing power advance, higher-specification data transmission solutions become imperative, and UFS 4.0 products will become one of the mainstream options. UFS 3.1 offers a maximum speed of 2.9GB/s, far below PCIe SSDs. The next-generation UFS 4.0 will reach 4.2GB/s, providing higher speed while cutting power consumption by 30% compared with UFS 3.1. By 2027, UFS 5.0 is expected to arrive with speeds of around 10GB/s, still well below SSDs but with the advantages of controllable cost and a stable supply chain.
Given the strong demand for foundation models from both the cockpit and autonomous driving, and to ensure sufficient performance headroom, SSDs should be adopted in place of today's mainstream UFS (not fast enough) or eMMC (slower still). Automotive SSDs use the PCIe interface, which offers tremendous flexibility and headroom. The JEDEC JESD312 automotive SSD standard builds on PCIe 4.0, which supports multiple lane configurations: x4 is the lowest, while a 16-lane full-duplex link can reach 64GB/s. PCIe 5.0, released in 2019, doubles the signaling rate to 32GT/s, with x16 full-duplex bandwidth approaching 128GB/s.
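These PCIe figures follow from the per-lane signaling rate, the lane count, and the 128b/130b line-code efficiency used by PCIe 4.0/5.0. A minimal sketch reproducing them:

```python
# Usable PCIe bandwidth = rate (GT/s) x lanes x 128/130 line-code
# efficiency / 8 bits per byte, counted per direction.
def pcie_gbs(gt_per_s: float, lanes: int, duplex: bool = False) -> float:
    per_direction = gt_per_s * lanes * (128 / 130) / 8
    return per_direction * (2 if duplex else 1)

print(f"PCIe 4.0 x4:         {pcie_gbs(16, 4):.1f} GB/s")        # ~7.9
print(f"PCIe 4.0 x16 duplex: {pcie_gbs(16, 16, True):.0f} GB/s")  # ~63
print(f"PCIe 5.0 x16 duplex: {pcie_gbs(32, 16, True):.0f} GB/s")  # ~126
```

The ~126GB/s duplex result is why PCIe 5.0 x16 is described as "approaching 128GB/s"; by the same formula, a PCIe 5.0 x4 link offers about 15.8GB/s per direction, enough headroom for the 14.5GB/s SSD read speeds cited below.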
Currently, both Micron and Samsung offer automotive-grade SSDs. The Samsung AM9C1 series ranges from 128GB to 1TB, while the Micron 4150AT series comes in 220GB, 440GB, 900GB, and 1800GB capacities; the 220GB version suits a standalone cockpit or intelligent driving system, while cockpit-driving integration requires at least 440GB.
A multi-port BGA SSD can serve as a centralized storage and computing unit in vehicles, connecting via multiple ports to the SoCs for the cockpit, ADAS, gateway, and more, and efficiently processing and storing different types of data in designated areas. Its isolation ensures that non-core SoCs cannot access critical data without authorization, preventing interference with, misreading of, or corruption of core SoC data. This maximizes the isolation and independence of data transmission while reducing the per-SoC storage hardware cost.
For future L3/L4 high-level autonomous vehicles, PCIe 5.0 x4 + NVMe 2.0 will be the preferred choice for high-performance storage:
Ultra-high-speed transmission: read speeds up to 14.5GB/s and write speeds up to 13.6GB/s, more than three times those of UFS 4.0.
Low latency & high concurrency: supports higher queue depths (QD32+) for parallel processing of multiple data streams.
AI computing optimization: combined with vehicle SoCs, accelerates AI inference to meet the requirements of fully autonomous driving.
In autonomous driving applications, PCIe NVMe SSDs can cache AI computing data, reducing memory access pressure and improving real-time processing. For example, Tesla's FSD system uses a high-speed NVMe solution to store autonomous driving training data, enhancing perception and decision-making efficiency.
Synopsys has already launched the world's first automotive-grade PCIe 5.0 IP solution, comprising a PCIe controller, security module, physical-layer device (PHY), and verification IP, compliant with the ISO 26262 and ISO/SAE 21434 standards. This means PCIe 5.0 will soon be available for automotive applications.