Software is an essential part of modern automobiles. This year, the lines of code in the average car are expected to reach 650 million, an increase from 200 million in 2020. What’s more, we’re seeing a shift from distributed architectures for vehicle firmware toward zonal architectures with central high-performance computers (HPCs). All of this creates complexity and novel software challenges.
Embedded systems developers are trying to adapt to this complexity. At the same time, market pressures are forcing them to accelerate their development processes and ship innovation faster.
Artificial intelligence (AI) can help address these challenges, but its implementation raises important questions. To what degree should AI tools autonomously generate and review code in automotive embedded systems? How much human oversight is advisable? Drawing from the automotive industry's vocabulary, I propose that embedded development requires Level 2 AI assistance — at least right now.
Understanding Level 2 automation for AI in embedded development
In automotive driving automation, Level 2 systems represent partial automation: a carefully balanced human-machine collaboration. These systems can help control steering, acceleration, and braking in specific scenarios, but the driver must stay engaged, monitoring the environment and remaining ready to take control at any moment. The human remains legally responsible for the vehicle's operation and must supervise the automation continually. In contrast, Level 4-5 systems aim to operate with minimal or no human oversight in defined conditions.
This framework provides a useful analogy for AI in embedded development. Current AI tools excel at providing suggestions and automating routine tasks, much like Level 2 driver assistance. They can suggest code, help with testing, and identify potential issues. However, their contextual understanding has limitations. Given the high stakes of automotive embedded systems, combining AI's capabilities with human wisdom and oversight is best.
Why AI excels as a development assistant
AI shows remarkable capabilities across numerous areas of embedded development. Here are just a few examples from the growing list of applications:
First, AI can generate and complete code for common patterns in C/C++, reducing developers' time spent on routine programming tasks. And if prompted correctly, AI can respect embedded-specific constraints like memory limitations and hardware interfaces.
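To make that concrete, here is a minimal sketch of the kind of routine pattern an assistant can draft when the prompt spells out the constraints: a fixed-size ring buffer that uses only static allocation. The names and the 32-byte capacity are arbitrary choices for illustration, not output from any particular tool.

```c
#include <stdbool.h>
#include <stdint.h>

/* Fixed-size ring buffer with no dynamic allocation -- the capacity is
   set at compile time to suit a memory-constrained target. */
#define RING_CAPACITY 32U

typedef struct {
    uint8_t  data[RING_CAPACITY];
    uint16_t head;
    uint16_t tail;
    uint16_t count;
} ring_buffer_t;

/* Returns false when the buffer is full instead of silently overwriting. */
bool ring_push(ring_buffer_t *rb, uint8_t value)
{
    if (rb->count >= RING_CAPACITY) {
        return false;
    }
    rb->data[rb->head] = value;
    rb->head = (uint16_t)((rb->head + 1U) % RING_CAPACITY);
    rb->count++;
    return true;
}

/* Returns false when the buffer is empty. */
bool ring_pop(ring_buffer_t *rb, uint8_t *out)
{
    if (rb->count == 0U) {
        return false;
    }
    *out = rb->data[rb->tail];
    rb->tail = (uint16_t)((rb->tail + 1U) % RING_CAPACITY);
    rb->count--;
    return true;
}
```

Even for a routine pattern like this, the developer still decides whether the capacity, data types, and overflow behavior fit the target hardware.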
Second, AI can generate tests that you can run on cloud-based ARM CPUs or virtual hardware. This helps teams "shift left" in testing their firmware and catch issues earlier in development when they're less expensive to fix. It also helps identify edge cases you might have otherwise overlooked.
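As an illustration, here is the sort of test an assistant might draft for the ring buffer sketched above, assuming the open source Unity test framework and a hypothetical `ring_buffer.h` header. The same test can be cross-compiled and run on a cloud-based ARM runner or a virtual target.

```c
#include <stdint.h>
#include "unity.h"
#include "ring_buffer.h"  /* hypothetical header for the ring buffer above */

static ring_buffer_t rb;

void setUp(void)    { rb = (ring_buffer_t){0}; }
void tearDown(void) { }

/* Edge case: a full buffer must reject new data rather than wrap silently. */
void test_push_rejects_when_full(void)
{
    for (uint16_t i = 0U; i < RING_CAPACITY; i++) {
        TEST_ASSERT_TRUE(ring_push(&rb, (uint8_t)i));
    }
    TEST_ASSERT_FALSE(ring_push(&rb, 0xFFU));
}

/* Edge case: popping from an empty buffer must fail cleanly. */
void test_pop_fails_when_empty(void)
{
    uint8_t value = 0U;
    TEST_ASSERT_FALSE(ring_pop(&rb, &value));
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_push_rejects_when_full);
    RUN_TEST(test_pop_fails_when_empty);
    return UNITY_END();
}
```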
Third, AI can help accelerate the remediation of security vulnerabilities in your code. AI tools can help interpret findings from your security scanners and even suggest potential approaches to address issues, supporting development teams as they work to meet cybersecurity requirements in this highly regulated space.
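For example, a scanner might flag an unbounded `strcpy()` into a fixed-size buffer, and an assistant can propose a bounded replacement for a human to validate. The function below is invented for illustration; whether silent truncation is acceptable is exactly the kind of judgment the reviewer still owns.

```c
#include <stdio.h>

#define NAME_LEN 16U

/* Typical scanner finding (CWE-120): an unbounded copy into a fixed buffer.
 *
 *     void set_device_name(char *dest, const char *src) {
 *         strcpy(dest, src);   // overflows dest if src is too long
 *     }
 */

/* Suggested remediation: bound the copy and guarantee null termination.
   A human reviewer still has to confirm that truncation is acceptable
   for this field in the wider system. */
void set_device_name(char dest[NAME_LEN], const char *src)
{
    (void)snprintf(dest, NAME_LEN, "%s", src);
}
```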
Beyond these examples, AI is increasingly valuable for root cause analysis of complex issues, comprehensive code reviews, automated code refactoring for optimization, explaining complex legacy code, and providing conversational assistance through AI chat capabilities. As AI evolves, so will the ways in which it assists embedded development teams.
The essential human element
Though these AI capabilities are quite powerful, they cannot — and should not — replace human expertise. Embedded developers bring knowledge that spans both software and hardware, understanding not just how to code, but how that code interacts with physical components under varying conditions.
Moreover, embedded developers understand the intricate relationships between different vehicle subsystems. Far from replacing such expertise, AI must work alongside this human contextual knowledge.
Humans also bring creativity and innovation to solving unique automotive challenges. When faced with conflicting requirements or novel problems, human engineers draw on experience and intuition that AI simply doesn't possess.
The human-centered approach is critical in automotive development, where safety and reliability cannot be compromised. Just as a driver must remain alert and ready to take control of a Level 2 automated vehicle, developers must maintain ultimate responsibility for AI-generated code. While valuable, AI suggestions require expert validation. Developers must review and verify that proposed solutions solve the problem correctly within the specific automotive context.
This human oversight becomes even more critical when considering the consequences of errors. In enterprise software, a bug might cause inconvenience; in automotive systems, it could potentially impact passenger safety. Developers bring ethical judgment and a holistic understanding of the operating environment that AI currently lacks. They can anticipate edge cases based on real-world driving conditions and evaluate AI recommendations against their practical experience with actual vehicle systems.
Creating an effective human-AI partnership
Below are some initial approaches to consider as you begin building productive partnerships between developers and AI.
Start by identifying specific high-volume, low-risk tasks where AI can provide immediate value: unit test generation for non-safety-critical components, documentation updates, and routine code standardization are excellent entry points.
Implement a tiered approach to AI integration based on system criticality. For infotainment or connectivity systems, teams might leverage more autonomous AI assistance. For safety-related systems, establish mandatory human review checkpoints with structured approval workflows. Create clear guidelines on which code components require senior engineer review versus those where junior developers can approve AI suggestions with minimal oversight.
Review processes also need adaptation. Rather than having humans review AI-generated code in isolation, teams should implement collaborative workflows where AI assists with the review itself, highlighting potential issues for human evaluation. Consider adopting structured prompting techniques. For example, have developers specify constraints like memory requirements, coding standards, or performance parameters before generating AI suggestions.
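One lightweight way to make such constraints checkable is to encode them directly in the code, so an AI suggestion has to pass the same gate as a human-written change. The sketch below uses C11's `_Static_assert`; the module, names, and RAM budget are invented for illustration.

```c
#include <stdint.h>

/* Constraints stated up front -- the kind of detail worth including in a
   prompt -- and then enforced so any suggested change is checked at
   compile time. All names and budget figures here are invented. */
#define MSG_QUEUE_DEPTH   8U
#define MSG_PAYLOAD_BYTES 64U
#define RAM_BUDGET_BYTES  1024U   /* hypothetical RAM budget for this module */

typedef struct {
    uint8_t  payload[MSG_PAYLOAD_BYTES];
    uint16_t length;
} msg_t;

static msg_t msg_queue[MSG_QUEUE_DEPTH];

/* The build fails if any change pushes the queue past the stated budget. */
_Static_assert(sizeof(msg_queue) <= RAM_BUDGET_BYTES,
               "message queue exceeds the module's RAM budget");

/* Simple accessor so the statically allocated queue is actually used. */
msg_t *msg_slot(uint8_t index)
{
    return &msg_queue[index % MSG_QUEUE_DEPTH];
}
```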
These examples represent starting points for effective human-AI collaboration in embedded development.
Looking to the future
The human-AI partnership will evolve across different automotive domains as AI capabilities advance. Teams should prepare by focusing on higher-value skills that complement AI capabilities, such as systems architecture, integration expertise, and hardware-software design.
The teams that succeed will find the right balance, leveraging AI to handle routine tasks while keeping humans at the center of the development process. This is the path to realizing AI's productivity promise.
I'll be discussing these topics and more with Dr. Felix Kortmann of Ignite by FORVIA HELLA in a webinar on June 11, “Building the Future of Automotive Software.” Join us to learn how to effectively balance AI assistance with human expertise in your embedded development teams. Register here.
Next steps
Transform automotive DevOps: Secure, fast, future-ready
Discover how embedded DevOps practices are reshaping automotive software development, enabling faster delivery cycles with integrated security.
Download the guide
Key takeaways
- AI in automotive embedded software development works best as a Level 2 assistant, meaning human expertise and oversight remain essential.
- The right human-AI balance varies across automotive software domains; teams that strike that balance between AI assistance and human expertise will gain a competitive advantage.
- Creating effective human-AI partnerships requires intentional processes such as mandatory human review checkpoints for safety-critical systems.