Will AI replace embedded software developers?

In regulated, hardware-specific domains like embedded and system-level software, generative AI remains an assistant, not a replacement. Confidential hardware specs, scarce training data, and safety-critical requirements mean that AI tools (private models, copilots, automation agents) speed up documentation, testing, reviews, and even some driver development, but hardware simulation, firmware, and real-time operating systems still require human expertise.

The AI replacement debate for embedded software development

Those of us deeply involved in the software development industry often hear the question of whether and when software developers will be replaced by AI. Beyond being discussed at casual IT meetups and in random conversations with non-IT friends, this question consistently comes up at trade shows, in customer meetings, and during hiring. And if you start googling (or should I switch to saying AI-ing?), the common wisdom from the industry’s gurus is that it may happen in the distant future, but not any time soon. PerformaCode is somewhat unusual in the software services industry because a big part of what we do is so-called system-level software (embedded software optimized for specific chips and devices), so our take on being replaced by AI is somewhat unusual too. I thought I’d share some thoughts from our corner of the industry.

Barriers to AI automation in system-level & embedded development

I will argue that replacing human developers with AI models is even more distant in our field than in business applications or DevOps. The reason is simple: there is far less material to train generative models on. In regulated industries and with custom devices, a device-specific open-source code base is rarely available. We work with numerous device and chip manufacturers, and they consider specifications for their future hardware highly confidential. Some of our larger customers have even deployed “private AI” models. While those are naturally less powerful because they have far less training data, they cannot leak confidential information to the open internet. In a sense, this reminds me of the early days of cloud technology: soon after the cloud became widespread, corporate private clouds followed quickly, driven by data protection and confidentiality.

It is important to point out that security and quality requirements are especially stringent in the world of embedded systems. In our case, much of the code is developed for medical devices, avionics, and other safety-critical industries. The cost of an error can be as high as a human life, so the importance of delivering bug-free products cannot be overstated.

Practical AI assistance: tools, experiments & future outlook

The concept of private AI, while still being tested, seems very promising. It is already heavily used both by customers’ internal development teams and by our engineers working on customers’ projects. Where things stand right now, such models are not far from being able to create a driver for a well-documented piece of hardware. Other types of embedded software our developers specialize in, such as hardware simulation, firmware, and real-time operating systems (RTOS), are too hardware-specific, and it is nearly impossible to train models for them. It is possible to get embedded code through multiple iterations of vibe coding, but the outcome will almost certainly be full of hidden bugs and security and architectural flaws. One of the most self-explanatory examples of why is curl founder Daniel Stenberg’s recent LinkedIn post stating that the project has stopped accepting AI-generated security vulnerability reports.
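To make the “well-documented hardware” point concrete, below is roughly the level of driver a model can plausibly draft today when the register map is public. To be clear, the device here is hypothetical: the base address, register offsets, and flag bits are invented for illustration, and on real silicon every one of them would have to be verified against the vendor datasheet.

```c
/* Minimal polled-transmit UART driver for a hypothetical memory-mapped
 * peripheral. Every address, offset, and bit mask below is invented for
 * illustration; a real driver must match the vendor datasheet exactly. */
#include <stdint.h>

#define UART0_BASE   0x4000C000u                                  /* hypothetical base address */
#define UART_DR      (*(volatile uint32_t *)(UART0_BASE + 0x00))  /* data register */
#define UART_FR      (*(volatile uint32_t *)(UART0_BASE + 0x18))  /* flag register */
#define UART_FR_TXFF (1u << 5)                                    /* TX FIFO full flag */

/* Busy-wait until the TX FIFO has room, then enqueue one byte. */
static void uart_putc(uint8_t c)
{
    while (UART_FR & UART_FR_TXFF) {
        /* spin: FIFO is full */
    }
    UART_DR = c;
}

/* Transmit a NUL-terminated string. */
void uart_puts(const char *s)
{
    while (*s != '\0') {
        uart_putc((uint8_t)*s++);
    }
}
```

Even at this size, the subtle parts (volatile register access, FIFO semantics, behavior under interrupt load) are exactly where vibe-coded output tends to go wrong.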

Generative AI models can and should be valuable assistants to even the most conservative embedded developers. At larger, more innovative device manufacturers, we see existing codebases and enormous amounts of technical documentation for future hardware being fed into internal AI models on a daily basis. If implemented correctly, this makes retrieving documentation or code samples a breeze. GitHub Copilot integrated into the IDE won’t write hardware-specific code for you, but it is already a valuable assistant for the less hardware-specific parts of the codebase. It can certainly provide comprehensive recommendations during code reviews or refactoring, especially in more tedious areas like typos and compliance with coding style policies. AI agents running in Jenkins can create and run tests, analyze results, and generate Jira tasks and documentation. The point is that these tools won’t write device-specific code, but they do take tons of time-consuming peripheral work off seasoned developers’ shoulders so that they can focus on what they are best at.
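As a sketch of those “less hardware-specific parts” where such assistants already pull their weight, consider the generic data-structure-plus-test boilerplate below. The ring buffer API is an illustrative example, not code from any customer project; it touches no registers and no timing, which is precisely why a copilot can draft it, and its test, reliably.

```c
/* A generic single-producer ring buffer and a smoke test: the kind of
 * hardware-agnostic boilerplate AI assistants already draft well.
 * The API is illustrative, not taken from any real project. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RB_SIZE 8  /* power of two, so index wrap is a cheap mask */

typedef struct {
    uint8_t buf[RB_SIZE];
    size_t  head;  /* next write position */
    size_t  tail;  /* next read position */
} ring_buffer;

/* Enqueue one byte; returns false if the buffer is full. */
static bool rb_push(ring_buffer *rb, uint8_t v)
{
    size_t next = (rb->head + 1) & (RB_SIZE - 1);
    if (next == rb->tail)
        return false;              /* full */
    rb->buf[rb->head] = v;
    rb->head = next;
    return true;
}

/* Dequeue one byte; returns false if the buffer is empty. */
static bool rb_pop(ring_buffer *rb, uint8_t *out)
{
    if (rb->head == rb->tail)
        return false;              /* empty */
    *out = rb->buf[rb->tail];
    rb->tail = (rb->tail + 1) & (RB_SIZE - 1);
    return true;
}

int main(void)
{
    ring_buffer rb = {0};
    uint8_t v;
    assert(!rb_pop(&rb, &v));           /* empty on start */
    for (uint8_t i = 0; i < RB_SIZE - 1; i++)
        assert(rb_push(&rb, i));        /* fills to capacity - 1 */
    assert(!rb_push(&rb, 0xFF));        /* then reports full */
    assert(rb_pop(&rb, &v) && v == 0);  /* FIFO order preserved */
    return 0;
}
```

Tedious, necessary, and entirely hardware-agnostic: exactly the peripheral work worth offloading.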

That said, there are tangible solutions on the market for bringing AI models into embedded development workflows. Just over the last few months, our developers experimented, with varying degrees of success, with Arm’s CMSIS-NN neural network kernels on Arm Cortex targets, NVIDIA’s deep learning SDKs in another project, and the Qt AI Assistant in a medical device project. Tools of this kind, while unable to produce a final codebase, do provide valuable assistance and a productivity boost to our development teams.

AI models have been dragging every industry through a tectonic shift in how business is conducted, and the software industry is certainly not immune. Jointly with our customers, we are experimenting with how to use AI in embedded software and becoming more efficient at what we do every day. As routine and junior-level tasks become fully automated and less time-consuming, our senior experts are able to focus on higher-value work, with AI as yet another important tool in their already vast system-level toolbox.

By: Yuri Kirkel, CEO
Contact Yuri on LinkedIn
