A survey of 785 development and security professionals working on embedded systems, published this week, finds that 89% of organizations are already using artificial intelligence (AI) coding assistants, although 39% of respondents report that only certain developers are allowed to use them.

Conducted by Censuswide on behalf of Black Duck Software, the survey also finds that 96% of respondents are integrating open source AI models into their products.

Unfortunately, rapid adoption appears to be outpacing the development of necessary governance and security measures, with 21% of respondents lacking confidence in their ability to prevent AI from introducing security vulnerabilities.

A total of 18% also admit that they’re not confident they can manage the open source license risks that come with AI-generated code.

Corey Hamilton, senior solutions manager for Black Duck Software, said the survey makes it clear that while adoption of AI coding tools is high, so is the attendant risk, especially when an embedded system is used in an application that could impact human safety. The level of potential risk is simply being elevated, he noted, and too many business leaders, in the name of increased productivity, have donned AI security blinders.

Many of the applications being developed using AI coding tools may turn out to be ticking time bombs that, at some unknown time in the future, might wreak havoc.

Overall, the survey finds that Python is now the most widely used programming language by builders of embedded systems (27%), followed by C++ (26%), Java (22%) and JavaScript (21%).

Usage of software composition analysis (SCA) tools is becoming more widespread, with scans occurring with every build (39%), on every pull request (39%), and within the integrated developer environment (35%).

A full 71% said their organizations can now produce a software bill of materials (SBOM), driven primarily by customer and partner requirements (40%). Despite the well-known challenges of creating an accurate SBOM, 80% of respondents are confident their organization can produce a complete and accurate one when asked.
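For context, an SBOM is a machine-readable inventory of the components inside a piece of software. A minimal sketch in the CycloneDX JSON format is shown below; the component name, version, and license are purely illustrative, and real SBOMs typically list dozens or hundreds of components with richer metadata.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:generic/openssl@3.0.13",
      "licenses": [
        { "license": { "id": "Apache-2.0" } }
      ]
    }
  ]
}
```

The `purl` (package URL) and license identifier fields are what downstream consumers rely on to match components against vulnerability databases and license obligations, which is why accuracy matters so much.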

More than half of all companies are actively scanning for license obligations in their main components (51%) as well as any code snippets that developers copy and paste (54%).

Not surprisingly, there’s also a disconnect between management and engineers regarding project success. While 86% of CTOs and directors consider their projects successful, only 56% of developers concurred.

Historically, embedded systems have been a favorite target for cybercriminals, mainly because legacy platforms were attached to the Internet without being updated to strengthen cybersecurity resilience. Many of those systems, in theory, should be replaced by platforms that have been developed using best DevSecOps practices. The issue is that it’s not clear to what degree AI coding tools might, despite known flaws, still create more secure code than many human developers, who often have limited cybersecurity expertise.

In the meantime, the number of embedded systems being deployed continues to expand, which means the attack surface that needs to be defended continues to increase. As a result, it’s only a matter of time before cybersecurity teams are overwhelmed by the sheer volume of vulnerabilities in embedded systems that could have been easily prevented.
