Quest for Code Quality
The complexity of today’s embedded software keeps pushing the goalposts further out in terms of ensuring good quality code. To keep pace, vendors of code analysis tools are innovating with highly integrated and effective solutions.
It is now a given for embedded devices to have millions of lines of software code. As these systems get more complex, the challenge of producing error-free code isn’t getting any easier. To meet the challenge, development tool vendors continue to add new features and capabilities to their code analysis products.
Although they are all addressing a similar need, the major embedded tool vendors each have their own twists and nuances when it comes to providing code analysis features. Some weave them tightly into their Integrated Development Environment (IDE), while others take a more modular approach. Meanwhile, issues due to programming languages, standards compliance and even IoT are all part of the landscape of today’s code analysis tool features.
Runtime Code Analysis
IAR Systems includes a variety of code analysis tools as part of its IAR Embedded Workbench IDE. Among these is its C-RUN runtime analysis tool. The tool is completely integrated with the IDE and provides detailed runtime error information. C-RUN is available as an add-on to IAR Embedded Workbench for Arm and for Renesas RX, and it supports all Arm cores supported by IAR Embedded Workbench (Figure 1).
Runtime analysis tools work by inserting test code into an application to enable the tool to find real and potential errors in the code while executing the program in a software debugger. The types of errors found with this method include out-of-bounds errors, arithmetic errors and memory inconsistency errors.
By using runtime analysis, embedded system developers can find potential and real errors at an early stage, rather than later in development when fixes are more expensive and time consuming. The result is lower cost, shorter development time and a speedier time to market. C-RUN supports both C and C++.
IAR Embedded Workbench also includes a tool called C-SPY Debugger. C-SPY provides an instruction simulator as well as extensive support for debugging probes and target systems. It includes RTOS plugins and wide support for communication stacks and middleware. A C-like macro system and integrated code quality control further extend its capabilities. Developers can use C-RUN in the C-SPY simulator as well as on their actual target hardware. IAR Systems provides a size-limited version of C-RUN that is activated for evaluation when you download IAR Embedded Workbench for Arm V7.20 (and later versions) or IAR Embedded Workbench for RX V3.10.
Ada Static Analysis
With its main focus on Ada language tools, AdaCore’s static analysis tool suite is called CodePeer. CodePeer is an Ada source code analyzer that detects run-time and logic errors. It assesses potential bugs before program execution, serving as an automated peer reviewer, helping to find errors easily at any stage of the development life-cycle (Figure 2).
CodePeer is a stand-alone tool that runs on Windows and Linux platforms. It may be used with any standard Ada compiler or fully integrated into the GNAT Pro development environment. It can detect several of the “Top 25 Most Dangerous Software Errors” in the Common Weakness Enumeration. CodePeer supports all versions of Ada (83, 95, 2005, 2012). CodePeer has been qualified as a Verification Tool under the DO-178B and EN 50128 software standards.
In February, AdaCore released Version 19.1 of its flagship products, including CodePeer as well as its GNAT Pro, SPARK Pro and QGen products. The enhancements in CodePeer 19.1 are focused on usability improvements. These include a new entry-level analysis mode (“level 0”) with fast analysis and minimal false positives. A simple “getting started quickly” mode is provided for new users. Other new features include a security report output, integration of AdaCore’s GNATcheck tool and a major documentation update—including examples of typical workflows.
Integrated with Compiler
As one of the long-time veterans in the embedded software industry, Green Hills Software provides its MULTI IDE that includes a rich set of debugging and analysis tools. Among these is its DoubleCheck integrated static analysis tool. Green Hills emphasizes the importance of this tool as being an integrated tool. In other words, DoubleCheck is built into the Green Hills C/C++ compiler—unlike other source code analyzers that run as separate tools.
A typical compiler issues warnings and errors for some basic potential code problems, such as violations of the language standard or use of implementation-defined constructs. In contrast, DoubleCheck performs a full program analysis, finding bugs caused by complex interactions between pieces of code that may not even be in the same source file. DoubleCheck determines potential execution paths through code, including paths into and across subroutine calls, and how the values of program objects—such as standalone variables or fields within aggregates—could change across these paths (Figure 3).
Examples of the types of flaws DoubleCheck looks for are potential NULL pointer dereferences, buffer overflow, potential writes to read-only memory, resource leaks and others. The analyzer understands the behavior of many standard runtime library functions. For example, it knows that subroutines like free should be passed pointers to memory allocated by subroutines like malloc. The analyzer uses this information to detect errors in code that calls or uses the result of a call to these functions.
Software development organizations often employ an internal coding standard which governs programming practices to help ensure quality, maintainability and reliability. DoubleCheck can automate the enforcement of these coding standards. For example, DoubleCheck has a Green Hills Mode that adds a range of sensible quality controls to its bug-finding mission, including several MISRA compliance checks, enforcement of optional but important language standards and more.
Metric computations and enforcement of other coding rules do not incur significant overhead since DoubleCheck is already traversing the code tree to find bugs. DoubleCheck can be configured to generate a build error that highlights problem code to keep developers from accidentally submitting software that violates the coding rules. Using DoubleCheck as an automated software quality control saves the time and frustration typically associated with peer reviews.
For its part, Segger Microcontroller also provides static analysis as part of its IDE, Embedded Studio (Figure 4). Embedded Studio is a complete development environment for any Arm-based processor, from legacy Arm7, Arm9 and Arm11 devices to Cortex-A, R and M. It comes with a system library that is optimized for embedded systems, along with GCC and LLVM/Clang compilers.
Embedded Studio offers various features and windows that provide you with enough information to analyze your application even before debugging. The Memory Usage Window goes into detail to show you where the sections—code and data—are placed. The Code Outline Window presents a clearly structured outline of your source, which eases navigation through your code. The Source Navigator feature provides fast access to all your functions, typedefs and variables with a single click. The Symbol Browser provides more insight into the compiled application. You can see how much memory is used by each symbol and where it will end up in your target. The Stack Usage Window does a static stack analysis of your application and shows the stack use of functions and call paths.
The Code Analyzer in Embedded Studio goes beyond the typical compiler warnings of an IDE. A compiler will usually generate warnings for anything that might break your application, such as uninitialized variables. To find further issues which have no immediate effect but might affect performance—and to increase your code quality—you can run the Code Analyzer analysis on your sources. All findings will be shown in the log to easily navigate to them.
Going Deeper for IoT
Unlike many of the other vendors covered in this article, GrammaTech is not an IDE vendor. Instead, it specializes in code analysis with an emphasis on deep code analysis. In February, the company announced the latest release of its CodeSonar tool, version 5.1, with a focus on the Internet of Things (IoT). The new version of CodeSonar is designed to give IoT developers support for their multitude of languages and help them deliver safer and more secure software products faster.
With CodeSonar, developers can use a single user interface to find, assess and correct security vulnerabilities in different programs using multiple programming languages. CodeSonar 5.1 is tightly integrated with the Julia engine from Juliasoft, which provides high recall, high precision detection of security vulnerabilities in Java and C#. For developers of IoT systems, this is critical because IoT devices and enterprise services are built using many different programming languages. While C# or Java are typically the languages used on the user-interface or enterprise side, the embedded device itself is built using C/C++, with Python in the mix for scripting.
CodeSonar’s Qualification Kit is available as an add-on for software developers that have requirements to support functional safety standards such as IEC 61508, DO-178B/C or ISO 26262. The Qualification Kit enables developers to qualify CodeSonar in their environment as a preparatory step in the safety certification process. CodeSonar now supports the import and export of results in SARIF (Static Analysis Results Interchange Format).
A new API Anomaly detection module is now included in CodeSonar, which uses statistical machine learning to distill checkers from open source bodies of code. This module reports reliability and security problems caused by incorrect use of third-party APIs such as the GNU C Library, OpenSSL, Qt, Glib, GTK, libXML and others. This module has already been used to report problems in the Git version control system, the elinks browser, the Query Object Framework, Gnome and other projects.
Avoiding Language Pitfalls
Some programming languages, particularly C and C++, include features that are prone to causing problems. Figure 5 shows output from LDRA’s static analysis tools, as it relates to adherence to a MISRA language subset. MISRA—like other coding standards—is designed to ensure that developers avoid using those problem features. In addition to showing compliance with coding standards, LDRA static analysis tools can also help developers in many other ways such as by ensuring that their code is clear, easy to maintain and test and not excessively complex.
While static analysis involves an automatic “inspection” of the source code, dynamic analysis involves its compilation and execution either as a whole, or in part. LDRA’s unit, integration and system dynamic analysis tools are used to ensure that the code works in accordance with project requirements, and has been exercised adequately. LDRA requirements traceability tools show that the code fulfils the requirements of both the project and any applicable functional safety standards, and that there is no spurious code.
Like some of the other solutions mentioned earlier, LDRA’s static analysis, dynamic analysis and requirements traceability tools leverage the benefits of being combined into an integrated tool suite. Some key features offered by the tool suite such as data coupling analysis and control coupling analysis draw upon this integration by leveraging static and dynamic analysis in tandem.
Data coupling analysis can identify issues such as mismatches in the sequences of variable values being set and used, and control coupling analysis can identify problems including ambiguities in the intended control flow of the code. These checks are obligatory for some DO-178C compliant (aerospace) applications, and although they might not be obligatory elsewhere, that doesn’t make the anomalies any less of a threat in other safety- or security-critical systems. Control and data coupling analyses are particularly significant in the context of tainted data, for example, because they point to situations where that data could be inaccurate, and where there is very real potential for bad actors to abuse the situation.
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • MAY 2019 #346