Spotlight on Lessons Learned: Static Software Analysis of the NASA Autonomous Flight Termination Software

A NASA Engineering and Safety Center assessment of the Autonomous Flight Termination System resulted in coding standard and software static analysis recommendations for software development teams.

The Autonomous Flight Termination System (AFTS) is an independent, self-contained subsystem intended for use by government and commercial launch vehicles. The AFTS flight software was developed by a consortium including NASA, the Defense Advanced Research Projects Agency, the United States Air Force, the United States Army Space and Missile Defense Command, and commercial companies.

The AFTS autonomously makes flight termination/destruct decisions using configurable, software-based rules that are implemented on redundant flight processors using data from redundant Global Positioning System/Inertial Measurement Unit (GPS/IMU) navigation sensors. The four major AFTS components are launch and range specific abort rules, Core Autonomous Safety Software (CASS), mission-specific software interfaces (also known as “wrapper”), and hardware architecture and interfaces.

Static analysis is the analysis of computer software performed without executing the program, in contrast to dynamic program analysis, which executes the code. Because static analysis is performed on the source code, it requires no target hardware. Static code checkers use heuristics to inspect the code for issues that have historically caused failures, including coding standard violations, suspect variable assignments, divide-by-zero possibilities, questionable syntax, consistency issues, excessive complexity, unchecked input values, code maintenance concerns, and security flaws.
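For illustration, here is a minimal, hypothetical C++ fragment (the names and logic are invented for this article, not drawn from the CASS code) showing the kinds of findings a static checker typically reports:

```cpp
#include <vector>

// Hypothetical routines used only to illustrate common static analysis findings.
double scale_total(const std::vector<double>& samples, int divisor) {
    double total;                 // finding: 'total' is read before it is initialized
    for (double s : samples) {
        total += s;
    }
    return total / divisor;       // finding: possible divide-by-zero when divisor == 0
}

int read_channel(const std::vector<int>& channels, int index) {
    return channels[index];       // finding: 'index' is an unchecked input value
}
```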

The Commercial Crew Program Chief Engineer requested that the NASA Engineering and Safety Center (NESC) identify deficiencies within certain AFTS CASS software builds. Recommendations were made to the CASS Operational Release 1 developers to further evaluate issues identified by the static analysis report.

Lesson Number: 24503
Lesson Date: August 23, 2018
Submitting Organization: NASA Engineering and Safety Center

 

HIGHLIGHTS

LESSONS LEARNED

  • The definition of the software coding standards was non-specific and the “acceptable level” of mitigating deficiencies in C++ for use in safety-critical systems was never defined.
  • No procedures were documented in the software development plan regarding the configuration of the analysis tools or the process of assessing the reported issues.

RECOMMENDATIONS

  • Software development teams should specify coding standards to be applied to the software development for both ground utilities and flight software.
  • Software development teams should configure the static analysis tool to best match the specified coding standard.

Consult the lesson learned for complete lists.

 

Tim Crumbley
Credit: NASA

NASA Technical Fellow for Software Assurance Tim Crumbley on the importance of this lesson learned:

The static analysis requirement for NASA software projects increases the quality and safety of code developed for NASA missions. Using static analysis helps ensure that code meets the coding standards/criteria established by the project team and that common coding errors are eliminated before system integration and testing. Studies show that the cost of catching an error increases by roughly a factor of 10 from one phase of development to the next. Eliminating errors during implementation results in cost and time savings during integration and testing, which is a particularly important cost-saving factor for projects using high-fidelity testbeds.

Static analysis tools provide a means for analyzing code without having to run it, helping ensure higher quality software throughout the development process. Static code analysis tools identify patterns in the code and detect possible security threats and quality issues. This helps reveal problems in the early stages of development, when they can be corrected most cheaply, allowing developers to build a strong code base.

Modern static code analysis tools can identify a variety of issues and problems, including but not limited to dead code, non-compliances with coding standards, security vulnerabilities, race conditions, memory leaks, and redundant code. Typically, static analysis tools are used to help verify adherence with coding methods, standards, and/or criteria. While false positives are an acknowledged shortcoming of static analysis tools, users can calibrate, tune, and filter results to make effective use of these tools. Software peer reviews/inspections of code items can include reviewing the results from static code analysis tools.
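As a sketch of those defect classes, the following invented C++ fragment annotates the kind of finding a typical tool would report on each line:

```cpp
#include <cstdio>

// Invented example: comments name the category of finding a typical tool reports.
void log_status(int code) {
    int* buffer = new int[64];
    if (code < 0) {
        std::printf("error %d\n", code);
        return;                    // memory leak: 'buffer' is never freed on this path
    }
    if (code < 0) {                // dead code: this condition can no longer be true here
        std::printf("unreachable\n");
    }
    int status = code;
    status = code;                 // redundant assignment
    std::printf("status %d\n", status);
    delete[] buffer;
}
```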

Static analysis tools may not be readily available for some platforms or computing languages. If static analysis tools are determined to be unavailable, the project can document the alternate manual methods and procedures to be used to verify and validate the software code. These manual methods and procedures will be addressed or referenced in the project’s compliance matrix against this requirement.

For critical code, it is essential to use sound and complete static analyzers. A sound analyzer guarantees that all errors are flagged, so it produces no false negatives (i.e., no erroneous operation is classified as safe), while a complete analyzer produces no false positives. Such commercial analyzers are expensive but necessary for critical applications. Note that sound and complete static analyzers are now available free of charge for C and C++ software systems (see the Tools Table provided under the Resources tab).
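The distinction can be seen in a small, invented C++ example; the analyzer behavior noted in the comments describes the sound/complete properties in general, not any particular product:

```cpp
// Invented example contrasting false negatives and false positives.
int lookup(const int (&table)[8], int raw_index) {
    int index = raw_index % 10;    // can evaluate to 8 or 9, past the end of 'table'
    return table[index];           // a sound analyzer must flag this possible overrun;
                                   // a tool that misses it has produced a false negative
}

int safe_lookup(const int (&table)[8], int raw_index) {
    int index = raw_index % 8;     // in the range -7..7
    if (index < 0) {
        index += 8;                // now provably in 0..7
    }
    return table[index];           // flagging this access would be a false positive,
                                   // which a complete analyzer does not produce
}
```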

The cost of sound and complete commercial tools can be prohibitive for projects working on non-critical software systems. However, other static analyzers are available free of charge and can identify many common errors or deviations from standards or good coding practices. Using some type of static analysis is a best practice in all software development efforts and is required for mission-critical software.

Introducing the routine use of static analyzers in an existing development process can be a tricky proposition. Even if the software team is convinced of the benefits such tools can bring, projects need to be careful and make the introduction of static analysis as unobtrusive as possible. Checking coding standards/criteria through the use of a static analysis tool complements manual code reviews, and results in safer software systems. Coding standards/criteria ensure code consistency across a project and facilitate not only reviews but also software system integration. Software components that do not comply with coding standards can result in incorrect assumptions about the component, which can result in errors at later development stages. Debugging can also be hampered by deviations from coding standards/criteria since they can result in misleading assumptions. Enforcement of coding standards yields benefits that can continue into the later software life cycle phases, reducing software maintenance complexity and fostering code reuse.
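A brief, hypothetical illustration of the kind of rule a project coding standard might impose, and that a static analysis tool could enforce (the rule and names are examples invented for this article, not the AFTS standard):

```cpp
#include <cstdint>

// Example rule: use fixed-width integer types and named constants in rule evaluation.

// Noncompliant: 'int' width is platform-dependent and 3000 is a magic number,
// which invites incorrect assumptions when the component is integrated elsewhere.
bool altitude_violation_noncompliant(int altitude_m) {
    return altitude_m > 3000;
}

// Compliant: the explicit width and named limit keep the intent visible and checkable.
constexpr std::int32_t kMaxAltitudeMeters = 3000;

bool altitude_violation(std::int32_t altitude_m) {
    return altitude_m > kMaxAltitudeMeters;
}
```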

Static analysis applies to a wide range of software systems, with exceptions when tools are not available for a particular platform or programming language. Note that there are many available tools for widely used languages.

The NASA IV&V Program maintains access to an extensive set of static code analyzers to support the evaluation of a diverse set of coding languages, including C, C++, Ada, Java, and Fortran. IV&V has established a robust infrastructure to support the independent—meaning outside of the developers’ software development environment—execution of these tools as well as a mature capability for the prioritization and evaluation of the analyzer results. The IV&V Program regularly utilizes these analyzers in the evaluation of the agency’s safety and mission-critical software and can provide fee-for-service support for projects not currently receiving IV&V from the NASA IV&V Program.

Small projects can benefit from inserting static analysis in their development process. As mentioned, there are many static analyzers available free of charge. So, using these does not impose an additional cost on a small project with constrained resources. However, it might have an impact on development time, since it takes time to get used to a new tool and free tools tend to generate a lot of false positives.

The time spent getting used to a tool is negligible. Most tools use the same interfaces as compilers and are not difficult to use, although there can be differences due to the language parser a tool uses. To ensure smooth integration of the tool into the development process, pick a tool that relies on parsers from common compilers (e.g., the GNU Compiler Collection). This helps ensure that the tool can be easily integrated into existing makefiles or other build mechanisms.

The generation of many false positives can be a problem, since it can overwhelm users. In some sense, it is similar to the problem of choosing an adequate warning level for a compiler: generating too many warnings causes them to be ignored. Choose a static analyzer that can filter results or be adjusted to generate fewer warnings. In general, it is a good habit to start using a static analyzer at the level that generates the fewest warnings and to increase the level gradually until too many warnings are generated.
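Many tools also accept inline suppressions so that findings a reviewer has already judged to be false positives do not keep reappearing. The sketch below shows the comment syntax used by cppcheck (// cppcheck-suppress) and clang-tidy (// NOLINT) on invented code:

```cpp
#include <cstring>

// Invented example of inline suppression syntax; it assumes a reviewer has
// confirmed that the suppressed warnings are false positives in this context.
void copy_id(char (&dest)[16], const char* src) {
    // cppcheck-suppress nullPointer
    std::strncpy(dest, src, sizeof(dest) - 1);  // NOLINT
    dest[sizeof(dest) - 1] = '\0';
}
```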

Read the full lesson learned.

 

Spotlight on Lessons Learned is a monthly series of articles featuring a valuable lesson along with perspective from a NASA technical expert on why the lesson is important. The full lessons are publicly available in NASA’s Lessons Learned Information System (LLIS).

If you have a favorite NASA lesson learned that belongs in the spotlight, please contact us and be sure to include the LLIS Lesson Number.
