Towards DO-178C compatible tool design

Date
2015-01-01
Authors
Xu, Yijia
Major Professor
Robyn R. Lutz
Department
Computer Science
Abstract

In software development, testing often takes more than half of the total development time (Pan 1999). Test case design and the execution of test procedures consume most of that time. Thus, automatically generating test cases and automatically detecting errors in test procedures prior to execution are highly advantageous. This thesis proposes a new approach that further automates test case design and the test procedure development process.

Several open-source products exist to automate test case design, but they have limitations: the test cases they produce do not trace back to models, are not reusable across libraries, and can only be generated within each tool's own test environment. This limits their support for the important new avionics standard, DO-178C (RTCA 2012).

The first contribution of the thesis is a technique for test code generation that, compared to existing products, is faster, provides improved traceability to models, and supports reusable test procedures that can be generated in any testing environment. To address the current limitations, the new approach uses the Simulink Design Verifier and an open-source constraint solver to generate test cases. The technique allows each test case to be traced back to an expression and to the original model.
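The abstract does not give the generation algorithm, so the following is only a loose illustration of the idea of constraint-based test case generation with per-expression traceability. The guard expressions, variable names, and input ranges are all invented, and a brute-force search stands in for the thesis's actual Simulink Design Verifier and constraint-solver toolchain:

```python
from itertools import product

# Hypothetical guard expressions taken from a model. Each generated test case
# must satisfy exactly one expression, so it can be traced back both to that
# expression and to the original model.
GUARDS = {
    "alt_low":  lambda alt, speed: alt < 500 and speed > 200,
    "alt_high": lambda alt, speed: alt >= 500 and speed <= 200,
}

def generate_test_cases(guards, alt_range, speed_range):
    """Find one (alt, speed) input per guard expression.

    A real toolchain would hand each expression to a constraint solver; the
    exhaustive search here only illustrates the traceability idea.
    """
    cases = {}
    for name, guard in guards.items():
        for alt, speed in product(alt_range, speed_range):
            if guard(alt, speed):
                # Keyed by guard name: the trace link back to the expression.
                cases[name] = {"alt": alt, "speed": speed}
                break
    return cases

cases = generate_test_cases(GUARDS, range(0, 1000, 100), range(0, 400, 50))
```

Because each case is stored under the name of the expression it satisfies, the mapping from test case back to model expression is explicit, which is the traceability property the thesis emphasizes.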

Detecting errors in manually written test procedures before testing starts is also critical to efficient verification; catching errors in the early test procedure design stage can save hours or even days. However, the analysis performed here on a set of open-source code analysis tools shows that they cannot effectively detect type and attribute errors.

The second contribution of the thesis is a static code analyzer for Python that detects bugs that could cause automated test procedures to crash. The analyzer converts Python code into an abstract syntax tree and detects type and attribute errors by performing a type-flow analysis. This approach provides improved accuracy over existing products.
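The thesis's analyzer is not reproduced here; the sketch below only illustrates the general shape of AST-based attribute-error detection under a deliberately toy type-flow rule: track variables assigned literal constants, and flag attribute accesses the inferred type does not support. All names are invented for illustration.

```python
import ast

def find_attribute_errors(source):
    """Flag attribute accesses unsupported by a variable's inferred type.

    Toy type-flow rule: only straight-line, module-level assignments of
    literal constants are tracked; everything else is left untyped.
    """
    tree = ast.parse(source)
    types = {}    # variable name -> inferred builtin type
    errors = []   # (line, variable, attribute) triples
    for node in ast.walk(tree):
        # Record the type flowing from a simple literal assignment.
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Constant):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    types[target.id] = type(node.value.value)
        # Flag attribute lookups the inferred type does not provide.
        elif isinstance(node, ast.Attribute) and isinstance(node.value, ast.Name):
            inferred = types.get(node.value.id)
            if inferred is not None and not hasattr(inferred, node.attr):
                errors.append((node.lineno, node.value.id, node.attr))
    return errors

buggy = "x = 42\nx.upper()\ns = 'hi'\ns.upper()\n"
print(find_attribute_errors(buggy))  # → [(2, 'x', 'upper')]
```

Here `x.upper()` is reported because `int` has no `upper` attribute, while `s.upper()` passes; a crash that would otherwise surface only at test execution time is caught before the procedure runs.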

Together, these two contributions (a test code generator with improved traceability and reusability, and a static code analyzer capable of handling more error types) can improve the test process's compatibility with DO-178C.

Copyright
2015