Title

On the Evaluation and Comparison of Runtime Verification Tools for Hardware and Cyber-Physical Systems

Campus Units

Aerospace Engineering, Computer Science, Electrical and Computer Engineering, Mathematics, Virtual Reality Applications Center

Document Type

Conference Proceeding

Conference

International Workshop on Competitions, Usability, Benchmarks, Evaluation, and Standardisation for Runtime Verification Tools (RV-CUBES)

Publication Version

Published Version

Link to Published Version

https://doi.org/10.29007/pld3

Publication Date

2017

Journal or Book Title

Kalpa Publications in Computing

Volume

3

First Page

123

Last Page

137

DOI

10.29007/pld3

Conference Title

International Workshop on Competitions, Usability, Benchmarks, Evaluation, and Standardisation for Runtime Verification Tools (RV-CUBES)

Conference Date

September 13-16, 2017

City

Seattle, WA

Abstract

The need for runtime verification (RV), and tools that enable RV in practice, is widely recognized. Systems that operate autonomously necessitate on-board RV technologies, from Mars rovers that must sustain operation despite delayed communication from operators on Earth, to Unmanned Aerial Systems (UAS) that must fly without a human on board, to robots operating in dynamic or hazardous environments that must take care to preserve both themselves and their surroundings. Enabling all forms of autonomy, from tele-operation to automated control to decision-making to learning, requires some ability for the autonomous system to reason about itself. The broader class of safety-critical systems requires means of runtime self-checking to ensure that critical functions have not degraded during use.

Runtime verification addresses a vital need for self-referential reasoning and system health management, but no generalized approach currently answers the lower-level questions. What are the inputs to RV? What are the outputs? What level(s) of the system do we need RV tools to verify, from bits and sensor signals to high-level architectures, and at what temporal frequency? How do we know our runtime verdicts are correct? How do the answers to these questions change for software, hardware, or cyber-physical systems (CPS)? How do we benchmark RV tools to assess their (comparative) suitability for particular platforms? The goal of this position paper is to fuel the discussion of ways to improve how we evaluate and compare tools for runtime verification, particularly for cyber-physical systems.
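To make the input/output question concrete, consider a minimal sketch of a runtime monitor, written here in Python. Everything in it is an illustrative assumption rather than material from the paper: the signal names (altitude, recovery), the THRESHOLD value, and the response bound K are hypothetical, and the property monitored is a simple bounded-response requirement ("whenever altitude drops below the threshold, a recovery command must be observed within K samples").

    # Illustrative sketch only; signal names, THRESHOLD, and K are assumptions.
    # Property monitored (bounded response): whenever `altitude` falls below
    # THRESHOLD at step i, `recovery` must be True at some step j with
    # i <= j <= i + K.

    THRESHOLD = 100.0  # hypothetical altitude floor
    K = 3              # hypothetical response bound, in samples

    def monitor(trace):
        """Input: an iterable of (timestamp, altitude, recovery) samples.
        Output: one (timestamp, verdict) pair per sample, where verdict is
        True while no violation has been observed and False afterwards."""
        pending = []       # deadlines (step indices) by which recovery is due
        violated = False
        for i, (timestamp, altitude, recovery) in enumerate(trace):
            if altitude < THRESHOLD:
                pending.append(i + K)   # open an obligation: recover by i + K
            if recovery:
                pending.clear()         # recovery discharges open obligations
            if any(deadline <= i for deadline in pending):
                violated = True         # a deadline passed without recovery
            yield (timestamp, not violated)

    # Hypothetical trace: altitude dips below the threshold at the second
    # sample and recovery arrives two samples later, within the bound.
    trace = [
        (0.0, 120.0, False),
        (0.1,  95.0, False),
        (0.2,  90.0, False),
        (0.3,  92.0, True),
        (0.4, 105.0, False),
    ]
    for timestamp, verdict in monitor(trace):
        print(timestamp, verdict)   # prints True for every sample

Even this toy sketch surfaces the questions above: at the end of a finite trace an obligation may still be pending, so a True verdict can be inconclusive rather than conclusive, and whether the verdict is meaningful at all depends on the temporal frequency at which the monitor samples the system.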

Comments

This proceeding is published as Rozier, Kristin Yvonne. "On the Evaluation and Comparison of Runtime Verification Tools for Hardware and Cyber-Physical Systems." International Workshop on Competitions, Usability, Benchmarks, Evaluation, and Standardisation for Runtime Verification Tools (RV-CUBES), held in conjunction with the 17th International Conference on Runtime Verification (RV). Seattle, Washington, USA, September 13-16, 2017. In Kalpa Publications in Computing, vol. 3, pp. 123-137. G. Reger and K. Havelund (eds.). DOI: 10.29007/pld3. Posted with permission.

Copyright Owner

The Author(s)

Language

en

File Format

application/pdf
