SCSC 2007 START Conference Manager    

A Common M&S Credibility Criteria-set Supports Multiple Problem Domains

Joe Hale, Bobby Hartway and Danny Thomas

Summer Computer Simulation Conference 2007 (SCSC 2007)
San Diego, California (USA), July 15-18, 2007


Credibility management of modeling and simulation (M&S) results depends on two factors: 1) how well analysts understand the credibility of their M&S applications, and 2) how transparently that knowledge is represented to decision makers so they can account for the risk of using M&S results. Satisfying these two divergent factors requires a hierarchically structured yet easily understood criteria-set and measurement method capable of supporting simple and transparent credibility metrics at the management (top) level, and technically detailed credibility evidence for M&S assessment at the bottom level. Ideally, this method would use best practices and processes of conventional Verification, Validation, and Accreditation (VV&A) work, with tailoring as required to account for a new emphasis on M&S-results credibility. This paper presents a candidate for such a method. It uses a hierarchical spreadsheet framework built on a specially augmented version of the Analytic Hierarchy Process (AHP). The problem-structuring and importance-weighting techniques of classical AHP are augmented with “required-value thresholds” for each criterion, indicating preferred values set by a VV&A accreditation agent conducting the M&S credibility assessment. The threshold concept is further augmented with corresponding “deficiency flags” that provide management-level highlights of any shortfalls of a measured M&S credibility criterion relative to its preferred value. An added rule disallowing credibility credit for achieved values greater than the preferred value ensures a non-compensatory scoring aggregation. This augmented AHP framework provides simple “dashboard indicators” of M&S results credibility for top management, yet total transparency (traceability) to the lower-level credibility assessments necessary for providing technical feedback to developers and users.
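The scoring rules described above can be sketched in code. The following is an illustrative sketch only, not the authors' implementation: the criterion names, weights, and values are hypothetical, and the normalization of credited value against the threshold is an assumption. It shows the two augmentations the abstract names: capping achieved values at the preferred value (non-compensatory aggregation) and raising a deficiency flag on any shortfall.

```python
# Illustrative sketch (hypothetical data) of the augmented AHP scoring:
# each criterion carries an AHP importance weight and a required-value
# threshold; credit is capped at the threshold, and shortfalls are flagged.

def score_criteria(criteria):
    """criteria: list of (name, weight, threshold, achieved) tuples.
    Weights are assumed to sum to 1; thresholds/achieved values in (0, 1]."""
    total = 0.0
    flags = []
    for name, weight, threshold, achieved in criteria:
        credited = min(achieved, threshold)  # no credit beyond the preferred value
        total += weight * (credited / threshold)
        if achieved < threshold:
            flags.append(name)  # deficiency flag: shortfall vs. preferred value
    return total, flags

# Hypothetical criteria, not taken from the paper or NASA-STD-(I)-7009:
example = [
    ("Verification",  0.4, 0.8, 0.9),  # exceeds threshold: capped, no extra credit
    ("Validation",    0.4, 0.8, 0.6),  # shortfall: flagged for management
    ("Data pedigree", 0.2, 0.7, 0.7),  # meets threshold exactly
]
score, deficiencies = score_criteria(example)
print(round(score, 3), deficiencies)  # → 0.9 ['Validation']
```

Because credit is capped, a surplus on one criterion (Verification above) cannot offset a shortfall on another (Validation), which is what makes the aggregation non-compensatory.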
The framework’s structural rules are described, and its application is illustrated using an example published in the Interim NASA Standard for Models and Simulations, NASA-STD-(I)-7009.
