Degree Type

Dissertation

Date of Award

2017

Degree Name

Doctor of Philosophy

Department

English

Major

Rhetoric and Professional Communication

First Advisor

Jean Goodwin

Second Advisor

Stacy Tye-Williams

Abstract

An instrument for valid, quantitative assessment of scientists’ public communication promises to promote improved science communication by giving formative feedback to scientists developing their communication skills and by providing a mechanism for summative assessment of communication training programs for scientists. A quantitative instrument also fits with the scientific ethos, increasing the likelihood that the assessment will gain individual and institutional adoption. Unfortunately, past efforts have fallen short of providing a methodologically sound, theory-based instrument for assessing public science communication. This dissertation uses the Evidence-Centered Design (ECD) method for language testing to develop and test the APPS (the Assessment for Public Presentations by Scientists), a filled-cell rubric and accompanying code book, based on communication theory and practice, that can be used to provide formative and summative assessments of scientists giving informative presentations to public, non-scientist audiences.

The APPS rubric was developed by employing an extensive domain analysis to establish the knowledge, skills, and abilities most desired for scientists who speak to public audiences, based on a methodical review of scientific organizations and a systematic review of science communication scholarship. This analysis found that scientists addressing public audiences should speak in language that is understandable, concrete, and free from scientific jargon, translating important scientific information into language that public audiences can understand; should convey the relevance and importance of science to the everyday lives of audience members; should employ visuals that enhance the presentations; should explain scientific processes, techniques, and purposes; should engage in behaviors that increase the audience’s perceptions of scientists as trustworthy, human, and approachable; and should engage in interactive exchanges about science with public audiences. The APPS operationalizes these skills and abilities, using communication theory, in a detailed, user-friendly rubric and code book for assessing public communication by scientists. The rubric delineates theory-based techniques for demonstrating the desired skills, such as using explanatory metaphors, engaging in behaviors that increase immediacy, using first-person pronouns, telling personal stories, and engaging in back-and-forth conversation with the audience.

Four rounds of testing provided evidence that the final version of the APPS is a reliable and valid assessment, with constructs that measure what they are intended to measure and that are interpreted similarly by different raters when used in conjunction with rater training. Early rounds of testing showed the need to adjust the wording and clarify the meaning of some constructs so that raters understood them similarly, and later testing showed marked improvement in those areas. Although the stringent, chance-corrected inter-rater agreement measure Cohen’s kappa did not show strong agreement on most constructs, adjacent agreement (where raters choose scores within one point of each other) was high for every category in the final testing. This indicates that although raters rarely assigned exactly the same score to a speaker on a given construct, they nearly always understood the construct similarly.
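The relationship between these agreement statistics can be illustrated with a short sketch. The scores below are hypothetical, not data from the study; the sketch simply shows how exact agreement, adjacent agreement, and Cohen’s kappa are computed for two raters scoring speakers on a five-point rubric construct, and why adjacent agreement can be high even when kappa is modest.

```python
# Sketch: exact agreement, adjacent agreement, and Cohen's kappa for two
# raters scoring speakers on a 1-5 rubric construct. Scores are hypothetical.
from collections import Counter

rater_a = [3, 4, 2, 5, 4, 3, 1, 4]
rater_b = [3, 3, 2, 4, 5, 3, 2, 4]
n = len(rater_a)

# Exact agreement: proportion of speakers given identical scores.
exact = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Adjacent agreement: proportion of scores within one point of each other.
adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa: exact agreement corrected for the agreement expected by
# chance, estimated from each rater's marginal score distribution.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
p_chance = sum(counts_a[s] * counts_b[s]
               for s in set(rater_a) | set(rater_b)) / n ** 2
kappa = (exact - p_chance) / (1 - p_chance)

print(f"exact={exact:.2f} adjacent={adjacent:.2f} kappa={kappa:.2f}")
```

Here the raters never disagree by more than one point, so adjacent agreement is perfect even though exact agreement, and therefore kappa, is only moderate; this mirrors the pattern reported in the final round of testing.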

The agreement ratings also accentuate the study’s finding that raters’ backgrounds may affect their ability to score science speakers objectively. Testing showed that science raters had difficulty setting aside their own scientific knowledge and objectively rating communication skills. This study therefore finds that scientists can act as communication raters if they are trained by rating science presentations as a group to norm their scoring and by studying the communication skills discussed in the code book. However, because scientists may struggle to set aside their intrinsic science knowledge and lack experience in identifying excellent communication practices, assessment of science speakers will nearly always be more accurate, and scientists’ communication performance will improve more, when communication experts help train and assess scientists in their communication with public audiences.

Therefore, the APPS can be a valuable tool for improving the knowledge, skills, and abilities of scientists communicating with public audiences when used by communication training programs to provide prompt, specific feedback. Given the reliability limitations, the rubric should not be used for high-stakes purposes or for “proving” a speaker’s competence. However, when used in a science communication training program with consistent raters, the APPS can provide valuable summative and formative assessment for science communicators.

DOI

https://doi.org/10.31274/etd-180810-5203

Copyright Owner

Rachel Collier Murdock

Language

en

File Format

application/pdf

File Size

288 pages