Non-formal educator use of evaluation results

Date
2012-08-01
Authors
Baughman, Sarah
Boyd, Heather
Franz, Nancy
Department
Extension and Outreach
Abstract

Increasing demands for accountability in educational programming have led to increasing calls for program evaluation in educational organizations. Many organizations include conducting program evaluations among the job responsibilities of program staff. Cooperative Extension is a complex organization offering non-formal educational programs through land-grant universities. Many Extension services require that non-formal educational program evaluations be conducted by field-based Extension educators. Evaluation research has focused primarily on the efforts of professional, external evaluators; the work of program staff who carry many responsibilities, including program evaluation, has received little attention. This study examined how field-based Extension educators (i.e., program staff) in four Extension services use the results of evaluations of programs that they have conducted themselves. Four types of evaluation use are measured and explored: instrumental use, conceptual use, persuasive use, and process use. Results indicate that there are few programmatic changes as a result of evaluation findings among the non-formal educators surveyed in this study. Extension educators tend to use evaluation results to persuade others about the value of their programs and to learn from the evaluation process. Evaluation use is driven by accountability measures, with very little program improvement use as measured in this study. Practical implications include delineating accountability and program improvement tasks within complex organizations in order to align evaluation efforts and improve the results of both. There is some evidence that evaluation capacity building efforts may be increasing instrumental use by educators evaluating their own programs.

Comments

NOTICE: This is the author’s version of a work that was accepted for publication in Evaluation and Program Planning. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Evaluation and Program Planning, 35(3), 2012: 329-336, DOI: 10.1016/j.evalprogplan.2011.11.008.

Copyright
2012