Campus Units

English

Document Type

Book Chapter

Publication Version

Accepted Manuscript

Publication Date

3-2018

Journal or Book Title

Useful Assessment and Evaluation in Language Education

Volume

13

First Page

217

Last Page

234

Abstract

The diverse and ever-changing list of technologies encompassed by computer-assisted language learning (CALL) presents evaluators with a challenging moving target. At a time when CALL can include everything from school-based telecollaborative projects to Massive Open Online Courses (MOOCs) to smartphone- and tablet-based apps, previous approaches to evaluation reveal their inadequacies. The checklists that once predominated (see Susser 2001) assumed the focus of evaluation to be tutorial software known as “courseware,” which now constitutes a much-diminished part of the CALL landscape. Methodological frameworks like the one proposed by Hubbard (2006) assume the role of an instructor and a course in which the technology is situated, an assumption countered by increasingly autonomous and self-directed applications of CALL (Reinders and White 2016). And sets of criteria based in principles of interactionist second language acquisition (SLA; Chapelle 2001) seem less than ideal for evaluating technologies designed for individual use whose function is primarily to facilitate second language (L2) use, such as online dictionaries or translation tools.

Comments

This book chapter is published as Ranalli, J. Addressing Diversity in CALL Evaluation through Arguments and Theory-of-Action. (2018) in Useful Assessment and Evaluation in Language Education, John McE. Davis, John M. Norris, Margaret E. Malone, Todd H. McKay, and Young-A Son, Editors. Georgetown University Press. Posted with permission.

Copyright Owner

Georgetown University Press

Language

en

File Format

application/pdf
