Campus Units

English

Document Type

Article

Publication Version

Published Version

Publication Date

2013

Journal or Book Title

International Journal of Computer-Assisted Language Learning and Teaching

Volume

3

Issue

3

First Page

77

Last Page

98

DOI

10.4018/ijcallt.2013070105

Abstract

Valid evaluations of automated writing evaluation (AWE) design, development, and implementation should integrate the learners’ perspective in order to ensure the attainment of desired outcomes. This paper explores the learner fit quality of the Research Writing Tutor (RWT), an emerging AWE tool tested with L2 writers at an early stage of its development. Employing a mixed-methods approach, the authors sought to answer questions regarding the nature of learners’ interactional modifications with RWT and their perceptions of the appropriateness of its feedback on the communicative effectiveness of research article Introduction discourse. The findings reveal that RWT’s move-, step-, and sentence-level feedback provides various opportunities for learners to engage with the revision task at a useful level of difficulty and stimulates interaction appropriate to their individual characteristics. The authors also discuss insights about usefulness, user-friendliness, and trust as important concepts inherent to appropriateness.

Comments

This article is published as Cotos, E., & Huffman, S. (2013). Learner fit in scaling up automated writing evaluation. International Journal of Computer-Assisted Language Learning and Teaching, 3(3), 77-98. DOI: 10.4018/ijcallt.2013070105. Posted with permission.

Copyright Owner

IGI Global

Language

en

File Format

application/pdf