Date of Award
Doctor of Philosophy
Applied Linguistics and Technology
Achieving sufficient proficiency in academic writing is critical in university-level settings. It is not surprising, then, that admission to English-speaking universities is usually conditioned not only on a particular total score on standardized tests of English proficiency (e.g., TOEFL or IELTS) but also on a specific band score in writing. To further ensure students’ readiness for university-level writing, many US universities use in-house developed assessments. The main goal of in-house placement tests is to identify students who may have difficulty coping with the demands of university-level writing and to place them in ESL writing courses tailored to their linguistic needs. Unlike high-stakes standardized tests of writing, for which the validity of score interpretations and use is extensively investigated, the validity of score interpretations and placement decisions of in-house placement tests of writing may not be sufficiently explored. Besides a potential lack of logistical resources, one reason for this gap is that, unlike high-stakes tests, in-house placement tests often lack a clear definition of the writing construct or subconstructs being assessed. The extensive research conducted on the construct, or the multiple subconstructs, of writing reflects not only its complexity but also the need to delineate the boundaries of the construct or subconstructs being assessed. This is even more crucial for placement tests, whose main goal is to make placement decisions based on test takers’ linguistic needs.
A major subconstruct that has been used as an anchor for determining development in writing in both placement and standardized tests is linguistic complexity. Although there is no agreed-upon definition of the subconstructs of linguistic complexity, there is general agreement that it is reflected at three main levels: lexical, syntactic, and discourse.
Despite the wide use of complexity in the rating rubrics of writing assessments, particularly placement tests, there is little research on how complexity features at these three levels can be used to explore the validity of score use and interpretations in placement tests of writing. The extent to which variation in learners’ performance, as reflected by placement decisions, mirrors differences in students’ ability to use complex language has not, to my knowledge, been explored with the EPT reading-based writing test at Iowa State University. In addition, because placement decisions are expected to benefit both teachers and learners, they need to be evaluated with regard to their impact on the focus of instructional practices and on the development of learners’ written performance. Bearing these concerns in mind, the current study investigates the validity of score interpretations and use of the English Placement Test (EPT) writing section administered at ISU. To structure the assumptions underlying score interpretations and use, and the types of evidence to be examined, an interpretation/use argument is employed.
Two main inferences are investigated: explanation and ramification. The first connects performance, as reflected by the EPT placement decision, to the construct or subconstructs being assessed: to what extent is performance on the EPT writing section supported, as theory predicts, by learners’ use of complex language? The second inference focuses on the impact of using the EPT: to what extent do the stakeholders, namely students and teachers, benefit from the placement decisions?
To address these concerns, a four-part study was conducted using a mixed-methods approach in which quantitative results were triangulated with qualitative analysis. The examination of the explanation inference comprised quantitative and qualitative analyses of three main aspects of linguistic complexity (lexical, syntactic, and discourse), investigated in three separate studies. A subsample corpus of 554 texts from the EPT written responses was used to analyze lexical and syntactic complexity features. Five indices reflecting three main aspects of lexical complexity were investigated: diversity, sophistication, and cohesion. A MANOVA showed a statistically significant effect of placement level on lexical complexity, and pairwise comparisons revealed statistically significant differences among the groups. The results also suggest an influence of education level (graduate vs. undergraduate) on complex vocabulary use. These findings provide partial support for the explanation inference in the validity argument for the EPT writing section. Graduate Pass students displayed statistically significantly higher use of complex vocabulary in terms of diversity, sophistication, and cohesion than Level B students. In addition, undergraduate Pass students displayed statistically higher means for cohesive vocabulary and for sophisticated vocabulary (measured by AWL and bigram range) than Level C. However, the differences between the undergraduate placement levels C and B were not consistent, specifically for the use of words from the Academic Word List (AWL, a measure of sophistication). In addition, no statistical differences were found between graduate students at the Pass and D levels.
The analysis of the frequency of use of 12 syntactic features posited by previous research to be characteristic of academic writing pointed to differences in syntactic complexity among the groups. The results indicated an effect of education level (graduate vs. undergraduate) on the frequency of use of syntactically complex structures. The syntactic analysis supported the EPT validity argument. With the exception of nouns as prenominal modifiers, graduate students at the Pass and D levels showed more frequent use of nominalization, prenominal modifiers (i.e., adjectives), and postnominal modifiers (prepositional phrases, nonfinite relative clauses, and that-clause complements of nouns). Their writing also exhibited relatively lower frequencies of finite adverbial clauses than that of the lower undergraduate levels.
Using a subsample of the EPT corpus (92 texts), the third study examined the third aspect of linguistic complexity, discourse-level complexity, focusing on two main dimensions: differences in the use of topical developmental patterns, and the frequency and discourse functions of initial sentence elements (ISEs). A MANOVA was used to examine whether there were statistically significant differences in the use of topical organizational patterns. The results showed a main effect of developmental patterns with a medium effect size. The between-subjects tests showed statistically significant differences in the use of sequential progression. Pairwise comparisons using an LSD post hoc test showed that graduate Pass students used statistically more sequential progression than Level B students. In addition, undergraduate Pass and graduate D students used statistically more sequential patterns than their lower-level undergraduate peers at the C and B levels. A one-way ANOVA revealed a statistically significant main effect of the grouping variable on the frequency of ISEs. Tukey HSD post hoc tests revealed that graduate students at the Pass and D levels used ISEs statistically significantly more than their undergraduate peers. Although the five placement levels displayed similar distributions of ISE forms, namely linking adverbials, prepositional phrases, and adverbs, the Pass group, as the qualitative analysis showed, exhibited better awareness of the discourse functions of ISEs. The differences in both topical progression and ISEs supported the EPT validity argument.
The investigation of the ramification inference in the fourth study used surveys and interviews with teachers and students to gauge overall satisfaction with the EPT writing decisions and course materials, and to examine (1) teachers’ beliefs about aspects of good academic writing, common problems in their ESL students’ writing, and the focus of their ESL instruction, and (2) ESL students’ self-rated development after ESL writing courses. Results from the surveys and interviews pointed to partial concurrence between what teachers believe to be important for quality writing and the linguistic aspects emphasized in ESL writing courses. Students’ self-assessments after ESL writing courses, however, revealed general satisfaction and statistically significant self-rated improvement in overall writing ability and in lexical, syntactic, and discourse-level writing ability.
The findings from the four studies contribute to the validity argument for the EPT writing section. Some findings provided partial support, while others fully supported the underlying assumptions of the explanation and ramification inferences. The findings of this dissertation are intended to direct attention to issues I believe are worth considering and to avenues for further research in the EPT context and with placement tests of writing in general.
Alonazi, Zaha, "Validity argument for EPT written argumentative essays" (2019). Graduate Theses and Dissertations. 17386.