Interpreting the Validity of a High-Stakes Test in Light of the Argument-Based Framework: Implications for Test Improvement
Publication Year: 1399 (Solar Hijri, i.e. 2020/2021)
Document type: Journal article
Language: English
This 23-page paper is available for download in PDF format.
National scientific document ID: JR_RALS-11-1_004
Indexing date: 17 Ordibehesht 1403 (May 2024)
Abstract:
The validity of large-scale assessments may be compromised, partly due to content inappropriateness or construct underrepresentation. Few validity studies have examined such assessments within an argument-based framework. This study analyzed the domain description and evaluation inferences of the Ph.D. Entrance Exam of ELT (PEEE) taken by Ph.D. examinees (n = 999) in 2014 in Iran. To gather evidence on domain definition, the test content was scrutinized by applied linguistics experts (n = 12). For the evaluation inference, the reliability and differential item functioning (DIF) of the test were examined. Results indicated that the test is biased because (1) the test tasks are not fully represented in the Ph.D. course objectives, (2) the test is most reliable only for high-ability test-takers (IRT analysis), and (3) four items were flagged for nonnegligible DIF (logistic regression [LR] analysis). Implications for language testing and assessment are discussed, and some suggestions for test improvement are offered.
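The logistic-regression DIF screening mentioned in the abstract is commonly carried out with the procedure attributed to Swaminathan and Rogers: for each item, a model predicting the item response from a matching score is compared with an augmented model that also includes group membership, and a large likelihood-ratio statistic flags the item. The following is a minimal, self-contained Python sketch on synthetic data; the sample size, effect sizes, matching-score simulation, and function names are illustrative assumptions, not the authors' data or code.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, iters=2000):
    """Fit a logistic regression by batch gradient ascent on the
    log-likelihood; returns the weights and the final log-likelihood."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(d):
                grad[j] += (yi - p) * xi[j]
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        ll += yi * math.log(p) + (1 - yi) * math.log(1.0 - p)
    return w, ll

def lr_dif_test(score, group, resp):
    """Likelihood-ratio test for uniform DIF on one item.

    Compact model:   response ~ matching score
    Augmented model: response ~ matching score + group
    G^2 = 2 * (LL_augmented - LL_compact), compared with the
    chi-square critical value 3.84 (df = 1, alpha = .05)."""
    Xc = [[1.0, s] for s in score]
    Xa = [[1.0, s, g] for s, g in zip(score, group)]
    _, ll_c = fit_logistic(Xc, resp)
    _, ll_a = fit_logistic(Xa, resp)
    g2 = 2.0 * (ll_a - ll_c)
    return g2, g2 > 3.84

# Synthetic data (illustrative only): a standardized matching score,
# two equal-sized groups, and an item that is 1.5 logits harder for
# group 1 at the same ability level (uniform DIF).
random.seed(42)
n = 300
score = [random.gauss(0.0, 1.0) for _ in range(n)]
group = [i % 2 for i in range(n)]
resp = []
for s, g in zip(score, group):
    p = 1.0 / (1.0 + math.exp(-(1.2 * s - 1.5 * g)))
    resp.append(1 if random.random() < p else 0)

g2, flagged = lr_dif_test(score, group, resp)
print(f"G^2 = {g2:.2f}, flagged for DIF: {flagged}")
```

In practice the matching score would be the examinee's total test score (often purified by removing flagged items), and an effect-size criterion such as the change in pseudo-R-squared is usually applied alongside the significance test before calling DIF "nonnegligible", as the abstract does.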
Authors
Ali Darabi Bazvand
Department of English Language, College of Languages, University of Human Development, Sulaimani, Kurdistan, Iraq
Alireza Ahmadi
Shiraz University