A comparison of traditional test blueprinting to assessment engineering in a large scale assessment context
- UNCG Author/Contributor (non-UNCG co-authors, if there are any, appear on document)
- Cheryl A. Thomas (Creator)
- Institution
- The University of North Carolina at Greensboro (UNCG)
- Web Site: http://library.uncg.edu/
- Advisor
- Terry Ackerman
Abstract: This dissertation investigates the feasibility of deriving item difficulty parameters from Assessment Engineering cognitive task models through careful engineering design, and compares those task-model-derived difficulties with empirical Rasch model 'b' parameter estimates. In addition, this research examines whether cognitive task-model-derived difficulties can replace the Rasch model 'b' parameter estimates for scoring examinees. The study uses real data from four assessments administered by a large-scale testing company. The analysis indicated strong correlations between the task-model and empirical difficulty parameter estimates. Most of the empirical items satisfied standard fit requirements; although several task-model items misfit, the task model still provided adequate fit for most items. Furthermore, the proficiency scores computed from the empirical and task-model parameters matched closely for all of the assessments, showing no meaningful differences between the two sets of scores, and an examination of the standard error statistics likewise showed no differences between the empirical Rasch model and the cognitive task models. Assessment Engineering is a new field, so little research exists comparing Assessment Engineering cognitive task-model-derived difficulties to empirical Rasch model parameter estimates, and the effect of cognitive task-model estimates on proficiency scores had not previously been investigated. This study showed that, through the Assessment Engineering cognitive task-modeling design process, it is possible to generate item difficulty parameters a priori, without complex, data-hungry statistical models. For large-scale testing companies, this can significantly reduce the cost of pilot testing and make available hundreds of items that operate in a psychometrically similar manner. The design process produces difficulty parameters that behave like the statistical difficulty parameters estimated in traditional ways with the Rasch model.
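To make the comparison in the abstract concrete: under the Rasch model, the probability that an examinee with ability theta answers an item of difficulty b correctly is exp(theta - b) / (1 + exp(theta - b)), and an examinee's proficiency score is the theta that maximizes the likelihood of the observed responses. The sketch below, with entirely hypothetical difficulty values and a simple grid-search maximizer (not the dissertation's actual estimation procedure), illustrates how scoring with a-priori task-model difficulties can be compared against scoring with empirical 'b' estimates:

```python
import math

def rasch_prob(theta, b):
    """P(correct) under the Rasch model for ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(theta, responses, difficulties):
    """Log-likelihood of a 0/1 response pattern at a given ability theta."""
    ll = 0.0
    for x, b in zip(responses, difficulties):
        p = rasch_prob(theta, b)
        ll += x * math.log(p) + (1 - x) * math.log(1.0 - p)
    return ll

def mle_theta(responses, difficulties, lo=-4.0, hi=4.0, steps=800):
    """Grid-search MLE of ability; a crude stand-in for Newton-Raphson."""
    best_theta, best_ll = lo, float("-inf")
    for i in range(steps + 1):
        theta = lo + (hi - lo) * i / steps
        ll = log_likelihood(theta, responses, difficulties)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

# Hypothetical difficulties for five items: empirical Rasch 'b' estimates
# versus a-priori task-model values (illustrative numbers only).
empirical_b  = [-1.2, -0.4, 0.1, 0.8, 1.5]
task_model_b = [-1.0, -0.5, 0.0, 0.9, 1.4]
responses    = [1, 1, 1, 0, 0]  # one examinee's response pattern

theta_emp = mle_theta(responses, empirical_b)
theta_tm  = mle_theta(responses, task_model_b)
```

When the two difficulty sets are close, the two proficiency estimates agree closely as well, which is the pattern the dissertation reports across its four assessments.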
PDF (Portable Document Format)
4740 KB
Created on 12/1/2016
Additional Information
- Publication
- Dissertation
- Language: English
- Date: 2016
- Keywords
- Assessment Engineering, Cognitive Task Model, Correlation, Engineering Design, Parameter Estimates, Rasch Model
- Subjects
- Examinations -- Design and construction
- Educational tests and measurements
- Rasch models