Automated item difficulty modeling with test item representations

Author
Sa'ed Ali Qunbar (Creator)
Institution
The University of North Carolina at Greensboro (UNCG)
Web Site: http://library.uncg.edu/
Advisor
John Willse

Abstract: This work presents a study that used distributed language representations of test items to model test item difficulty. Distributed language representations are low-dimensional numeric representations of written language generated by artificial neural network architectures. The research begins with a discussion of the importance of item difficulty modeling in the context of psychometric measurement. A review of the literature synthesizes the most recent automated approaches to item difficulty modeling, introduces distributed language representations, and presents relevant predictive modeling methods. The present study used an item bank from a certification examination in a scientific field as its data set. The study first generated distributed item representations and assessed their quality with a multi-class similarity comparison. The distributed item representations were then used to train and test predictive models. The multi-class similarity task showed that, in 14 of 25 content domains, the distributed representations of items were on average more similar to items within their own domain than to items outside it. The prediction task did not produce any meaningful predictions from the distributed representations. The study ends with a discussion of limitations and potential avenues for future research.
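The abstract describes two analysis steps: a within-domain versus between-domain similarity check on the item representations, and a predictive model from representations to item difficulty. The sketch below illustrates both steps in Python under stated assumptions; the item vectors, domain labels, difficulty values, and the ridge-regression model are hypothetical stand-ins, not the dissertation's actual data or method.

```python
# Hypothetical sketch of the two analysis steps summarized in the abstract.
# All data below are simulated stand-ins; the dissertation's representation
# method and predictive models may differ.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 500 items with 100-dimensional distributed representations,
# 25 content domains, and one difficulty value per item.
n_items, n_dims, n_domains = 500, 100, 25
embeddings = rng.normal(size=(n_items, n_dims))      # item vectors
domains = rng.integers(0, n_domains, size=n_items)   # content domain labels
difficulty = rng.normal(size=n_items)                 # item difficulty values

# Step 1: multi-class similarity comparison. For each domain, compare the
# mean cosine similarity among its items with the mean similarity between
# its items and items from all other domains.
sims = cosine_similarity(embeddings)
np.fill_diagonal(sims, np.nan)  # exclude each item's similarity to itself
for d in range(n_domains):
    in_d = domains == d
    within = np.nanmean(sims[np.ix_(in_d, in_d)])
    between = np.nanmean(sims[np.ix_(in_d, ~in_d)])
    print(f"domain {d:2d}: within={within:.3f}  between={between:.3f}")

# Step 2: predictive modeling. Fit a regularized regression from the item
# representations to item difficulty and estimate out-of-sample R^2.
model = Ridge(alpha=1.0)
r2_scores = cross_val_score(model, embeddings, difficulty, cv=5, scoring="r2")
print("mean cross-validated R^2:", r2_scores.mean())
```

With simulated inputs like these the cross-validated R^2 hovers near zero, which is the kind of null result the abstract reports for the prediction task; on real item text and calibrated difficulties the same pipeline would quantify how much difficulty variance the representations capture.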

Additional Information

Publication
Dissertation
Language: English
Date: 2019
Keywords
Distributed representations, Embeddings, Item difficulty modeling, Measurement, Predictive modeling, Psychometrics
Subjects
Examinations -- Design and construction
Item response theory
Predictive control
Psychometrics