Subjective scoring of divergent thinking: Examining the reliability of unusual uses, instances, and consequences tasks.

UNCG Author/Contributor (non-UNCG co-authors, if there are any, appear on document)
Paul Silvia, Professor (Creator)
Institution
The University of North Carolina at Greensboro (UNCG)
Web Site: http://library.uncg.edu/

Abstract: The present research examined the reliability of three types of divergent thinking tasks (unusual uses, instances, consequences/implications) and two types of subjective scoring (an average across all responses vs. only the two responses people chose as their best) within a latent variable framework, using the maximal-reliability H statistic. Overall, the unusual uses tasks performed the best for both scoring types, the instances tasks yielded less reliable scores, and the consequences tasks had poor reliability and convergence problems. The discussion considers implications for test users, differences between average scoring and top-two scoring, and the question of whether divergent thinking tasks are interchangeable.
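For readers unfamiliar with the maximal-reliability H statistic mentioned above, the sketch below shows, in Python, how Hancock and Mueller's coefficient H is typically computed from standardized factor loadings. The loading values are purely illustrative assumptions, not results from the study.

```python
import numpy as np

def coefficient_h(loadings):
    """Compute Hancock & Mueller's maximal-reliability coefficient H
    from standardized factor loadings (each strictly between -1 and 1).

    H = sum(l^2 / (1 - l^2)) / (1 + sum(l^2 / (1 - l^2)))
    """
    lam = np.asarray(loadings, dtype=float)
    ratio = np.sum(lam**2 / (1.0 - lam**2))
    return ratio / (1.0 + ratio)

# Hypothetical loadings for a three-indicator latent variable
print(round(coefficient_h([0.70, 0.65, 0.60]), 2))  # ~0.69
```

Unlike coefficient alpha, H weights stronger indicators more heavily, so it reflects the reliability of the optimally weighted composite implied by the latent variable model.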

Additional Information

Publication
Language: English
Date: 2011
Keywords
creativity, divergent thinking, measurement, reliability, latent variable models, psychology
