On the dependability and feasibility of layperson ratings of divergent thinking
- UNCG Author/Contributor (non-UNCG co-authors, if there are any, appear on document)
- Paul Silvia, Professor (Creator)
- Institution
- The University of North Carolina at Greensboro (UNCG)
- Web Site: http://library.uncg.edu/
Abstract: A new system for subjectively rating responses to divergent thinking tasks was tested using raters recruited from Amazon Mechanical Turk. The study examined whether such raters could provide reliable (i.e., generalizable) ratings from the perspective of generalizability theory. To promote reliability across the Alternative Uses and Consequences task prompts often used by researchers as measures of divergent thinking, two parallel scales were developed to support the feasibility and validity of ratings performed by laypeople. Generalizability and dependability studies were conducted separately for two scoring systems: the average-rating system and the snapshot system. Results showed that adequate reliability is difficult to achieve with the snapshot system, whereas good reliability can be achieved for both task families with the average-rating system given a sufficient number of items and raters. The construct validity of the average-rating system was generally good, though weaker for certain Consequences items. Recommendations for researchers wishing to adopt the new scales are discussed, along with broader issues in the generalizability of subjective creativity ratings.
Additional Information
- Publication
- Frontiers in Psychology, 9, 1343
- Language: English
- Date: 2018
- Keywords
- generalizability theory, consensual assessment technique, divergent thinking, creativity, originality