Chris Dickinson, Ph.D.

Research Interests: Visual cognition, visual memory, scene perception, and eye movements during visual search and scene perception.

My research focuses on two aspects of visual-spatial cognition. In one line of research, I examine how we search our environment and the role that memory plays in that process. More specifically, I am interested in the extent to which people remember where they have searched, how they represent this information, and the extent to which they use it to make the search process more efficient. As an example, consider searching for your keys in your kitchen. As you search, you may remember many of the locations you searched, a few of them, or none of them. You might represent these locations in memory in different ways: as individual locations, as the path that links them together, as regions or areas (e.g., you might group all of the locations on the counter or the right side of the kitchen as one searched area), or as the "plan" you followed while searching ("First I searched the table, then the counter, then the cabinets"). What's more, you might use your memory for where you had already searched to avoid searching those locations again. I am also interested in how these questions intersect. For example, how well you can use your memory for where you had already searched would be affected by the format in which you represent that information, which in turn would influence how you search the environment.

In a second line of research, I explore how people represent the spatial layout and spatial expanse of a scene. Although we "experience" the world around us as expansive and continuous, we actually "see" it as a series of partial views. One interesting aspect of our memories of these views is that we tend to remember them as having shown more of the scene than they actually did. As an example, consider the view through a window as a partial view of the world. We would tend to remember this view as if the window had been larger than it actually was, as if we had seen beyond the window's edges. This is a constructive memory error known as boundary extension (Intraub & Richardson, 1989), and it illustrates how both bottom-up (i.e., visual) and top-down (i.e., memory, knowledge) sources of input contribute to how we experience the world around us. I am currently exploring ways of quantifying the amount of boundary extension that occurs for scenes, as well as factors that might influence the amount of boundary extension for a given scene. In both lines of research, I use a combination of behavioral methods (reaction time, recognition, and signal-detection measures), eye tracking, and computational modeling.
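As a brief illustration of the signal-detection measures mentioned above, recognition performance is often summarized by the sensitivity index d′, computed from hit and false-alarm rates. The sketch below is a generic example, not taken from any specific study here; the response counts are invented for illustration.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity (d') from response counts.

    Applies the log-linear correction (add 0.5 to each cell) so that
    rates of exactly 0 or 1 do not yield infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical recognition test: 40 old and 40 new scenes
print(d_prime(hits=32, misses=8, false_alarms=6, correct_rejections=34))
```

A d′ of 0 indicates chance-level discrimination; larger values indicate better memory for old versus new scenes.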

There are 8 included publications by Chris Dickinson, Ph.D.:

Coordinating Cognition: The Costs and Benefits of Shared Gaze During Collaborative Search (2008; 1,790 views)
Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers jointly performed...

Coordinating Spatial Referencing Using Shared Gaze (2010; 127 views)
To better understand the problem of referencing a location in space under time pressure, we had two remotely located partners (A, B) attempt to locate and reach consensus on a sniper target, which appeared randomly in the windows of buildings in a ps...

Do Object Refixations During Scene Viewing Indicate Rehearsal In Visual Working Memory? (2010; 749 views)
Do refixations serve a rehearsal function in visual working memory (VWM)? We analyzed refixations from observers freely viewing multiobject scenes. An eyetracker was used to limit the viewing of a scene to a specified number of objects fixated after ...

False Memory 1/20th Of A Second Later: What The Early Onset Of Boundary Extension Reveals About Perception (2008; 206 views)
Errors of commission are thought to be caused by heavy memory loads, confusing information, lengthy retention intervals, or some combination of these factors. We report false memory beyond the boundaries of a view, boundary extension, after less than...

Marking Rejected Distractors: A Gaze-Contingent Technique For Measuring Memory During Search (2005; 723 views)
There is a debate among search theorists as to whether search exploits a memory for rejected distractors. We addressed this question by monitoring eye movements and explicitly marking objects visited by gaze during search. If search is memoryless, ma...

Memory for the Search Path: Evidence for a High-Capacity Representation of Search History (2007; 1,554 views)
Using a gaze-contingent paradigm, we directly measured observers' memory capacity for fixated distractor locations during search. After approximately half of the search objects had been fixated, they were masked and a spatial probe appeared at either...

Spatial Asymmetries In Viewing And Remembering Scenes: Consequences Of An Attentional Bias? (2009; 697 views)
Given a single fixation, memory for scenes containing salient objects near both the left and right view boundaries exhibited a rightward bias in boundary extension (Experiment 1). On each trial, a 500-msec picture and 2.5-sec mask were followed by a ...

Transsaccadic Representation of Layout: What Is the Time Course of Boundary Extension? (2008; 1,828 views)
How rapidly does boundary extension occur? Across experiments, trials included a 3-scene sequence (325 ms/picture), masked interval, and repetition of 1 scene. The repetition was the same view or differed (more close-up or wide angle). Observers rate...