Chris Dickinson, Ph.D.
Research Interests:
Visual cognition
Visual memory
Scene perception
Eye movements during visual search and scene perception
My research focuses on two aspects of visual-spatial cognition.

In one line of research, I examine how we search our environment and the role that memory plays in that process. More specifically, I am interested in the extent to which people remember where they have searched, how they represent this information, and the extent to which they use it to make the search process more efficient. As an example, consider searching for your keys in your kitchen. As you search, you may remember many of the locations you have searched, a few of them, or none of them. You might represent these locations in memory in different ways: as individual locations, as the path that links them together, as regions or areas (e.g., you might group all of the locations on the counter, or on the right side of the kitchen, as one searched area), or as the "plan" you followed while searching ("First I searched the table, then the counter, then the cabinets"). What's more, you might use your memory for where you have already searched to avoid searching those locations again. I am also interested in how these questions intersect. For example, how well you can use your memory for where you have already searched may depend on the format in which you represent that information, which in turn would influence how you search the environment.

In a second line of research, I explore how people represent the spatial layout and spatial expanse of a scene. Although we "experience" the world around us as expansive and continuous, we actually "see" it as a series of partial views. One interesting aspect of our memories of these views is that we tend to remember them as having shown more of the scene than they actually did. As an example, consider the view through a window as a partial view of the world. We would tend to remember this view as if the window had been larger than it actually was, as if we had seen beyond the window's edges.
This is a constructive memory error known as boundary extension (Intraub & Richardson, 1989), and it illustrates how both bottom-up (i.e., visual) and top-down (i.e., memory, knowledge) sources of input contribute to how we experience the world around us. I am currently exploring ways of quantifying the amount of boundary extension that occurs for scenes, as well as the factors that might influence the amount of boundary extension for a given scene.

In both lines of research, I use a combination of behavioral methods (reaction time, recognition, and signal-detection measures), eye tracking, and computational modeling.
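As a brief illustration of the signal-detection measures mentioned above, the sketch below computes the standard sensitivity index d' from recognition counts. This is a generic textbook calculation, not code or data from this research program; the function name and the trial counts in the example are hypothetical, and the log-linear correction is one common convention among several.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Uses a log-linear correction (add 0.5 to each count) so that
    observed rates of 0 or 1 do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: an observer judging whether a test view
# matches a previously studied view (counts are made up).
print(d_prime(hits=40, misses=10, false_alarms=15, correct_rejections=35))
```

Higher d' values indicate better discrimination between studied and novel views; a d' of 0 indicates chance performance.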