Parafoveal Processing in Reading


We have the impression that we can clearly see all of the words in the text, but our representations of words vary in their fidelity before, during, and after they are directly fixated. That is, perceptual input from a word is poor quality (i.e., noisy) when a reader is fixating the preceding text and can only see the upcoming word with low-acuity peripheral vision, and it becomes more precise when the reader directly fixates the word and can view it with high-acuity, central vision. However, even the fuzzy glimpse we obtain before looking at a word gives us a head start on processing it (i.e., a preview benefit). Moreover, the semantic constraint and language context leading up to the target word change what we anticipate the text to say and, consequently, the way we respond to the visual information (e.g., assessing the plausibility of the word in that particular sentence). Much of our research focuses on what aspects of words readers pre-activate during reading, for example whether they gather information only about the way a word looks (e.g., whether it is capitalized, an abbreviation, or contains transposed letters) or whether they glean meaning from the preview (i.e., benefit from a synonymous preview word). We also investigate how these representations trigger eye movement decisions, and what types of contexts support or constrain this process. In general, an easy-to-process upcoming word can lead the reader to skip over it or look at it very briefly (i.e., make a forced fixation), but it can also affect readers’ comprehension and cause them to reread the text (i.e., make a regression).


Attention and Decision-making

Attention is a limited resource whose absence may adversely affect decision-making by increasing cognitive load. By tracking how people allocate attention as they make decisions, we investigate which features attract their attention and, therefore, what drives them to make those decisions. In our current NSF-funded project we are using eye tracking to understand how different types of people (e.g., those who are more or less risk-averse) attend to different features of decision tasks (e.g., payoff values or probabilities) when making risky financial decisions.


Task Goals and Information Processing

Humans are flexible with respect to how they process visual information depending on task goals. This cognitive flexibility is useful because some aspects of a stimulus are more relevant for a particular goal than others. For example, when proofreading a text for spelling errors, a word’s expectedness is more relevant when the errors produce incorrect, but real words (e.g., trial for trail) than when they produce non-words (e.g., trcak for track). Similarly, whether a photograph is in black-and-white or color is more relevant to a decision about which of two photographs is older than to decisions about personal preference. Eye tracking experiments show that people respond differently to specific word or image properties based on their intentions.

The efficiency with which we process a text depends on our intentions; speed and accuracy are competing pressures. As people increase their reading speed, it becomes more difficult to accurately encode what the text said, which suggests that effective speed reading is implausible. The inability to reread the text negatively affects comprehension; rereading takes time, but that time is not wasted. When people read aloud, there is the added requirement to pronounce the text, which sometimes leads to decreased encoding of meaning. However, research on bilingual readers of Spanish and Chinese suggests that the meanings of words are encoded but then shipped off to a production system that may produce something meaningful, though not the exact content of the text. Singers performing a musical score (e.g., chorale singers) have different roles in the piece, and these different roles lead to different types of singing errors on irregular intervals and different eye movement patterns when sight-singing.


Peripheral Perception in Deaf Signers

In order to comprehend a visual language like American Sign Language (ASL), deaf signers must simultaneously process meaningful linguistic features in central vision (i.e., facial expressions) and in peripheral vision (i.e., manual signs composed of unique combinations of handshape, motion, and location relative to the body). Our research suggests that experience with this cognitive demand leads them to perceive peripheral ASL signs more accurately at far eccentricities than hearing people, even those who are very proficient in ASL. Moreover, we find evidence for a sign superiority effect, in which the decrease in identification accuracy with eccentricity is less severe for meaningful stimuli (i.e., ASL signs and fingerspelled words) than for meaningless stimuli (i.e., pseudo-signs and nonword sequences). We also investigate whether there is a parallel between the cognitive demands of sign comprehension and of reading English print, in that both tasks involve identification and fine-grained discrimination of meaningful forms in both central and peripheral vision. In fact, deaf signers have a larger attentional span during reading (i.e., they can process words farther from fixation) than hearing people of equivalent reading levels, even when they are still learning to read (e.g., children aged 7-15). We are currently investigating whether this ability relates to their peripheral identification abilities in sign language.