
!!! Limited report and code samples due to pending publication status

Covert attention is a vital cognitive process of human perception that allows individuals to discreetly examine pertinent points of interest in their visual periphery without resorting to overt eye movements. It acts as a vigilant monitoring system, directing cognitive resources and prioritizing salient information when required. Extensive research has demonstrated that pupillary response is a dependable indicator of attentional processes, offering valuable insights into cognitive functions. Nevertheless, the spatiotemporal properties of attention, and of covert attention in particular, have received limited scientific scrutiny thus far.

In the present study, we explored the spatiotemporal dynamics of covert attention using pupillometry. In two separate experiments, participants maintained fixation on a central point while attending to visual targets in their peripheral vision. Random screen locations were probed to elicit a pupil response, which served as a measure of attention. The resulting data were used to generate spatiotemporal maps depicting the effects of attention, and computational models were further employed to approximate and explain attentional behavior.

We show, for the first time, a visual representation of the spatial dynamics of covert attention on a fine temporal scale. Additionally, we show differences between periphery-to-periphery and fovea-to-periphery conditions of covert attention. In the former, we observe behavior resembling a jump model, with remnants of attention lingering between the old and new attentional loci. In the latter, a biphasic operation can be observed: an expansion of attention stretching from the fovea to the peripheral target, followed by a zoom-in.
In both experiments, these processes take at most 240 ms, with fovea-to-periphery attentional modulation already displaying benefits of visual attention before 160 ms, which are further enhanced with sustained attention.
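Since the full code is withheld pending publication, the map-building step described above can only be illustrated schematically. The sketch below shows one plausible way to aggregate probe-evoked pupil responses into spatiotemporal attention maps: responses are binned into a coarse spatial grid within successive latency windows. All names, grid sizes, and time windows here are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def spatiotemporal_map(probes, grid_shape=(16, 16), screen=(1920, 1080),
                       time_bins=(0, 80, 160, 240)):
    """Aggregate probe-evoked pupil responses into attention maps per time window.

    probes: iterable of (x_px, y_px, latency_ms, pupil_response) tuples.
    grid_shape, screen, time_bins: illustrative assumptions, not the paper's values.
    Returns an array of shape (n_windows, grid_h, grid_w) holding the mean
    pupil response per grid cell, with NaN where a cell received no probes.
    """
    n_win = len(time_bins) - 1
    gh, gw = grid_shape
    sums = np.zeros((n_win, gh, gw))
    counts = np.zeros_like(sums)
    for x, y, t, r in probes:
        # Assign the probe to a latency window; discard out-of-range latencies.
        w = np.searchsorted(time_bins, t, side="right") - 1
        if not (0 <= w < n_win):
            continue
        # Map pixel coordinates onto the coarse spatial grid.
        gx = min(int(x / screen[0] * gw), gw - 1)
        gy = min(int(y / screen[1] * gh), gh - 1)
        sums[w, gy, gx] += r
        counts[w, gy, gx] += 1
    # Mean response per cell; cells without probes become NaN.
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

Maps built this way, one per latency window, are the kind of object on which jump-model versus zoom-in-model behavior could then be compared across conditions.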