
Predicting Human Visuomotor Behavior in a Driving Task (#359)

Dana Ballard 1, Mary Hayhoe 1, Leif Johnson 1
  1. University of Texas at Austin, Austin, Texas, United States

The sequential deployment of gaze to regions of interest is an integral part of human visual function. Owing to its central importance, decades of research have focused on predicting gaze locations, but there has been relatively little formal attempt to predict the temporal aspects of gaze deployment in natural multi-tasking situations. We study this problem by decomposing complex driving behaviour into individual task modules, each requiring an independent source of visual information for control, in order to model human gaze deployment on different task-relevant objects. This setting allows moment-by-moment gaze selection to be characterized as a competition between active modules. To mediate this competition, we introduce a multi-particle barrier model for gaze selection that combines two key elements for each module: a priority parameter representing the module's task importance, and a noise estimate representing the module's uncertainty about its task-relevant visual state information. The net effect is that each module attempts to reduce the reward-weighted uncertainty in its visual state information. Comparisons with human gaze data gathered in the virtual driving environment show that the model is capable of steering the vehicle and that its gaze selection statistics closely approximate human performance.
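To make the competition idea concrete, the following is a minimal illustrative sketch, not the authors' model: it assumes each task module's state uncertainty grows at a module-specific noise rate while unattended, is reset by a fixation, and gaze is allocated by a softmax competition on reward-weighted (priority times uncertainty) scores. All names, parameters, and dynamics (`Module`, `priority`, `noise`, the specific module labels, the softmax rule) are hypothetical placeholders for the barrier mechanism described in the abstract.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

class Module:
    """One task module competing for gaze (illustrative only)."""
    def __init__(self, name, priority, noise):
        self.name = name          # e.g. "lead car", "lane position"
        self.priority = priority  # reward weight: relative task importance
        self.noise = noise        # growth rate of state uncertainty per step
        self.uncertainty = 0.0    # current uncertainty about task-relevant state

    def step(self, fixated):
        if fixated:
            # A fixation delivers fresh visual information: uncertainty collapses.
            self.uncertainty = 0.0
        else:
            # Without fixation, uncertainty accumulates at the module's noise rate.
            self.uncertainty += self.noise

def select_gaze(modules, temperature=1.0):
    """Softmax competition on reward-weighted uncertainty (assumed selection rule)."""
    scores = np.array([m.priority * m.uncertainty for m in modules])
    probs = np.exp(scores / temperature)
    probs /= probs.sum()
    return rng.choice(len(modules), p=probs)

# Hypothetical driving-task modules with illustrative parameter values.
modules = [Module("lead car", priority=1.0, noise=0.3),
           Module("lane position", priority=0.7, noise=0.5),
           Module("speedometer", priority=0.4, noise=0.2)]

fixations = []
for t in range(200):
    winner = select_gaze(modules)
    fixations.append(modules[winner].name)
    for i, m in enumerate(modules):
        m.step(fixated=(i == winner))

# The resulting fixation counts play the role of gaze-selection statistics
# that could be compared against human gaze allocation.
print(Counter(fixations))
```

In this sketch, a module with high priority or fast-growing uncertainty wins gaze more often, which is the qualitative behaviour the abstract attributes to reward-weighted uncertainty reduction; the actual multi-particle barrier model in the cited papers differs in its specific dynamics and selection mechanism.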

  1. Johnson, L., Sullivan, B., Hayhoe, M. and Ballard, D. (2014) Predicting human visuomotor behaviour in a driving task. Phil. Trans. R. Soc., 369, 20130044.
  2. Sullivan, B. T., Johnson, L., Rothkopf, C. A., Ballard, D. H. and Hayhoe, M. M. (2012) The role of uncertainty and reward on eye movements in a virtual driving task. Journal of Vision, 12(13):19, 1-16.