Quantifying the Impacts of Situational Visual Clutter on Driving Performance Using Video Analysis and Eye Tracking
Title: Quantifying the Impacts of Situational Visual Clutter on Driving Performance Using Video Analysis and Eye Tracking

Identifier: https://doi.org/10.7910/DVN/5BRV9B

Creator: Ai, Chengbo; Hou, Qing; Knodler Jr., Michael; Tainter, Francis
Publisher: Harvard Dataverse
Description: The challenges in investigating situational visual clutter stem from its complex composition of contributors (e.g., the vehicle itself, other road users, and road infrastructure) and its dynamically changing nature (e.g., the dashboard display, traffic conditions, the appearance of surrounding vehicles, and changing road and roadside landscapes). Although the psychology and cognitive science communities have studied situational visual clutter, little effort has been devoted to studying it in the driving context. The proposed study aims to bridge this gap. Its objective is threefold: 1) to develop a new video analysis model that quantifies the complex and dynamic driving scene; 2) to employ the developed model to quantify the impact of situational visual clutter on driving performance; and 3) to demonstrate the potential of using driving scene quantification to support other retrospective studies and data mining with existing driving simulation data.
Subject: Engineering
Contributor: Heiden, Jacob
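
Note: The description above does not disclose the authors' actual video analysis model. For readers who want a rough sense of what "quantifying the driving scene" could look like in practice, the minimal sketch below scores each frame of a driving video with a simple edge-density proxy for visual clutter. The approach, the OpenCV-based implementation, and the function names (frame_clutter_proxy, video_clutter_series) are illustrative assumptions only, not the method used to produce this dataset.

    # Illustrative sketch only: this is NOT the authors' model. It assumes OpenCV
    # (cv2) and NumPy are available and uses per-frame edge density as a crude
    # proxy for situational visual clutter.
    import cv2
    import numpy as np

    def frame_clutter_proxy(frame_bgr: np.ndarray) -> float:
        """Return the fraction of edge pixels in a frame (0..1) as a crude clutter score."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, threshold1=100, threshold2=200)
        return float(np.count_nonzero(edges)) / edges.size

    def video_clutter_series(video_path: str) -> list[float]:
        """Score every frame of a driving-scene video; higher values suggest more clutter."""
        cap = cv2.VideoCapture(video_path)
        scores = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            scores.append(frame_clutter_proxy(frame))
        cap.release()
        return scores

Such a per-frame series could, in principle, be aligned with eye-tracking and driving-performance measures by timestamp, but any actual analysis should follow the model described in the study itself.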