Monday, June 18, 2012

Visual Interaction Situational Awareness Modifiers - Unmanned Operation

I recently completed a Ph.D. research study examining and comparing the effects of dynamic visual interaction and conventional static interaction on unmanned operator Situational Awareness (SA). The study was based on the premise that the amount of visual information available can affect an operator's SA, and that increasing visual information through dynamic eyepoint manipulation (i.e., a moveable camera) may result in higher SA than static visualization (i.e., a body-fixed camera). Using a simple simulated environment, four experimental dynamic visual interaction methods (analog joystick, head tracker, uninterrupted hat/point-of-view switch, and incremental hat/point-of-view switch) were examined and compared against a single static method (the control treatment), as described below:

Unmanned Operator Station (Simulation) - The simulated operator station was a custom-developed system (hardware and software) used in experimental testing to capture each participant's SA. The appearance and function of the system approximated the look, feel, and interaction of a generic ground control station (GCS).

Static Visual Interaction - The focus of this method was on replicating current unmanned aircraft visual interaction functionality: the eyepoint remained fixed within the simulated environment, with its extent equal to the field of view (FOV) of the simulated camera (i.e., the operator could only observe the center of the simulated environment).

Analog Joystick Dynamic Visual Interaction - This method controlled the simulated eyepoint (camera) of the visual display using custom-developed software and a USB joystick, which captured and translated analog X- and Y-axis input into eyepoint movement in the visual simulation.
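
For illustration only (the study's custom software is not reproduced here), the axis-to-eyepoint translation might look roughly like the Python sketch below; the deadzone and pan-rate values are assumptions, not figures from the study:

    DEADZONE = 0.1         # ignore small stick drift near center (assumed value)
    PAN_RATE_PX_S = 200.0  # eyepoint speed at full deflection (assumed value)

    def joystick_to_eyepoint(eye_x, eye_y, axis_x, axis_y, dt):
        """Translate analog X/Y deflection (-1.0 to 1.0) into eyepoint
        movement, scaled by the elapsed frame time dt (seconds)."""
        if abs(axis_x) > DEADZONE:
            eye_x += axis_x * PAN_RATE_PX_S * dt
        if abs(axis_y) > DEADZONE:
            eye_y += axis_y * PAN_RATE_PX_S * dt
        return eye_x, eye_y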
 
Head Tracker Dynamic Visual Interaction - This method controlled the simulated eyepoint of the visual display using a custom-developed head tracker system (hardware and software), which captured and translated rotational head movements (pitch/yaw) into eyepoint movement in the visual simulation.
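
A comparable sketch for the head tracker, mapping tracked yaw/pitch angles directly to an eyepoint offset from the center of the simulated environment; the pixels-per-degree gain is hypothetical:

    PX_PER_DEG = 12.0  # assumed gain; the study's actual scaling is not given

    def head_to_eyepoint(yaw_deg, pitch_deg, center_x, center_y):
        """Translate rotational head movement (yaw/pitch, in degrees)
        into an absolute eyepoint position."""
        eye_x = center_x + yaw_deg * PX_PER_DEG
        eye_y = center_y - pitch_deg * PX_PER_DEG  # nose-up pitch raises the eyepoint
        return eye_x, eye_y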

Incremental Hat/Point of View (POV) Switch Dynamic Visual Interaction - This method controlled the simulated eyepoint of the visual display using the eight-directional hat/POV switch of a USB joystick and custom-developed software, which captured and translated switch input into incremental (up, down, left, and/or right) changes in the eyepoint position, based on the previous position and a predetermined increment rate (i.e., 50 pixels per second).
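
One possible reading of the incremental behavior, sketched the same way; the step size and tick period are assumptions chosen to yield the stated 50 pixels per second:

    STEP_PX = 10   # assumed step size per tick
    TICK_S = 0.2   # assumed tick period: 10 px / 0.2 s = 50 px/s

    def incremental_pov(eye_x, eye_y, hat_x, hat_y, time_since_step):
        """Apply one discrete step in the hat direction per tick while the
        eight-way switch is held (hat_x/hat_y are each -1, 0, or 1).
        The caller accumulates time_since_step += dt every frame."""
        if (hat_x or hat_y) and time_since_step >= TICK_S:
            eye_x += hat_x * STEP_PX
            eye_y -= hat_y * STEP_PX  # hat up (+1) raises the eyepoint
            time_since_step = 0.0
        return eye_x, eye_y, time_since_step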

Uninterrupted Hat/POV Switch Dynamic Visual Interaction - This method controlled the simulated eyepoint of the visual display using the eight-directional hat/POV switch of a USB joystick and custom-developed software, which captured and translated switch input into sweeping (i.e., uninterrupted) changes in the eyepoint position.
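
And the uninterrupted variant, where the eyepoint sweeps smoothly every frame for as long as the switch is deflected; the sweep rate is an assumed value, since the post does not give one:

    SWEEP_RATE_PX_S = 150.0  # assumed sweep speed

    def uninterrupted_pov(eye_x, eye_y, hat_x, hat_y, dt):
        """Sweep the eyepoint continuously (no discrete steps) in the hat
        direction, scaled by the elapsed frame time dt (seconds)."""
        eye_x += hat_x * SWEEP_RATE_PX_S * dt
        eye_y -= hat_y * SWEEP_RATE_PX_S * dt
        return eye_x, eye_y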

These five methods were used in experimental testing with 150 participants (N = 150; n = 30 per treatment) to determine whether the use of a dynamic eyepoint significantly increased a user's SA score (0 to 100%) within a stationary egocentric environment. The results (see the following graph) indicated that employing dynamic eyepoint control can reduce the occurrence or consequences of the soda straw effect.
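
The post doesn't detail the statistical analysis, but for a five-treatment, between-subjects design like this one, a one-way ANOVA across the groups would be a standard first test. The sketch below uses randomly generated placeholder scores, not the actual study data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Placeholder data only: 30 simulated SA scores (0-100%) per treatment.
    treatments = ["static", "joystick", "head_tracker",
                  "incremental_pov", "uninterrupted_pov"]
    scores = {t: rng.uniform(0, 100, 30) for t in treatments}

    f_stat, p_value = stats.f_oneway(*scores.values())
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < .05 -> group means differ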

I'm currently exploring options for follow-up research to determine whether the effects of dynamic eyepoint manipulation hold in a dynamic setting (i.e., an aircraft in flight, landing, takeoff, or target engagement).

If you are interested in potential collaboration or have any questions, please feel free to read this post, download my dissertation defense presentation or follow-on research proposal, or contact me.
