
Center for Visual and Cognitive Neuroscience Facilities

Driving Simulator Core

The Driving Simulator Core (DSC) provides the faculty and students of the CVCN, NDSU as a whole, visual and cognitive neuroscience researchers in the region, and COBRE investigators in other IDeA states with access to a state-of-the-art driving simulator facility.
The DSC is based on a DriveSafety DS-600c research driving simulator. The DS-600c includes a realistic vehicle cab consisting of driver and passenger seats, a center console, a fully instrumented dashboard, a rearview mirror display, and controls for steering, braking, and acceleration. The cab is mounted on a motion platform that simulates the motions associated with driving. The simulated driving environment is projected onto five 65" LED/LCD screens that together provide a field of view exceeding 180°, and is also displayed on three mini-LCD screens mounted on the rearview and side mirrors. The design of the simulator permits the collection of several performance measures. Response buttons on the steering wheel and center console facilitate the collection of data relevant to psychological research, such as speeded responses to events in the simulated environment. Additionally, the state of all vehicle controls and instruments is sampled at a rate of 60 Hz during simulations, enabling detailed analysis of driving-related behaviors. Finally, a FaceLAB eyetracker (from our Eyetracking Core), mounted on the dashboard, is used to compute point of gaze, both within and outside the cab, as participants drive the simulated vehicle. The simulation is computer controlled using DriveSafety's Vection software, which enables real-time data collection and updating of the displays. DriveSafety's HyperDrive Authoring Suite is used to model complex driving scenarios and to precisely control the presentation of stimuli within the simulated environment. CVCN researchers have full control over the environment, objects, and events that might influence the driver's behavior.
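The 60 Hz sampling of vehicle controls makes analyses such as braking reaction time straightforward: a stimulus event is tied to a sample frame, and the delay to the first above-threshold brake sample is converted to seconds. The sketch below illustrates this under assumed field names; it is not the actual Vection export format.

```python
# Minimal sketch: estimating brake reaction time from a 60 Hz
# simulator log. The record layout ("brake" as pedal depression in
# 0..1) is a hypothetical assumption for illustration.

SAMPLE_RATE_HZ = 60

def brake_reaction_time(samples, event_frame, threshold=0.1):
    """Seconds from a stimulus event to the first frame where brake
    pedal depression exceeds `threshold`; None if it never does."""
    for i in range(event_frame, len(samples)):
        if samples[i]["brake"] > threshold:
            return (i - event_frame) / SAMPLE_RATE_HZ
    return None

# Event at frame 0; brake first exceeds threshold at frame 30.
log = [{"brake": 0.0}] * 30 + [{"brake": 0.4}] * 30
print(brake_reaction_time(log, 0))  # 0.5
```

At 60 Hz each frame spans about 16.7 ms, which bounds the timing precision of any response measure derived from the control-state log.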

EEG Core

The High-Density Electroencephalography (EEG)/Neurostimulation Core (EEGC) provides the faculty and students of the CVCN, NDSU as a whole, visual and cognitive neuroscience researchers in the region, and COBRE investigators in other IDeA states with access to a state-of-the-art high-density EEG and neurostimulation (Transcranial Magnetic Stimulation) facility. The adjective "high-density," as applied to our EEG recording capability, refers to the fact that as the number of electrodes increases, the scalp voltage topography is sampled at higher density. Because the skull "blurs" the scalp voltage distribution, there is a limit to the sampling density that can usefully be realized; current standards regard 100-200 scalp electrodes as "high-density" recording. By applying techniques borrowed from geophysics, where the depth and location of earthquake hypocenters can be inferred from the seismic activity recorded at numerous surface sensors, cognitive neuroscientists can now make accurate inferences concerning the intracranial location of the generators of the voltages recorded at the scalp. This is an extremely powerful tool in the arsenal of neuroscience, especially since the temporal resolution of the EEG signal is on the order of 10⁻³ seconds, giving it 3-4 orders of magnitude greater temporal resolution than competing neuroimaging technologies such as functional magnetic resonance imaging (fMRI), positron emission tomography (PET), or functional near-infrared spectroscopy (fNIRS).
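The orders-of-magnitude comparison can be checked with simple arithmetic. The sketch below uses nominal values (EEG resolution on the order of 1 ms, fMRI volume acquisition every 1-2 s) rather than measured specifications of any particular instrument.

```python
import math

# Back-of-envelope check of the temporal-resolution comparison:
# EEG resolves activity on the order of milliseconds, while an fMRI
# volume is typically acquired every 1-2 seconds. Values here are
# nominal assumptions for illustration.
eeg_resolution_s = 1e-3
fmri_resolution_s = 2.0

advantage = math.log10(fmri_resolution_s / eeg_resolution_s)
print(f"EEG is ~{advantage:.1f} orders of magnitude faster")
```

With a 2 s fMRI volume time the ratio is a little over three orders of magnitude, consistent with the 3-4 range cited above.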

Electro-Optical Instrumentation Core

The Electro-Optical Instrumentation Core (EOIC) consists of a collection of instruments required for the evaluation, design, and/or fabrication of custom electronic and mechanical devices, and/or for the routine measurement and calibration of visual, auditory, and haptic displays, and response collection devices. It is a resource to the faculty and students of the CVCN, to NDSU as a whole, to visual and cognitive neuroscience researchers in the region, and potentially to COBRE investigators in other IDeA states. The total value of the instruments comprising the EOIC is $144K.

Eyetracking Core

The Eyetracking Core (ETC) is a collection of eyetracking devices that facilitates the conduct of cognitive, psychophysical, and electrophysiological research. These include:

(1) Head-mounted high-speed video eyetracker (Eyelink II), allowing unrestricted head movement and the collection of point-of-regard data from human subjects during experiments on visual attention and perception. The Eyelink II samples eye position at a rate of 500 Hz, with a resolution of 0.025° and an average error of less than 0.5°.

(2) Eyelink 1000 (Tower) video-based eyetrackers with a sampling rate of up to 1000 Hz. These are cutting-edge instruments that perform real-time eyetracking with exceptional spatial resolution. The Eyelink 1000 Tower mount incorporates the camera and illuminator housing within a combined chin and forehead rest, using an infrared-reflective mirror. A display computer is equipped with applications for stimulus display, data-file viewing, and the creation of experimental paradigms; a host computer controls the operation of the eyetrackers themselves. The Eyelink 1000 has a resolution of 0.01° and an average error of less than 0.5°.

(1) Eyelink 1000 (Remote) video-based eyetracker with a sampling rate of up to 1000 Hz. This is a cutting-edge instrument that performs real-time eyetracking with exceptional spatial resolution. A display computer is equipped with applications for stimulus display, data-file viewing, and the creation of experimental paradigms; a host computer controls the operation of the eyetracker itself. The Eyelink 1000 has a resolution of 0.01° and an average error of less than 0.5°.

(3) Applied Science Laboratories (Eye-Trac6 Series and 5000 Series) remote video eyetrackers, designed specifically for use in situations where the stimulus presented to the subject is restricted to a single surface such as a computer or video monitor, and where the use of a head-mount or chin-rest is undesirable, such as in conjunction with EEG recording. The system allows the subject approximately one square foot of head movement, which eliminates the need for head restraint. The Eye-Trac6 series has 0.25° resolution with a visual range of 50° horizontal and 40° vertical, with sampling rates up to 240 Hz.

(1) Skalar IRIS IR Limbus Eye Tracker is designed specifically for oculomotor research and can record vertical or horizontal movements of each eye and provide analog output at a rate up to 200 Hz. This system is especially valuable for eye tracking experiments in complete darkness where the effects of auditory or vestibular input may be separated from the effect of visual inputs.

(1) FaceLAB (Seeing Machines, Inc.) eyetracker is installed in the DS-600c driving simulator. This device tracks head and eye movements at a rate of 60 Hz, enabling the collection of point-of-regard data while also allowing considerable freedom of movement (head rotation up to 90° is permissible, as is translation up to 13" horizontally and 9" vertically). The FaceLAB eyetracker has an average gaze position error of less than 1°.

(1) The Tobii X120 eye tracker is currently installed in conjunction with the eLumens presentation dome. This binocular eye-tracking device measures the movements of both eyes at a rate of 120 Hz, and can automatically switch between "dark-pupil" and "light-pupil" tracking, giving it the flexibility to work optimally with a wide variety of observers. The system has an ideal accuracy of 0.4° and an ideal precision of 0.16°, at gaze angles of up to 30°. Tobii's binocular tracking is ideal for recording the observer's point of regard in three-dimensional space.

(1) LC Eyegaze Edge EyeFollower eyetracker. The EyeFollower provides users with a completely unobtrusive and natural environment when operating the system. It has three main distinguishing features: it offers the largest range of free head movement of any remote eye tracker on the market; it operates on the largest portion of the human population, accommodating human eye variations (it is usable by approximately 90% of people and works with most eyeglasses and contact lenses); and it simultaneously achieves the highest level of accuracy (<0.4°). Its autofocusing gimbal technology yields 0.104 cubic meters of full binocular tracking volume: 30 inches side to side, 20 inches up and down, and 24 inches front to back.

Immersive Virtual Reality Core (IVRC)

The Immersive Virtual Reality Core (IVRC) provides the faculty and students of the CVCN, NDSU as a whole, visual and cognitive neuroscience researchers in the region, and COBRE investigators in other IDeA states with access to a fully staffed, state-of-the-art immersive virtual reality (VR) facility with which to study social, cognitive, and visual processing, as well as audiovisual and visuohaptic multisensory integration. The IVRC comprises a number of interfaceable instruments:

(1) Real-time high-resolution VR head-mounted display (HMD) system (NVIS nVisor SX60). The nVisor SX60 is a high-resolution (1280 x 1024 pixel) stereoscopic display with a 44° (H) x 34° (V) monocular field of view (FOV), and a total horizontal FOV of 44° (60° diagonal). This HMD is position and orientation tracked using a Precision Position Tracking (PPT E) optical motion tracking system by WorldViz, Inc.

(1) Real-time high-resolution HMD VR display system (NVIS nVisor SX111). The nVisor SX111 is a high-resolution (1280 x 1024 pixel) stereoscopic display with a 76° (H) x 64° (V) monocular field of view (FOV), and a total horizontal FOV of 102° (111° diagonal). This HMD is position and orientation tracked using a Precision Position Tracking (PPT E) optical motion tracking system by WorldViz, Inc. The head-mounted immersive VR system has already proved invaluable for conducting studies of brightness and lightness perception and of cognitive control (a task-switching study).

3D-rendered visual environments are updated at a 60 Hz frame rate based on observer head orientation (pitch, roll, and yaw, as registered by an inertial sensor). A camera-based optical position tracking system (WorldViz, Inc.) updates the VR environment based on the head position (X, Y, Z) of an ambulatory observer within a region of space measuring 170 ft². Visual environments rendered and presented in HMD-delivered VR can be co-registered with up to 32 individual headphone-delivered spatialized auditory stimuli using GoldMiner (AuSIM, Inc.), a state-of-the-art audio signal processing and simulation unit. This system allows the placement, movement, and onset/offset of virtual sound stimuli that would be impractical (even impossible) to realize in physical space. 3D soundscapes can thus be integrated with 3D visual environments to allow precise auditory and visual stimulus control to study, for example, auditory-visual multisensory integration.
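The per-frame update described above amounts to re-expressing the virtual scene in head-relative coordinates from the tracked head pose. The sketch below illustrates the geometry for position plus yaw only; the function and variable names are illustrative assumptions, not the WorldViz or rendering-engine API.

```python
import math

# Illustrative sketch of the 60 Hz head-tracked update: combine head
# position (X, Y, Z) from the optical tracker with orientation (here
# just yaw, for brevity) from the inertial sensor to express a world
# point in head-relative coordinates.

def world_to_head(point, head_pos, head_yaw_rad):
    """Translate by -head_pos, then rotate by -yaw about the Y axis."""
    x = point[0] - head_pos[0]
    y = point[1] - head_pos[1]
    z = point[2] - head_pos[2]
    c, s = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
    return (c * x + s * z, y, -s * x + c * z)

# A point 1 m ahead in world space, seen by an observer at the origin
# who has turned 90° about the vertical axis: the point ends up to
# the observer's side rather than straight ahead.
print(world_to_head((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), math.pi / 2))
```

In practice the full pipeline uses all three rotation axes (pitch, roll, yaw) and is recomputed every frame, so any head motion immediately changes what the HMD displays.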

(1) Real-time medium-resolution HMD VR display system (Oculus Rift). The Oculus Rift is a medium-resolution (640 x 800 pixel) stereoscopic display with a 90° (H) and 100° (V) monocular field of view (FOV), and a total horizontal FOV of X (110° diagonal). The Oculus Rift is an inexpensive ($350) HMD whose design stands to revolutionize the use of IVR in human society. The potential uses of IVR for research, industry, therapy, rehabilitation, and recreation are nearly limitless. The CVCN IVRC will allow researchers to be on the forefront of these advances.

(2) Orientation- and position-tracked 18-sensor CyberGloves II (left and right hands). Hand pose is registered by 18 goniometers which record the joint angles of the fingers and palm. Hand orientation (pitch, roll, yaw) is measured by an inertial sensor, and hand position (X, Y, Z) is tracked by the camera-based optical position tracking system (WorldViz, Inc.). Actions of the hands can thus be rendered in VR to produce remarkably lifelike avatar hands which can also be programmed to interact with objects in the virtual environment. The avatar hands allow the design and conduct of experiments in which precisely calibrated visual stimuli can be delivered in any position relative to the hand. The response to such visual stimuli (either as thresholds, or suprathreshold responses such as magnitude estimates or response times) will be highly diagnostic of the neural systems which govern visuohaptic multisensory integration.
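The per-frame hand state described above combines three sensor streams: 18 joint angles from the glove's goniometers, orientation from the inertial sensor, and position from the optical tracker. The structure below is an assumption for exposition only, not the CyberGlove SDK's data format.

```python
from dataclasses import dataclass

# Hypothetical sketch of one frame of tracked-hand data, as described
# in the text: 18 goniometer joint angles, plus hand orientation and
# hand position from the other two tracking systems.

@dataclass
class HandFrame:
    joint_angles_deg: list   # 18 finger/palm joint angles (degrees)
    orientation: tuple       # (pitch, roll, yaw) from inertial sensor
    position_m: tuple        # (X, Y, Z) from optical tracker (meters)

    def is_valid(self):
        # A complete frame carries exactly 18 joint angles.
        return len(self.joint_angles_deg) == 18

frame = HandFrame([0.0] * 18, (0.0, 0.0, 0.0), (0.1, 1.2, 0.3))
print(frame.is_valid())  # True
```

Driving an avatar hand then reduces to applying each frame's joint angles to the hand model's skeleton and placing the model at the tracked position and orientation.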

(1) VisionStation (eLumens, Inc.). This is a large-format (1.5 m diameter) hemispherical projection system. To an appropriately positioned observer, images projected by the eLumens display subtend 190°, and thus fill the entire field of view. This wide field of view enables the study of visual interactions across the totality of visual space (including the ability to present ganzfeld stimulation) and permits the study of visual processing at extreme visual eccentricities. Researchers interested in the potency of peripheral (exogenous) stimuli to capture spatial attention using variants of the Posner cueing paradigm are currently employing the eLumens portion of the IVR core facility to present truly "peripheral" (50° eccentricity or more), as opposed to merely perifoveal (15°), cues, in order to determine more precisely whether peripheral visual processing (and attentional resources at such large eccentricities) differs in quantity or quality from what has heretofore been studied.