Document Detail

MPI CyberMotion Simulator: implementation of a novel motion simulator to investigate multisensory path integration in three dimensions.
MedLine Citation:
PMID:  22617497     Owner:  NLM     Status:  MEDLINE    
Abstract/OtherAbstract:
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can perform path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues, particularly kinaesthetic ones, seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane than in the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes, respectively, suggests that the neural representation of self-motion through space is asymmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
Authors:
Michael Barnett-Cowan; Tobias Meilinger; Manuel Vidal; Harald Teufel; Heinrich H Bülthoff
Publication Detail:
Type:  Journal Article; Research Support, Non-U.S. Gov't; Video-Audio Media     Date:  2012-05-10
Journal Detail:
Title:  Journal of visualized experiments : JoVE     Volume:  -     ISSN:  1940-087X     ISO Abbreviation:  J Vis Exp     Publication Date:  2012  
Date Detail:
Created Date:  2012-05-23     Completed Date:  2012-08-03     Revised Date:  2013-06-24    
Medline Journal Info:
Nlm Unique ID:  101313252     Medline TA:  J Vis Exp     Country:  United States    
Other Details:
Languages:  eng     Pagination:  e3436     Citation Subset:  IM    
Affiliation:
Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics. mbarnettcowan@gmail.com
MeSH Terms
Descriptor/Qualifier:
Computer Simulation
Humans
Male
Motion
Motion Perception / physiology*
Orientation / physiology
Proprioception / physiology
Robotics / instrumentation,  methods*
Space Perception / physiology
Comments/Corrections

From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine

Full Text
Journal Information
Journal ID (nlm-ta): J Vis Exp
Journal ID (iso-abbrev): J Vis Exp
Journal ID (publisher-id): JoVE
ISSN: 1940-087X
Publisher: MyJove Corporation
Article Information
Copyright © 2012, Journal of Visualized Experiments
open-access:
collection publication date: Year: 2012
Electronic publication date: Day: 10 Month: 5 Year: 2012
pmc-release publication date: Day: 10 Month: 5 Year: 2012
Issue: 63
E-location ID: 3436
ID: 3468186
PubMed Id: 22617497
Publisher Id: 3436
DOI: 10.3791/3436

MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
Michael Barnett-Cowan (1)
Tobias Meilinger (1)
Manuel Vidal (2)
Harald Teufel (1)
Heinrich H. Bülthoff (1,3)
(1) Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics
(2) Laboratoire de Physiologie de la Perception et de l'Action, Collège de France - CNRS
(3) Department of Brain and Cognitive Engineering, Korea University
Correspondence to: Michael Barnett-Cowan at mbarnettcowan@gmail.com, Heinrich H. Bülthoff at hhb@tuebingen.mpg.de


Protocol
1. KUKA Roboter GmbH
  1. The MPI CyberMotion Simulator consists of a six-joint serial robot in a 3-2-1 configuration (Figure 1). It is based on the commercial KUKA Robocoaster (a modified KR-500 industrial robot with a 500 kg payload). The physical modifications and the software control structure needed for a flexible and safe experimental setup have been described previously, including the motion simulator's velocity and acceleration limitations and the delays and transfer function of the system (9). Modifications to this previous setup are described below.

Figure 1. Graphical representation of the current MPI CyberMotion Simulator work space.

  2. Complex motion profiles that combine lateral movements with rotations are possible with the MPI CyberMotion Simulator. Axes 1, 4 and 6 can rotate continuously, while pairs of hardware end-stops limit Axes 2, 3 and 5 in both directions. The maximum range of linear movement depends strongly on the position from which the movement begins. The current hardware end-stops of the MPI CyberMotion Simulator are shown in Table 1; a sketch for checking trajectories against these limits follows the table.

Table 1. Current technical specifications of the MPI CyberMotion Simulator.
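
As a practical illustration of these constraints, the following minimal sketch (not part of the authors' software) checks a sampled joint-space trajectory against the axis ranges and maximum velocities listed in Table 1, using the 12 ms control cycle described in step 5.

import numpy as np

# Axis limits from Table 1: range [deg] (None = continuous rotation) and
# maximum velocity [deg/s].
AXIS_LIMITS = {
    1: {"range": None,          "max_vel": 69.0},
    2: {"range": (-128., -48.), "max_vel": 57.0},
    3: {"range": (-45., 92.),   "max_vel": 69.0},
    4: {"range": None,          "max_vel": 76.0},
    5: {"range": (-58., 58.),   "max_vel": 76.0},
    6: {"range": None,          "max_vel": 120.0},
}

def validate_trajectory(joint_angles_deg, dt=0.012):
    """joint_angles_deg: (n_samples, 6) array of axis angles in degrees;
    dt: sample period in seconds (12 ms, the KUKA control system's internal
    rate). Returns a list of human-readable limit violations."""
    violations = []
    velocities = np.diff(joint_angles_deg, axis=0) / dt  # deg/s per axis
    for axis in range(6):
        lim = AXIS_LIMITS[axis + 1]
        if lim["range"] is not None:
            lo, hi = lim["range"]
            if (joint_angles_deg[:, axis].min() < lo or
                    joint_angles_deg[:, axis].max() > hi):
                violations.append(f"Axis {axis + 1}: position outside {lo}..{hi} deg")
        if np.abs(velocities[:, axis]).max() > lim["max_vel"]:
            violations.append(f"Axis {axis + 1}: velocity exceeds {lim['max_vel']} deg/s")
    return violations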

  3. Before any experiment is performed on the MPI CyberMotion Simulator, each experimental motion trajectory undergoes a testing phase on a KUKA simulation PC ("Office PC"). The Office PC is a product sold by KUKA that simulates the real robot arm and runs the same operating system and control screen layout as the real robot. A schematic overview of the control system of the MPI CyberMotion Simulator in an open-loop configuration is shown in Figure 2.

Figure 2. Schematic overview of the open-loop control system of the MPI CyberMotion Simulator.

  4. The details of the control structure can be found in (9). In brief, for an open-loop configuration such as that used in the current experiment, trajectories are pre-programmed by converting input trajectories in Cartesian coordinates to joint-space angles through inverse kinematics (Figure 2).
  5. The MPI control system reads in these desired joint angle increments and sends them to the KUKA control system, which performs the axis movements via motor currents. Joint resolver values are returned to the KUKA control system, which determines the current joint angle positions at an internal rate of 12 ms. Each cycle triggers the MPI control system to read the next joint increment from file and to write the current joint angle positions to disk. Communication between the MPI and KUKA control systems takes place over an Ethernet connection using the KUKA-RSI protocol (a simplified streaming sketch follows at the end of this section).
  6. A racecar seat (RECARO Pole Position) equipped with a 5-point safety belt system (Schroth) is attached to a chassis that includes a footrest. The chassis is mounted to the flange of the robot arm (Figure 3a). Experiments are also possible with participants seated within an enclosed cabin (Figure 3b).

Figure 3. MPI CyberMotion Simulator setup. a) Configuration for current experiment with LCD display. b) Configuration for experiments requiring an enclosed cabin with front projection stereo display. c) Front projection mono display. d) Head mounted display.

  7. As the experiment is performed in darkness, infrared cameras allow visual monitoring from the control room.
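
The following is a simplified sketch of the open-loop streaming described in steps 4 and 5: it reads pre-computed joint-angle increments from a file and sends one packet per 12 ms cycle while logging the integrated joint angles to disk. The UDP payload format, file names, and network address here are hypothetical; the real system uses the KUKA-RSI protocol (not reproduced here), and in the real system each KUKA cycle, not the sending PC, triggers the next increment.

import socket
import struct
import time

import numpy as np

CYCLE_S = 0.012                      # 12 ms internal rate of the KUKA control system
ROBOT_ADDR = ("192.168.0.1", 49152)  # hypothetical address of the robot controller

def stream_increments(path="trajectory_increments.txt", log_path="joint_log.txt"):
    increments = np.loadtxt(path)    # (n_cycles, 6) joint-angle increments [deg]
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    current = np.zeros(6)            # integrated joint angles, for logging
    with open(log_path, "w") as log:
        next_tick = time.monotonic()
        for inc in increments:
            # Hypothetical payload: six little-endian doubles per cycle.
            sock.sendto(struct.pack("<6d", *inc), ROBOT_ADDR)
            current += inc
            log.write(" ".join(f"{a:.6f}" for a in current) + "\n")
            next_tick += CYCLE_S     # hold a fixed 12 ms cadence
            time.sleep(max(0.0, next_tick - time.monotonic()))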
2. Visualization
  1. Multiple visualization configurations are possible with the MPI CyberMotion Simulator, including LCD, stereo or mono front projection, and head-mounted displays (Figure 3). For the current experiment, visual cues to self-motion were provided by an LCD display (Figure 3a) placed 50 cm in front of the observers, who were otherwise tested in the dark.
  2. The visual presentation was generated using Virtools 4.1 software and consisted of a random, limited-lifetime dot field. A cuboid extending eight virtual units to the front, right, left, upwards and downwards from the participant's point of view (i.e., 16 × 16 × 8 units in size) was filled with 200,000 particles of equal size: white circles 0.02 units in diameter on a black background. The dots were randomly distributed across the space (homogeneous probability distribution within the space). Movement in virtual units was scaled 1:1 with physical motion (1 virtual unit = 1 physical meter).
  3. Each particle was shown for two seconds before vanishing and immediately reappearing at a random location within the space; thus half of the dots changed their position within any one-second interval. Only dots at distances between 0.085 and 4 units from the observer were displayed (corresponding visual angles: 13° and 0.3°, respectively).
  4. Movement within the dot field was synchronized with physical motion by receiving motion trajectories from the MPI control computer over an Ethernet connection using the UDP protocol. The average number of visible dots stayed constant for all movements. Because the dots were rendered at a fixed size, appearing smaller with distance, the display provided optic flow and motion parallax but no absolute size scale. A sketch of the dot-field generation follows below.
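
A minimal sketch of the dot-field logic described in steps 2-4 follows. The original implementation used Virtools 4.1, so this NumPy version is an illustrative reconstruction, not the authors' code: it spawns 200,000 dots uniformly in the 16 × 16 × 8 unit cuboid, respawns each dot after a 2 s lifetime, culls dots outside the 0.085-4 unit visibility range, and checks the reported visual angles.

import numpy as np

N_DOTS = 200_000
LIFETIME_S = 2.0
BOUNDS = np.array([[-8.0, 8.0],    # x: 8 units left/right of the observer
                   [-8.0, 8.0],    # y: 8 units down/up
                   [ 0.0, 8.0]])   # z: 8 units to the front
NEAR, FAR = 0.085, 4.0             # visible distance range [units]
DOT_DIAMETER = 0.02                # [units]

rng = np.random.default_rng(0)

def spawn(n):
    """Uniformly distribute n dots within the cuboid."""
    return rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(n, 3))

positions = spawn(N_DOTS)
ages = rng.uniform(0.0, LIFETIME_S, N_DOTS)   # stagger initial lifetimes

def update(dt, observer_pos):
    """Advance dot ages, respawn expired dots, return the visible subset."""
    global positions, ages
    ages += dt
    expired = ages >= LIFETIME_S
    positions[expired] = spawn(expired.sum())
    ages[expired] -= LIFETIME_S
    dist = np.linalg.norm(positions - observer_pos, axis=1)
    return positions[(dist >= NEAR) & (dist <= FAR)]

# Sanity check on the reported visual angles: a 0.02-unit dot subtends
# 2*atan(0.01/0.085) ~ 13 deg at the near limit and ~0.3 deg at the far limit.
for d in (NEAR, FAR):
    print(f"{d} units -> {np.degrees(2 * np.arctan((DOT_DIAMETER / 2) / d)):.1f} deg")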
3. Experimental Design
  1. Sixteen participants, all naïve to the experiment except one author (MB-C), wore noise-cancelling headphones equipped with a microphone to allow two-way communication with the experimenter. Auditory noise was played continuously through the headphones to further mask the noise produced by the robot.
  2. Participants used a custom-built joystick equipped with response buttons; response data were transmitted over an Ethernet connection using the UDP protocol.
  3. The angle of the two movement segments was either 45° or 90°. Movements in the horizontal, sagittal and frontal planes consisted of: forward-rightward (FR) or rightward-forward (RF), downward-forward (DF) or forward-downward (FD), and downward-rightward (DR) or rightward-downward (RD) movements respectively (Figure 4a).

Figure 4. Procedure. a) Schematic representation of the trajectories used in the experiment. b) Sensory information provided for each trajectory type tested. c) Pointing task used to indicate the origin from which participants thought they had moved.

  4. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues (Figure 4b).
  5. Movement trajectories consisted of two segments (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration; Figure 4b). Trajectories consisted of translation only; participants were never rotated. To reduce possible interference from motion prior to each trial and to ensure that the vestibular system was tested from a steady state, a 15 s pause preceded each trajectory.
  6. Observers pointed back to their origin by moving an arrow superimposed on an avatar presented on the screen (Figure 4c). Movement of the arrow was constrained to the trajectory's plane and controlled by the joystick. The avatar was presented from frontal, sagittal and horizontal viewpoints, and observers could use any or all viewpoints to answer. The starting orientation of the arrow was randomized across trials.
  7. As the pointing task required participants to mentally transform their pointing perspective from an egocentric to an exocentric representation, participants were instructed before the practice and experimental trials on how to point back to their origin with reference to the avatar. They were told to point as if the avatar were their own body, and were then asked to point to physical targets relative to themselves using this exocentric technique. For example, pointing to the joystick resting on their lap, half-way between themselves and the screen, required pointing the arrow forward and down relative to the avatar. All participants were able to perform these tasks without expressing confusion.
  8. Each experimental condition was repeated three times and presented in random order. Signed error and response time were analyzed as dependent variables in two separate 3 (plane) × 2 (angle) × 3 (modality) repeated-measures ANOVAs; a sketch of the pointing-error geometry follows below. Response times from one extreme outlier participant were removed from the analysis.
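
The signed pointing error can be derived from the two-segment geometry described above. The sketch below is an assumed formulation, not the authors' analysis code: it computes the end position from the 0.4 m and 1 m segments, the true direction back to the origin within the movement plane, and the signed difference from a pointed direction. The sign convention is an illustrative choice.

import numpy as np

def signed_pointing_error(turn_deg, pointed_deg):
    """Both angles are expressed in the 2D movement plane. The first segment
    (0.4 m) defines the 0-deg axis; the second segment (1 m) departs from it
    by turn_deg (45 or 90). pointed_deg is the direction of the response
    arrow. Returns pointed_deg minus the true homing direction, wrapped to
    the interval (-180, 180]."""
    seg1 = 0.4 * np.array([1.0, 0.0])
    theta = np.radians(turn_deg)
    seg2 = 1.0 * np.array([np.cos(theta), np.sin(theta)])
    end = seg1 + seg2
    home = -end                                   # vector from end point back to origin
    true_deg = np.degrees(np.arctan2(home[1], home[0]))
    err = pointed_deg - true_deg
    return (err + 180.0) % 360.0 - 180.0          # wrap into (-180, 180]

# Example: after a 90-deg trajectory the origin lies at
# atan2(-1, -0.4) ~ -111.8 deg relative to the first segment's direction.
print(signed_pointing_error(90.0, pointed_deg=-120.0))  # -> about -8.2 deg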
4. Representative Results

Signed error results are collapsed across modalities and angles, as no significant main effects were found for these factors. Figure 5a shows the significant main effect of movement plane (F(2,30) = 7.0, p = 0.003): observers underestimated angle size (average signed error below 0°) for movement in the horizontal plane (-8.9°, s.e. 1.8). In the frontal plane observers were more likely on average to overestimate angle size (5.3°, s.e. 2.6), while there was no such bias in the sagittal plane (-0.7°, s.e. 3.7). While the main effects of angle and modality were not significant, angle interacted significantly with plane (F(2,30) = 11.1, p < 0.001): overestimates in the frontal plane were larger for movements through 45° (7.9°, s.e. 2.6) than through 90° (2.8°, s.e. 2.7), while such a discrepancy was absent for the other planes. In addition, modality interacted significantly with angle (F(2,30) = 4.7, p = 0.017): underestimates from vestibular information alone for movements through 90° were significantly larger (-4.3°, s.e. 2.1) than in the visual (-2.0°, s.e. 2.4) and combined vestibular and visual (2.3°, s.e. 2.2) conditions, while such discrepancies were absent for movements through 45°. No significant between-subjects effect was found for signed error (F(1,15) = 0.7, p = 0.432).

Figure 5b shows the response time results. There was a significant main effect of modality (F(2,28) = 22.6, p < 0.001): observers responded slowest when answering based on vestibular-kinaesthetic information alone (11.0 s, s.e. 1.0) compared to the visual (9.3 s, s.e. 0.8) and combined (9.0 s, s.e. 0.8) conditions. There was also a significant main effect of plane (F(2,28) = 7.5, p = 0.002): observers responded slowest when moved in the horizontal plane (10.4 s, s.e. 1.0) compared to the sagittal (9.4 s, s.e. 0.8) and frontal (9.4 s, s.e. 0.9) planes. There was no significant main effect of segment angle, nor any interactions. A significant between-subjects effect was found for response time (F(1,14) = 129.1, p < 0.001).
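
To illustrate the analysis described in step 8 of the Experimental Design, the sketch below runs a 3 × 2 × 3 repeated-measures ANOVA using statsmodels' AnovaRM on simulated placeholder data; the column names and the data itself are illustrative, not the authors' own.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
planes = ["horizontal", "sagittal", "frontal"]
angles = [45, 90]
modalities = ["visual", "vestibular", "combined"]

rows = []
for subject in range(16):                     # 16 participants
    for plane in planes:
        for angle in angles:
            for modality in modalities:
                for repetition in range(3):   # each condition repeated 3 times
                    rows.append({
                        "participant": subject,
                        "plane": plane,
                        "angle": angle,
                        "modality": modality,
                        "signed_error": rng.normal(0.0, 10.0),  # placeholder data
                    })
df = pd.DataFrame(rows)

# aggregate_func="mean" collapses the three repetitions per cell before the ANOVA.
result = AnovaRM(df, depvar="signed_error", subject="participant",
                 within=["plane", "angle", "modality"],
                 aggregate_func="mean").fit()
print(result)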

Figure 5. Results. a) Signed error collapsed across modality for the planes tested. b) Response time collapsed across movement planes for the modalities tested. Error bars are +/- 1 s.e.m.


Discussion

Path integration is well established as a means by which observers can determine their origin, but it is prone to underestimation of the angle moved through (5). Our results show this for translational movement, but only within the horizontal plane. In the vertical planes participants were more likely to overestimate the angle moved through, or showed no bias at all. These results may explain why estimates of elevation traversed over terrain tend to be exaggerated (10) and why spatial navigation between different floors of a building is poor (11). These results may also be related to known asymmetries in the relative proportion of saccule to utricle receptors (~0.58) (12). Slower response times based on vestibular-kinaesthetic information alone, compared to when visual information is present, suggest that there may be additional delays associated with determining one's origin from inertial cues alone, which may relate to recent studies showing that vestibular perception is slow compared to the other senses (13-16).

Overall, our results suggest that alternative strategies for determining one's origin may be used when moving vertically, which may relate to the fact that humans experience movement mostly within the horizontal plane. Further, while sequential translations are rarely experienced, they occur most often in the sagittal plane, where errors were minimal, such as when we walk toward and step onto an escalator. While post-experiment interviews did not reveal different strategies among the planes, future experiments should explore this possibility. Experiments with trajectories using additional degrees of freedom, longer paths, the body differently oriented relative to gravity, and the larger fields of view now possible with the MPI CyberMotion Simulator are planned to further investigate path integration performance in three dimensions.


Disclosures

No conflicts of interest declared.


Acknowledgements

MPI Postdoc stipends to MB-C and TM; Korean NRF (R31-2008-000-10008-0) to HHB. Thanks to Karl Beykirch, Michael Kerger & Joachim Tesch for technical assistance and scientific discussion.


References
1. Loomis, J.M., Klatzky, R.L., Golledge, R.G. Navigating without vision: basic and applied research. Optometry and Vision Science. 78, 282-289 (2001).
2. Vidal, M., Amorim, M.A., Berthoz, A. Navigating in a virtual three-dimensional maze: how do egocentric and allocentric reference frames interact? Cognitive Brain Research. 19, 244-258 (2004).
3. Vidal, M., Amorim, M.A., McIntyre, J., Berthoz, A. The perception of visually presented yaw and pitch turns: assessing the contribution of motion, static, and cognitive cues. Perception & Psychophysics. 68, 1338-1350 (2006).
4. Loomis, J.M., Klatzky, R.K., Philbeck, J.W., Golledge, R. Assessing auditory distance perception using perceptually directed action. Perception & Psychophysics. 60, 966-980 (1998).
5. Loomis, J.M., Klatzky, R.L., Golledge, R.G., Cicinelli, J.G., Pellegrino, J.W., Fry, P.A. Nonvisual navigation by blind and sighted: assessment of path integration ability. Journal of Experimental Psychology: General. 122, 73-91 (1993).
6. Bakker, N.H., Werkhoven, P.J., Passenier, P.O. The effects of proprioceptive and visual feedback on geographical orientation in virtual environments. Presence. 8, 36-53 (1999).
7. Kearns, M.J., Warren, W.H., Duchon, A.P., Tarr, M.J. Path integration from optic flow and body senses in a homing task. Perception. 31, 349-374 (2002).
8. Pollini, L., Innocenti, M., Petrone, A. Study of a novel motion platform for flight simulators using an anthropomorphic robot. Proceedings of the AIAA Modeling and Simulation Technologies Conference and Exhibit, Keystone, Colorado. AIAA 2006-6360 (2006).
9. Teufel, H.J., Nusseck, H.-G., Beykirch, K.A., Butler, J.S., Kerger, M., Bülthoff, H.H. MPI motion simulator: development and analysis of a novel motion simulator. Proceedings of the AIAA Modeling and Simulation Technologies Conference and Exhibit, Hilton Head, South Carolina. AIAA 2007-6476 (2007).
10. Gärling, T., Böök, A., Lindberg, E., Arce, C. Is elevation encoded in cognitive maps? Journal of Environmental Psychology. 10, 341-351 (1990).
11. Montello, D.R., Pick, H.L.J. Integrating knowledge of vertically aligned large-scale spaces. Environment and Behavior. 25, 457-483 (1993).
12. Correia, M.J., Hixson, W.C., Niven, J.I. On predictive equations for subjective judgments of vertical and horizon in a force field. Acta Oto-Laryngologica Supplementum. 230, 3 (1968).
13. Barnett-Cowan, M., Harris, L.R. Perceived timing of vestibular stimulation relative to touch, light and sound. Experimental Brain Research. 198, 221-231 (2009).
14. Barnett-Cowan, M., Harris, L.R. Temporal processing of active and passive head movement. Experimental Brain Research. 214, 27-35 (2011).
15. Sanders, M.C., Chang, N.N., Hiss, M.M., Uchanski, R.M., Hullar, T.E. Temporal binding of auditory and rotational stimuli. Experimental Brain Research. 210, 539-547 (2011).
16. Barnett-Cowan, M., Raeder, S.M., Bülthoff, H.H. Persistent perceptual delay for head movement onset relative to auditory stimuli of different duration and rise times. Experimental Brain Research. (2012), forthcoming.

Figures

Video: jove-63-3436.mov (supplemental data file).


Tables
Axis    Range [deg]      Max. velocity [deg/s]
1       continuous       69
2       -128 to -48      57
3       -45 to +92       69
4       continuous       76
5       -58 to +58       76
6       continuous       120


Article Categories:
  • Neuroscience

Keywords: Neuroscience, Issue 63, Motion simulator, multisensory integration, path integration, space perception, vestibular, vision, robotics, cybernetics.
