Interindividual Differences Influence Multisensory Processing During Spatial Navigation

Cuturi L. F. (second author)
2022-01-01

Abstract

When moving through space, we encode multiple sensory cues that guide our orientation in the environment. Integrating visual and self-motion cues is known to improve navigation, but spatial navigation may also benefit from multisensory external signals. The present study investigated whether humans combine auditory and visual landmarks to improve their navigation abilities. Two experiments with different cue reliabilities were conducted. In both, participants’ task was to return an object to its original location using landmarks, which could be visual-only, auditory-only, or audiovisual. We took the error and variability of the object relocation distance as measures of accuracy and precision. To quantify interference between cues and assess their weights, we ran a conflict condition with a spatial discrepancy between the visual and auditory landmarks. Results showed comparable accuracy and precision when navigating with visual-only and audiovisual landmarks, but greater error and variability with auditory-only landmarks. Splitting participants into two groups based on their unimodal cue weights revealed that only those who assigned similar weights to the auditory and visual cues showed a precision benefit in the audiovisual condition. These findings suggest that multisensory integration depends on idiosyncratic cue weighting. Future multisensory procedures to aid mobility should therefore account for individual differences in how landmarks are encoded.
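The cue weights and the predicted precision benefit discussed above are commonly interpreted against the standard maximum-likelihood (reliability-weighted) cue-integration model, in which each cue's weight is proportional to the inverse of its variance. A minimal sketch of that textbook model follows; the noise values are illustrative assumptions, not data from this study.

```python
def mle_integration(sigma_v, sigma_a):
    """Reliability-weighted combination of a visual and an auditory cue.

    Returns (visual weight, auditory weight, predicted combined std dev)
    under the standard maximum-likelihood integration model.
    """
    rv, ra = 1.0 / sigma_v**2, 1.0 / sigma_a**2  # reliabilities (inverse variances)
    wv, wa = rv / (rv + ra), ra / (rv + ra)      # normalized cue weights
    sigma_av = (1.0 / (rv + ra)) ** 0.5          # combined standard deviation
    return wv, wa, sigma_av

# Illustrative values: a visual cue twice as precise as the auditory one.
wv, wa, sigma_av = mle_integration(sigma_v=2.0, sigma_a=4.0)
```

Under this model the combined estimate is never less precise than the better single cue, which is one way to read the finding that only participants who weighted the two cues similarly showed an audiovisual precision benefit.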
Use this identifier to cite or link to this document: https://hdl.handle.net/11570/3252474