Multisensory Integration of Vision and Intracortical Microstimulation for Sensory Substitution and Augmentation

Maria C. Dadarlat 1,2,3, Joseph E. O’Doherty 1 & Philip N. Sabes 1,2
The performance of brain-machine interfaces (BMIs) will ultimately be limited by the quality and nature of the sensory feedback provided to the user. For example, in a recent comparison of BMI and natural-movement control of a visible cursor, BMI performance approached that of natural movements only when proprioceptive feedback was received from the intact arm, congruent with the cursor movement [1]. In real BMI applications, natural afferent sensory input is limited to vision, so there is widespread interest in developing techniques for providing artificial, supplemental feedback to replace proprioception. Here we present a novel approach to delivering supplemental sensory feedback via intracortical microstimulation (ICMS). We take advantage of the brain’s natural plasticity to implement a learning-based approach to artificial sensory feedback: a monkey is exposed to a novel ICMS input with a fixed but arbitrary mapping to a natural sensory signal, vision. We hypothesized that strict temporal congruency between these two signals would be sufficient for the monkey to learn to interpret and utilize the ICMS signal. This approach worked: we have demonstrated multisensory integration of vision with an arbitrary ICMS signal, leading to both sensory augmentation and sensory substitution during a visually guided reaching task.

Methods: One monkey was trained to perform an instructed-delay center-out reaching task to a hidden target, guided by a continuously updating error signal indicating the vector between the current hand position and the target (Fig. 1). The error vector is encoded by a random-dot visual flow field of variable coherence and/or a continuous multichannel ICMS signal (Fig. 1, bottom panel) delivered to the shoulder/arm area of primary somatosensory cortex.
There are three trial types: vision-only trials serve as a measure of baseline performance, ICMS-only trials measure sensory substitution, and vision+ICMS trials measure sensory augmentation via multisensory integration. We evaluated performance for each trial type using a variety of behavioral measures.

Results: As expected, on vision-only trials the animal’s performance improves with increasing visual coherence; an increase in the precision of the visual input improves the precision of reaches. This can be seen in the correlation between the angle of the target (with respect to the initial hand position) and the actual initial movement direction (Fig. 2, blue trace). Sensory substitution of vision by ICMS is evident in ICMS-only trials: the ICMS signal alone is sufficient for the animal to complete a smooth, direct reach to an unseen target, with performance equivalent to the vision-only condition at low-to-moderate coherences (Fig. 2, green trace). Next, we observe sensory augmentation when both ICMS and vision are available, with performance better than in either the vision-only or the ICMS-only condition (Fig. 2, red trace). Indeed, performance in vision+ICMS trials is on par with that expected under optimal (minimum-variance) combination of the visual and ICMS cues (Fig. 2, dashed red trace). Similar evidence for sensory substitution and sensory augmentation was observed for other behavioral measures, including the total movement time, the path length, and the number of movement corrections.

In summary, we have demonstrated the use of a continuous, multichannel, non-biomimetic ICMS signal to deliver learning-based artificial sensory feedback that can augment or even replace natural visual feedback in a complex behavioral task. These results demonstrate the potential power of a learning-based approach to artificial sensory feedback in the BMI context.

[1] Suminski A.J., Tkach D.C., Fagg A.H., Hatsopoulos N.G., J. Neurosci. (2010) 30(50): 16777-16787
1 Center for Integrative Neuroscience and Department of Physiology, University of California, San Francisco, CA
2 Joint Graduate Group in Bioengineering, University of California, Berkeley and University of California, San Francisco
3 Contact: [email protected]
Figure 1. Behavioral Task: A monkey performs an instructed-delay center-out reaching task to an unseen target, guided by information about the error vector between the current hand location and the target. This vector is encoded visually, using a random-dot flow field (not shown), and/or via a multichannel ICMS signal. The figure shows the timeline for a sample trial (A) and the ICMS pulse trains (from four of eight electrodes) on that trial (B). The eight electrodes were arbitrarily assigned to eight preferred directions; the ICMS pulse rate was the product of two factors, one scaling linearly with the distance to the target and the other representing a “cosine tuning” of rate with respect to the preferred direction of the given electrode. In the example trial, the preferred direction of electrode 3 is closest to the initial error vector.
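The encoding rule described in the caption (per-electrode pulse rate equal to the product of a distance-scaling term and a cosine-tuning term) can be sketched as follows. The parameter values (base rate, gain, maximum rate) and the rectification of negative cosine-tuned rates to zero are illustrative assumptions, not values reported in the abstract:

```python
import numpy as np

def icms_pulse_rates(error_vec, pref_dirs_deg, base_rate=50.0, gain=2.0, max_rate=300.0):
    """Sketch: per-electrode pulse rate = (distance scaling) x (cosine tuning).
    base_rate, gain, and max_rate are illustrative, not from the abstract."""
    dist = np.linalg.norm(error_vec)                   # distance to target
    angle = np.arctan2(error_vec[1], error_vec[0])     # direction of error vector
    pref = np.deg2rad(np.asarray(pref_dirs_deg, dtype=float))
    tuning = np.clip(np.cos(angle - pref), 0.0, None)  # cosine tuning, rectified
    rates = gain * dist * base_rate * tuning           # product of the two factors
    return np.clip(rates, 0.0, max_rate)               # cap at a maximum pulse rate

# Eight electrodes arbitrarily assigned evenly spaced preferred directions
prefs = np.arange(0, 360, 45)
rates = icms_pulse_rates(np.array([3.0, 0.0]), prefs)  # error vector pointing right
```

With the error vector aligned to one electrode’s preferred direction, that electrode fires at its highest rate while the opposing electrode is silent, mirroring the pulse trains shown in panel B.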
Figure 2. Initial Angle Estimation: The monkey’s ability to extract target-angle information from ICMS is quantified by modeling the relationship between the target angle and the initial movement angle. The circular coefficient of determination is calculated for the vision-only, vision+ICMS, and ICMS-only conditions, averaged over seven days of testing. Predicted values for vision+ICMS are based on a minimum-variance model of cue combination, with cue variability inferred from the unimodal conditions, under the simplifying assumption that variation in performance is due only to variability in estimating the error vector.
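The minimum-variance prediction in the caption follows the standard inverse-variance-weighted cue-combination rule. A minimal sketch, with illustrative unimodal estimates and variances (in the study, the variances were inferred from the unimodal conditions):

```python
def min_variance_combination(est_v, var_v, est_i, var_i):
    """Minimum-variance combination of two independent cues:
    each cue is weighted by its inverse variance."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_i)
    est = w_v * est_v + (1.0 - w_v) * est_i  # weighted bimodal estimate
    var = (var_v * var_i) / (var_v + var_i)  # combined variance
    return est, var

# Illustrative numbers: vision estimates 10 deg (var 4), ICMS estimates 14 deg (var 4)
est, var = min_variance_combination(10.0, 4.0, 14.0, 4.0)
```

The combined variance is always at most the smaller unimodal variance, which is why optimal integration predicts bimodal performance better than either cue alone.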