Cortex 48 (2012) 882–887

Available online at www.sciencedirect.com

Journal homepage: www.elsevier.com/locate/cortex

Special issue: Research report

The role of the motor system in discriminating normal and degraded speech sounds

Alessandro D'Ausilio a,b, Ilaria Bufalari a, Paola Salmas b and Luciano Fadiga a,b,*

a DSBTA, Section of Human Physiology, University of Ferrara, Italy
b IIT, The Italian Institute of Technology, Genova, Italy

article info

Article history:
Received 17 February 2010
Reviewed 7 May 2010
Revised 22 May 2010
Accepted 6 May 2011
Published online 26 May 2011

Keywords:
Motor theory of speech perception
Motor system
Speech
TMS

abstract

Listening to speech recruits a network of fronto-temporo-parietal cortical areas. Classical models consider anterior (motor) sites to be involved in speech production, whereas posterior sites are involved in comprehension. This functional segregation is increasingly challenged by action-perception theories suggesting that brain circuits for speech articulation and speech perception are functionally interdependent. Recent studies report that speech listening elicits motor activities analogous to production. However, the motor system could be crucially recruited only under certain conditions that make speech discrimination hard. Here, by using event-related double-pulse transcranial magnetic stimulation (TMS) on lips and tongue motor areas, we show data suggesting that the motor system may play a role in noisy, but crucially not in noise-free, environments for the discrimination of speech signals.

© 2011 Elsevier Srl. All rights reserved.

1. Introduction

Traditional models of brain language organization separated perceptual and production modules into distinct areas (Gernsbacher and Kaschak, 2003). However, several recent studies do not support such a strict anatomo-functional segregation, showing that the motor system participates in a fronto-temporo-parietal brain network that plays a functional role in both speech production and comprehension (Fadiga et al., 2002; Watkins et al., 2003; Wilson et al., 2004; Skipper et al., 2005; Pulvermüller et al., 2006; Londei et al., 2007, 2010; Roy et al., 2008). Yet, some authors have proposed that temporo-parietal cortices provide the most prominent contribution to perception (Hickok and Poeppel, 2007). Crucially, neuroimaging and neurophysiological research mostly uses a correlational approach and cannot say much about the causal role played by motor areas in speech perception. Also, the activation of motor areas during listening to speech might be driven by corollary cortico-cortical connections and thus have nothing to do with the process of comprehension itself (Toni et al., 2008). Therefore, the most convincing evidence in favor of a motor involvement in speech perception would be provided by the effects on perception caused by selective alteration of neural activity in speech motor centers. Two recent studies, using a repetitive transcranial magnetic stimulation (rTMS) approach, suggested that targeting the ventral premotor cortex (vPM) off-line impairs speech perception of degraded acoustic stimuli (Meister et al., 2007) and phoneme discrimination when the task requires a relatively high degree

* Corresponding author. DSBTA, Section of Human Physiology, University of Ferrara, Via Fossato di Mortara, 17/19, 44100 Ferrara, Italy. E-mail address: [email protected] (L. Fadiga).
0010-9452/$ – see front matter © 2011 Elsevier Srl. All rights reserved. doi:10.1016/j.cortex.2011.05.017


of processing load (Sato et al., 2009). Moreover, it has been shown that 1 Hz rTMS applied over the lips primary motor cortex alters the discrimination and identification of synthesized verbal stimuli containing different proportions of motorically discordant syllables ([ba] and [da]; Möttönen and Watkins, 2009). On the other hand, typical rTMS protocols, albeit very useful, are prone to a large spread of effects to distant cortical sites. In the case of language and speech studies this is even more critical, given the demonstrated bidirectional inter-areal functional connectivity (Matsumoto et al., 2004) supported by the large white matter tracts between Wernicke's and Broca's areas and the inferior parietal lobule (Catani et al., 2005). Therefore, it is unlikely that the application of long trains of rTMS can isolate the processing of an area from its functional circuit, and this does not rule out the possibility that the TMS interference might also have affected temporo-parietal regions. Recently, our group followed a different approach, using event-related double-pulse TMS on lips and tongue motor areas (D'Ausilio et al., 2009). We used an online TMS design granting better temporal and spatial resolution; online TMS consists of applying a short train of magnetic pulses during the experimental trial. Subjects were required to discriminate phonemes, immersed in white noise, produced with either the tongue or the lips. Focal stimulation of the precentral motor cortex facilitated the discrimination of phonemes motorically concordant with the stimulated site ([d] and [t] with TMS on the tongue motor area) with respect to discordant items ([b] and [p] in this case), thus showing a somatotopic effect (D'Ausilio et al., 2009).
In our previous study, phonemes were immersed in white noise (D'Ausilio et al., 2009), since we were aware that the motor system might be more critical when performing demanding phonological tasks or when stimuli are presented in adverse listening conditions (Callan et al., 2004; Binder et al., 2004; Sato et al., 2009; Moineau et al., 2005; Boatman and Miglioretti, 2005). Therefore, in the present study we used an online TMS protocol in a phoneme discrimination task identical to that of our previous study, except that this time we removed the noise background. We predicted that the motor system should not be involved in perfect listening conditions. Our model (Pulvermüller and Fadiga, 2010) posits an attentional-like, motor-based mechanism able to tune up temporo-parietal neurons via backward connections originating in premotor areas. This mechanism uses phonological context and articulatory plans to reduce the number of conflicting sensory hypotheses when auditory information is ambiguous. Central to the model, therefore, is that task demands modulate this mechanism. To this aim, we compared subjects' performance in this phoneme discrimination task when stimuli were perfectly intelligible with respect to when discrimination was made harder by a noisy background.

2. Methods

2.1. Subjects

Ten (mean age: 26.07; standard deviation (SD) = 2.91; 6 females) and 13 (mean age: 24.32; SD = 3.22; 5 females) healthy, normal-hearing, right-handed subjects participated in experiments 1 and 2, respectively, after giving their informed consent. They were paid for their participation. None had any history of neurological disease, trauma or psychiatric syndrome. Procedures were approved by the local ethical committee. Data from experiment 1 have already been presented and details can be found elsewhere (D'Ausilio et al., 2009).

2.2. Stimuli

Stimuli were [b], [p], [d] and [t] spoken before an [œ] sound and presented through in-ear headphones; volume was set at a comfortable level by each subject during the training phase. [b] and [p] are labial sounds, requiring a critical lip movement for their production, whereas [d] and [t] are dental sounds that require a significant tongue movement. Stimuli were recorded from an actor using a hi-fi microphone (AKG, C1000S) and lasted ca. 180 msec. While in experiment 1 white noise was added to the speech sounds through sound post-processing (Audacity), in experiment 2 subjects were presented with the same set of stimuli without the noise superimposition.

2.3. Task

Subjects were asked to listen to and recognize the consonants of the presented stimuli and to respond as fast as possible using a four-button pad. The correspondence between buttons and consonants was displayed throughout the experiment on a screen in front of the subjects. A fixation cross appeared 500 msec before the onset of the syllable to cue the beginning of stimulus presentation. Responses were given with the left index finger. Reaction time (RT) recording, stimulus presentation and TMS triggering were controlled by a custom-made Basic script running under the MS-DOS environment to warrant timing accuracy. The inter-trial interval was set randomly between 5 sec and 8 sec. Stimulus order was random.
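The original timing script was written in Basic under MS-DOS and is not reproduced in the paper. As a rough, hypothetical illustration of the trial timeline just described (fixation cross 500 msec before syllable onset, a random 5–8 sec inter-trial interval, and, as detailed in the TMS section, double pulses 100 and 50 msec before onset), a minimal sketch in Python might look like this; all function and parameter names are invented here:

```python
import random

def build_trial_schedule(n_trials=80, fixation_ms=500, tms_offsets_ms=(-100, -50),
                         iti_range_s=(5.0, 8.0), seed=0):
    """Sketch of one block's trial timeline (illustrative, not the original script):
    fixation cross 500 msec before syllable onset, TMS pulses 100 and 50 msec
    before onset, and a random 5-8 sec inter-trial interval."""
    rng = random.Random(seed)
    schedule = []
    t = 0.0  # running clock in seconds
    for trial in range(n_trials):
        onset = t + fixation_ms / 1000.0  # syllable onset follows the fixation cross
        schedule.append({
            "trial": trial,
            "fixation_s": t,
            "tms_pulses_s": [onset + off / 1000.0 for off in tms_offsets_ms],
            "stimulus_onset_s": onset,
        })
        t = onset + rng.uniform(*iti_range_s)  # next fixation after the random ITI
    return schedule

sched = build_trial_schedule(n_trials=3)
```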

2.4. TMS

TMS stimulation was delivered through a figure-of-eight 25 mm coil and a Magstim Rapid stimulator (Magstim, Whitland, UK). The hot spot of the First Dorsal Interosseus (FDI) intrinsic hand muscle and the resting Motor Threshold (rMT) were assessed using standard protocols (Rossini et al., 1994). Motor Evoked Potentials (MEPs) were recorded using a tendon-belly montage with Ag/AgCl electrodes. The signal was band-pass filtered (50–1000 Hz) and digitized (5 kHz). Tongue and lip M1 localization was based on standardized coordinates with respect to the functionally defined hot spot of the FDI muscle. Mean MNI coordinates for the FDI, lips and tongue areas were based on previous functional magnetic resonance imaging (fMRI) studies (lips: 56, 8, 46; tongue: 60, 10, 25; Pulvermüller et al., 2006; FDI: 37, 25, 58; Niyazov et al., 2005). The MNI coordinates (FDI, TongueM1 and LipM1) were first transformed into the 10–20 electroencephalography (EEG) system space (Steinsträter et al., in preparation; web applet: http://wwwneuro03.uni-muenster.de/ger/t2tconv/conv3d.html) and then the distances between FDI/tongue and FDI/lip were calculated in the same standard space. Therefore, TongueM1 and LipM1 were located according to differential 10–20 EEG coordinates centered on the functionally defined FDI location (Lips: 6.6% of the nasion–inion distance in the anterior direction and 5.8% of the inter-tragus distance in the lateral direction; mean distance from FDI: 5.5 cm; Tongue: 8.6% anterior and 11.6% lateral; mean distance from FDI: 3.35 cm; mean distance between Lips and Tongue: 2.15 cm). In the stimulated trials, two pulses with a 50 msec interval were delivered at 110% of the FDI rMT. Coil orientation was maintained at 45° with respect to the inter-hemispheric fissure. Pulses were given 100 msec and 50 msec before stimulus onset.
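Because the 10–20-based offsets are expressed as percentages of two head measurements, the absolute target positions differ between subjects. A minimal sketch of that per-subject conversion follows; the head measurements are made up for illustration, and the straight-line distance is a flat-scalp approximation that ignores curvature (so it is not expected to reproduce the mean distances reported above):

```python
import math

# Differential 10-20 offsets from the functionally defined FDI hot spot,
# as reported in the text (percent of nasion-inion and inter-tragus distances).
OFFSETS_PCT = {
    "LipM1":    {"anterior": 6.6, "lateral": 5.8},
    "TongueM1": {"anterior": 8.6, "lateral": 11.6},
}

def offsets_cm(site, nasion_inion_cm, inter_tragus_cm):
    """Convert the percentage offsets into centimeters for one subject.
    Returns (anterior_cm, lateral_cm) relative to the FDI hot spot."""
    pct = OFFSETS_PCT[site]
    return (pct["anterior"] / 100.0 * nasion_inion_cm,
            pct["lateral"] / 100.0 * inter_tragus_cm)

# Example for a hypothetical subject (head measurements are illustrative):
ant, lat = offsets_cm("LipM1", nasion_inion_cm=36.0, inter_tragus_cm=35.0)
straight_line_cm = math.hypot(ant, lat)  # flat approximation, ignores scalp curvature
```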

2.5. Procedure

Subjects first completed a training block of 60 trials (15 for each stimulus category) with no TMS. This stage was followed by a TMS mapping session to functionally define the FDI hot spot and rMT, and subsequently to localize the LipM1 and TongueM1 representations. After the mapping session, two experimental blocks (LipM1 and TongueM1) were presented in sequence, separated by a 2 min interval. The order of TMS over LipM1 and TongueM1 was counterbalanced across subjects. In each block subjects had to complete 80 trials: 60 with TMS and 20 random catch trials. Catch trials were exactly the same as the TMS trials except that no TMS was applied, and they were used as a reference to evaluate the effect induced by TMS on behavior.
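The block structure just described (two blocks per subject, each with 60 TMS trials and 20 randomly interleaved catch trials, and block order counterbalanced across subjects) can be sketched as follows; the function names and the even/odd counterbalancing rule are illustrative assumptions, not taken from the paper:

```python
import random

def build_block(n_tms=60, n_catch=20, seed=None):
    """One experimental block: TMS trials and no-TMS catch trials in random order."""
    rng = random.Random(seed)
    trials = ["TMS"] * n_tms + ["catch"] * n_catch
    rng.shuffle(trials)
    return trials

def block_order(subject_index):
    """Counterbalance LipM1/TongueM1 block order across subjects (assumed rule)."""
    return ("LipM1", "TongueM1") if subject_index % 2 == 0 else ("TongueM1", "LipM1")

# Build both blocks for a hypothetical first subject:
blocks = {site: build_block(seed=i) for i, site in enumerate(block_order(0))}
```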

2.6. Measures and analysis

Experimental measures included RTs, calculated from the beginning of consonant sound presentation, and errors. The RT data were collapsed into two categories: labial and dental speech sounds. In order to compare baseline discrimination performance in the noisy and noise-free conditions, accuracy and RTs were compared across the two tasks by means of two-tailed Bonferroni-corrected t-tests. Accuracy was not analyzed further since, in the NoNoise condition, performance was close to 100% due to a ceiling effect (see Fig. 1). To reduce inter-individual variability and make the effect of TMS comparable across groups, subjects' performance was normalized by computing the percentage of variation of mean RTs in TMS-stimulated trials with respect to trials without TMS. The effect of TMS on the discrimination of noisy and noise-free labial and dental syllables was tested by means of a 2 × 2 × 2 mixed-model analysis of variance (ANOVA) on normalized RTs, including Noise (Noise vs NoNoise) as a between-subjects factor, and Phoneme type (Labial vs Dental) and Stimulation Site (LipM1 vs TongueM1) as within-subjects factors. Duncan's post-hoc test was applied to the significant effects. Additionally, a separate 2 × 2 ANOVA was conducted on RTs in the NoNoise group, including the factors Phoneme type (Labial vs Dental) and Stimulation Site (LipM1 vs TongueM1).
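The RT normalization described above (mean RT in TMS trials expressed as a percentage of the mean RT in the no-TMS catch trials, computed per subject and condition cell) might look like this sketch; the data and names are invented for illustration:

```python
def normalized_rt(tms_rts, catch_rts):
    """Percentage of variation of mean RT in TMS trials relative to no-TMS trials.
    100% means no TMS effect; values above 100% mean TMS slowed responses."""
    mean_tms = sum(tms_rts) / len(tms_rts)
    mean_catch = sum(catch_rts) / len(catch_rts)
    return 100.0 * mean_tms / mean_catch

# Hypothetical RTs in msec for one subject, one Phoneme type x Stimulation Site cell:
score = normalized_rt(tms_rts=[520, 540, 530], catch_rts=[500, 510, 490])
```

Scores in this form (e.g. 99.8% or 102%, as in the Results) can then be entered into the mixed-model ANOVA.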

3. Results

Fig. 1. Baseline RTs (upper panel) and accuracy (lower panel) for the discrimination of labial and dental syllables embedded, or not embedded, in noise backgrounds. Asterisks denote significant comparisons.

Baseline RTs of the two conditions (Noise and NoNoise) did not differ [Labials: t(25) = .26, p > .05; Dentals: t(25) = .27, p > .05; Fig. 1], showing that the addition of noise did not slow down subjects' performance. However, accuracy was significantly reduced when discrimination was performed under noisy with respect to noise-free conditions [Labials: t(25) = 2.71, p = .01; Dentals: t(25) = 3.36, p = .003; Fig. 1], confirming that noise impairs performance. The between-group factor yielded no significant main effect of Noise [F(1,21) = 3.346, p > .05], thus excluding non-specific differences between groups. Crucially, however, the three-way Noise × Phoneme type × Stimulation Site interaction was highly significant [F(1,21) = 15.535, p = .0007; Fig. 2]. In the same ANOVA, a significant two-way Phoneme type × Stimulation Site interaction was found [F(1,21) = 18.889, p < .0003]. As predicted, Duncan post-hoc tests showed no significant differences in labial and dental syllable discrimination when TMS was applied over either LipM1 or TongueM1 and the task was performed in the NoNoise condition. However, as previously reported (D'Ausilio et al., 2009), the same was not true for the Noise condition, in which an effect of TMS Stimulation Site on syllable discrimination was found, showing a double dissociation between labial and dental syllable discrimination when TMS was applied to either LipM1 or TongueM1 (Fig. 2). RTs collected in the NoNoise condition significantly differed from those in the Noise condition in which TMS was applied to LipM1 or TongueM1 and discrimination concerned dental or labial syllables, respectively (i.e., the incongruent conditions; all p < .04, Fig. 2). The repeated-measures ANOVA conducted separately on NoNoise RTs did not show any significant effect or interaction [Phoneme type: F(1,12) = .379, p > .05; Stimulation Site: F(1,12) = .214, p > .05; Phoneme type × Stimulation Site: F(1,12) = .314, p > .05]. RT performance showed no dissociation between Stimulation Site and stimulus categories. In fact, recognition of lip-produced phonemes did not differ from that of tongue-produced ones when TMS was applied to LipM1 (Labial = 99.8% ± 2.3 s.e.m.; Dental = 98.8% ± 2.1 s.e.m.). Stimulation of TongueM1 induced the same pattern (Labial = 102% ± 2.8 s.e.m.; Dental = 99% ± 2.9 s.e.m.). Therefore, differently from discrimination performed under noisy conditions, stimulation of M1 did not affect performance in recognizing speech sounds produced with the concordant effector with respect to discordant sounds produced with a different effector.

Fig. 2. Results from the mixed-model ANOVA (Noise × Stimulation Site × Phoneme type) on normalized RTs (TMS/no-TMS) showing the differential effect of TMS Stimulation Site on Phoneme type when discrimination of sounds is performed with or without noise. Asterisks denote significant comparisons. In the upper right inset, the histogram shows the differential effect of Noise and NoNoise on the discrimination of labial and dental sounds between the two TMS stimulation sites.

4. Discussion

Passive listening to phonemes and syllables has been shown to activate motor (Fadiga et al., 2002; Watkins et al., 2003; Skipper et al., 2005; Pulvermüller et al., 2006; Londei et al., 2007, 2010; Roy et al., 2008) and premotor areas (Wilson et al., 2004; Skipper et al., 2005; Londei et al., 2007, 2010). Interestingly, these activations were somatotopically organized according to the effectors recruited in the production of these phonemes (Fadiga et al., 2002; Watkins et al., 2003; Pulvermüller et al., 2006), and in accordance with the premotor activities found in overt production (Wilson et al., 2004). Moreover, our earlier work (D'Ausilio et al., 2009), together with other recent studies (Meister et al., 2007; Sato et al., 2009; Möttönen and Watkins, 2009), provided rather strong evidence that the motor system specifically contributes to speech perception. On the other hand, it could still be debated whether the motor system plays a crucial role in speech discrimination regardless of task difficulty (Sato et al., 2009). As proposed by some authors (Callan et al., 2004), the motor system could be recruited in speech discrimination mainly under sub-optimal listening conditions (i.e., noisy speech; Binder et al., 2004), or when speech discrimination is made particularly hard (e.g., listening to non-native languages; Wilson and Iacoboni, 2006). In line with this hypothesis, our study (D'Ausilio et al., 2009) and those of Meister et al. (2007) and Möttönen and Watkins (2009) used demanding tasks. Our study and the work of Meister included white noise in the stimuli, whereas Möttönen and Watkins asked their subjects to detect subtle differences along a continuum of artificially built phonemes. The work of Sato et al. (2009), on the other hand, shows that the motor system might be more involved in tasks requiring some degree of complexity: these authors showed that rTMS over vPM affected phoneme discrimination only when phonemic segmentation was necessary. However, we have already discussed the potential pitfalls and limitations of rTMS protocols in terms of spatial and functional selectivity (see Introduction). For these reasons, here we tried to overcome such limitations by using the same experimental procedure as in our previous study, except that we now allowed subjects normal listening conditions. We compared the performance of subjects discriminating noise-free syllables and the same syllables immersed in noise, and the data suggest that the motor cortex may play a more significant role in speech perception when the stimuli are degraded. However, we acknowledge that the present results cannot be the final word on this issue, and further studies will be necessary. In fact, we showed no effect of TMS on syllable discrimination performance, and we cannot exclude an intervening factor that may have canceled the TMS effect on the cortex. At the same time, it is worth noting that all procedures, experimenters, task and stimuli were kept the same across the two experiments, thus reducing the likelihood of a systematic confound.
More importantly, the best confirmation comes from previous reports showing converging results obtained with a variety of techniques. In fact, our results are in line with several other studies showing that anterior language areas might be recruited for sensory decisions and completion during sub-optimal listening conditions (Binder et al., 2004; Moineau et al., 2005; Boatman and Miglioretti, 2005; Shahin et al., 2009). Thus, supported by previous converging evidence and further confirmed by the present data, we suggest that the motor system may play a more important role when sensory information is incomplete, possibly by filling the gaps via attentional or top-down processing (Shahin et al., 2009). According to our hypothesis, the motor system should not be involved in perfect listening conditions. However, by "noisy speech signals" we mean all natural conditions in which missing data may be caused by environmental noise, multiple sources, non-standard pronunciation (even individual differences) or the known irregularities and elisions we apply in everyday speaking. Therefore, "perfect listening conditions" or "lab speech" are the most unnatural conditions. In these unnatural situations, acoustic features are redundant enough to enable 100% accuracy even in the absence of a top-down motor contribution. In these conditions the motor system is certainly not necessary, and we doubt that the speech perception-production system evolved for perceiving and producing perfect signals. Our model posits an attentional-like, motor-based mechanism able to tune up temporo-parietal neurons via backward connections originating in premotor areas (Pulvermüller and Fadiga, 2010). Therefore, the contribution of the motor system is to augment perception in a pro-active manner, and the gain of this mechanism is specifically modulated by task demands. The motor system might furnish an attentional-like mechanism able to prime perceptual processes (Rizzolatti et al., 1987).
Top-down processing could be guided by articulatory gestures, activated by partial auditory feature extraction and subsequently employed for sensory completion of degraded speech (Shahin et al., 2009). Specifically, partial auditory information might still preactivate a limited subset of perceptual hypotheses, though not sufficient for a successful discrimination to happen. The model predicts that each perceptual hypothesis is associated with a given articulatory motor plan during learning (Guenther et al., 2006; and according to the basics of Liberman's theory). Importantly, these motor plans could further guide the active search for critical auditory features or fill missing bits of information in the auditory trace. This motor-driven process would eventually help disambiguate between the remaining perceptual hypotheses. Therefore, the process of phonological discrimination is imagined as a prospective search in a multidimensional space of audio-motor features, where a convergent and iterative process of auditory and motor hypothesis confirmation is run on the input data. In some instances, the auditory component may reach a fast consensus due to an acoustically rich data set. However, in other cases (frequent in real life), the auditory data require some degree of integration from motor-based processes, in the form of an enhanced auditory feature search or knowledge-based gap filling (Londei et al., 2010).

Sensory completion and active feature search might be mediated by anticipatory mechanisms such as those proposed for general sensory-motor control (Wolpert and Kawato, 1998). Forward-inverse couples are based upon the ability of the system to predict either a sensory state given the motor command, or the motor state given the sensory state. These couples are built during development via active movement production and sensory feedback recording, as in the speech-babbling phase (Guenther et al., 2006). After development, these sensory-motor maps might be used to cope with a natural context in which we are continuously exposed to incomplete or noisy sensory information. Therefore, we envisage speech perception as an active process searching for relevant features among several sources of noise. This search might be directed toward salient features via attentional-like mechanisms driven by the motor system.
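As a toy illustration of the disambiguation process sketched above, in which partial auditory evidence leaves several phonemic hypotheses open and a motor (articulatory) constraint prunes them, consider the following sketch. The feature table and function names are invented for illustration and deliberately minimal; they are not a model from the paper:

```python
# Toy audio-motor feature table for the four stimuli used in the experiments
# (voicing and articulator are standard phonetic features; the table is illustrative).
PHONEMES = {
    "b": {"voiced": True,  "articulator": "lips"},
    "p": {"voiced": False, "articulator": "lips"},
    "d": {"voiced": True,  "articulator": "tongue"},
    "t": {"voiced": False, "articulator": "tongue"},
}

def candidates(partial_auditory):
    """Phonemic hypotheses compatible with the partial auditory evidence."""
    return {ph for ph, feats in PHONEMES.items()
            if all(feats.get(k) == v for k, v in partial_auditory.items())}

def motor_disambiguate(hypotheses, articulator):
    """Prune hypotheses using an articulatory (motor) constraint."""
    return {ph for ph in hypotheses if PHONEMES[ph]["articulator"] == articulator}

# Noise masks the place of articulation: voicing alone leaves {"b", "d"} open...
open_set = candidates({"voiced": True})
# ...and a motor hypothesis about the articulator resolves the ambiguity.
resolved = motor_disambiguate(open_set, "tongue")
```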

Acknowledgments

L.F. is supported by the Italian Ministry of Education, by the E.C. grants CONTACT, ROBOT-CUB, POETICON and SIEMPRE, and by Fondazione Cassa di Risparmio di Ferrara.

references

Binder JR, Liebenthal E, Possing ET, Medler DA, and Ward BD. Neural correlates of sensory and decision processes in auditory object identification. Nature Neuroscience, 7(3): 295–301, 2004.

Boatman DF and Miglioretti DL. Cortical sites critical for speech discrimination in normal and impaired listeners. Journal of Neuroscience, 25(23): 5475–5480, 2005.

Callan DE, Jones JA, Callan AM, and Akahane-Yamada R. Phonetic perceptual identification by native- and second-language speakers differentially activates brain regions involved with acoustic phonetic processing and those involved with articulatory-auditory/orosensory internal models. NeuroImage, 22(3): 1182–1194, 2004.

Catani M, Jones DK, and Ffytche DH. Perisylvian language networks of the human brain. Annals of Neurology, 57(1): 8–16, 2005.

D'Ausilio A, Pulvermüller F, Salmas P, Bufalari I, Begliomini C, and Fadiga L. The motor somatotopy of speech perception. Current Biology, 19(5): 381–385, 2009.

Fadiga L, Craighero L, Buccino G, and Rizzolatti G. Speech listening specifically modulates the excitability of tongue muscles: A TMS study. European Journal of Neuroscience, 15(2): 399–402, 2002.

Gernsbacher MA and Kaschak MP. Neuroimaging studies of language production and comprehension. Annual Review of Psychology, 54: 91–114, 2003.

Guenther FH, Ghosh SS, and Tourville JA. Neural modeling and imaging of the cortical interactions underlying syllable production. Brain and Language, 96(3): 280–301, 2006.

Hickok G and Poeppel D. The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5): 393–402, 2007.

Londei A, D'Ausilio A, Basso D, Sestieri C, Del Gratta C, Romani GL, et al. Sensory-motor brain network connectivity for speech comprehension. Human Brain Mapping, 31(4): 567–580, 2010.

Londei A, D'Ausilio A, Basso D, Sestieri C, Del Gratta C, Romani GL, et al. Brain network for passive word listening as evaluated with ICA and Granger causality. Brain Research Bulletin, 72(4–6): 284–292, 2007.

Matsumoto R, Nair DR, LaPresto E, Najm I, Bingaman W, Shibasaki H, et al. Functional connectivity in the human language system: A cortico-cortical evoked potential study. Brain, 127(10): 2316–2330, 2004.

Meister IG, Wilson SM, Deblieck C, Wu AD, and Iacoboni M. The essential role of premotor cortex in speech perception. Current Biology, 17(19): 1692–1696, 2007.

Moineau S, Dronkers NF, and Bates E. Exploring the processing continuum of single-word comprehension in aphasia. Journal of Speech, Language and Hearing Research, 48(4): 884–896, 2005.

Möttönen R and Watkins KE. Motor representations of articulators contribute to categorical perception of speech sounds. Journal of Neuroscience, 29(31): 9819–9825, 2009.

Niyazov DM, Butler AJ, Kadah YM, Epstein CM, and Hu XP. Functional magnetic resonance imaging and transcranial magnetic stimulation: Effects of motor imagery, movement and coil orientation. Clinical Neurophysiology, 116(7): 1601–1610, 2005.

Pulvermüller F, Huss M, Kherif F, Moscoso del Prado Martin F, Hauk O, and Shtyrov Y. Motor cortex maps articulatory features of speech sounds. Proceedings of the National Academy of Sciences USA, 103(20): 7865–7870, 2006.

Pulvermüller F and Fadiga L. Active perception: Sensorimotor circuits as a cortical basis for language. Nature Reviews Neuroscience, 11(5): 351–360, 2010.

Rizzolatti G, Riggio L, Dascola I, and Umiltà C. Reorienting attention across the horizontal and vertical meridians: Evidence in favor of a premotor theory of attention. Neuropsychologia, 25(1): 31–40, 1987.

Roy AC, Craighero L, Fabbri-Destro M, and Fadiga L. Phonological and lexical motor facilitation during speech listening: A transcranial magnetic stimulation study. Journal of Physiology Paris, 102(1–3): 101–105, 2008.

Rossini PM, Barker AT, Berardelli A, Caramia MD, Caruso G, Cracco RQ, et al. Non-invasive electrical and magnetic stimulation of the brain, spinal cord and roots: Basic principles and procedures for routine clinical application. Report of an IFCN committee. Electroencephalography and Clinical Neurophysiology, 91(2): 79–92, 1994.

Sato M, Tremblay P, and Gracco VL. A mediating role of the premotor cortex in phoneme segmentation. Brain and Language, 111(1): 1–7, 2009.

Shahin AJ, Bishop CW, and Miller LM. Neural mechanisms for illusory filling-in of degraded speech. NeuroImage, 44(3): 1133–1143, 2009.

Skipper JI, Nusbaum HC, and Small SL. Listening to talking faces: Motor cortical activation during speech perception. NeuroImage, 25(1): 76–89, 2005.

Toni I, de Lange FP, Noordzij ML, and Hagoort P. Language beyond action. Journal of Physiology Paris, 102(1–3): 71–79, 2008.

Watkins KE, Strafella AP, and Paus T. Seeing and hearing speech excites the motor system involved in speech production. Neuropsychologia, 41(8): 989–994, 2003.

Wilson SM, Saygin AP, Sereno MI, and Iacoboni M. Listening to speech activates motor areas involved in speech production. Nature Neuroscience, 7(7): 701–702, 2004.

Wilson SM and Iacoboni M. Neural responses to non-native phonemes varying in producibility: Evidence for the sensorimotor nature of speech perception. NeuroImage, 33(1): 316–325, 2006.

Wolpert DM and Kawato M. Multiple paired forward and inverse models for motor control. Neural Networks, 11(7–8): 1317–1329, 1998.
