Exp Brain Res (2010) 201:323–330 DOI 10.1007/s00221-009-2044-6

RESEARCH ARTICLE

How and when auditory action effects impair motor performance

Alessandro D'Ausilio · Riccardo Brunetti · Franco Delogu · Cristina Santonico · Marta Olivetti Belardinelli

Received: 22 July 2009 / Accepted: 2 October 2009 / Published online: 22 October 2009 © Springer-Verlag 2009

Abstract Music performance is characterized by complex cross-modal interactions, offering a remarkable window into training-induced long-term plasticity and multimodal integration processes. Previous research with pianists has shown that playing a musical score is affected by the concurrent presentation of musical tones. We investigated the nature of this audio-motor coupling by evaluating how congruent and incongruent cross-modal auditory cues affect motor performance at different time intervals. We found facilitation if a congruent sound preceded motor planning with a large Stimulus Onset Asynchrony (SOA −300 and −200 ms), whereas we observed interference when an incongruent sound was presented with shorter SOAs (−200, −100 and 0 ms). Interference and facilitation, instead of developing through time as opposite effects of the same mechanism, showed dissociable time-courses, suggesting their derivation from distinct processes. It seems that the motor preparation induced by the auditory cue has different consequences on motor performance according to the congruency with the future motor state the system is planning and the degree of asynchrony between the motor act and the sound presentation. The temporal dissociation we found contributes to the understanding of how perception meets action in the context of audio-motor integration.

Keywords Audio-motor integration · Sensory-motor · Music · Action planning · Action facilitation · Action interference · Action-perception cycle

A. D'Ausilio
DSBTA, Human Physiology Section, University of Ferrara, via Fossato di Mortara 17/19, 44100 Ferrara, Italy
e-mail: [email protected]

A. D'Ausilio · R. Brunetti · F. Delogu · C. Santonico · M. O. Belardinelli
ECONA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, via dei Marsi 78, 00185 Rome, Italy

F. Delogu · M. O. Belardinelli
Department of Psychology, Sapienza University of Rome, via dei Marsi 78, 00185 Rome, Italy

R. Brunetti
Università Europea di Roma, Via Aldobrandeschi 190, 00163 Rome, Italy

Introduction

The theory of common coding postulates that when an action and a percept share a set of features they tend to be associated (Prinz 1990). According to this hypothesis, the presentation of a stimulus previously associated with a particular action will automatically prepare the system to produce that same action (Hommel et al. 2001). This basic mechanism might underlie the behavioural plasticity connecting arbitrary stimuli to specific motor responses. The link between sensory and motor coordinate systems, on the one hand, helps motor control and the prediction of the consequences of one's own actions (Wolpert and Kawato 1998). On the other hand, it might serve to anticipate the motor plan necessary to obtain a desired sensory state (Wolpert and Kawato 1998), to decode others' action goals (Rizzolatti and Craighero 2004), or even to coordinate one's own motor execution with that of other individuals (Sebanz et al. 2006). In this context, musicians are of special interest because they are extensively trained in a variety of task-specific sensory-motor domains and often act in coordination with other musicians.



Sensory-motor brain plasticity in musicians is the result of repeated co-occurrences of specific actions and the associated sensory effects (Münte et al. 2002). Musicians, in fact, have proven to be of great interest in the study of how specific training can shape somato-sensory (Elbert et al. 1995), motor (Pascual-Leone et al. 1995; Hund-Georgiadis and von Cramon 1999), and auditory representations (Pantev et al. 1998), as well as multimodal integration networks (Stewart et al. 2003). Musicians have also been used as a model of both long-term structural (Schlaug et al. 1995) and short-term functional changes in the brain (Rosenkranz et al. 2007). A growing number of studies have focused on the mechanisms for the integration of auditory and motor information (D'Ausilio 2007; Fadiga et al. 2009; Zatorre et al. 2007). Neuroimaging research has recently addressed this issue from different perspectives. On one hand, it has been found that motor and premotor activities can be elicited, in experts, by passive listening to known melodies (e.g. Haueisen and Knösche 2001; Bangert et al. 2006; Baumann et al. 2007). On the other hand, several studies have focused on short-term music training in non-experts. An EEG study showed increased activity in sensorimotor areas in naïve participants, both during muted piano playing and during passive listening (Bangert and Altenmüller 2003). Interestingly, a TMS study further demonstrated that passive listening to a rehearsed piece induces an increased facilitation at the level of the primary motor cortex after only 30 min of practice (D'Ausilio et al. 2006). More recently, it was found that, in non-musicians, premotor activities are elicited by passive listening to a rehearsed piece, but not by a different combination of the same notes (Lahav et al. 2007). In parallel, behavioural studies have examined the effects of this audio-motor functional link in musicians. A specific experimental procedure has been developed in which the participant receives a visual instruction to perform an action (a musical score) while, at the same time, an auditory stimulus that can be congruent or incongruent with that action is presented. The impairment or facilitation generated by the two kinds of stimuli is measured in terms of reaction time (RT), error rate, and direction of errors (Prinz 1997). Using this procedure, Drost and colleagues (2005a, b, 2007) measured the effects of congruent and incongruent piano sounds on musicians' performance of visually cued actions on a piano. Drost et al. (2005b) found longer RTs in the incongruent condition than in the congruent condition and demonstrated that this effect occurs at the stage of motor programming. Drost et al. (2005a) extended these results, obtained with single notes, to chords. Finally, Drost et al. (2007) showed that the effect occurs only when the played notes have the timbre of the musical instrument the participant is accustomed to playing (e.g. guitar notes triggered by piano strokes do not elicit the effect).



Both neuroimaging and behavioural studies support the idea that music training induces a specific sensory-motor functional connection. This connection is evidenced by the emergence of motor activity while listening to rehearsed musical excerpts (Lahav et al. 2007) and by the effects on motor performance induced by listening to plausible action effects (Drost et al. 2005a, b). Summing up, converging evidence shows that when a sound-evoked motor representation is analogous to the one necessary to execute the piano movement associated with that very sound, the presentation of sound "A" interferes with the concurrent motor preparation for producing sound "B", while facilitating the preparation of sound "A". A limitation of the previous studies is that they did not consider the audio-motor process as something that develops in time: sounds were always presented at the same time as the visual imperative stimulus. The temporal characteristics of the audio-motor functional connection are of great theoretical importance for two main reasons: (i) cortical and behavioural processing develops in time, with dissociable functional steps; (ii) the auditory stimulus itself develops in time. Listening to musical tones (in expert musicians) might trigger a continuous process of audio-motor transformation, unfolding in time. This process, when connected to musical execution, can be conceived as a parallel search in a multidimensional space of (sensory-motor) features that gradually converges to a motor solution. In agreement with this idea, neurophysiological studies have shown that the processing of musical sounds has a precise time course, characterized by specific neural markers (Shahin et al. 2003, 2008; Kuriki et al. 2006; Pizzamiglio et al. 2005). Therefore, one issue in urgent need of further research is how these behavioural effects concerning the interaction between perception and action develop over time.

Our main experiment tests the prediction that the amplitude (RT differences) and direction of the effects (RT reduction or increase) depend on the stimulus onset asynchrony (SOA) between the piano sound and the visually instructed action on the piano (visuo-motor task). Our hypothesis is that the motor task will be interfered with in different ways, depending on the stage at which auditory processing meets motor programming (manipulated through different SOAs). We predict three critical time points (auditory stimulus presented 300, 200, and 100 ms before the visual stimulus, i.e. SOAs of −300, −200, and −100 ms), reflecting three separate processing stages evidenced in recent neurophysiological research (Shahin et al. 2003, 2008; Kuriki et al. 2006; Pizzamiglio et al. 2005). A −100 ms SOA should in principle align the beginning of the visuo-motor task with the N1 component of the auditory evoked potentials (AEP). This component, elicited by single musical tones (as is the case in our experiments), does not seem to be heavily modulated by musical expertise but rather to reflect higher-order spectral sound analysis (Shahin et al. 2003). A −200 ms SOA should instead align the beginning of the task with the AEP P2 component. This component has been referred to as the first signature of musical expertise, possibly reflecting the first stage of auditory-motor transformation (Shahin et al. 2003; Kuriki et al. 2006). Finally, a −300 ms SOA should align the beginning of the task with activities associated with a complete audio-motor transformation (Pizzamiglio et al. 2005; Shahin et al. 2008). We hypothesize that aligning the beginning of the visuo-motor task with these three stages of the auditory-motor continuum will induce dissociable effects on RT performance.
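In other words, an auditory component with latency \ell after sound onset coincides with the onset of the imperative visual stimulus exactly when the SOA equals −\ell (the latencies below are approximate values assumed from the AEP literature cited above, not measurements from the present study):

$\mathrm{SOA} = t_{\mathrm{sound}} - t_{\mathrm{visual}} = -\ell \;\Longleftrightarrow\; t_{\mathrm{sound}} + \ell = t_{\mathrm{visual}}$

so that an SOA of −100 ms targets the N1 (\ell ≈ 100 ms), −200 ms targets the P2 (\ell ≈ 200 ms), and −300 ms targets the stage at which the audio-motor transformation is assumed to be complete (\ell ≈ 300 ms).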

Experiment 1

Drost et al. (2005b) suggested that pianists' playing is interfered with by the concurrent presentation of a piano sound that conflicts with the visually presented score. The aim of Experiment 1 was to determine the direction of the effects, in terms of facilitation or interference, by contrasting them with a no-sound baseline (visuo-motor task).

Method

Fourteen right-handed (assessed with Oldfield's inventory; Oldfield 1971) subjects (8 females), aged 22–52 (mean 30.5; SD 9.9), participated in the first experiment. They were pianists with at least 5 years of formal training. A questionnaire assessed the weekly amount of practice (mean 21.61 h; SD 20.06), years of training (mean 15 years; SD 9.2), and the starting age of music studies (mean 9.46 years; SD 4.03) for each subject. A specific question in the pre-experimental questionnaire verified that none of the musicians had absolute pitch. In the training session we verified that all subjects had perfect sight-reading for simple music scores.

Subjects were presented with both visual and auditory stimuli. Visual stimuli consisted of 5 scores, each displaying two notes on a musical staff (B4 followed by G4, A4, C5, or D5, notated in G-clef), presented in isolation on a computer screen (IBM ThinkPad, 15" monitor) placed 50 cm away from the subject. Auditory stimuli consisted of 5 piano sounds (G4, A4, B4, C5, and D5) with a duration of 400 ms, presented via headphones (AKG, model K271 Studio) at a comfortable listening level. The piano keyboard was a MIDI controller (M-Audio Keystation 49e) placed on a table between the computer screen and the subject. Hand configuration on the piano keyboard was natural: the thumb was placed on the "G4" key, the index finger on "A4", the middle finger on "B4", the ring finger on "C5", and the little finger on the "D5" key. The procedure did not require any movement of the hand from this position.

325

Stimulus presentation, conditions, randomization, and response recording were controlled by a custom-made script in the MAX/MSP (Cycling '74, version 4.6.2) programming environment. The session consisted of 700 trials lasting on average 3 s each, plus 5 training trials at the beginning to accustom the subjects to the task. The experiment was divided into 5 blocks of 140 trials, and participants could rest for several minutes between blocks. Each trial started with the visual presentation of a score indicating a "B4" on the staff. The score prompted the subject to play "B4" on the keyboard with the middle finger, which was followed by the appropriate auditory feedback (piano note B4). A correct key press also triggered the disappearance of the score, and after a 500 ms blank interval a second score, the imperative stimulus, appeared on the screen. The new score was shown for 130 ms; it showed the previous "B4" followed by one of four notes: "G4", "A4", "C5", or "D5". At this point the subject had to press the piano key indicated by the second note on the staff as quickly as possible, using, respectively, the thumb, index, ring, or little finger. No auditory feedback followed the second key press. The design consisted of 4 visual stimuli × 5 auditory stimuli (including no-sound) × 35 repetitions, presented in random order. The analysis included three auditory conditions: match, mismatch, and no-sound. In the match condition the visual imperative stimulus (the second note on the staff in each trial) was accompanied by the corresponding piano sound. In the mismatch condition the sound of one of the other three possible notes was presented. In the no-sound condition the subject was presented with the visual imperative stimulus alone. The subjects' task was to follow the visual score and produce the correct action on the piano keyboard as quickly as possible; they were told to ignore the sounds. The dependent measure was RT, that is, the time from the onset of the second score to the corresponding key press on the MIDI keyboard. Outliers beyond the mean ± 2 SD were excluded from further analysis. A repeated-measures analysis of variance (ANOVA) was performed on the three-level factor "condition" (match, mismatch, no-sound), and Duncan's post hoc test was used for specific comparisons.
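For illustration only, the exclusion rule and the repeated-measures ANOVA described above could be implemented along the following lines (a minimal Python sketch, not the original analysis code; the file name, column names, and the use of the pingouin package are assumptions, and pairwise paired t tests stand in for Duncan's post hoc test):

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format trial data: one row per trial, with columns
# "subject", "condition" (match / mismatch / no-sound), and "rt" (ms).
trials = pd.read_csv("exp1_trials.csv")

def drop_outliers(group, k=2.0):
    """Remove trials whose RT lies outside mean +/- k*SD within a cell."""
    m, s = group["rt"].mean(), group["rt"].std()
    return group[(group["rt"] >= m - k * s) & (group["rt"] <= m + k * s)]

clean = trials.groupby(["subject", "condition"], group_keys=False).apply(drop_outliers)

# Collapse to one mean RT per subject and condition.
cells = clean.groupby(["subject", "condition"], as_index=False)["rt"].mean()

# One-way repeated-measures ANOVA on "condition".
print(pg.rm_anova(data=cells, dv="rt", within="condition", subject="subject"))

# Follow-up pairwise comparisons (the paper used Duncan's test;
# paired t tests are shown here only as a stand-in).
print(pg.pairwise_tests(data=cells, dv="rt", within="condition", subject="subject"))
```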

Results and discussion

Mean RTs were 464 ms (SE 11.29) for the match condition, 483 ms (SE 17.57) for the mismatch condition, and 448 ms (SE 8.79) for the no-sound condition. A within-subjects ANOVA revealed a significant main effect of "cueing" [F(2,26) = 8.46, p < 0.001]. Post hoc analysis showed a significant difference between the match and mismatch conditions (p < 0.05), with longer RTs for mismatch (on average 19.14 ms, SE 8.26), and also a significant difference between the no-sound and mismatch conditions (p < 0.001), with longer RTs for mismatch (on average 35.3 ms, SE 10.51). The difference between match and no-sound was not significant (p = 0.27; on average 16.2 ms, SE 6.6). Errors were not analyzed since subjects' performance was almost perfect in terms of accuracy (>99%).

The results of the first experiment confirm that a motor plan can be interfered with by a sound that evokes a different motor representation. However, they also show that a congruent piano sound does not facilitate performance; rather, it tends to delay the response, though not by as much as a mismatching sound does. Thus, even a matching sound does not produce an RT effect that differs significantly from a baseline with no co-occurring musical sounds. According to these results, it is inappropriate to speak of facilitation in this case.

Experiment 2

In Experiment 1 the visuo-motor task alone did not differ from the cued match condition. This lack of facilitation in the congruent condition is likely due to the fact that facilitation and interference occur at different moments during action planning. In Experiment 2 we directly tested whether this lack of facilitation is caused by action-planning processes. For this purpose, we manipulated the time interval between the auditory cue and the visual imperative stimulus while maintaining most of the relevant features of the previous experiment.

Method

Nine right-handed (assessed with the same method used in Experiment 1) pianists (4 females; age range 22–42, mean 26.8, SD 6.1) participated in the second experiment. None of them had participated in Experiment 1, nor were they informed of the results of the first study. The same questionnaire as in Experiment 1 assessed the weekly amount of practice (mean 14.1 h; SD 11.2), years of training (mean 11.8 years; SD 3.4), and the starting age of music studies (mean 9.1 years; SD 2.4) for each subject. Selection criteria were the same as those used in Experiment 1. No musician had absolute pitch, and all subjects demonstrated perfect sight-reading for simple music scores. Subjects were presented with both visual and auditory stimuli and asked to play the visually presented notes on a piano keyboard. The main focus of Experiment 2 was to explore the temporal deployment of the effects seen in Experiment 1. In order to reduce the length of the experimental session, we reduced the number of finger conditions by selecting a subset of the scores and notes used in Experiment 1.



Specifically, since the first experiment had shown that thumb and index RTs are highly comparable (thumb–index finger RT difference, 2 ms; SE 11.12), only these two fingers were used. The thumb, index, and middle fingers were always kept in the same natural position. Visual stimuli consisted of the scores "G4", "A4", and "B4" presented on the computer screen. Auditory stimuli consisted of the piano notes "G4", "A4", and "B4". Task requirements and procedures for Experiment 2 were the same as in Experiment 1 except for the details explicitly described here. The equipment was the same as in Experiment 1. Stimulus presentation, conditions, randomization, and response recording were controlled by a custom-made script in MAX/MSP. The experiment consisted of 650 trials lasting on average 3 s each, plus five training trials at the beginning. The experimental session was divided into five blocks of 130 trials, to allow the subjects to rest between blocks. As in Experiment 1, a trial started with a "B4" note presented on the screen, which signaled the subject to press that key on the piano (middle finger); the correct key press was followed by the auditory "B4" and triggered the disappearance of the visual "B4". The second score, presented after a 500 ms blank interval, was visible for 130 ms and showed the initial "B4" followed by one of two notes: "G4" or "A4". At this point the subject had to press the piano key indicated by the second note on the staff as quickly as possible, using, respectively, the thumb or the index finger. No auditory feedback followed the second key press. The design included two variables. The first one, "condition", analogously to Experiment 1, included three levels: match, mismatch, and no-sound. The second variable, "time", concerned the timing of the audio stimuli (Fig. 1). Piano sounds, either congruent or not, could appear at six time positions relative to the onset of the imperative visual stimulus: −300 (t−300), −200 (t−200), −100 (t−100), 0 (t0), +100 (t+100), and +240 (t+240) ms. A two-factor repeated-measures ANOVA was run with "time" and "condition" as within-subject factors. The dependent measure was RT, as in Experiment 1. Duncan's post hoc comparisons were used to test specific comparisons.
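As an illustration of the SOA manipulation only (the experiment itself was run in MAX/MSP, not Python; the function names below are hypothetical placeholders), a single trial could be scheduled along these lines:

```python
import time

SOAS_MS = [-300, -200, -100, 0, 100, 240]  # sound onset relative to score onset

def run_trial(soa_ms, play_sound, show_score, wait_for_keypress):
    """Negative SOA: the auditory cue precedes the imperative visual score;
    positive SOA: the cue follows it. RT is measured from score onset.
    (A real implementation would schedule the sound asynchronously
    instead of blocking with sleep calls.)"""
    if soa_ms < 0:
        play_sound()
        time.sleep(-soa_ms / 1000.0)
        score_onset = time.perf_counter()
        show_score()
    else:
        score_onset = time.perf_counter()
        show_score()
        if soa_ms > 0:
            time.sleep(soa_ms / 1000.0)
        play_sound()
    key = wait_for_keypress()
    rt_ms = (time.perf_counter() - score_onset) * 1000.0
    return key, rt_ms
```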

Fig. 1 Experiment 2 timeline. Graphical representation of the procedure used in the second experiment. We employed a task very similar to that of Experiment 1, adding a varying delay between the presentation of the second (imperative) visual stimulus and the audio presentation, which either matched or did not match the video

Results and discussion

The results are shown in Fig. 2. The analysis revealed significant main effects of "condition" [F(2,16) = 5.812, p < 0.05] and "time" [F(5,40) = 7.079, p < 0.0001], as well as a significant interaction [F(10,80) = 7.137, p < 0.0001]. Post hoc analysis showed a significant difference between mismatch and match at t−300 (p < 0.0001), t−200 (p < 0.0001), t−100 (p = 0.0295), and t0 (p = 0.0159) (average RT difference between mismatch and match: t−300 = 33 ms, t−200 = 32 ms, t−100 = 12 ms, t0 = 13 ms, t+100 = 6 ms, t+240 = 2 ms). Mismatch RTs were significantly longer than no-sound RTs at t−200 (p = 0.0291), t−100 (p = 0.0313), and t0 (p = 0.0025), while match RTs were shorter than no-sound RTs at t−300 (p < 0.0001) and t−200 (p = 0.0003). RTs were faster than in Experiment 1 because of the use of a 2-choice instead of a 4-choice paradigm. Errors were not analyzed since subjects' performance was almost perfect in terms of accuracy (>99%).

Summing up the findings, we can confirm the results of Experiment 1 at t0: when the imperative stimulus and the sound were presented simultaneously, mismatch caused slower responses than both match and the no-sound baseline. More interestingly, we found that the difference between match and mismatch was present only at the early time points (t−300, t−200, t−100, and t0). Specifically, interference with the motor plan was significant at the t−200, t−100, and t0 time points, while facilitation showed its effect earlier, at t−300 and t−200. This pattern of results shows that facilitation and interference, triggered by action effects, act with only a small temporal overlap: facilitation is visible earlier and dissolves around the time when interference begins to influence subjects' RTs (Fig. 2).

Fig. 2 Results of Experiment 2. Match (dark grey) and mismatch (light grey) RTs for each time point (−300, −200, −100, 0, +100, +240 ms). Error bars denote standard errors, and asterisks denote significant comparisons. The horizontal line represents the RTs for the no-sound condition, and the light grey thick line shows its standard error

Experiment 3

In Experiment 2 we showed that varying SOAs induced dissociable effects on performance for the congruent and incongruent conditions. However, it is still possible that, at least to some extent, the auditory cue influenced motor performance not because of its informative content (congruent or incongruent with the motor plan), but because it provided an aspecific auditory signal able to alert (facilitation) or divert (interference) the subject from motor planning, depending on the given SOA. In order to exclude this alternative attentional interpretation of our main result, in Experiment 3 we tested whether the presentation of a meaningless sound (not associated with any response) has any effect on the visuo-motor task.

Method

Twelve right-handed (assessed with the same method used in Experiment 1) pianists (3 females; age range 18–43, mean 26.8, SD 7.3) participated in the third experiment. None of them had participated in Experiment 1 or 2, nor were they informed of the results of those studies. The same questionnaire as in Experiment 1 assessed the weekly amount of practice (mean 15.6 h; SD 12.3), years of training (mean 14.4 years; SD 9.1), and the starting age of music studies (mean 12.3 years; SD 4.0) for each subject. Selection criteria were the same as those used in Experiment 1. No musician had absolute pitch, and all subjects demonstrated perfect sight-reading for simple music scores. Subjects were presented with both visual and auditory stimuli and asked to play the visually presented notes on a piano keyboard. The main focus of Experiment 3 was to verify the influence of a task-unrelated sound, presented at different SOAs (the same as in Experiment 2), on subjects' performance. Task requirements and procedures for Experiment 3 were the same as in Experiment 2 except for the few details explicitly described here. The equipment was the same as in Experiments 1 and 2. Stimulus presentation, conditions, randomization, and response recording were controlled by a custom-made script in MAX/MSP. The experiment consisted of 350 trials lasting on average 3 s each, plus 5 training trials at the beginning. The experimental session was divided into 2 blocks of 175 trials, to allow the subjects to rest between blocks. Trial timeline, visual imperative stimuli, and subjects' instructions were the same as in Experiment 2. The only change was that, instead of a congruent/incongruent piano sound, we presented a burst of white noise. The white noise stimulus lasted 400 ms (with a 10 ms linear ramp at the beginning and a 30 ms linear ramp at the end). In a pre-experimental psychophysical assessment, the white noise level was adjusted to match the intensity of the piano notes used in the previous experiments. The noise was presented at the same SOAs used in Experiment 2; therefore, the timing of the audio stimuli relative to the onset of the imperative visual stimulus was −300 (t−300), −200 (t−200), −100 (t−100), 0 (t0), +100 (t+100), and +240 (t+240) ms. A no-sound condition was also included, as in Experiment 2, as a reference. The dependent measure was RT. We ran two-tailed paired t tests (Bonferroni-corrected) between no-sound and each SOA to verify whether a noise burst presented at the different time points could affect performance. In addition, we ran an independent-samples two-tailed t test between the no-sound conditions of Experiments 2 and 3 in order to verify whether the two groups were comparable in terms of general performance.
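A minimal sketch of these comparisons (illustrative only; the file and column names are assumed, and the Bonferroni correction is applied by simply scaling the p-values):

```python
import pandas as pd
from scipy.stats import ttest_rel, ttest_ind

# Hypothetical per-subject mean RTs: columns "subject",
# "soa" ("-300", ..., "+240", "nosound"), and "rt" (ms).
rts = pd.read_csv("exp3_cell_means.csv")
wide = rts.pivot(index="subject", columns="soa", values="rt")

soas = ["-300", "-200", "-100", "0", "+100", "+240"]
for soa in soas:
    t, p = ttest_rel(wide[soa], wide["nosound"])
    p_bonf = min(p * len(soas), 1.0)  # Bonferroni: scale by number of comparisons
    print(f"t{soa} vs. no-sound: t({len(wide) - 1}) = {t:.2f}, corrected p = {p_bonf:.3f}")

# Between-group check: no-sound RTs of Experiment 2 vs. Experiment 3
# (independent samples), to verify that the two groups are comparable.
exp2_nosound = pd.read_csv("exp2_cell_means.csv").query("soa == 'nosound'")["rt"]
t, p = ttest_ind(exp2_nosound, wide["nosound"])
print(f"Experiment 2 vs. 3 no-sound: t = {t:.2f}, p = {p:.3f}")
```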


Results and discussion

The two-tailed paired t tests between no-sound and each SOA showed no significant differences (t−300 vs. no-sound: t(11) = −2.24, p = n.s.; t−200 vs. no-sound: t(11) = −3.05, p = n.s.; t−100 vs. no-sound: t(11) = 0.65, p = n.s.; t0 vs. no-sound: t(11) = 1.81, p = n.s.; t+100 vs. no-sound: t(11) = 1.81, p = n.s.; t+240 vs. no-sound: t(11) = 0.9, p = n.s.). Mean RTs and standard errors for each condition are shown in Table 1. The no-sound conditions of Experiments 2 and 3 did not differ (t(19) = 0.668, p = n.s.; mean RT: Experiment 2, 407 ms, SE 6; Experiment 3, 422 ms, SE 19). Errors were not analyzed since subjects' performance was almost perfect in terms of accuracy (>99%).

Table 1 Results of Experiment 3

        t−300    t−200    t−100    t0      t+100    t+240    No-sound
RT      410      411      424      430     430      425      422
SE      20.1     19.8     18.3     18.6    17.9     18.5     19

RT (ms) and standard error values for all conditions in Experiment 3

On the whole, our results demonstrate that the mere presentation of an irrelevant sound (at any SOA from the imperative stimulus) did not affect subjects' performance. Moreover, the fact that RTs in the no-sound condition did not change between Experiments 2 and 3 is an additional corroboration that the two groups of subjects have analogous RT performance. Therefore, we can confirm that the influence of sound found in Experiments 1 and 2 is not related to an aspecific attentional effect, but is due to the specific musical information that the sound conveys.

General discussion

The ideomotor principle posits that actions and their effects become associated through their repeated co-occurrence (Hommel et al. 2001). The mere presentation of an effect activates the associated motor plan, whereas, conversely, motor preparation affects the ability to detect and process the associated sensory stimulus. These simple predictions have been verified in a number of studies, confirming this approach as a viable tool in the study of action-perception integration processes (for a review see Schütz-Bosbach and Prinz 2007). In this context, experts are a model of over-learning of these kinds of associations. For example, musicians in the audio-motor domain (Repp and Knoblich 2004; Keller et al. 2007) and dancers in the visuo-motor domain (Calvo-Merino et al. 2006) demonstrate a higher motor awareness within their field of expertise. Conversely, it has been found that musicians (Drost et al. 2005a, b, 2007), as well as touch typists (Rieger 2004), show motor preparation when attending to stimuli belonging to the domain of their motor expertise, such as musical notes or printed letters. However, little is known about the time course of these latter effects. Our experiments specifically examined the temporal deployment of the interactions between sound and movement in audio-motor experts. Moreover, the introduction of two control conditions, one with the ecological no-sound condition and the other with a non-musical sound, allowed a sharper control over the effects of congruence and incongruence between cue and imperative stimulus. We found a strong facilitation when the sound preceded the imperative visual stimulus by 200–300 ms. The interference, instead, was effective in a time window running from 200 ms before the onset of the visual imperative stimulus up to its onset. Neither interference nor facilitation occurred when the sound was presented after the onset of the visual imperative stimulus.


Musical tones presented at the late SOAs (t+100 and t+240 ms) did not elicit any significant effect on motor performance. This result is consistent with a temporal model of musical tone processing in which the first "motor" stage falls around 200 ms after stimulus onset (Shahin et al. 2003; Kuriki et al. 2006). Thus, at the positive SOAs (+100 and +240 ms) the first stage of audio-motor conversion occurs at least 300 ms after the visual imperative stimulus. We interpret this lack of effect as indicating that the sound-evoked motor resonance arrives too late in the motor planning cascade. The earliest SOA (t−300) showed only facilitation and no interference on RTs. A 300 ms interval has been indicated as the time necessary to complete an audio-motor transformation and pre-activate a full motor representation (Pizzamiglio et al. 2005; Shahin et al. 2008). A 300 ms interval is also congruent with the timing of premotor cortex activation (Murray et al. 2006; Pizzamiglio et al. 2005), often reported in investigations of sound-evoked motor plans (Lahav et al. 2007; Kaplan and Iacoboni 2007), and of top-down control processes in music behaviours (Shahin et al. 2008; Koelsch 2006). Thus, the lack of interference observed at the −300 ms SOA might be caused by a top-down inhibition of the motor plan evoked by the incongruent sound. We therefore propose that higher-order processes need at least 300 ms to fully modulate the effects of the sound-evoked plan: after this time has elapsed from sound onset, these top-down processes can take advantage of the audio-motor transformation (facilitation) or inhibit it (thereby avoiding interference). The −200 ms SOA (t−200), instead, showed both facilitation and interference on subjects' RTs. This result is in line with our prediction that 200 ms is the time necessary to complete the first stage of audio-motor conversion (Shahin et al. 2003; Kuriki et al. 2006). By the time the subject is visually prompted to play the musical score, the sound analysis has already reached the first critical step in the audio-motor continuum. However, this stage might reflect a fast process of motor conversion that is not yet subject to higher-order control and thus elicits both facilitation and interference. The −100 ms SOA (t−100), finally, showed only interference. This result might indicate that the motor translation had not yet been completed, otherwise we would also have seen facilitation. This time lag, according to the above-mentioned neurophysiological experiments, corresponds to the auditory analysis culminating in a spectral representation of a complex musical stimulus (Shahin et al. 2003). Therefore, the interference we observe at this early stage might be of a different nature, possibly linked to the perceptual cross-modal interaction between auditory and visual analyses. Following this rationale, the interference seems to be due to the cross-modal incongruence between the sound and the imperative stimulus during the bottom-up conversion of both auditory and visual stimuli into motor plans. This process is very fast and appears to be effective before the motor transformation of the sound has been completed.

Our results show that facilitation and interference have two different time-courses, but why would two different time deployments be needed for such processes? One possible explanation can be found by taking an ecological approach. Motor performance in a simple or over-learned task is usually optimal, and there is no significant advantage in speeding up its execution. Interference, however, might play an important role in error correction and online performance tuning, especially for skilled behaviours (Maidhof et al. 2009). These considerations become even more important if we consider that online monitoring of motor performance is only part of the picture. For instance, musicians can also act in strict coordination with other individuals. In order to do so, the system is necessarily tuned to perform fast online corrections that are driven by bottom-up mechanisms and rely on shared representations of actions and perceptions (Schütz-Bosbach and Prinz 2007; Rizzolatti and Craighero 2004; D'Ausilio 2007). These considerations are of extreme relevance if we consider that in ecological contexts the observer (listener) is not passively waiting for information, as is instead the case in typical experimental paradigms. In natural contexts, sensory stimulation can also be produced by others' actions, which are often the result of concerted activity between the perceiver and one or more other individuals (Sebanz et al. 2006).

Conclusion

In this study, we confirm that potential action effects are translated into motor representations, as already demonstrated in a variety of behavioural, neuroimaging, and neurophysiological studies (Schütz-Bosbach and Prinz 2007; Rizzolatti and Craighero 2004). Our results also show that these sensory-motor transformations modulate behaviour according to their timing and their congruence with the action being prepared. Correspondingly, we speculate that the auditory-motor translation pre-activates the motor plan that would produce the incoming stimulation, but also that its effects on motor performance strictly depend on the state the motor system is about to engage in. The timing analysis allowed us to describe action interference and facilitation as pertaining to two temporally dissociable mechanisms that govern action performance when it has to be adjusted in response to external events: a first, early mechanism able to interfere with the planned action, and a second, later, higher-level mechanism able to facilitate motor planning. It is also possible that the sensory-motor resonance is qualitatively different if the observer (listener) has to imitate the action, respond with a complementary action, or simply understand the action (Newman-Norlund et al. 2007). In this study, we provided an account of the temporal dynamics of the mechanisms underlying sensory-motor integration, together with the observation of a dissociation between facilitation and interference. This evidence can provide new insights into the mechanisms of experts' performance, action monitoring, and action understanding.

References

Bangert M, Altenmüller EO (2003) Mapping perception to action in piano practice: a longitudinal DC-EEG study. BMC Neurosci 4:26
Bangert M, Peschel T, Schlaug G, Rotte M, Drescher D, Hinrichs H, Heinze HJ, Altenmüller E (2006) Shared networks for auditory and motor processing in professional pianists: evidence from fMRI conjunction. Neuroimage 30:917–926
Baumann S, Koeneke S, Schmidt CF, Meyer M, Lutz K, Jancke L (2007) A network for audio-motor coordination in skilled pianists and non-musicians. Brain Res 1161:65–78
Calvo-Merino B, Grèzes J, Glaser DE, Passingham RE, Haggard P (2006) Seeing or doing? Influence of visual and motor familiarity in action observation. Curr Biol 16:1905–1910
D'Ausilio A (2007) The role of the mirror system in mapping complex sounds into actions. J Neurosci 27:5847–5848
D'Ausilio A, Altenmüller E, Olivetti Belardinelli M, Lotze M (2006) Cross-modal plasticity of the motor cortex while listening to a rehearsed musical piece. Eur J Neurosci 24:955–958
Drost UC, Rieger M, Brass M, Gunter TC, Prinz W (2005a) Action-effect coupling in pianists. Psych Res 69:233–241
Drost UC, Rieger M, Brass M, Gunter TC, Prinz W (2005b) When hearing turns into playing: movement induction by auditory stimuli in pianists. Q J Exp Psychol A 58:1376–1389
Drost UC, Rieger M, Prinz W (2007) Instrument specificity in experienced musicians. Q J Exp Psychol A 60:527–533
Elbert T, Pantev C, Wienbruch C, Rockstroh B, Taub E (1995) Increased cortical representation of the fingers of the left hand in string players. Science 270:305–307
Fadiga L, Craighero L, D'Ausilio A (2009) Broca's area in language, action, and music. Ann N Y Acad Sci 1169:448–458
Haueisen J, Knösche TR (2001) Involuntary motor activity in pianists evoked by music perception. J Cogn Neurosci 13:786–792
Hommel B, Müsseler J, Aschersleben G, Prinz W (2001) The theory of event coding (TEC): a framework for perception and action planning. Behav Brain Sci 24:849–878
Hund-Georgiadis M, von Cramon DY (1999) Motor-learning related changes in piano players and non-musicians revealed by functional magnetic-resonance signals. Exp Brain Res 125:417–425
Kaplan JT, Iacoboni M (2007) Multimodal action representation in human left ventral premotor cortex. Cogn Process 8:103–113
Keller PE, Knoblich G, Repp BH (2007) Pianists duet better when they play with themselves: on the possible role of action simulation in synchronization. Conscious Cogn 16:102–111
Koelsch S (2006) Significance of Broca's area and ventral premotor cortex for music-syntactic processing. Cortex 42:518–520


Kuriki S, Kanda S, Hirata Y (2006) Effects of musical experience on different components of MEG responses elicited by sequential piano-tones and chords. J Neurosci 26:4046–4053
Lahav A, Saltzman E, Schlaug G (2007) Action representation of sound: audiomotor recognition network while listening to newly acquired actions. J Neurosci 27:308–314
Maidhof C, Rieger M, Prinz W, Koelsch S (2009) Nobody is perfect: ERP effects prior to performance errors in musicians indicate fast monitoring processes. PLoS ONE 4:e5032
Münte TF, Altenmüller E, Jäncke L (2002) The musician's brain as a model of neuroplasticity. Nat Rev Neurosci 3:473–478
Murray MM, Camen C, Gonzalez Andino SL, Bovet P, Clarke S (2006) Rapid brain discrimination of sounds of objects. J Neurosci 26:1293–1302
Newman-Norlund RD, van Schie HT, van Zuijlen AM, Bekkering H (2007) The mirror neuron system is more active during complementary compared with imitative action. Nat Neurosci 10:817–818
Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9:97–113
Pantev C, Oostenveld R, Engelien A, Ross B, Roberts LE, Hoke M (1998) Increased auditory cortical representation in musicians. Nature 392:811–814
Pascual-Leone A, Nguyet D, Cohen LG, Brasil-Neto JP, Cammarota A, Hallett M (1995) Modulation of muscle responses evoked by transcranial magnetic stimulation during the acquisition of new fine motor skills. J Neurophysiol 74:1037–1045
Pizzamiglio L, Aprile T, Spitoni G, Pitzalis S, Bates E, D'Amico S, Di Russo F (2005) Separate neural systems for processing action- or non-action-related sounds. Neuroimage 24:852–861
Prinz W (1990) A common coding approach to perception and action. In: Neumann O, Prinz W (eds) Relationships between perception and action: current approaches. Springer, Berlin, pp 167–201
Prinz W (1997) Perception and action planning. Eur J Cogn Psych 9:129–154
Repp BH, Knoblich G (2004) Perceiving action identity: how pianists recognize their own performances. Psych Sci 15:604–609
Rieger M (2004) Automatic keypress activation in skilled typing. J Exp Psych: HPP 30:555–565
Rizzolatti G, Craighero L (2004) The mirror-neuron system. Ann Rev Neurosci 27:169–192
Rosenkranz K, Williamon A, Rothwell JC (2007) Motorcortical excitability and synaptic plasticity is enhanced in professional musicians. J Neurosci 27:5200–5206
Schlaug G, Jäncke L, Huang Y, Steinmetz H (1995) In vivo evidence of structural brain asymmetry in musicians. Science 267:699–701
Schütz-Bosbach S, Prinz W (2007) Perceptual resonance: action-induced modulation of perception. Trends Cogn Sci 11:349–355
Sebanz N, Bekkering H, Knoblich G (2006) Joint action: bodies and minds moving together. Trends Cogn Sci 10:70–76
Shahin A, Bosnyak DJ, Trainor LJ, Roberts LE (2003) Enhancement of neuroplastic P2 and N1c auditory evoked potentials in musicians. J Neurosci 23:5545–5552
Shahin AJ, Roberts LE, Chau W, Trainor LJ, Miller LM (2008) Music training leads to the development of timbre-specific gamma band activity. Neuroimage 41:113–122
Stewart L, Henson R, Kampe K, Walsh V, Turner R, Frith U (2003) Brain changes after learning to read and play music. Neuroimage 20:71–83
Wolpert DM, Kawato M (1998) Multiple paired forward and inverse models for motor control. Neural Net 11:1317–1329
Zatorre RJ, Chen JL, Penhune VB (2007) When the brain plays music: auditory-motor interactions in music perception and production. Nat Rev Neurosci 8:547–558

students participated for partial course credit. ... edited using Adobe Audition Software. ..... tionships between monitoring and control in metacognition: Lessons.