Consciousness and Cognition 16 (2007) 687–699
www.elsevier.com/locate/concog

Review

Close to me: Multisensory space representations for action and pre-reflexive consciousness of oneself-in-the-world

Dorothée Legrand a,*, Claudio Brozzoli b,c,d, Yves Rossetti b,c,d, Alessandro Farnè b,c,d

a CREA, Paris F-75015, France
b INSERM, U864, Espace et Action, Bron F-69500, France
c Université Lyon, Lyon I, UMR-S 864, Lyon F-69003, France
d Hospices Civils de Lyon, Hôpital Neurologique, Mouvement et Handicap, Lyon F-69003, France
Received 24 January 2007 Available online 1 August 2007

Abstract

Philosophical considerations as well as several recent studies from neurophysiology, neuropsychology, and psychophysics converge in showing that peripersonal space (i.e. the space closely surrounding the body parts) is structured in a body-centred manner and represented through integrated sensory inputs. Multisensory representations may serve the function of coding peripersonal space for avoiding or interacting with objects. Neuropsychological evidence is reviewed for dynamic interactions between space representations and action execution, as revealed by the behavioural effects that the use of a tool, as a physical extension of the reachable space, produces on visual–tactile extinction. In particular, tool-use transiently modifies action space representation in a functionally effective way. We discuss the possibility that the investigation of multisensory space representations for action provides an empirical way to address pre-reflexive self-consciousness in its specificity, by considering the intertwining of self-relatedness and object-directedness of spatial experience as shaped by multisensory and sensorimotor integrations.

© 2007 Elsevier Inc. All rights reserved.

Keywords: Extinction; Peripersonal space; Tool-use; Body-as-subject; Self-relatedness of spatial cognition

1. Egocentric frame of reference and peripersonal space

An egocentric frame of reference is defined as the spatial frame of reference that allows one to locate an object according to the perceiver's own location: something is experienced as near or far, to the right or to the left, relative to the location in space of the subject's body. In the egocentric frame of reference, we experience spatiality insofar as we relate locations to our own perspective as perceiving subjects. Egocentric space is thus a space one should think of "as a participant, as someone plunged into its center, as someone with things to do

* Corresponding author. E-mail address: [email protected] (D. Legrand).

1053-8100/$ - see front matter © 2007 Elsevier Inc. All rights reserved. doi:10.1016/j.concog.2007.06.003


in that space" (Campbell, 1994, 5). As Merleau-Ponty states about depth: "it announces a certain indissoluble link between things and myself by which I am placed in front of them" (Merleau-Ponty, 1962 [1945], p. 298). Moreover, and as will be further detailed in the next sections, one's body is not a mere point anchoring an undifferentiated spatial frame. Rather, it also plays a role in structuring a peripersonal space, i.e. the space surrounding it. In short, then, and by definition, locating objects in the egocentric frame of reference and the peripersonal space involves locating them relative to oneself, or to one's body parts. The question that these definitions raise concerns the status of such "self-relatedness": the egocentric frame is centred on me, and peripersonal space surrounds me, but how is this "me" represented and experienced?

2. Object-centred spatial frames

Importantly, egocentric localisation is not adequately described as the identification of places by reference to a particular physical object, namely, the body of the perceiver itself. The egocentric frame of reference is by definition centred on one's body, but the latter is not itself taken as an object located within that space (Campbell, 1993, 1994). Indeed, object-centredness characterizes not egocentric but allocentric frames of reference, i.e. the spatial frame of reference that allows objects to be located with respect to each other. In this sense, identifying one's location according to some objective spatial coordinates, and then relating one's own location, objectified as such, to the location of other objects, would amount to locating an object according to one's own location in an allocentric frame of reference. However, ". . . when the subject is identifying places egocentrically, it cannot be thought of as doing so by first identifying a physical thing—itself—through a body image, and then identifying places by their relation to its body" (Campbell, 1993, 74).
The egocentric frame cannot be reduced to a body-centred allocentric space because, on the one hand, "axes that are distinctive of an egocentric frame are those which are immediately used by the subject in the direction of action" (Campbell, 1993, 75), while, on the other hand, the objectifying stance characterizing allocentric spatial representations is irrelevant for action. To account for the fact that a subject might successfully interact with objects located in the space surrounding him (e.g. cross a bridge), it is not sufficient that he has a (mental or physical) map indicating his own location (the familiar marker "you are here") and the location where the bridge starts. As anyone who has ever used a map knows, such information is useful for moving towards the bridge from one's current location if and only if one manages to relate the allocentric marker "you are here" to one's current physical location, in the egocentric frame of reference. Knowing in which direction you need to turn in order to go from "here" to the bridge requires knowing whether the bridge is to your right or your left: you need to locate the bridge in your egocentric frame of reference. This is so because "right" and "left" are viewer-dependent directions, while non-egocentric maps are "objective" in the sense of being independent of the contingencies of the perceiver/agent's location (Grush, 2001). The same goes for peripersonal space. For example, if you have a non-egocentric map of your desk, indicating the keyboard and a marker "your hand is here", that would remain insufficient to move your hand efficiently in order to type on the keyboard. For that to be possible, you would need to orient such a map in your egocentric frame of reference.
It follows from these considerations that the egocentric frame of reference is not merely "a particular class of object-centred frame, namely, those which are centred on the body or a part of the body [taken as object]" (Campbell, 1993, 72).

3. Non-objectifying spatial self-relatedness

As just underlined, the origin of the egocentric frame and the delimitation of peripersonal space are not given by the body as an object observed among others. Rather, the ability to execute actions towards egocentrically located objects implies that the egocentric frame must be centred on the body as a subject-agent. Better understanding this point requires clarifying the notions of body-as-object and body-as-subject. The body-as-object is the body as object of, e.g., perception (more generally, as object of representation/experience). The body-as-subject is the body as subject/agent of, e.g., perception (more generally, as subject of representation/experience). By definition, these terms remain ontologically neutral; thus the body-as-object


and the body-as-subject are not supposed to differ from each other from an objective physiological point of view. However, it is essential to consider that the body-as-subject cannot be captured in its specificity if it is taken as an object of perception, for that can only lead, by definition, to a representation of the body-as-object. In this sense, saying that the egocentric reference frame and the peripersonal space are centred on the body-as-subject, as we argued in the previous sections, amounts to saying that egocentric and peripersonal space are structured in a self-related and non-objectifying manner. To account for such a non-objectifying form of self-relatedness characterizing the egocentric frame of reference and the peripersonal space, Campbell introduces the notion of causal indexicality (1993, 1994). The latter characterizes one's practical grasp of the implications of one's surrounding environment for one's action, by contrast with a "detached" (allocentric) picture of it. For example, a notion such as "within reach" defines peripersonal space by having immediate implications for the subject's action: the subject's representation of something as being within (or beyond) reach is tied to the subject's ability (or inability) to perform reaching actions towards the object so represented. In Campbell's terms, such spatial representation is causally indexical. Consistent with the aforementioned distinction between subject and object, such indexicals are essential in Perry's sense (1993): they cannot be reduced to any objective description. In the spatial domain, the egocentric frame is essentially indexical in the sense that it is anchored to the perceiver's body in a way that cannot be reduced to any objectifying representation of one's body. Again, this is not to say that the non-objectified body differs ontologically from the biological body (Bermudez, 2005).
Here, the point is only to differentiate objectifying ways of relating to one's body-as-object from non-objectifying ways of relating to one's body-as-subject (Gallagher, 2003; Legrand, 2006, 2007). The relation to one's body-as-subject is not objectifying in the sense that, e.g., grasping an object with one's hand necessitates locating the object in a body-relative space, but does not necessitate locating one's hand as a particular object in this same body-relative spatial frame: "the subject, when put in front of his scissors, needle and familiar tasks, does not need to look for his hands or his fingers, because they are not objects to be discovered in objective space" (Merleau-Ponty, 1962 [1945], p. 121). "I observe external objects with my body, I handle them, examine them, walk around them, but my body itself is a thing which I do not observe: in order to be able to do so, I should need the use of a second body which itself would be unobservable" (Merleau-Ponty, 1962 [1945], p. 104). This point has been made forcefully not only in phenomenology but also in analytic philosophy of mind (see e.g. Bermudez (1998, 2005)).1

4. The subject as origin of spatial reference

These considerations may seem to raise an important problem: how is it that one's hand can, e.g., grasp scissors if hand and scissors are not located in the same spatial frame? Bermudez proposes that "there is no single spatial coordinate system that encompasses both bodily awareness and external perception" and concludes from this that "acting effectively upon the world requires some sort of translation between [these] two fundamentally different coordinate frames" (2005, 313). However, this is a potential solution to a problem that in fact does not arise at this level. The problem is not to relate two objects, hand and scissors, that would be located in two distinct spatial frames.
Indeed, the body-as-subject is located neither in the scissors' exteroceptive spatial frame, nor in any other objectifying spatial frame where one's body parts would be located relative to, e.g., the immovable torso and/or relevant joints (Bermudez, 2005, 310–311). Such location concerns only the body-as-object and, as described above, would merely amount to a particular type of allocentric location. As recalled by Campbell, the immediate problem with such a view is to understand why referring to any additional objectifying spatial frame should be thought to achieve anything relevant regarding the determination of the egocentric frame of exteroception (1993, 73).

1 Note that this point concerns normal, everyday-life cases. Of course, the point is not to say that one cannot objectify one's body parts by explicitly observing them and their location. For example, learning new technical actions sometimes requires looking both at the executing limb and at the target of the intended action. The same holds in experimentally distorted conditions (e.g. when one needs to learn to move appropriately while wearing prisms that distort the perceived visual field). Objectification of one's body as a whole happens in so-called out-of-body experiences (Blanke & Arzy, 2005; Blanke, Landis, Spinelli, & Seeck, 2004, 2005), where the subject reports observing his own body from the outside.


In fact, at this level, the body-as-subject is not located in any objectifying frame of reference, be it exteroceptive or interoceptive. In Bermudez' own terms, "the spatial content of bodily awareness cannot be specified within a Cartesian frame of reference that takes the form of axes centered on an origin" (2005, 310). This is so because the body-as-subject is the origin of the egocentric frame of reference where objects are located. Therefore, what needs to be understood is that the body's spatiality is not only fundamentally distinct from exteroceptive spatiality but also, at the same time, constitutively intertwined with it, by constituting its point of origin.

5. Intersection

In what follows, we will consider empirical investigations of the ways spatiality is structured relative to the subject's body and action, thereby trying to tackle some of the behavioural correlates of the experience of spatiality. Given that the representation of egocentric and peripersonal space is both object-directed and self-related, the following sections will consider whether shifting from the theoretical to the empirical level of investigation allows the determination of any of the neurophysiological mechanisms underlying the intertwining of object-directedness and self-relatedness. To that aim, we will review a set of investigations tackling the body- and action-relatedness of spatial representation. We will argue that these data cohere with the conception of consciousness as being characterized both by its subject and by its object, that is, the conception of self-relatedness and object-directedness as being closely intertwined with each other.

6. Multisensory representation of peripersonal space: Neurophysiological evidence

Peripersonal space, i.e. the space surrounding the perceiver's body, is differentiated into far-peripersonal and near-peripersonal space.
Spatial coding, and especially the coding of near-peripersonal space, is most frequently multisensory (Duhamel, Colby, & Goldberg, 1991, 1998; Graziano & Gross, 1995, 1998; Hyvarinen & Poranen, 1974; Rizzolatti, Scandolara, Matelli, & Gentilucci, 1981, 1998). Indeed, as we typically receive a simultaneous flow of information from each of our senses in real-world situations, a stable representation of external space requires integrated multisensory processing (Driver & Spence, 2002): information stemming from multiple sensory modalities needs to be integrated into a single, unified percept. The last two decades of neurophysiological research have brought a large body of evidence to support the notion that multisensory integration is a frequent feature of spatial representation. Animal studies revealed that near-peripersonal space in the monkey is coded through multisensory integration at the single-neuron level. A circuit of subcortical structures and cortical areas of the monkey brain, such as the putamen, the post-central gyrus, parietal areas 7b and VIP, as well as premotor areas, plays a special role in representing the space that closely surrounds the animal's body. These areas contain multisensory neurons that have overlapping tactile and visual and/or auditory receptive fields (RFs) in rough spatial register with each other, the visual and auditory RFs extending only a few centimetres outward from the somatosensory RF. These neurons respond to incoming stimuli of each modality, provided that visual or auditory stimuli are displayed close to the body part where the tactile RF is located. Most of these neurons conform to some common functional properties.
Their visual RFs are typically restricted to the space immediately surrounding the body part and operate to some degree in body-part-centred coordinates, moving with the body part when it moves, although multiple reference frames can be represented within the neuronal population (Avillac, Deneve, Olivier, Pouget, & Duhamel, 2005). The strength of the visual response decreases with distance from the body part and, overall, the distribution of such a multisensory representation covers mostly the animal's head, torso and paws. On the basis of these functional properties, several authors have suggested that the premotor cortex, parietal areas and putamen form an interconnected system for integrated multisensory coding of near-peripersonal space (Colby, Duhamel, & Goldberg, 1993; Duhamel, Colby, & Goldberg, 1998; Fogassi et al., 1996, 1998; Graziano, Hu, & Gross, 1997).
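The functional profile just described (a visual RF anchored to a body part, with responses falling off with distance from it) can be illustrated with a toy model. This is our illustrative sketch, not a model from the cited studies; the exponential decay, its space constant, and all positions are arbitrary assumptions.

```python
# Toy model of a visuo-tactile neuron whose visual receptive field (RF) is
# anchored to a body part: the response to a visual stimulus decays with
# distance from the hand, and the RF travels with the hand when it moves.
import math

def bimodal_response(stim_pos_cm, hand_pos_cm, rf_extent_cm=10.0):
    """Return a 0-1 firing proxy for a visual stimulus at stim_pos_cm,
    given the current hand position. rf_extent_cm is a hypothetical
    space constant governing the distance-dependent decay."""
    distance = abs(stim_pos_cm - hand_pos_cm)
    return math.exp(-distance / rf_extent_cm)

near_resp = bimodal_response(stim_pos_cm=5.0, hand_pos_cm=0.0)    # 5 cm from hand
far_resp  = bimodal_response(stim_pos_cm=40.0, hand_pos_cm=0.0)   # 40 cm from hand
moved     = bimodal_response(stim_pos_cm=40.0, hand_pos_cm=35.0)  # hand moved near stimulus
assert near_resp > far_resp   # response decreases with distance from the body part
assert moved > far_resp       # hand-centred: the RF follows the hand
```

The second assertion captures the key property reviewed above: what matters is the stimulus's distance from the body part, not its absolute position in space.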


7. Peripersonal space coding in humans

A growing body of neuropsychological evidence suggests that the human brain forms integrated visual–tactile representations of the space closely surrounding the body. In this respect, the study of a neuropsychological condition called 'extinction' has provided considerable insight into the behavioural characteristics of multimodal spatial representation in humans. Extinction (Loeb, 1885; Oppenheim, 1885) is a pathological sign following brain damage whereby patients may fail to perceive contralesional stimuli only under conditions of double (contra- and ipsilesional) simultaneous stimulation (Bender, 1952), thus revealing the competitive nature of this phenomenon (Di Pellegrino & De Renzi, 1995; Driver, 1998; Duncan, 1980; Ward, Goodrich, & Driver, 1994). A number of studies have shown that extinction can emerge when concurrent stimuli are presented in different sensory modalities, i.e., when different sensory inputs are delivered to the ipsi- and contralesional sides of the patient's body. Tactile extinction, for example, can be modulated by visual and auditory events simultaneously presented in the region of space near the tactile stimulation, increasing or reducing tactile perception depending upon the spatial arrangement of the stimuli. In a series of studies, we tested whether the presentation of a visual stimulus in the right ipsilesional field could extinguish a tactile stimulus presented on the contralesional hand, which was otherwise well detected by patients when presented alone. The prediction of these studies was that if a multisensory (visuo-tactile) system processing tactile and visual stimuli near the body is in charge of coding left and right spatial representations, then delivering visual stimuli close to a body part (≤7 cm, i.e. in the near-peripersonal space) would be more effective in producing cross-modal visual–tactile extinction than presenting the same visual stimuli at larger distances (≥35 cm, i.e.
in the far-peripersonal space). The results of these studies confirmed the presence of stronger cross-modal visual–tactile extinction when visual stimuli were displayed in the near- as compared to the far-peripersonal space. These findings were taken as providing strong neuropsychological support for the idea that the human brain represents near-peripersonal space through an integrated multisensory visuo-tactile system. Owing to this system's activity, the somatosensory representation of the ipsilesional hand may be activated by the nearby presentation of a visual stimulus, thus competing with the contralesional hand representation activated by a tactile stimulus. Since the competition is biased in favour of the ipsilesional side in extinction patients, the ipsilesional visual stimulus appears to extinguish the contralesional stimulus presented in a different modality. This would be due to the fact that the processing of the somatosensory stimulation of the contralesional hand is disadvantaged in terms of competitive weights, bearing a comparatively weaker representation. We assessed which reference frame is used to code multisensory near-peripersonal space in a patient with left tactile extinction who was asked to cross the hands, so that the left hand was in the right hemispace and the right hand in the left hemispace (Di Pellegrino, Làdavas, & Farnè, 1997). In such a crossed-hands situation, a visual stimulus presented near the right hand (located in the left hemispace) still extinguished tactile stimuli applied to the left hand (now located in the right hemispace). Thus, visual–tactile extinction was not modulated by the position of the hands in space, as long as the spatial correspondence between sensory modality and the stimulated hand was kept constant (i.e., visual stimulus–right hand/tactile stimulus–left hand).
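The competitive account sketched above (a weakened contralesional representation losing to an ipsilesional representation boosted by a nearby visual stimulus) can be caricatured in a few lines. All weights, the 10-cm "near" limit, and the detection threshold below are hypothetical parameters of our own, not values from the studies.

```python
# Toy competition sketch of cross-modal visual-tactile extinction: the two
# hand representations compete, the contralesional side carries a weaker
# weight after right-brain damage, and a visual stimulus near the
# ipsilesional hand boosts that hand's representation.
def detects_contralesional_touch(ipsi_visual_distance_cm=None,
                                 contra_weight=0.6, near_boost=0.8,
                                 near_limit_cm=10.0, threshold=0.3):
    """Return True if the contralesional touch survives the competition.
    ipsi_visual_distance_cm is None when no visual stimulus is shown."""
    ipsi_activation = 0.0
    if ipsi_visual_distance_cm is not None:
        # A visual stimulus activates the ipsilesional hand representation,
        # strongly only when it falls within near-peripersonal space.
        near = ipsi_visual_distance_cm <= near_limit_cm
        ipsi_activation = near_boost if near else near_boost * 0.2
    # The touch is perceived only if its (weakened) representation beats
    # the competing activation by a margin.
    return contra_weight - ipsi_activation > threshold

assert detects_contralesional_touch(None)       # touch alone: well detected
assert not detects_contralesional_touch(7.0)    # near visual stimulus: extinguished
assert detects_contralesional_touch(35.0)       # far visual stimulus: largely spared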
This finding, by showing that visual peripersonal space remains anchored to the hand even when the hand is moved into another hemispace, strongly suggests that near-peripersonal space is at least partially coded in a hand-centred coordinate system. The pattern of results observed in the case of visual–tactile stimulation of the hand is highly consistent with the functional properties of the multisensory system that has been described in monkeys, further suggesting that human and non-human primates might share, at some level, similar cerebral mechanisms for near-space representation. The multisensory representation of space is anchored neither to a mere "bodily point", nor to the body as a whole, but to specific body parts, in this case the hand. This raises the question of whether humans represent near-peripersonal space not only in relation to the hands, but also in relation to other body parts. In this respect, neurophysiological findings revealed a somatotopic distribution of multisensory neurons' RFs, which are known to be mostly located on the animal's hand/arm, trunk, and face. The latter neurons seem to be particularly relevant for the coding of near-peripersonal space, since a specific multimodal area of the parietal lobe (VIP) is mainly devoted to representing the space near the face (Colby et al., 1993; Duhamel et al., 1991, 1998). On this basis, we reasoned that a multisensory mechanism, similar to that operating in the case of the hand, might also be


involved in representing near-peripersonal space in relation to the human face. Therefore, we followed the same rationale to investigate whether the presentation of ipsilesional visual stimuli might modulate left tactile extinction also at the level of the face (Làdavas, Zeloni, & Farnè, 1998). Similarly, we expected that cross-modal extinction would be stronger when visual stimuli were presented near, as compared to far from, the patients' face. This hypothesis was assessed in a group of patients with right brain damage presenting left tactile extinction, and was clearly supported. As for the hand, visual stimuli presented to the ipsilesional side produced a decrease in the detection of contralesional tactile stimuli, particularly when the visual stimuli were presented near the ipsilesional cheek. In this near-peripersonal condition, patients reported only a few touches of the left cheek, whereas these stimuli were otherwise well perceived when delivered alone. The extinction phenomenon was much less severe when visual stimuli were delivered far from the face; in the far-peripersonal condition, patients were able to report the majority of contralesional touches. Altogether, these studies suggest that multisensory representations of space are coded within the near-peripersonal space of the face and the hand, and these representations might differ from those controlling visual information in the far-peripersonal space (Farnè & Làdavas, 2002; Làdavas, 2002).

8. Multiple representations of peripersonal space

Does the modular organisation of space, which seems to operate as a general principle governing spatial perception, also apply to the representation of near-peripersonal space?
Referring to Graziano and Gross's metaphor of near-peripersonal space as a 'gelatinous medium' surrounding the body, we asked whether this would be a unitary and homogeneous sector of space encompassing the whole body, or an ensemble of modules separately representing the space immediately adjacent to a given body part. We recently tested this unitary vs. modular representation hypothesis (Farnè, Demattè, & Làdavas, 2005). As the two hypotheses make opposite predictions, we contrasted them directly by investigating cross-modal visual–tactile extinction in a group of right brain-damaged patients. We reasoned that, if the unitary hypothesis were true, then tactile stimuli delivered to the contralesional hand would be comparably extinguished by ipsilesional visual stimuli irrespective of the stimulated body part (either the hand or the face), provided that the visual stimulus was presented near the body. Alternatively, if near-peripersonal space is represented in a modular way, then tactile stimuli delivered to the contralesional hand would be more severely extinguished when ipsilesional visual stimuli are presented near the homologous body part (i.e., the right hand) than near the non-homologous body part (i.e., the right side of the face). The two hypotheses also differ with respect to the near–far modulation of cross-modal extinction, since its presence in the case of stimulation of non-homologous sectors would support the unitary hypothesis, whereas its absence would favour the modular-organization hypothesis. The results showed visual–tactile extinction that was stronger for homologous than for non-homologous combinations, and showed that this effect was selectively present when visual stimuli were presented near the ipsilesional side of the patients' body.
In sharp contrast, when visual stimuli were presented far from the ipsilesional side of the patients' body, the amount of visual–tactile extinction obtained in homologous and non-homologous combinations was comparable. By extending the principle of modular space organisation to this peculiar sector of space, these findings support the view that different multisensory representations are coded within the near-peripersonal space of the hand and the face. Further support for this view has recently been provided by neuroimaging findings showing a human parietal face area representing head-centred visual and tactile maps (Bremmer, Schlack, Duhamel, Graf, & Fink, 2001; Bremmer et al., 2001; Sereno & Huang, 2006).

9. Multisensory representation of peripersonal space for action

Taken together, the neurophysiological and neuropsychological findings reviewed above converge in showing that peripersonal space is structured into far- and near-peripersonal sectors, the latter being specifically coded in a multisensory, body-part-centred and modular manner. So far, these considerations allow for a fine-grained description of the structure of space and of its anchoring to the body. This provides an adequate basis for asking further questions about the determinants of such a spatial structure. Specifically, we will ask the following questions: Is the extension of peripersonal space fixed, or can it be modified? If it can be modified, what are the conditions of such a modulation? Is a simple change of our visual body-image sufficient


to dynamically remap far space as near, or is some kind of sensorimotor activity necessary to produce this remapping? In the terms introduced above, is the structuring of peripersonal space related to the body-as-object, i.e. the body as object of perception (body image), or to the body-as-subject, characterized by functional processes relating one's perception of spatial locations to one's own transient ability to act towards those locations? We argued above that the egocentric frame of reference is anchored to the body-as-subject. The point here is to present empirical arguments showing that (1) in the peripersonal space, the egocentric frame is not merely anchored to the body-as-object, and (2) the body-as-subject is characterized functionally by its ability to act. As argued by Merleau-Ponty, one must differentiate between an objective representation of space linked to a reflexive representation of one's body, on the one hand, and a pre-objective anchoring of space in non-reflexive bodily activity, on the other. He conceives of the experience of space as prior to any spatial judgement, and as setting subject and world face to face in a way that is determined by the possibilities for action of a subject involved in the world (1962 [1945], 311). In what follows, we review empirical investigations of the specific manner in which space can be structured by the perceiver's own action.

Recent neurophysiological animal studies have examined whether the near-peripersonal space of the monkey's hands, and especially its spatial extension and location, can be modified through different kinds of sensorimotor experience. The question at stake is whether a passive change of the corporeal configuration is sufficient, or whether some goal-directed activity is needed. So far, this question has been investigated by considering the effect of tool-use on the extension of peripersonal space (Iriki, Tanaka, & Iwamura, 1996, 2001; Obayashi, Tanaka, & Iriki, 2000).
Tools enable human beings and other animals to manipulate objects that would otherwise not be reachable by hand. Acting on distant objects by means of a physical tool requires sensory information that is mainly provided by vision and touch. A functional expansion of the peri-hand area where vision and touch are integrated would afford the possibility of reaching and manipulating far objects as if they were closer to the hand. In more detail, a recoding of relatively far visual stimuli as nearer ones has been observed in monkey single-cell studies after extensive training in using a rake to retrieve distant food, thus extending the hand's reachable space by connecting the animal's hand with objects located outside its reaching distance. A few minutes of tool-use induced an expansion of the visual RFs of visual–tactile neurons recorded in the parietal cortex. This rapid expansion along the tool axis seemed to incorporate the tool into the peri-hand space representation. The extended visual RFs contracted back to their pre-tool-use dimension after a short rest, even if the monkey was still passively holding the rake (Iriki et al., 1996). No modification of the visual RFs was ever found if the monkey was just passively holding the tool. Therefore, the tool-use-related expansion of the visual RFs was strictly dependent upon the active use of the rake to reach distant objects. A similar effect of recoding visual stimuli located in far-peripersonal space as if they were closer to the participants' body has been documented behaviourally in right brain-damaged patients with tactile extinction (Farnè & Làdavas, 2000). In this study, the amount of cross-modal visual–tactile extinction was assessed by presenting visual stimuli far from the patients' ipsilesional hand, at the distal edge of a 38-cm-long rake passively held in their hand.
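The dynamics reported in the monkey studies above (expansion of the peri-hand visual RF with active tool-use, no change with passive holding, contraction back after a short rest) can be caricatured as a simple function. This is our illustrative sketch, not the authors' model; the 10-cm baseline extent and the few-minute decay window are arbitrary assumptions, and the additive use of the tool's operative length is a simplification.

```python
# Toy caricature of peri-hand space dynamics with tool-use: the extent of the
# hand's visual RF along the tool axis expands with active use, stays at
# baseline with passive holding, and contracts back after a short rest.
BASE_RF_CM = 10.0  # hypothetical resting extent of peri-hand space

def peri_hand_extent(tool_operative_length_cm, active_use, minutes_since_use):
    """Return the current extent (cm) of the peri-hand visual RF."""
    if not active_use:
        return BASE_RF_CM          # passive holding: no tool incorporation
    if minutes_since_use > 5:
        return BASE_RF_CM          # after a short rest, the RF contracts back
    # Active use elongates the RF by the tool's operative (not absolute) length.
    return BASE_RF_CM + tool_operative_length_cm

assert peri_hand_extent(38, active_use=False, minutes_since_use=0) == BASE_RF_CM
assert peri_hand_extent(38, active_use=True, minutes_since_use=0) > BASE_RF_CM
assert peri_hand_extent(38, active_use=True, minutes_since_use=10) == BASE_RF_CM
```

The three assertions mirror, in order, the passive-holding, immediately-after-use, and after-rest conditions of the studies reviewed in this section.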
The patients' performance was evaluated before tool-use, immediately after a 5-min period of tool-use, and after a further 5–10 min resting period. To control for any possible effect due to directional motor activity, cross-modal extinction was also assessed immediately after a 5-min period of hand pointing movements. We found that far visual stimuli induced more contralesional tactile extinction immediately after tool-use (retrieving distant objects with the rake) than before tool-use, when patients just held the rake passively. This evidence of a functional expansion of the peri-hand space lasted for a few minutes after tool-use. After the resting period, the severity of cross-modal extinction was back to pre-tool-use levels, suggesting that this extension of the hand's near-peripersonal space had contracted back towards the patients' hand. Finally, no change in cross-modal extinction was found immediately after the execution of control pointing movements toward the same distant objects. Further neuropsychological evidence of dynamic changes of peri-hand space in humans has been reported by other authors, yielding similar results (Berti & Frassinetti, 2000; Maravita, Husain, Clarke, & Driver, 2001). Specifically considering the role played by passive or active experience in reshaping peripersonal space, the results of a recent study (Farnè, Bonifazi, & Làdavas, 2005) were clear in showing that a relatively prolonged, but passive, experience with a tool is not sufficient to induce such a dynamic remapping of far space


as near space. Indeed, passive exposure to the proprioceptive and visual experience of wielding a rake did not alter the severity of visual–tactile extinction, which was comparable to that obtained when the tool was actually absent (see also Maravita, Spence, Clarke, Husain, & Driver, 2000; Maravita, Spence, Kennett, & Driver, 2002). This favours the idea that plastic modifications of peripersonal space coding are not the product of passive changes in proprioceptive/kinesthetic or visual inputs per se. An artificial extension of our reachable space by a hand-held tool does not necessarily imply tool incorporation, unless the tool is used in some active way. Indeed, when cross-modal extinction was assessed equally far in space, but immediately after the active use of a long tool, we observed a significant increase in cross-modal extinction. These findings considerably extend our knowledge about dynamic tool incorporation in humans, by making clear that the plastic modifications are tightly linked to the active, purposeful use of a tool as a physical extension of the body, which allows interactions with otherwise non-reachable objects. These data suggest that the representation of space is neither static nor passive. Moreover, its modulation is not merely related to the proprioceptive or visual representation of the body-as-object. Rather, the functional coding of space is transiently built up in a body-related way thanks to processes of sensorimotor integration. At this level, as already argued above, the body is not represented as a mere object in space but is a subject/agent of spatial structuring (Merleau-Ponty, 1962 [1945], p.
292): "When we say that an object is huge or tiny, nearby or far away, it is often without any comparison, even implicit, with any other object, or even with the size and objective position of our own body, but merely in relation to a certain 'scope' of our gestures" (Merleau-Ponty, 1962 [1945], pp. 310–311; see also Coello, Casalis, & Moroni, 2005; Witt, Proffitt, & Epstein, 2005). The body is the tool-user anchoring the subjective perspective, and in this sense it is not itself equivalent to a tool or to any particular object. Rather, and conversely, through active tool-use the tool may be incorporated. These considerations anchor spatial representation to the body at a functional level, and to functionally incorporated tools, not to the body or to passively held tools merely taken as objects of proprioceptive and/or visual representations. As tools are functionally incorporated, peripersonal space is functionally extended, thereby maintaining its anchoring to the functional body "as a system of possible actions" (Merleau-Ponty, 1962 [1945], p. 291). Altogether, the data just reviewed suggest that tool-use can change space representation both in normal subjects and in brain-damaged patients. In particular, a passive change of the corporeal configuration (hand + tool) is not sufficient: some goal-directed activity is needed. These results raise a further question concerning the critical determinant of such a temporary change: does it depend upon the physical, absolute length of the tool, or upon the operative length of the tool that can be effectively used to act on objects? In this respect, the differential amount of cross-modal extinction obtained with different tools was determined not by the absolute length of the tool, but by its operative length (Farnè, Iriki, & Làdavas, 2005). These results favour the notion that peri-hand space elongation is directly related to the functionally effective length of the tool, i.e.
by the distance at which the operative part of the tool is located with respect to the hand. Importantly, this coheres with the aforementioned functional reshaping of spatial representation. It is worth noting that these and more recent studies (Farnè, Serino, & Làdavas, 2007; Bonifazi, Farnè, Rinaldesi, & Làdavas, 2007) suggest that tool-use may affect the strength of visual–tactile interaction at far locations without altering it at the level of the hand. Indeed, following the tool-use training, visual–tactile extinction increased both at the distal edge of the tool and midway between the hand and the tool-tip, but no change whatsoever was observed near the hand. This suggests that the remapping can be achieved through a functional reweighting of the competitive visual–tactile interaction that does not alter the multisensory coding of the acting body-part. These findings are coherent with the general property of near-peripersonal space recalled above: it is coded in a body-part-centred manner. Here, we see that the functional remapping of far as near space does not necessarily amount to detaching near-peripersonal space from its bodily anchoring (Holmes & Spence, 2004). Rather, the reported data suggest that the functional modulation of spatial representation involves a modification of the functional body itself, which remains anchored to the effector (the hand) independently of the tool-use-dependent functional modulation. These considerations link the coding of peripersonal space to the body schema (Head & Holmes, 1911): the body schema is "a way of stating that my body is in-the-world" (Merleau-Ponty, 1962 [1945], p. 115).


10. Consciousness of oneself as subject-in-the-world

What precedes deepens our understanding that spatial representations are not only object-directed but also self-related, i.e. related in a functional manner to one's acting body. The question now is whether this self-relatedness of spatial experience is itself experienced as such: is there a form of self-consciousness involved in peripersonal and egocentric spatial reference? First, one of the lessons one may wish to draw from the phenomenon of extinction is that the tactile experience of one's body can be extinguished, that is, modulated by one's experience of the near peripersonal space, at least following brain damage. The conclusion drawn about peripersonal space can be applied to the tactile experience of one's body: the latter competes with visual experience of the surrounding world, and its extinction is modulated in a functional, action-related manner. This point has far-reaching implications, since it contributes to downplaying both the intimacy and the strength of one's tactile bodily experiences. However, as we will see below, this conclusion cannot be generalized beyond the particular case of explicit (reported) consciousness of the body-as-object (e.g. explicit experience of the visual appearance of one's hand). Second, one may conclude that egocentric spatiality does not involve explicit bodily self-consciousness from the fact that, as argued above, "the egocentric identification of places does not depend upon a prior [objectifying] identification of a body" (Campbell, 1993, p. 74). Some empirical results support this view. Specifically, it has been reported that vision of a fake hand can deceive the multisensory coding of peri-hand space, such that a visual stimulus presented near a rubber hand is actually processed as if it were near the patient's real hand (Farnè, Pavani, Meneghello, & Làdavas, 2000; see also Pavani, Spence, & Driver, 2000).
Notably, in these experiments the protocol was not intended to actively induce any rubber hand illusion, nor any self-attribution of the rubber hand (Botvinick & Cohen, 1998; Tsakiris & Haggard, 2005). The subjects were aware that their own non-affected hand was placed behind their back. It may therefore be argued that the multisensory representation of peri-hand space is somewhat resistant to discrepant information, such that it can be illusorily referred to a properly aligned rubber hand. In line with this view, another experiment showed that the multisensory representation of peri-hand space is to some degree resistant to cognitive top–down influences: nearby visual stimuli can be processed as being near the body even when a transparent barrier is interposed between the hand and the visual stimulus, preventing any possible physical interaction between them. Specifically, the transparent barrier protecting the hand against the approaching visual stimulus did not prevent the multisensory coding of nearby stimuli, thus allowing the spatial modulation of visual–tactile extinction. In other words, a visuo-tactile representation of near-peripersonal space can be formed despite the subject's explicit awareness of the physical impossibility of the hand being touched (Farnè, Demattè, & Làdavas, 2003). These results must be interpreted in light of the aforementioned distinction between the body-as-object and the body-as-subject. We showed above that the latter specifically anchors and modulates multisensory representations of peripersonal space. The last two results now show that these spatial representations can be activated despite explicit consciousness of the body-as-object as being beyond sight (behind the subject's back) or beyond touch (underneath transparent glass).
It thus seems correct to argue that the constitution of an egocentric frame does not involve the form of self-consciousness defined as the conception of oneself as one physical object among many (Campbell, 1993, p. 92). However, this point does not lead to the conclusion that egocentric reference and peripersonal space are self-related without involving any form of bodily self-consciousness. Such a conclusion would be warranted only if self-consciousness were always objectifying. This is not the case, however, and still less so where bodily self-consciousness is concerned. Indeed, the paradigmatic form of bodily self-consciousness is non-objectifying, in the sense that one usually does not take one's body as an object of experience but rather experiences oneself as a bodily anchored subject. This is notably evidenced by the sharp contrast between the form of bodily consciousness you paradigmatically enjoy and the objectifying bodily consciousness that remains the only one available in deafferentation (Gallagher, 2005; Legrand, 2007). It is normally so hard to understand "what it feels like" to be such patients because you normally entertain a non-objectifying bodily self-consciousness, in the sense defined above: you can, but usually do not, take your body as an object of perception (Gallagher, 2003).


In the relevant literature (Gallagher, 2003, 2005; Legrand, 2006, 2007; Thompson, 2005, 2007; Zahavi, 2005), the experience of oneself as a bodily subject is called bodily pre-reflexive self-consciousness. It is not a reflexive form of consciousness, since reflexivity implies that the subject takes himself as an object of consciousness. The notion of pre-reflexivity is intended to capture the manner in which the body is experienced as subject. To better grasp the specificity of pre-reflexive consciousness by contrast with the reflexive level, take your right hand into your left one and consider the difference between the experiences of each of your hands. You experience the right hand as touched, and in doing so you take it as a particular object of tactile investigation. This is not the case for your left hand: you experience the latter specifically as touching (vs. touched), and as such you do not take it as a particular object of tactile investigation. Rather, you experience it as the "hand-subject" of the tactile investigation of the other hand-object (Gallagher, 2003, 2005; Merleau-Ponty, 1962). Likewise, without taking yourself as an object of experience, you experience yourself pre-reflexively as the subject experiencing objects in a spatial manner, as the bodily point anchoring the spatial perspective on objects. Though fundamentally distinct, self-relatedness (pre-reflexive subjectivity) and object-directedness (intentionality) are also constitutively intertwined (Zahavi, 2005). The intricacy of the experience of objects and of oneself as a bodily anchored subject is what allows Merleau-Ponty to defend the view that "my body co-exists with the world" (1962 [1945], p. 292). In the spatial domain, as already described above, the experience of spatiality is "intentional" in the sense that it concerns the location of perceived external objects, but it is also "subjective" in the sense that it is anchored to the perceiving subject's perspective.
Moreover, this perspective is bodily in that "the body is the subject's point of view on the world. One's own location, which determines what one can perceive, is the location of one's body, and perceived objects are perceived as standing in spatial relations to one's body" (Cassam, 1995, p. 332). To put it differently, at the pre-reflexive level the body is not experienced as another object-in-the-world but as that "from where" objects are experienced. Altogether, the empirical investigations of the determinants of spatial coding reviewed here cohere with the conception of spatiality as being structured by the perceiving subject's bodily activities. It is not by being reflexively conscious of one's body as an object located in space (e.g. aligned with the shoulder vs. behind the back) that the perceiving subject structures spatial representations for action. Rather, it is by being a perceiver and agent in space that the subject experiences a surrounding space structured specifically by and for its own action. In Rizzolatti and colleagues' words, "The movement-based space [. . .] becomes then our experiential peripersonal visual space" (1997, p. 191). Considering such spatial self-relatedness as a subjective dimension of conscious experience implies that spatial representation in the egocentric frame and in peripersonal space involves a form of self-consciousness. In turn, such an interpretation requires a fine-grained understanding of conscious experience. Most studies of consciousness focus on the object or content of consciousness and characterize consciousness by one of its features, namely intentionality, by which consciousness is "consciousness of" some object. However, such a focus is not suitable for a specific investigation of the experience of space, since it leaves unexplored the self-relatedness characterizing egocentric reference and peripersonal space, as described in previous sections.
To account for the richness of spatial experience, one must also consider a dimension of experience that is not reducible to its object-directedness ("intentionality"), namely its self-relatedness ("subjectivity").

11. Conclusion

The present interdisciplinary investigation has aimed to underline the mutual constraints and enrichment between dominant philosophical accounts of the experience of spatiality and data arising from our neuropsychological studies. Note that the reported experiments were first developed independently of the notion of pre-reflexive consciousness of oneself-in-the-world. As such, they do not provide evidence for its existence. Rather, being an experiential phenomenon, the latter must be accounted for at the experiential and conceptual levels, as has been done in the previous sections. Furthermore, this interdisciplinary perspective can be relevantly applied to other neuropsychological conditions, such as hemi-neglect and anosognosia. Thereby, it offers fruitful opportunities to further investigate the behavioural and neurophysiological counterparts of the intertwining of self-relatedness and object-directedness in spatial cognition.

Acknowledgments

This work was supported by the European Mobility Fellowship and the AVENIR project funding No. R05265CS.

References

Avillac, M., Deneve, S., Olivier, E., Pouget, A., & Duhamel, J. R. (2005). Reference frames for representing visual and tactile locations in parietal cortex. Nature Neuroscience, 8, 941–949.
Bender, M. B. (1952). Disorders in perception. Springfield: C. C. Thomas.
Bermudez, J. L. (1998). The paradox of self-consciousness. Cambridge, MA: The MIT Press.
Bermudez, J. L. (2005). Phenomenology of bodily awareness. In D. W. Smith & A. L. Thomasson (Eds.), Phenomenology and philosophy of mind (pp. 295–316). Oxford: Oxford University Press.
Berti, A., & Frassinetti, F. (2000). When far becomes near: Remapping of space by tool use. Journal of Cognitive Neuroscience, 12, 415–420.
Blanke, O., & Arzy, S. (2005). The out-of-body experience: Disturbed self-processing at the temporo-parietal junction. Neuroscientist, 11(1), 16–24.
Blanke, O., Landis, T., Spinelli, L., & Seeck, M. (2004). Out-of-body experience and autoscopy of neurological origin. Brain, 127(Pt 2), 243–258.
Blanke, O., Mohr, C., Michel, C. M., Pascual-Leone, A., Brugger, P., Seeck, M., et al. (2005). Linking out-of-body experience and self processing to mental own-body imagery at the temporoparietal junction. Journal of Neuroscience, 25(3), 550–557.
Bonifazi, S., Farnè, A., Rinaldesi, L., & Làdavas, E. (2007). Dynamic size-change of peri-hand space through tool-use: Spatial extension or shift of the multisensory area? Journal of Neuropsychology, 1, 101–114.
Botvinick, M., & Cohen, J. (1998). Rubber hands 'feel' touch that eyes see. Nature, 391, 756.
Bremmer, F., Schlack, A., Duhamel, J. R., Graf, W., & Fink, G. R. (2001). Space coding in primate posterior parietal cortex. NeuroImage, 14, S46–S51.
Bremmer, F., Schlack, A., Shah, N. J., Zafiris, O., Kubischik, M., Hoffmann, K., et al. (2001). Polymodal motion processing in posterior parietal and premotor cortex: A human fMRI study strongly implies equivalencies between humans and monkeys. Neuron, 29, 287–296.
Campbell, J. (1993). The role of physical objects in spatial thinking. In N. Eilan, R. McCarthy, & M. W. Brewer (Eds.), Problems in the philosophy and psychology of spatial representation. Oxford: Blackwell.
Campbell, J. (1994). Past, space and self. Cambridge, MA: MIT Press.
Cassam, Q. (1995). Introspection and bodily self-ascription. In J. Bermúdez, A. J. Marcel, & N. Eilan (Eds.), The body and the self (pp. 311–336). Cambridge, MA: MIT Press.
Coello, Y., Casalis, S., & Moroni, C. (2005). Vision, espace et cognition: Fonctionnement normal et pathologique. Lille: Presses du Septentrion.
Colby, C. L., Duhamel, J. R., & Goldberg, M. E. (1993). Ventral intraparietal area of the macaque: Anatomic location and visual response properties. Journal of Neurophysiology, 69, 902–914.
Di Pellegrino, G., & De Renzi, E. (1995). An experimental investigation on the nature of extinction. Neuropsychologia, 33, 153–170.
Di Pellegrino, G., Làdavas, E., & Farnè, A. (1997). Seeing where your hands are. Nature, 388, 730.
Driver, J. (1998). The neuropsychology of spatial attention. In H. Pashler (Ed.), Attention (pp. 57–71). Hove: Psychology Press.
Driver, J., & Spence, C. (2002). Multisensory perception: Beyond modularity and convergence. Current Biology, 10, R731–R735.
Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1991). Congruent representation of visual and somatosensory space in single neurons of monkey ventral intra-parietal area (VIP). In J. Paillard (Ed.), Brain and space (pp. 223–236). Oxford: Oxford University Press.
Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1998). Ventral intraparietal area of the macaque: Congruent visual and somatic response properties. Journal of Neurophysiology, 79, 126–136.
Duncan, J. (1980). The locus of interference in the perception of simultaneous stimuli. Psychological Review, 87, 272–300.
Farnè, A., Bonifazi, S., & Làdavas, E. (2005). The role played by tool-use and tool length on the plastic elongation of peri-hand space: A single case study. Cognitive Neuropsychology, 22, 408–418.
Farnè, A., Demattè, M. L., & Làdavas, E. (2003). Beyond the window: Multisensory representation of peripersonal space across a transparent barrier. International Journal of Psychophysiology, 50, 51–61.
Farnè, A., Demattè, M. L., & Làdavas, E. (2005). Neuropsychological evidence of modular organization of the near peripersonal space. Neurology, 65, 1754–1758.
Farnè, A., Iriki, A., & Làdavas, E. (2005). Shaping multisensory action-space with tools: Evidence from patients with cross-modal extinction. Neuropsychologia, 43, 238–248.
Farnè, A., & Làdavas, E. (2000). Dynamic size-change of hand peripersonal space following tool use. Neuroreport, 11, 1645–1649.
Farnè, A., & Làdavas, E. (2002). Auditory peripersonal space in humans. Journal of Cognitive Neuroscience, 14, 1030–1043.


Farnè, A., Pavani, F., Meneghello, F., & Làdavas, E. (2000). Left tactile extinction following visual stimulation of a rubber hand. Brain, 123, 2350–2360.
Farnè, A., Serino, A., & Làdavas, E. (2007). Dynamic size-change of perihand space following tool-use: Determinants and spatial characteristics revealed through cross-modal extinction. Cortex, 43, 436–443.
Fogassi, L., Gallese, V., Fadiga, L., Luppino, G., Matelli, M., & Rizzolatti, G. (1996). Coding of peripersonal space in inferior premotor cortex (area F4). Journal of Neurophysiology, 76, 141–157.
Fogassi, L., Raos, V., Franchi, G., Gallese, V., Luppino, G., & Matelli, M. (1998). Visual responses in the dorsal premotor area F2 of the macaque monkey. Experimental Brain Research, 128, 194–199.
Gallagher, S. (2003). Bodily self-awareness and object-perception. Theoria et Historia Scientiarum: International Journal for Interdisciplinary Studies, 7(1), 53–68.
Gallagher, S. (2005). How the body shapes the mind. Oxford: Oxford University Press.
Graziano, M. S. A., & Gross, C. G. (1995). The representation of extrapersonal space: A possible role for bimodal, visuo-tactile neurons. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 1021–1034). Cambridge, MA: MIT Press.
Graziano, M. S. A., & Gross, C. G. (1998). Visual responses with and without fixation: Neurons in premotor cortex encode spatial locations independently of eye position. Experimental Brain Research, 118, 373–380.
Graziano, M. S. A., Hu, X. T., & Gross, C. G. (1997). Visuospatial properties of ventral premotor cortex. Journal of Neurophysiology, 77, 2268–2292.
Grush, R. (2001). Self, world and space: On the meaning and mechanisms of egocentric and allocentric spatial representation. Brain and Mind, 1(1), 59–92.
Head, H., & Holmes, G. (1911). Sensory disturbances from cerebral lesions. Brain, 34, 102–254.
Holmes, N. P., & Spence, C. (2004). Extending or projecting space with tools? Multisensory interactions highlight only the distal and proximal ends of tools. Neuroscience Letters, 372, 62–67.
Hyvärinen, J., & Poranen, A. (1974). Function of the parietal associative area 7 as revealed from cellular discharges in alert monkeys. Brain, 97, 673–692.
Iriki, A., Tanaka, M., & Iwamura, Y. (1996). Coding of modified body schema during tool use by macaque postcentral neurones. Neuroreport, 7, 2325–2330.
Iriki, A., Tanaka, M., Obayashi, S., & Iwamura, Y. (2001). Self-images in the video monitor coded by monkey intraparietal neurons. Neuroscience Research, 40, 163–173.
Làdavas, E. (2002). Functional and dynamic properties of visual peripersonal space in humans. Trends in Cognitive Sciences, 6, 17–22.
Làdavas, E., Zeloni, G., & Farnè, A. (1998). Visual peripersonal space centered on the face in humans. Brain, 121, 2317–2326.
Legrand, D. (2006). The bodily self: The sensori-motor roots of pre-reflexive self-consciousness. Phenomenology and the Cognitive Sciences, 5, 89–118.
Legrand, D. (2007). Pre-reflective self-consciousness: On being bodily in the world. Janus Head, Special Issue: The Situated Body, 9(1), 493–519.
Loeb, J. (1885). Die elementaren Störungen einfacher Funktionen nach oberflächlicher, umschriebener Verletzung des Großhirns. Pflügers Archiv, 37, 51–56.
Maravita, A., Husain, M., Clarke, K., & Driver, J. (2001). Reaching with a tool extends visual–tactile interactions into far space: Evidence from cross-modal extinction. Neuropsychologia, 39, 580–585.
Maravita, A., Spence, C., Clarke, K., Husain, M., & Driver, J. (2000). Vision and touch through the looking glass in a case of crossmodal extinction. Neuroreport, 11, 3521–3526.
Maravita, A., Spence, C., Kennett, S., & Driver, J. (2002). Tool-use changes multimodal spatial interactions between vision and touch in normal humans. Cognition, 83, B25–B34.
Merleau-Ponty, M. (1962). Phenomenology of perception (C. Smith, Trans.). London: Routledge.
Obayashi, S., Tanaka, M., & Iriki, A. (2000). Subjective image of invisible hand coded by monkey intraparietal neurons. Neuroreport, 11, 3499–3505.
Oppenheim, H. (1885). Über eine durch eine klinisch bisher nicht verwertete Untersuchungsmethode ermittelte Sensibilitätsstörung bei einseitigen Erkrankungen des Großhirns. Neurologisches Centralblatt, 37, 51–56.
Pavani, F., Spence, C., & Driver, J. (2000). Visual capture of touch: Out-of-the-body experiences with rubber gloves. Psychological Science, 11, 353–359.
Perry, J. (1993). The problem of the essential indexical and other essays. New York: Oxford University Press.
Rizzolatti, G., Fadiga, L., Fogassi, L., & Gallese, V. (1997). The space around us. Science, 277, 190–191.
Rizzolatti, G., Luppino, G., & Matelli, M. (1998). The organization of the cortical motor system: New concepts. Electroencephalography and Clinical Neurophysiology, 106, 283–296.
Rizzolatti, G., Scandolara, C., Matelli, M., & Gentilucci, M. (1981). Afferent properties of periarcuate neurons in macaque monkeys. II. Visual responses. Behavioural Brain Research, 2, 147–163.
Sereno, M. I., & Huang, R. S. (2006). A human parietal face area contains aligned head-centered visual and tactile maps. Nature Neuroscience, 9, 1337–1343.
Thompson, E. (2005). Sensorimotor subjectivity and the enactive approach to experience. Phenomenology and the Cognitive Sciences, 4, 407–427.
Thompson, E. (2007). Mind in life: Biology, phenomenology, and the sciences of mind. Cambridge, MA: Harvard University Press.
Tsakiris, M., & Haggard, P. (2005). The rubber hand illusion revisited: Visuotactile integration and self-attribution. Journal of Experimental Psychology: Human Perception and Performance, 31, 80–91.


Ward, R., Goodrich, S., & Driver, J. (1994). Grouping reduces visual extinction: Neuropsychological evidence for weight-linkage in visual selection. Visual Cognition, 1, 101–129.
Witt, J. K., Proffitt, D. R., & Epstein, W. (2005). Tool use affects perceived distance, but only when you intend to use it. Journal of Experimental Psychology: Human Perception and Performance, 31, 880–888.
Zahavi, D. (2005). Subjectivity and selfhood: Investigating the first-person perspective. Cambridge, MA: The MIT Press.
