Anim Cogn (2010) 13:341–349 DOI 10.1007/s10071-009-0283-3
ORIGINAL PAPER
Facilitation of learning spatial relations among locations by visual cues: generality across spatial configurations

Bradley R. Sturz · Debbie M. Kelly · Michael F. Brown
Received: 20 July 2009 / Revised: 10 September 2009 / Accepted: 14 September 2009 / Published online: 24 September 2009
© Springer-Verlag 2009
Abstract Spatial pattern learning permits the learning of the location of objects in space relative to each other without reference to discrete visual landmarks or environmental geometry. In the present experiment, we investigated conditions that facilitate spatial pattern learning. Specifically, human participants searched in a real environment or interactive 3-D computer-generated virtual environment open-field search task for four hidden goal locations arranged in a diamond configuration located in a 5 × 5 matrix of raised bins. Participants were randomly assigned to one of three groups: Pattern Only, Landmark + Pattern, or Cues + Pattern. All participants experienced a Training phase followed by a Testing phase. Visual cues were coincident with the goal locations during Training only in the Cues + Pattern group, whereas a single visual cue at a non-goal location maintained a consistent spatial relationship with the goal locations during Training only in the Landmark + Pattern group. All groups were then tested in the absence of visual cues.
Results in both environments indicated that participants in all three groups learned the spatial configuration of goal locations. The presence of the visual cues during Training facilitated acquisition of the task for the Landmark + Pattern and Cues + Pattern groups compared to the Pattern Only group. During Testing, the Landmark + Pattern and Cues + Pattern groups did not differ when their respective visual cues were removed. Furthermore, during Testing the performance of these two groups was superior to the Pattern Only group. Results generalize prior research to a different configuration of spatial locations, isolate spatial pattern learning as the process facilitated by visual cues, and indicate that the facilitation of learning spatial relations among locations by visual cues does not require coincident visual cues.

Keywords Virtual environment · Open-field · Spatial pattern · Facilitation · Cue competition
B. R. Sturz (✉)
Department of Psychology, Armstrong Atlantic State University, 229 Science Center, 11935 Abercorn Street, Savannah, GA 31419, USA
e-mail: [email protected]

D. M. Kelly
Department of Psychology, University of Saskatchewan, 9 Campus Drive, Saskatoon, SK S7N 5A5, Canada
e-mail: [email protected]

M. F. Brown
Department of Psychology, Villanova University, 800 Lancaster Ave, Villanova, PA 19085, USA
e-mail: [email protected]

Introduction
At least two sources of spatial information are available to mobile animals as they navigate their environments: landmarks and environmental geometry (for a review, see Shettleworth 1998). Encoding of landmark information allows determination of position and orientation with reference to objects in the environment with known positions whereas encoding of geometric information allows determination of position and orientation with reference to geometric properties of a surrounding enclosure (Cheng 1986; for a review, see Cheng and Newcombe 2005). Theoretical accounts of spatial learning differ with respect to how these sources of spatial information are learned: either collectively by a unitary associative-based system
(e.g., Chamizo 2003; Graham et al. 2006; Miller 2009; Miller and Shettleworth 2007; Pearce et al. 2006) or independently by dual systems composed of separate feature- and geometry/boundary-based systems (e.g., Cheng 1986; Cheng and Newcombe 2006; Doeller and Burgess 2008; Doeller et al. 2008; for a review, see Burgess 2006; Gallistel 1990). Contemporary approaches to discriminating theoretical accounts of spatial learning have relied on associative cue competition (e.g., overshadowing and/or blocking) as an important diagnostic tool. Specifically, competition between spatial cues is often interpreted as indicating that the spatial information is processed by the same learning system, whereas lack of competition is often interpreted as indicating that the spatial information is processed by separate learning systems (e.g., Chamizo 2003; Cheng 2008; Shettleworth 1998). Feature-based information has been shown to be susceptible to cue competition from featural cues, whereas geometry/boundary learning appears to be immune to cue competition from featural cues (for a review, see Cheng and Newcombe 2006; however, see Cheng 2008). Recently, an additional type of spatial learning, also suggested to be based on geometric information, has been proposed (Brown and Terrinoni 1996; see also McNamara and Valiquette 2004). This type of spatial learning, termed spatial pattern learning, permits the learning of the location of objects in space relative to each other, also without reference to discrete visual landmarks or environmental geometry. Humans and rats have been shown to be capable of learning complex goal–goal relations in the absence of discrete visual landmarks, suggesting that they form and use a representation of the spatial relations among goal locations (for a review, see Brown 2006a, b; see also Mou and McNamara 2002; Shelton and McNamara 2001; Uttal and Chiong 2004). Like geometric cue learning, spatial pattern learning appears to be immune to competition from featural cues (Brown et al. 2002). For example, rats trained in a spatial pattern learning task in the presence of visual cues showed no deficiency in their ability to learn the spatial pattern (as compared to a group trained in the absence of the cues) during a testing phase when cues were absent. This result indicates that the presence of the visual cues during training did not overshadow learning about the spatial configuration of the goal locations. More recently, we (Sturz et al. 2009b) investigated humans' ability to learn spatial relations among locations using a search task in which goal locations maintained consistent spatial relations to each other but varied unpredictably across trials with respect to landmarks and environmental geometry/boundaries. Consistent with results obtained with rats (Brown et al. 2002), we found that
humans represented the geometric pattern of spatial locations and that this process did not seem susceptible to cue competition. Specifically, we found no evidence for cue competition when visual cues marked goal locations during a training phase but were removed during a subsequent testing phase. In fact, the presence of the visual cues during the training phase facilitated learning the spatial relations among locations. The present experiment extends our previous work on human spatial pattern learning in three ways. First, we tested the generality of our previous findings of facilitation of learning spatial relations among locations by visual cues by using a diamond-shaped spatial pattern rather than the square pattern we used previously (Sturz et al. 2009b). Second, because our prior experiments utilized visual cues that were coincident with the goal locations, we tested the extent to which coincident visual cues are required for the facilitation effect. In the case of the diamond pattern, the goal locations were separated by a non-goal location in the center (see Fig. 1). As in our earlier experiments, one group of human participants was trained in the presence of visual cues which marked goal locations and were coincident with them (Cues + Pattern) and a second group was trained in their absence (Pattern Only). To test the importance of the cues being coincident with the goal locations, a third group (Landmark + Pattern) was trained in the presence of only one visual cue, which was situated at the non-goal location in the center of the diamond pattern. All groups were then tested in the absence of visual cues. If evidence for facilitation of learning spatial relations among goal locations by visual cues is obtained for participants in the Landmark + Pattern group, this would show that facilitation of spatial pattern learning by visual cues does not require that they be coincident with the goal locations. The third extension of our previous work is the use of a measure that isolated control by the spatial pattern from other processes that might be facilitated by the visual cues. Specifically, we examined the proportion of choices following discovery of a correct location that conformed to the spatial pattern. In the case of the diamond pattern used in these experiments, learning the structure of the pattern would be expected to produce a tendency to choose a location that is diagonal following choice of a correct location (i.e., one unit distance in both the x and y dimensions). Our standard performance measure (number of incorrect choices) can be modulated by a number of processes, and an analysis of the proportion of diagonal choices following discovery of a goal location should better serve to isolate spatial pattern learning.
Fig. 1 Top panel Photo of the real environment search space (top left panel) or screen-shot from the first-person perspective of the virtual environment search space (top right panel) taken at the start location (S) from a possible Training trial for the Pattern Only group. Please note that the Testing trials for all groups looked identical to the Pattern Only Training trials (see text for details). Bottom panel Overhead screen-shot of the virtual environment search space from two possible Training trials (left and middle columns) and a Testing trial (right column) for the Pattern Only (top row), Landmark + Pattern (middle row) and Cues + Pattern groups (bottom row). For illustrative purposes, the white dots mark the goal locations and the S marks the position where participants entered the open-field and thus started their search for all Training and Testing trials. The position of the diamond pattern was quasi-randomized across trials (see text for details)
Logistical problems associated with testing humans in navigational tasks have resulted in recent research utilizing three-dimensional virtual environments (e.g., Sturz et al. 2006; Sturz and Kelly 2009; for a review, see Kelly and Gibson 2007). Despite extensive use of virtual environments in spatial research, relatively few direct comparisons have been made between human navigation in real and virtual environments (cf. Klatzky et al. 1998; Montello et al. 2004; Richardson et al. 1999; Sturz et al. 2009a, b). As a result, real and virtual versions of the search task were used in parallel to allow explicit comparisons of mechanisms used by humans to navigate real and virtual environments.
Method

Real environment

Participants

Sixty University of Saskatchewan undergraduate students (30 males and 30 females) served as participants. The mean age of participants that were included in all analyses and opted to provide age information was 19.53 (SEM = 0.32). Participants received extra class credit.
Apparatus and stimuli

A search space (5.55 m in length × 3.74 m in width × 5.30 m in height) was created by hanging white opaque curtains from ceiling to floor of an experimental room. The floor was covered with shredded paper. Twenty-five raised bins (22 cm in diameter × 18.5 cm in height), which also contained shredded paper, were arranged in a 5 × 5 matrix within the room (see top left panel, Fig. 1).
Procedure

Participants were randomly assigned to one of three groups: Pattern Only, Landmark + Pattern, or Cues + Pattern. Each group consisted of a total of 20 participants (10 males and 10 females). Participants completed two phases: Training and Testing. Each phase consisted of 15 trials in which four small red plastic balls were hidden within the paper shredding in the 25 bins and were arranged in a diamond pattern. The balls were placed in four bins to form the abstract shape of a diamond. This diamond pattern moved about the search space to a random location from trial to trial, but the balls always maintained the same spatial relations to each other (i.e., in the shape of a diamond). Participants were required to search for these four balls. Participants began each trial at the same starting position. A choice was defined as a participant fully inserting his or her hand into a bin. When all of the balls were retrieved, participants were required to return the balls to the experimenter and exit the open-field. Otherwise, participants continued searching until all balls were retrieved. All participants experienced a Training phase followed by a Testing phase. All Training and Testing trials were conducted in one continuous daily session lasting approximately 1 h. Only the Training phase differed across the groups of participants (see below).

Training

The Training phase consisted of 15 trials. For each trial, the diamond pattern was randomly assigned to one of nine locations. The four bins constituting the pattern were designated as goal locations, and one ball was placed within each bin designated as a goal location. Training was identical for the Pattern Only, Landmark + Pattern, and Cues + Pattern groups with the exception that the goal locations (bins) were marked in green for the Cues + Pattern group and the non-goal location in the center of the diamond pattern was marked in green for the Landmark + Pattern group. The remaining bins were unmarked (terra cotta colored).

Testing

The Testing phase consisted of 15 trials. For each trial, the diamond pattern was randomly assigned to one of nine locations. The four bins constituting the pattern were designated as goal locations, and one ball was placed within each bin designated as a goal location. All bins were unmarked (all bins were terra cotta colored), and Testing was identical for the Pattern Only, Landmark + Pattern, and Cues + Pattern groups.
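For concreteness, the following minimal Python sketch generates one possible placement of the diamond pattern within the 5 × 5 matrix. It is our own illustration of the arrangement described above (the coordinate convention and names are assumptions, not part of the experimental software): the non-goal center of the diamond is one of the nine interior bins, and the four goal bins are its orthogonal neighbours, so that a move between neighbouring goal bins is one unit in both the row and the column dimension.

```python
import random

GRID = 5  # the 5 x 5 matrix of raised bins
# The diamond's central (non-goal) bin must be an interior cell, giving nine possible placements.
CENTERS = [(r, c) for r in range(1, GRID - 1) for c in range(1, GRID - 1)]

def place_diamond(rng=random):
    """Return (center, goal_cells) for one randomly placed diamond pattern.

    The four goal bins are the orthogonal neighbours of the non-goal center,
    so any move between neighbouring goal bins is one unit in both the row
    and the column dimension (a 'diagonal' move conforming to the pattern).
    """
    cr, cc = rng.choice(CENTERS)
    goals = {(cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)}
    return (cr, cc), goals

center, goal_cells = place_diamond()  # e.g., one simulated Training-trial placement
print(center, sorted(goal_cells))
```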
Virtual environment

Participants

Sixty Armstrong Atlantic State University undergraduate students (30 males and 30 females) served as participants. The mean age of participants that were included in all analyses and opted to provide age information was 21.23 (SEM = 3.7). Participants received extra credit or participated as a part of a course requirement.

Apparatus

An interactive 3-D virtual environment was constructed and rendered using Valve Hammer Editor and run on the Half-Life Team Fortress Classic platform. A personal computer, 19-in. flat-screen liquid crystal display (LCD) monitor, optical mouse, keyboard, and headphones served as the interface with the virtual environment. The monitor (1152 × 864 pixels) provided a first-person perspective of the virtual environment (see top right panel, Fig. 1). The arrow keys of the keyboard, the mouse, and the left mouse button were used to navigate within the environment. Headphones emitted auditory feedback. Experimental events were controlled and recorded using Half-Life Dedicated Server on an identical personal computer.

Stimuli

Dimensions are length × width × height and measured in virtual units (vu). The virtual environment (1050 × 980 × 416 vu) contained 25 raised bins (86 × 86 × 38 vu) arranged in a 5 × 5 matrix (see bottom panel, Fig. 1). The room was illuminated by a light source centered 64 vu below the ceiling. The wall opposite the start location (labeled S in Fig. 1) was lighter than the other three.

Procedure

The procedure for the virtual environment was identical to that of the real environment with a few exceptions: (1) participants were informed to locate the bins that transported them to the next virtual room, (2) participants moved via keyboard keys: ↑ (forward), ↓ (backward), ← (left), and → (right). Diagonal movement occurred if two appropriate keys were depressed simultaneously. Movement of the
mouse changed the view in a 360° sphere within the virtual environment, (3) auditory feedback indicated movement within the environment (footstep sounds), and (4) participants selected a bin by jumping into it. To jump into a bin, participants simultaneously moved forward (↑) and jumped (left mouse button). Auditory feedback indicated a jump occurred ("huh" sound). Selection of a goal bin resulted in auditory feedback (transport sound from Super Mario Bros.™). Selection of a non-goal bin resulted in different auditory feedback (game over sound from Super Mario Bros.™) and required participants to jump out of the current bin and continue searching. Successful discovery of the first, second, and third goal locations were each followed by auditory feedback, but only successful discovery of all four goal locations resulted in auditory feedback and a 1-s intertrial interval in which the monitor went black and participants progressed to the next trial.

Training

Training in the virtual environment was identical to that in the real environment with the exceptions that the four goal bins were marked in red for the Cues + Pattern group, the non-goal location in the center of the diamond pattern was marked in red for the Landmark + Pattern group, and the remaining bins were unmarked (white colored).

Testing

Testing in the virtual environment was identical to that in the real environment.
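The sketch below outlines the trial logic just described (choose a bin, receive goal or non-goal feedback, continue until all four goals are found). It is an illustrative reconstruction, not the software used in the experiment: the hooks choose_bin and play are hypothetical stand-ins for participant input and auditory feedback, and how repeat choices of an already-found goal bin are scored is our assumption.

```python
import random

def run_trial(goal_cells, choose_bin, play):
    """Illustrative control flow for one search trial (not the original software)."""
    found, errors = set(), 0
    while len(found) < 4:
        bin_rc = choose_bin()                 # (row, col) of the bin the participant selects
        if bin_rc in goal_cells and bin_rc not in found:
            found.add(bin_rc)
            play("transport")                 # goal bin: positive auditory feedback
        else:
            errors += 1                       # scoring of repeat/non-goal choices assumed here
            play("game_over")                 # non-goal bin: error feedback, search continues
    play("trial_complete")                    # all four goals found; intertrial interval follows
    return errors

# Minimal demo with stand-in hooks: random choices within the 5 x 5 grid, silent feedback.
bins = [(r, c) for r in range(5) for c in range(5)]
goals = {(1, 2), (3, 2), (2, 1), (2, 3)}
print(run_trial(goals, lambda: random.choice(bins), lambda sound: None))
```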
Results

Training

Figure 2 shows mean number of errors during Training to locate all four goal locations (i.e., complete a trial) plotted across trial blocks for all three groups collapsed across environments. A three-way mixed analysis of variance (ANOVA) on mean errors with Block (1–3), Group (Pattern Only, Landmark + Pattern, Cues + Pattern), and Environment (real, virtual) as factors revealed main effects of Block, F(2,228) = 70.01, P < 0.001, and Group, F(2,114) = 68.56, P < 0.001, and a significant Block × Group interaction, F(4,228) = 9.15, P < 0.001. No other main effects or interactions were significant (Environment, F(1,114) = 0.14, P = 0.71; Block × Environment, F(2,228) = 1.91, P = 0.15; Group × Environment, F(2,114) = 2.76, P = 0.07; Block × Group × Environment, F(4,228) = 0.93, P = 0.45). Follow-up analyses were performed in order to isolate the source of the Block × Group interaction. Within Block 1, mean errors for the Pattern Only and Landmark + Pattern groups were not statistically different from each other, F(1,78) = 1.71, P = 0.23.
Fig. 2 Mean number of errors to locate all four goal locations (i.e., complete trial) during Training plotted by five-trial blocks for each group collapsed across environments. Error bars represent standard errors of the mean
However, during Blocks 2 and 3, mean errors were statistically greater for the Pattern Only group compared to the Landmark + Pattern group, F(1,78) = 16.18, P < 0.001, and F(1,78) = 27.23, P < 0.001, respectively. Search behavior was influenced by the spatial configuration of the goal locations or the visual cue(s), as mean errors decreased across trial blocks: Block 1 (M = 11.52, SEM = 0.43), Block 2 (M = 7.79, SEM = 0.51), Block 3 (M = 6.14, SEM = 0.47). Post hoc tests on the Block factor revealed each block was significantly different from all other blocks (all Ps < 0.001). Post hoc tests on the Group factor revealed each group was significantly different from all other groups (all Ps < 0.001). Specifically, showing the visual pattern in its entirety (i.e., Cues + Pattern) during Training allowed these participants to significantly reduce the number of errors made (M = 2.64, SEM = 0.61) compared to the two groups for which the pattern had to be abstracted [i.e., Pattern Only (M = 13.63, SEM = 0.59) and Landmark + Pattern (M = 9.16, SEM = 0.79)]. Performance was also quantitatively equivalent across real (M = 8.62, SEM = 1.39) and virtual (M = 8.33, SEM = 1.79) environments.
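For readers who wish to see how such an analysis can be organized, the sketch below lays out data in long format (one row per participant per five-trial block) and runs a mixed ANOVA with the pingouin package. It is illustrative only: the data are synthetic, pingouin is an assumed tool rather than the software used for the reported analyses, and the call shown crosses only Block (within subjects) with Group (between subjects), whereas the reported model also includes Environment as a second between-subjects factor.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed tooling; the analysis software used in the study is not specified

rng = np.random.default_rng(0)
groups = ["Pattern Only", "Landmark + Pattern", "Cues + Pattern"]
rows = []
for env in ["real", "virtual"]:
    for g in groups:
        for s in range(20):                           # 20 participants per group per environment
            subj = f"{env}-{g}-{s}"
            for block in (1, 2, 3):
                # synthetic error counts for illustration only (errors decline across blocks)
                rows.append({"subject": subj, "group": g, "environment": env,
                             "block": block, "errors": rng.poisson(12 - 2 * block)})
df = pd.DataFrame(rows)

# Simplified illustration: Block (within) x Group (between).
aov = pg.mixed_anova(data=df, dv="errors", within="block",
                     subject="subject", between="group")
print(aov.round(3))
```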
Fig. 3 Mean proportion of diagonal moves following discovery of a goal location during Testing plotted by five-trial blocks for each group collapsed across environments. Error bars represent standard errors of the mean
Testing

To confirm learning of the spatial relations among goal locations during Testing, an analysis similar to that used by Brown et al. (2001) was conducted in which the proportion of diagonal moves (moves conforming to the pattern) following discovery of a goal location was calculated. Figure 3 shows mean proportion of diagonal moves during Testing following discovery of a goal location plotted across trial blocks for all three groups collapsed across environments. A three-way mixed ANOVA on the proportion of diagonal moves following discovery of a goal location with Block (1–3), Group (Pattern Only, Landmark + Pattern, Cues + Pattern), and Environment (real, virtual) as factors revealed a main effect of Block, F(2,228) = 8.49, P < 0.001, and a main effect of Group, F(2,114) = 11.23, P < 0.001. No other main effects or interactions were significant (Environment, F(1,114) = 2.94, P = 0.09; Block × Group, F(4,228) = 0.67, P = 0.62; Block × Environment, F(2,228) = 0.38, P = 0.68; Group × Environment, F(2,114) = 0.02, P = 0.98; Block × Group × Environment, F(4,228) = 0.61, P = 0.66). Post hoc tests revealed that the mean proportion of diagonal moves following discovery of a goal location during Block 1 (M = 0.50, SEM = 0.03) was less than that of Block 2 (M = 0.56, SEM = 0.03) and Block 3 (M = 0.57, SEM = 0.03). However, Blocks 2 and 3 were not statistically different from each other (P = 0.49). Post hoc tests also revealed that the mean proportions of diagonal moves following discovery of a goal location for the Cues + Pattern (M = 0.69, SEM = 0.04) and Landmark + Pattern groups (M = 0.58, SEM = 0.05) were greater than that of the Pattern Only group (M = 0.38, SEM = 0.05). The Cues + Pattern and Landmark + Pattern groups were not significantly different (P = 0.09). Performance was also quantitatively equivalent across real (M = 0.59, SEM = 0.04) and virtual (M = 0.50, SEM = 0.04) environments. Such results isolate spatial pattern learning as the process facilitated by the visual cue(s).
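A minimal sketch of this measure, under our own scoring assumptions (the text does not give the scoring algorithm explicitly), is shown below: each choice made immediately after a goal bin is discovered is scored as conforming to the diamond pattern if it is exactly one unit away in both the row and the column dimension.

```python
def diagonal_move_proportion(choices, goal_cells):
    """Proportion of choices made immediately after discovering a goal bin that
    are 'diagonal' moves, i.e. exactly one unit away in both the row and the
    column dimension (the moves that conform to the diamond pattern).

    `choices` is the ordered list of (row, col) bins selected on a trial.
    How repeat visits are handled is our assumption, not taken from the paper.
    """
    found = set()
    diagonal = total = 0
    for prev, nxt in zip(choices, choices[1:]):
        if prev in goal_cells and prev not in found:   # prev was a newly discovered goal
            found.add(prev)
            total += 1
            if abs(prev[0] - nxt[0]) == 1 and abs(prev[1] - nxt[1]) == 1:
                diagonal += 1
    return diagonal / total if total else float("nan")

# Example: goals form a diamond around the (non-goal) center (2, 2)
goals = {(1, 2), (3, 2), (2, 1), (2, 3)}
print(diagonal_move_proportion([(0, 0), (1, 2), (2, 3), (4, 4), (3, 2), (2, 1)], goals))
```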
Fig. 4 Mean number of errors to locate all four goal locations (i.e., complete a trial) during Testing plotted by five-trial blocks for each group collapsed across environments. Error bars represent standard errors of the mean
Figure 4 shows mean number of errors during Testing to locate all four goal locations (i.e., complete a trial) plotted across trial blocks for all three groups collapsed across environments. Although all participants learned the spatial relations among goal locations during Testing, participants in the Pattern Only group (M = 10.62, SEM = 0.48) made more mean errors to complete a trial than both the Landmark + Pattern (M = 6.82, SEM = 0.48) and Cues + Pattern groups (M = 6.51, SEM = 0.41). However, mean errors for the Landmark + Pattern and Cues + Pattern groups were not statistically different. A three-way mixed ANOVA on mean errors with Block (1–3), Group (Pattern Only, Landmark + Pattern, Cues + Pattern), and Environment (real, virtual) as factors revealed main effects of Block, F(2,228) = 15.98, P < 0.001, and Group, F(2,114) = 9.98, P < 0.001. No other main effects or interactions were significant (Environment, F(1,114) = 3.13, P = 0.08; Block × Environment, F(2,228) = 0.52, P = 0.59; Block × Group, F(4,228) = 2.22, P = 0.07; Group × Environment, F(2,114) = 0.12, P = 0.89; Block × Group × Environment, F(4,228) = 0.94, P = 0.44). Post hoc tests revealed that the Pattern Only group was significantly different from the Landmark + Pattern and Cues + Pattern groups (both Ps < 0.001). However, the performance of the Landmark + Pattern and Cues + Pattern groups was not significantly different (P = 0.77). Performance was also quantitatively equivalent across real (M = 7.24, SEM = 0.43) and virtual (M = 8.72, SEM = 0.59) environments.
Discussion

In the present open-field search tasks, participants in both environments learned the spatial configuration of goal locations. During Training, participants in both environments who were exposed to the visual cue(s) performed better than those not exposed to the visual cue(s), and
during Testing, participants in both environments trained with visual cue(s) performed better than those trained without these cues. Thus, Testing results indicate that the presence of the visual cue(s) during Training was not detrimental to learning the spatial relations among goal locations, and this result was consistent across real and virtual environments. Although participants trained with a single visual cue at a non-goal location (i.e., Landmark + Pattern) performed worse than participants trained with coincident visual cues at all four goal locations (i.e., Cues + Pattern) during the Training phase, participants in these two groups performed equivalently during the subsequent Testing phase. Equivalence in performance by participants in these groups also suggests that errors made during Training were not detrimental to learning the spatial relations among goal locations. These results support the conclusion that the visual cue(s) facilitated learning of the spatial relations among goal locations and that this facilitation effect does not require coincident visual cues. The present results are consistent with those obtained by Sturz et al. (2009b) in that we found no evidence for cue competition in a search task when boundaries and environmental geometry were rendered irrelevant, but we also extend our earlier findings to a novel configuration of spatial locations and suggest that coincident visual cues are not required for the facilitation effect. Moreover, the analysis of diagonal moves isolates pattern learning as the process that is facilitated by the visual cue(s). Although dissociating visual cues from the goal locations demonstrates that coincident visual cues are not required for the facilitation effect, the exact reason for the facilitation effect remains unclear. Possibilities appear to include: (1) verbal labeling, (2) associative cue potentiation, (3) weighting of spatial information, and (4) dead reckoning. First, humans have sophisticated spatial language capacities, and it is possible that participants used the coincident visual cues (which provided visual exposure to the entire configuration of goal locations) to verbally encode the spatial relations among goal locations as an object (i.e., a "square" in Sturz et al. 2009b or a "diamond" in the present experiment), which would render it immune to cue competition (for a review, see Plumert and Spencer 2007). Second, Rescorla and Durlach (1981; see also Durlach and Rescorla 1980; Horne and Pearce 2009; Miller 2009; Miller and Shettleworth 2007; Pearce et al. 2006) described an associative process that results from coincident cues and produces mutual enhancement of the saliency of those cues. In our experiments, associative cue potentiation could have occurred between the coincident visual cues and the spatial relations among goal locations. It remains unclear whether a different explanation is required for the performance of the Landmark + Pattern group. Participants in this group were exposed to a visual
cue that was not coincident with any of the goal locations, and despite more errors in Training compared to the Cues + Pattern group, participants in these two groups made equivalent errors during the subsequent Testing phase. Although it is possible that in the absence of coincident visual cues a verbal coding strategy was also employed by participants in the Landmark + Pattern group (i.e., participants in both groups used a verbal code such as "diamond"), it is not clear to us how to reconcile such an explanation with the dissociation between Training and Testing performance for participants in the Landmark + Pattern group. In addition, it is possible that the identical performance of the Landmark + Pattern and Cues + Pattern groups during Testing could be explained if goal locations became associated with the single cue for participants in the Landmark + Pattern group during Training resulting from within-event associations of a spatial nature. For example, finding a goal location may have associatively activated a representation of the visual cue, which in turn associatively activated representations of the other goal locations (see Horne and Pearce 2009). Again, however, it is not clear to us how to reconcile such an explanation with respect to the dissociation between Training and Testing performance for participants in the Landmark + Pattern group. Third, it has been suggested that spatial information is weighted by such factors as reliability and stability (Cheng et al. 2007; however, see also Nardini et al. 2008; Newcombe and Ratliff 2007; Ratliff and Newcombe 2008). For example, evidence suggests that landmark reliability and stability play critical roles in spatial learning in that landmarks may be ignored if perceived as unreliable and/or unstable (Biegler and Morris 1993, 1996; Jeffrey 1998; Learmonth et al. 2001). In the present experiment, we can consider the number of reliable and stable spatial cues with respect to the goal locations across groups. Specifically, if the pattern itself is considered as one reliable and stable spatial cue (albeit an unperceived one), the visual cue at the non-goal location for the Landmark + Pattern group as one reliable and stable spatial cue, and the visual cues coincident with goal locations for the Cues + Pattern group as one reliable and stable spatial cue each, the total number of reliable and stable spatial cues for each group during Training may account for the observed facilitation of learning spatial relations among locations by visual cues. Under these assumptions concerning spatial cue reliability and stability, the Pattern Only group would have one reliable and stable spatial cue, the Landmark + Pattern two, and the Cues + Pattern five. Such an assumption would allow a straightforward prediction of increased performance in the presence of more reliable and stable spatial cues. Although the Training data fit such a prediction, it remains unclear as to how to reconcile such an
explanation with the equivalence in Testing performance of the Landmark + Pattern (two cues) and the Cues + Pattern (five cues) groups. Perhaps, under our conditions, the two cues available to the Landmark + Pattern group were, by the end of Training, sufficiently similar in reliability and stability to the five cues available to participants in the Cues + Pattern group. In addition, assuming the spatial relation between any two locations is weighted and that spatial choices are based upon this weighting, a goal location can only serve as a useful cue for predicting the spatial location of the next goal location if the spatial relations among goal locations are learned. As a result, discovery of each goal location would successively reduce the uncertainty about the spatial location of the next goal location, but only if the spatial relations among goal locations were extracted and updated at each choice point. Importantly, uncertainty about the next goal location would decrease rapidly with successful discovery of each successive goal location. The weightings of these spatial relations between locations would develop from experience with the consequences of spatial choices. Therefore, participants in the Pattern Only group would have weighted the spatial relations between locations solely based upon trial-and-error learning; however, the presence of the visual cue(s) for participants in the Landmark + Pattern and Cues + Pattern groups may have accelerated this weighting process relative to trial-and-error learning because the visual cue(s) also served to constrain their spatial choices to goal locations. Finally, facilitation of learning spatial relations among goal locations in the presence of visual cue(s) may be related to information obtained through dead reckoning (for a review, see Etienne et al. 1998). Specifically, it has been suggested that dead reckoning is involved not only in using spatial relations to navigate among known goal locations but also in initial learning of the spatial relations among goal locations themselves (Brown 2006a). Movement vectors from one goal location to the next may be integrated into a representation of the spatial arrangement of goal locations. Perhaps the visual cue(s) allowed movement vectors to be better calibrated during Training for participants in the Cues + Pattern and Landmark + Pattern groups. Although this explanation seems problematic in accounting for the virtual environment data (as vestibular feedback was absent in the virtual environment task), dead reckoning has been shown to rely on both vestibular and optical information (Kearns et al. 2002; Nico et al. 2002), and investigations of path integration in virtual environments have allowed researchers to eliminate vestibular information (which is considered the primary input for return paths in mammals) and determine
that external visual information, specifically optical flow, provides sufficient information for determination of a return path (Ellmore and McNaughton 2004; Kearns et al. 2002). In conclusion, results from the present real- and virtual-environment search tasks demonstrate that facilitation of learning spatial relations among locations generalizes to other spatial configurations of goal locations, that coincident visual cues are not required for the facilitation effect, and that spatial pattern learning is a process that is facilitated by visual cues. Such results appear consistent with extant comparative research concerning the learning of spatial relations among locations with rats (e.g., Brown et al. 2000, 2001, 2002; Brown and Terrinoni 1996; Brown and Wintersteen 2004; DiGello et al. 2002; for reviews, see Brown 2006a, b) and humans (e.g., Mou and McNamara 2002; Shelton and McNamara 2001; Sturz et al. 2009b; for reviews, see McNamara and Valiquette 2004; Uttal and Chiong 2004). Collectively, this evidence suggests that the representations and underlying mechanisms of learning complex goal–goal relations may be qualitatively similar across species. Present results also add to a growing body of literature suggesting similarity in mechanisms used by humans to navigate real and virtual environments.

Acknowledgment This research was conducted following the relevant ethical guidelines for human research and was supported by an Alzheimer Society of Canada Grant to DMK. We thank Paul Cooke, Randi Dickinson, Stephanie Diemer, Roxanne Dowd, Karen Gwillim, Jenny Lee, Jason Lukich and Martha Forloines for their assistance with data collection and scoring. We would also like to thank three anonymous reviewers for comments on an earlier version of the manuscript.
References

Biegler R, Morris RGM (1993) Landmark stability is a prerequisite for spatial but not discrimination learning. Nature 361:631–633
Biegler R, Morris RGM (1996) Landmark stability: further studies pointing to a role in spatial learning. Q J Exp Psychol 49B:307–345
Brown MF (2006a) Abstracting spatial relations among goal locations. In: Brown MF, Cook RG (eds) Animal spatial cognition: comparative, neural, and computational approaches. http://www.pigeon.psy.tufts.edu/asc/brown/
Brown MF (2006b) Spatial patterns: behavioral control and cognitive representation. In: Wasserman EA, Zentall TR (eds) Comparative cognition: experimental explorations of animal intelligence. Oxford, New York, pp 425–438
Brown MF, Terrinoni M (1996) Control of choice by the spatial configuration of goals. J Exp Psychol Anim B 22:438–446
Brown MF, Wintersteen (2004) Spatial patterns and memory for locations. Learn Behav 34:391–400
Brown MF, DiGello E, Milewski M, Wilson M, Kozak M (2000) Spatial pattern learning in rats: conditional control by two patterns. Anim Learn Behav 28:278–287
Brown MF, Zeiler C, John A (2001) Spatial pattern learning in rats: control by an iterative pattern. J Exp Psychol Anim B 27:407–416
Brown MF, Yang SY, DiGian KA (2002) No evidence for overshadowing or facilitation of spatial pattern learning by visual cues. Anim Learn Behav 30:363–375
Burgess N (2006) Spatial memory: how egocentric and allocentric combine. Trends Cogn Sci 10:551–557
Chamizo VD (2003) Acquisition of knowledge about spatial location: assessing the generality of the mechanism of learning. Q J Exp Psychol 56B:102–113
Cheng K (1986) A purely geometric module in the rat's spatial representation. Cognition 23:149–178
Cheng K (2008) Whither geometry? Troubles of the geometric module. Trends Cogn Sci 12:355–361
Cheng K, Newcombe NS (2005) Is there a geometric module for spatial orientation? Squaring theory and evidence. Psychon B Rev 12:1–23
Cheng K, Newcombe NS (2006) Geometry, features, and orientation in vertebrate animals: a pictorial review. In: Brown MF, Cook RG (eds) Animal spatial cognition: comparative, neural, and computational approaches. http://www.pigeon.psy.tufts.edu/asc/cheng/
Cheng K, Shettleworth SJ, Huttenlocher J, Rieser JJ (2007) Bayesian integration of spatial information. Psychol Bull 133:625–637
DiGello E, Brown MF, Affuso J (2002) Negative information: both presence and absence of spatial pattern elements guide rats' spatial choices. Psychon B Rev 9:706–713
Doeller CF, Burgess N (2008) Distinct error-correcting and incidental learning of location relative to landmarks and boundaries. Proc Natl Acad Sci 105:5909–5914
Doeller CF, King JA, Burgess N (2008) Parallel striatal and hippocampal systems for landmarks and boundaries in spatial memory. Proc Natl Acad Sci 105:5915–5920
Durlach PJ, Rescorla RA (1980) Potentiation rather than overshadowing in flavor-aversion learning: an analysis in terms of within-compound associations. J Exp Psychol Anim B 6:175–187
Ellmore TM, McNaughton BL (2004) Human path integration by optic flow. Spat Cogn Comput 4:255–272
Etienne AS, Berlie J, Georgakopoulos J, Maurer R (1998) Role of dead reckoning in navigation. In: Healy S (ed) Spatial representation in animals. Oxford, New York, pp 54–68
Gallistel CR (1990) The organization of learning. MIT Press, Cambridge
Graham M, Good MA, McGregor A, Pearce JM (2006) Spatial learning based on the shape of the environment is influenced by properties of the objects forming the shape. J Exp Psychol Anim B 32:44–59
Horne MR, Pearce JM (2009) Between-cue associations influence searching for a hidden goal in an environment with a distinctive shape. J Exp Psychol Anim B 35:99–107
Jeffrey KJ (1998) Learning of landmark stability and instability by hippocampal place cells. Neuropharmacology 37:677–687
Kearns MJ, Warren WH, Duchon AP, Tarr M (2002) Path integration from optic flow and body senses in a homing task. Perception 31:349–374
Kelly DM, Gibson BM (2007) Spatial navigation: spatial learning in real and virtual environments. Comp Cogn Behav Rev 2:111–124
Klatzky RL, Loomis JM, Beall AC, Chance SS, Golledge RG (1998) Spatial updating of self-position and orientation during real, imagined, and virtual locomotion. Psychol Sci 9:293–298
Learmonth AE, Newcombe NS, Huttenlocher J (2001) Toddlers' use of metric information and landmarks to reorient. J Exp Child Psychol 80:225–244
McNamara TP, Valiquette CM (2004) Remembering where things are. In: Allen GL (ed) Human spatial memory: remembering where. Lawrence Erlbaum, Mahwah, pp 3–24
Miller NY (2009) Modeling the effects of enclosure size on geometry learning. Behav Process 80:306–313
Miller NY, Shettleworth SJ (2007) Learning about environmental geometry: an associative model. J Exp Psychol Anim B 33:191–212
Montello DR, Waller D, Hegarty M, Richardson AE (2004) Spatial memory of real environments, virtual environments, and maps. In: Allen G (ed) Human spatial memory: remembering where. Lawrence Erlbaum, Mahwah, pp 251–285
Mou W, McNamara TP (2002) Intrinsic frames of reference in spatial memory. J Exp Psychol Learn 28:162–170
Nardini M, Jones P, Bedford R, Braddick O (2008) Development of cue integration in human navigation. Curr Biol 18:689–693
Newcombe NS, Ratliff KR (2007) Explaining the development of spatial reorientation: modularity-plus-language versus the emergence of adaptive combination. In: Plumert JM, Spencer JP (eds) The emerging spatial mind. Oxford, New York, pp 53–76
Nico D, Israël I, Berthoz A (2002) Interaction of visual and idiothetic information in a path completion task. Exp Brain Res 146:379–382
Pearce JM, Graham M, Good MA, Jones PM, McGregor A (2006) Potentiation, overshadowing, and blocking of spatial learning based on the shape of the environment. J Exp Psychol Anim B 32:201–214
Plumert JM, Spencer JP (2007) The emerging spatial mind. Oxford, New York
Ratliff KR, Newcombe NS (2008) Reorienting when cues conflict: evidence for an adaptive combination view. Psychol Sci 19:1301–1307
Rescorla RA, Durlach P (1981) Within-event learning in Pavlovian conditioning. In: Spear NE, Miller RR (eds) Information processing in animals: memory mechanisms. Erlbaum, Hillsdale, pp 81–111
Richardson AE, Montello D, Hegarty M (1999) Spatial knowledge acquisition from maps, and from navigation in real and virtual environments. Mem Cogn 27:741–750
Shelton AL, McNamara TP (2001) Systems of spatial reference in human memory. Cogn Psychol 43:274–310
Shettleworth SJ (1998) Cognition, evolution, and behavior. Oxford, New York
Sturz BR, Kelly DM (2009) Encoding of relative enclosure size in a dynamic three-dimensional virtual environment by humans. Behav Process 82:223–227
Sturz BR, Bodily KD, Katz JS (2006) Evidence against integration of spatial maps in humans. Anim Cogn 9:207–217
Sturz BR, Bodily KD, Katz JS, Kelly DM (2009a) Evidence against integration of spatial maps in humans: generality across real and virtual environments. Anim Cogn 12:237–247
Sturz BR, Brown MF, Kelly DM (2009b) Facilitation of learning spatial relations among locations by visual cues: implications for theoretical accounts of spatial learning. Psychon B Rev 16:306–312
Uttal DH, Chiong C (2004) Seeing space in more than one way: children's use of higher order patterns in spatial memory and cognition. In: Allen GL (ed) Human spatial memory: remembering where. Lawrence Erlbaum, Mahwah, pp 125–142