Coarse to fine dynamics of monocular and binocular processing in human pattern vision

Peter Neri¹

Institute of Medical Sciences, University of Aberdeen, Aberdeen AB25 2ZD, United Kingdom

Biological image processing has been hypothesized to adopt a coarse to fine strategy: the image is initially analyzed at a coarse spatial scale, and this analysis is then used to guide subsequent inspection at a finer scale. Neurons in visual cortex often display response characteristics that are consistent with this hypothesis for both monocular and binocular signals. Puzzlingly, measurements in human observers have failed to expose similar coarse to fine dynamics for human pattern vision, questioning the applicability of direct parallels between single neurons and perception. We performed a series of measurements using experimental protocols that were specifically designed to examine this question in more detail. We were able to confirm that, when the analysis is restricted to the linear properties of the perceptual process, no coarse to fine dynamics were evident in the data. However, when the analysis was extended to nonlinear descriptors, a clear coarse to fine structure emerged that consisted of two processes: an early nonlinear process operating on a coarse spatial scale followed by a linear process operating on a fine spatial scale. These results potentially serve to reduce the gap between the electrophysiological and behavioral findings.

nonlinear kernel | stereoscopic | psychophysics | retinal disparity | reverse correlation

The early stages of image processing in human vision involve rapid extraction of salient features such as edges (1). This is a challenging task for most natural images, because meaningful edges (i.e., relevant to an ecological understanding of the environmental layout) occur at different scales (both big and small); how is the visual system to know which scale to choose for analyzing the image effectively? This issue is not only relevant to each of the two retinal images separately but also to their integration for successful stereoscopic vision (2). A recurrent theme in spatial models of both monocular and binocular vision is the coarse to fine strategy for early image processing (2, 3): Images are first analyzed at a coarse scale to identify overall scene layout; this early stage is followed by targeted analysis at a finer scale to work out the details of the scene. The notion of a coarse to fine strategy has naturally led to the expectation that the temporal dynamics of neural processing may display a drift of preferred spatial frequency from low (coarse) to high (fine) as time evolves over a brief window. Electrophysiological recordings from single neurons have confirmed this expectation for both monocular (4, 5) and binocular (6) signals. More specifically, the preferred spatial frequency of single units in early visual cortex may increase by a twofold factor over a time window of <100 ms (4). Although it remains unclear whether this property is also apparent at the level of larger neuronal assemblies (7), it is generally accepted that the neural signatures of coarse to fine spatial analyses exist and are robust (8). A question of fundamental importance is whether the dynamic properties of single neurons have any appreciable behavioral impact; because it is ultimately behavior that determines evolutionary fitness, it becomes critical to establish whether any such effects exist.
www.pnas.org/cgi/doi/10.1073/pnas.1101246108

This issue was directly addressed by a recent study (7), which relied on psychophysical reverse correlation to retrieve the shape of the perceptual filters associated with an orientation discrimination task. Psychophysical reverse correlation is a powerful tool for estimating sensory tuning, and results from this technique often mirror the properties of sensory neurons more effectively than other methods (9). Perhaps surprisingly, Mareschal et al. (7) reported no change in sensory tuning across a temporal segment spanning the window relevant to neurons, in relation to either orientation or, more importantly, spatial frequency. Because spatial frequency is directly relevant to coarse to fine analysis and because its dynamic properties seem robust in single neurons (6, 8), this psychophysical result may highlight a discrepancy between the properties of sensory neurons and behavior (7). The study by Mareschal et al. (7) points out that behavior presumably reflects activity across a large population of neurons, not just individual ones, and that the effects of dynamic retuning may not be measurable at the population level. Whichever interpretation is adopted, the reported discrepancy between electrophysiology and psychophysics has fundamental implications for how we think about bridging the gap between behavior and single neurons. The importance of this issue warrants additional investigation. Our study differed from previous studies in three critical respects. (i) We specifically designed our stimulus and experimental protocol to maximize the chances of observing dynamic shifts in spatial frequency if they exist. (ii) We extended the measurements to binocular processing, because it is conceivable that coarse to fine analysis may not apply monocularly (as reported by Mareschal et al. in ref. 7) but may be implemented binocularly, a possibility that previous studies have not addressed. (iii) We extended the reverse correlation analysis to expose not only the linear properties of the underlying system but also its nonlinear properties (10).
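The distinction between linear (first-order) and nonlinear (second-order) descriptors in reverse correlation can be made concrete with a small simulation. The sketch below (Python/NumPy; the simulated observer, trial counts, and signal strength are hypothetical illustrations, not the study's values) estimates a first-order kernel from classified-noise means and a second-order kernel from classified-noise variances, following the [1,1] + [0,0] − [1,0] − [0,1] combination rule described in Materials and Methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reverse-correlation experiment (all settings hypothetical).
n_trials, n_probe = 20000, 16
template = np.sin(2 * np.pi * np.arange(n_probe) / n_probe)  # observer's internal filter

noise = rng.normal(0.0, 1.0, size=(n_trials, n_probe))
signal_present = rng.integers(0, 2, size=n_trials)           # q = 0 (far) or 1 (near)
stimulus = noise + np.where(signal_present[:, None] == 1, 0.3, -0.3) * template

# Linear-observer decision: template match against the stimulus.
decision = (stimulus @ template > 0).astype(int)
correct = (decision == signal_present).astype(int)           # z = 0 or 1

def classified(stat, q, z):
    """Apply `stat` to the noise samples from trials of type [q, z]."""
    return stat(noise[(signal_present == q) & (correct == z)], axis=0)

# First-order kernel: combination of classified-noise means.
p1 = classified(np.mean, 1, 1) + classified(np.mean, 0, 0) \
   - classified(np.mean, 1, 0) - classified(np.mean, 0, 1)

# Second-order kernel: the same combination computed on variances.
p2 = classified(np.var, 1, 1) + classified(np.var, 0, 0) \
   - classified(np.var, 1, 0) - classified(np.var, 0, 1)

print(np.corrcoef(p1, template)[0, 1])
```

For this purely linear simulated observer, the first-order kernel recovers the internal template up to scale; nonlinear observers (e.g., energy detectors) instead leave their signatures mainly in the second-order kernel.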
With these important additions and extensions, we were able to expose clear and robust coarse to fine dynamics for the detection of a simple visual feature (a vertical bar). Our results show that, at this level of analysis, the behavioral signatures of neuronal processing can be exposed effectively and that the coarse to fine strategy is relevant not only to individual neurons but also to the entire perceptual system.

Results

We asked observers to report the depth (near/far with respect to fixation depth) of a bright vertical bar located in the center of the screen (Fig. 1A). Our goal was to map both monocular and binocular perceptual filters simultaneously, which required the signal to be uniquely specified both monocularly and binocularly. The far target was, therefore, rendered by the binocular pair shown in Fig. 1G, where the two bars (one to each eye) always appeared at the locations shown in Fig. 1. The near target

Author contributions: P.N. designed research, performed research, contributed new reagents/analytic tools, analyzed data, and wrote the paper. The author declares no conflict of interest. This article is a PNAS Direct Submission.

¹E-mail: [email protected].

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10. 1073/pnas.1101246108/-/DCSupplemental.

PNAS Early Edition | 1 of 6

NEUROSCIENCE

Edited by Wilson S. Geisler, University of Texas, Austin, TX, and approved May 11, 2011 (received for review January 27, 2011)

was similarly rendered by the binocular pair in Fig. 1H. There is an obvious problem with this design: if the task is to decide between near and far, it can be performed monocularly [compare the image for one eye (e.g., left) between Fig. 1G and Fig. 1H]. To ensure that the task required binocular integration, we asked observers to report the depth of the target bar indirectly, through comparison with two strips located above and below the target bar (Fig. 1 A and B). Materials and Methods details how this design enforces binocular integration.

First-Order (Linear) Perceptual Filters. Because each target bar was flashed only briefly and because the target region of the stimulus did not vary across its vertical extent, we can represent the stimuli in Fig. 1H using the spatiotemporal diagrams shown in Fig. 1I, where the target bar is indicated by a bright pixel occurring at a specific spatial position (x axis) and specific time (y axis). Using established protocols (9–11), we perturbed this spatiotemporal region using an additive Gaussian noise source (Fig. 1 J and K) and applied psychophysical reverse correlation to the resulting spatiotemporal probes (Materials and Methods). This procedure allowed us to retrieve monocular perceptual filters for detecting a near vs. far bar within the context of a binocular task; examples are shown in Fig. 2 A and B. As expected, the perceptual filters show positive modulations at the near-target spatiotemporal locations (red) and negative modulations at the far-target locations (blue). Because the task was binocular, we were interested in deriving an analogous spatiotemporal descriptor for the binocular process. First, we computed a binocular interaction map for each frame

Fig. 1. Stimulus and task. A shows one frame of the stimulus from the entire monitor; the region relevant to the task is magnified in B. This region contained a top random dot stereogram (RDS; red), a bottom RDS (blue), and two RDSs in the middle (green) separated by a gap. The dynamic noise probe appeared within a 1.5° × 1.5° region centered on this gap (indicated by the magenta dashed outline) and was superimposed onto the target bar (bright vertical bar shown in B). C portrays a side view of B to clarify the depth arrangement of the different regions; the target bar is indicated by the black segment. C–F display all four possible configurations for the stimulus; observers were asked to report whether the target bar was at the same depth as the top or bottom RDS (two-way choice). Materials and Methods has an explanation as to why this design enforces binocular processing. The two possible configurations for the target bar (i.e., far and near) were rendered by fixed monocular images shown in G and H, respectively. Because the target bar only lasted one stereo frame, it can be represented by two monocular bright pixels embedded within spatiotemporal regions, which is shown in I. This is the spatiotemporal representation adopted throughout the remaining figures. We added spatiotemporal noise to the two images in I, resulting in the two monocular probes shown in J and K. The stimulus was preceded and followed by a Nonius marker (L) in the form of a binocular square containing unpaired monocular vertical and horizontal segments (compare images for the two eyes).


Fig. 2. Monocular filters and binocular interaction matrix (aggregate observer; >60,000 trials). A and B show monocular filters obtained by applying psychophysical reverse correlation to the noise probe pictured in Fig. 1 J and K. We computed a binocular interaction matrix for each stereo frame of the stimulus (Materials and Methods); for example, the binocular interaction matrix in C corresponds to the monocular spatial profiles indicated by cyan and magenta dashed outlines in A and B. J shows the (time) average across C–I. The yellow line shows the marginal average across the diagonal axis orthogonal to it; similarly, the white line refers to the other diagonal dimension [which defines isodisparity points (8)]. All surfaces plot Z scores (colored for |Z| > 2). Contour levels (red for positive and blue for negative) correspond to interpolated and denoised (using Matlab wiener2 function) images of the filter (introduced to aid visualization of surface structure).

of the stimulus; there were seven frames (each lasting 57 ms), resulting in the seven interaction maps shown in Fig. 2 C–I. Each interaction map consists of the product filter between the spatial profile in one eye at a given time point (one is indicated by the cyan dashed outline in Fig. 2A) and the corresponding spatial profile from the other eye (indicated by the magenta dashed outline in Fig. 2B); the intensity of the map reflects whether the noisy perturbations delivered to each location in one eye covaried positively (bright pixels) or negatively (dark pixels) with the perturbations delivered to the other eye, for all possible combinations of spatial locations between the two eyes (Materials and Methods). A very similar analysis has previously been used to characterize the binocular properties of single neurons in cat visual cortex (8). Although it is not readily apparent from the individual interaction maps corresponding to different time points in Fig. 2 C–I, these maps do contain structure, which is shown more effectively by the time average in Fig. 2J. This map shows a clear oriented modulation resembling a Gabor wavelet tilted by 45°, with positive (red) and negative (blue) lobes flanking the main diagonal; a remarkably similar structure is observed in binocular single neurons (figure 1 in ref. 8). To reduce the dimensionality of the data shown in Fig. 2 and make them comparable between monocular and binocular datasets, we make two observations. (i) Data for the two eyes (Fig. 2 A and B) are very similar except for obvious differences in symmetry, and therefore, they can be combined into one map by subtracting B from A (Fig. S1 has additional analyses supporting this statement). (ii) The binocular structure of each interaction map (Fig. 2J) varies along one diagonal but not the other (compare white and yellow traces), and therefore, this structure is well-represented by a diagonal marginal average (white solid trace in Fig. 2J; Fig. S2 has additional analyses supporting this statement).
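As a structural illustration, the outer-product logic behind these interaction maps can be sketched as follows (Python/NumPy). The toy "observer," trial numbers, and single-frame stimulus are hypothetical simplifications, intended only to show how response-classified outer products of the two eyes' noise yield a space–space interaction matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_space = 5000, 12

left = rng.normal(size=(n_trials, n_space))    # one stereo frame per eye (toy)
right = rng.normal(size=(n_trials, n_space))

# Toy binocular observer: responds "near" when interocular correlation at
# matched positions is high -- a crude stand-in for a disparity computation.
resp = ((left * right).sum(axis=1) > 0).astype(int)

# Interaction map: average outer product of the two eyes' noise, contrasted
# between the two response classes (cf. the outer product N in the Methods).
outer = left[:, :, None] * right[:, None, :]   # trials x space x space
imap = outer[resp == 1].mean(axis=0) - outer[resp == 0].mean(axis=0)

# This toy observer's sensitivity lies along the main diagonal of the map.
diag_strength = np.abs(np.diag(imap)).mean()
off_strength = np.abs(imap - np.diag(np.diag(imap))).mean()
print(diag_strength, off_strength)
```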

Second-Order (Nonlinear) Perceptual Filters. Previous work has shown that the characterization of various perceptual processes can be augmented by studying how the second-order statistical properties of the noise source affect psychophysical choice (10), thus enhancing the description afforded by first-order statistics alone. Fig. 3B shows an example of this analysis: instead of computing the perceptual filter using the mean of the classified noise, this image was obtained by relying on the variance of the noise fields. Fig. 3D shows the outcome of performing a similar computation for the binocular data (Materials and Methods). Both images display obvious structure, offering the opportunity to gain more insight into the underlying neural process than that afforded by first-order analysis alone (Fig. 3 A and C). More specifically, there are two noteworthy aspects of these images that set them apart from their first-order counterparts. (i) At least for the monocular data and possibly for the binocular data as well (Quantitative Analysis), the second-order filters display modulations within an earlier time window than first-order filters.
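A toy example of why variance-based (second-order) analysis adds information: for a purely energy-based decision rule, the classified-noise means stay flat while the classified-noise variances recover the filter envelope. The observer model and all parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_probe = 40000, 16
envelope = np.exp(-0.5 * ((np.arange(n_probe) - 7.5) / 2.0) ** 2)  # coarse window

noise = rng.normal(size=(n_trials, n_probe))

# Toy energy observer: the decision is driven by local contrast energy under a
# coarse envelope -- a purely nonlinear use of the noise.
energy = (envelope * noise ** 2).sum(axis=1)
resp = (energy > np.median(energy)).astype(int)

# First-order analysis (difference of classified means) sees essentially nothing...
p1 = noise[resp == 1].mean(axis=0) - noise[resp == 0].mean(axis=0)
# ...while second-order analysis (difference of classified variances) recovers
# the spatial envelope that drives the nonlinear decision.
p2 = noise[resp == 1].var(axis=0) - noise[resp == 0].var(axis=0)

print(np.abs(p1).max(), np.corrcoef(p2, envelope)[0, 1])
```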

Fig. 3. Compact representation of the entire filter set (aggregate observer). A and C show first-order filters, and B and D show second-order filters; A and B show monocular data, and C and D show binocular data. Surface plotting conventions are as in Fig. 2 (the monocular filters are plotted against the top Z-score legend and the binocular filters against the bottom Z-score legend). Yellow and green data points above B show monocular first- and second-order marginal averages across time; smooth lines show best-fit Gabor functions. Data points below D show the same averages for binocular data. Error bars show ±1 SEM.


Quantitative Analysis. It is necessary to confirm that the qualitative observations made so far are quantitatively robust and borne out by individual observer analysis, as opposed to cursory evaluation of aggregate data [the dataset shown in Figs. 2 and 3 was pooled from all trials across all observers (i.e., it refers to a hypothetical aggregate observer)]. Because we found some variability across observers (which is normal), it is difficult to draw conclusions from simply inspecting individual filters (Fig. S3). We therefore performed additional analyses that captured relevant aspects of both first- and second-order filters and quantified each aspect using a single value per observer. The data could then be subjected to simple population statistics in the form of paired two-tailed t tests, confirming or rejecting specific hypotheses about the overall shape of the filters. Our conclusions are therefore based on individual observer data and not on the aggregate observer (which is used solely for visualization purposes). This distinction is important, because there is no generally accepted procedure for generating an average filter from individual images for different observers. Fig. 4 plots three different metrics of particular interest for first-order filters on the x axis vs. second-order filters on the y axis. The first metric (Fig. 4A) consists of a simple centroid estimation over time; it returns an estimate of the time point where most filter modulations occur. The monocular data show a clear effect, whereby the second-order filters modulate before the first-order filters (solid symbols fall below the diagonal unity line at P < 0.03 on a paired t test). The average asynchrony (centroid difference) was ~60 ms (when measured in relation to time of peak modulation, it was ~80 ms). A closely related result is shown by the second metric (Fig. 4B), which consists of the log ratio between the energy of the filter after target occurrence (late) and the energy before target occurrence (early); this metric is zero for filters that modulate near target occurrence, negative for filters that modulate primarily before target occurrence, and positive for filters that modulate after target occurrence. Results from this analysis for the monocular data are consistent with those obtained using centroid analysis (compare solid symbols between Fig. 4A and Fig. 4B); more specifically, the late/early ratio is significantly greater for first- than for second-order filters (solid symbols fall below the unity line at P < 0.02).
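The two temporal metrics can be sketched as follows (Python/NumPy). The 57-ms frame grid matches the stimulus timing, but the assumption of a target-centered grid and the two energy profiles are hypothetical illustrations, not measured filters:

```python
import numpy as np

t = (np.arange(7) - 3) * 57            # seven 57-ms stereo frames, target at t = 0

def temporal_centroid(filt_energy, t):
    """Energy-weighted mean time of the filter modulations."""
    return (t * filt_energy).sum() / filt_energy.sum()

def late_early_logratio(filt_energy, t):
    """log(energy after target occurrence / energy before target occurrence)."""
    return np.log(filt_energy[t > 0].sum() / filt_energy[t < 0].sum())

# Hypothetical energy profiles: one filter modulating early, one late.
early = np.exp(-0.5 * ((t + 57) / 50.0) ** 2)
late = np.exp(-0.5 * ((t - 57) / 50.0) ** 2)

print(temporal_centroid(early, t), late_early_logratio(early, t))
```

An early-modulating profile yields a negative centroid and a negative log ratio; a late-modulating profile yields positive values of both, matching the sign conventions described in the text.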

2 order filter

0

Time (ms)

Time (ms)

−.5

(ii) Both monocular and binocular second-order filters display a much coarser spatial structure than the first-order filters in that their modulations spread more broadly across space. The latter feature can be appreciated more readily by plotting marginal averages over time for both first-order (yellow) and second-order (green) filters (1D traces above Fig. 3B and below Fig. 3D); both conform to odd Gabor functions, but the second-order carrier frequency is lower than the first-order frequency.

Fig. 4. Individual observer analysis based on scalar metrics. A plots temporal centroid, B plots late/early energy log ratio, and C plots spatial frequency centroid (Materials and Methods has details on how these quantities were computed). Estimates from first-order filters are plotted on the x axis, and estimates from second-order filters are plotted on the y axis. Solid symbols refer to monocular data, and open symbols refer to binocular data. Different symbols refer to different observers. Error bars show ±1 SEM.



Fig. 3A shows the result of combining filters from the two eyes while also exploiting spatial symmetry around the center. Fig. 3C shows the result of collapsing binocular interaction maps into 1D marginals, with one marginal per frame: each row of pixels in Fig. 3C shows the diagonal marginal (equivalent to the solid white trace in Fig. 2J) corresponding to the binocular interaction map for a given frame. For example, the top row shows the diagonal marginal corresponding to Fig. 2C, the second row shows the diagonal marginal corresponding to Fig. 2D, and so on for the remaining rows (corresponding to Fig. 2 E–I). Notice in this respect that the x axis in Fig. 3C can be interpreted as representing retinal disparity (8). Fig. 3 A and C offer a compact representation of both monocular and binocular data in a similar format; neither shows obvious alterations in spatial scale across time. The monocular filter peaks sharply at target appearance (0 ms), whereas the binocular filter modulates over a more extended time window; aside from this temporal modulation of overall amplitude, there is no clear evidence of any change in their spatial profile over time. Fig. 3A is, therefore, consistent with previous studies (7), whereas Fig. 3C extends those results to the binocular domain.

The binocular data showed similar effects (compare open with solid symbols); second-order filters modulate before first-order filters, as assessed by both time centroid (open symbols in Fig. 4A fall below the unity line at P < 0.04) and late/early ratio (open symbols in Fig. 4B fall below the unity line at P < 0.002). In line with previous reports (12), we conclude that the linear (first-order) and nonlinear (second-order) components of the relevant process operate in an asynchronous fashion. The third metric (Fig. 4C) was adopted to gauge potential differences in spatial tuning. For each filter, we obtained a marginal average over time (green/yellow traces in Fig. 3), computed its power spectrum, and calculated the spectrum centroid (Materials and Methods). The resulting estimate indicates which spatial frequency region is preferentially represented by the filter structure. Monocular second-order filters are characterized by lower spatial frequencies than first-order filters (solid symbols fall below the unity line at P < 0.03), and a similar result applies to binocular filters (open symbols fall below the unity line at P < 0.003). On average, monocular centroids shifted by 0.8 ± 0.6 octaves from second to first order, in excellent agreement with the 0.6 ± 0.7-octave shift reported for single neurons (4). The shift was slightly larger for the binocular data at 1.1 ± 0.5 octaves, which is also well within the shift range reported for binocular complex cells (8). We conclude from this analysis that the spatial structure of second-order filters is coarser than the structure of first-order filters.
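The spatial-frequency centroid and octave-shift computation can be sketched as follows (Python/NumPy); the two Gabor profiles and the sampling grid are illustrative assumptions, not the measured marginal averages:

```python
import numpy as np

def sf_centroid(profile, dx):
    """Power-spectrum centroid (c/deg) of a 1D spatial filter profile."""
    power = np.abs(np.fft.rfft(profile)) ** 2
    freqs = np.fft.rfftfreq(profile.size, d=dx)
    return (freqs * power).sum() / power.sum()

x = np.arange(-0.5, 0.5, 0.05)                               # 1 deg of space
coarse = np.sin(2 * np.pi * 1.5 * x) * np.exp(-8 * x ** 2)   # ~1.5 c/deg carrier
fine = np.sin(2 * np.pi * 3.0 * x) * np.exp(-8 * x ** 2)     # ~3 c/deg carrier

# Centroid shift expressed in octaves, the unit used in the text.
shift_octaves = np.log2(sf_centroid(fine, 0.05) / sf_centroid(coarse, 0.05))
print(shift_octaves)
```

Doubling the carrier frequency shifts the spectral centroid by about one octave, the same scale as the 0.8- and 1.1-octave shifts reported above.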
When combined with the temporal results detailed in the previous paragraph, these findings can be summarized by the notion that two separate processes with different characteristics were operating in the human observer: an early coarse nonlinear process followed, after a delay of 50–100 ms, by a linear process with finer spatial tuning.

Modeling. Computational models can be useful for interpreting perceptual filters like those shown in Fig. 3. We simulated three models of particular interest and relevance to the present discussion. The diagram in Fig. 5A outlines a template-matching model corresponding to the ideal observer (13). The two images delivered to the eyes are matched against near-target monocular signals to generate a near-target output; the same process is repeated for a far-target template (Materials and Methods), and the difference is used as the decision variable (respond near if it exceeds zero and far otherwise). As expected, this linear template matcher returns a monocular first-order filter (Fig. 5B) that corresponds to the difference between near and far targets. It returns neither a binocular filter (Fig. 5D) nor second-order filters (Fig. 5 C and E). Although useful as a simple benchmark reference, the template-matcher model is implausible, because it does not perform convolution over time; it would imply that a human observer could pick out stimulus information only from the single frame corresponding to the target, which is unlikely. A more plausible implementation involves space–time convolution (10, 14), shown in Fig. 5F; in addition, this model combines outputs from the two eyes through squaring, thus implementing a standard stereo energy model (15). As shown in Fig. 5I, this well-established model of binocular combination returns a binocular first-order filter; it is also able to replicate the peaked structure of the monocular first-order filter (Fig. 5G) despite involving front-end convolution, a nontrivial result. Notice, however, that it fails to return second-order filters (Fig. 5 H and J; additional analysis relating to the role of binocularly correlated/anticorrelated signals is shown in Fig. S4). To rectify this discrepancy with the empirical results, we added a nonlinear component to the model, which is shown in Fig. 5K.
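As a point of reference for the binocular combination in Fig. 5F, a minimal disparity-energy computation (quadrature Gabor pair with position-shift encoding) can be sketched as follows; the receptive-field parameters, stimulus, and disparity values are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

x = np.arange(-0.5, 0.5, 0.05)          # 1 deg of space (20 samples)

def gabor(x, sf, phase, sigma=0.15):
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * sf * x + phase)

def energy(left_img, right_img, pref_disparity):
    """Binocular energy for one preferred disparity (quadrature phase pair)."""
    out = 0.0
    for phase in (0.0, np.pi / 2):
        l_rf = gabor(x, 3.0, phase)
        r_rf = gabor(x - pref_disparity, 3.0, phase)  # position-shifted RF
        out += ((left_img * l_rf).sum() + (right_img * r_rf).sum()) ** 2
    return out

def mean_energy(pref_disparity, true_shift=3, n_patterns=200):
    """Average energy over random patterns carrying a fixed true disparity."""
    rng = np.random.default_rng(3)
    total = 0.0
    for _ in range(n_patterns):
        pattern = rng.normal(size=x.size)
        # Right-eye image shifted by 3 samples * 0.05 deg = 0.15 deg.
        total += energy(pattern, np.roll(pattern, true_shift), pref_disparity)
    return total / n_patterns

tuning = {d: mean_energy(d) for d in (-0.15, 0.0, 0.15)}
print(max(tuning, key=tuning.get))
```

Because the left and right receptive fields sum before squaring, the unit responds most strongly when the stimulus disparity matches its preferred disparity, which is the defining behavior of the stereo energy model.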
Each monocular stimulus is further convolved with a spatially coarse front-end filter (green outline); the energy of this filter is

Fig. 5. Three different models of increasing complexity. The model in A involved template-matching (•) with ideal front-end templates (yellow outlines). The model in F replaced template-matching with convolution (*) and combined outputs from the two eyes nonlinearly (disparity energy model). The model in K was similar to the one in F but with an added monocular component (acting before binocular combination) consisting of squared convolution with a coarse template (green outline); the output of this operation was delayed (τ) and then used to control the gain (by divisive normalization; ÷) of the output from the convolution with the finer template (red symbols). B–E show filters obtained by challenging the model in A with the same stimuli used during the experiments; the filters are arranged as in Fig. 3. G–J refer to the model in F, and L–O refer to the model in K. Filter modulations show simulated Z scores by plotting the ratio between mean and SD across model iterations; to expose consistent structure induced by the models more effectively, we only plot filter modulations (bright for positive and dark for negative) that exceeded 2 SDs (details in Materials and Methods).

used to modulate the output of the original linear filter by divisive normalization (16) after a delay of ~100 ms (Materials and Methods). As shown in Fig. 5 L–O, this simple model is able to capture, at least qualitatively, most of the features observed experimentally. More specifically, the model captures both the asynchrony between first- and second-order filters and their difference in spatial tuning. The latter result required the introduction of the spatially coarse filter in Fig. 5K; we attempted to replicate the broader spatial tuning of the second-order filters using model components that did not differ in spatial tuning, but we failed. We conclude from the modeling section that the two-process interpretation offered earlier on the basis of filter inspection is sensible in terms of potentially underlying neural circuitry.

Discussion

Even if we assume that the relationship between single neurons and behavior is relatively straightforward, this does not imply that behavioral first-order filters obtained using psychophysical reverse correlation should reflect the temporal dynamics of the underlying neuronal filters in an equally straightforward manner. As mentioned in Results, a plausible implementation of visual detection involves spatiotemporal convolution of the underlying filter with the input stimulus (14); if the human observer reads out the summed output of this convolution, then the corresponding

Materials and Methods

Stimulus and Task. The overall stimulus extended horizontally across the entire width of the monitor, which subtended 40° at the adopted viewing distance of 57 cm (Fig. 1A); the vertical extent of the stimulus was 10°. For descriptive purposes, it can be subdivided into three horizontal strips: a top strip 4.25° high (Fig. 1 B–F, red), a middle strip 1.5° high (green), and a bottom strip 4.25° high (blue). Top and bottom strips consisted entirely of random dot stereograms (RDSs; i.e., devoid of monocular cues to their depth); RDSs were generated by assigning random luminance values (from a uniform distribution spanning 0–70 cd/m²) to pixels 9 × 9 arcmin in size. The middle strip also consisted of an RDS, always at zero disparity, but as shown in Fig. 1B, its central 3° region was interrupted to allow for insertion of a bright vertical target bar that could appear at either a near or far disparity of 9 arcmin (bar height was 1.5° and width was 9 arcmin). Observers were required to determine whether the target bar was near or far; however, this was not the judgment that they were asked to report. The reason for avoiding a simple near/far judgment was that the disparity of the target bar, whether near or far, was rendered by the fixed monocular images shown in Fig. 1 G and H (for far and near, respectively); fixed monocular signals were necessary, because our goal was to map the monocular filters accurately in addition to the binocular ones. This requirement introduced monocular signals for performing a near/far judgment: observers would have been in a position to discriminate near from far targets using only one eye (compare the two images for one eye in Fig. 1G with Fig. 1H and notice that they differ). To make this strategy ineffective, we asked observers to express a judgment based on a comparison between the depth of the target bar and the depth of the top and bottom strips. These two strips always appeared at opposite depths from fixation (i.e., one near and one far), and either configuration (top strip near together with bottom strip far, or top strip far together with bottom strip near) was randomly selected on each trial. When the depth of the target bar is taken into account, this protocol leads to the four possible stimulus configurations shown in Fig. 1 C–F. Observers were asked to report whether the depth of the target bar matched the depth of the top or bottom strip. This top/bottom task requires that the depth of at least one of those two strips be determined; failing that, performance must be at chance. Because the two strips were devoid of monocular cues, the task could not be performed monocularly: even if observers could potentially determine the depth of the target bar using only one eye, one eye would not be sufficient to determine the depth of the reference strips. Because the percentage of correct responses generated by the observers we tested was consistently above chance (73% ± 6%; mean ± SD across observers), they could not have been performing the task monocularly.
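The above-chance argument can be checked against the binomial tail probability under a chance rate of 0.5 for the two-way choice. The sketch below uses a hypothetical 100-trial block at 73% correct; the actual per-observer trial counts were far larger, which only strengthens the argument:

```python
import math

def binom_tail(k, n, p=0.5):
    """Probability of >= k correct responses in n trials under chance rate p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 73 correct in a 100-trial block vs. 50% chance.
p_value = binom_tail(73, 100)
print(p_value)
```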
Additional evidence that observers relied on both eyes to process the target stimulus is provided in Fig. S1. We found that not all observers could perform the above-detailed task, and therefore, we were forced to exclude from the data collection 5 of 11 observers that we had preliminarily tested (we retained six observers); the criterion for exclusion was rigorous (details in SI Materials and Methods). All observers were naive to the purpose and methodology of the study but had prior experience in performing psychophysical tasks (although not involving stereoscopic stimuli); they were paid 9 British pounds per hour. Blocks consisted of 100 trials, and observers received feedback (correct/incorrect) after each trial. We collected 10,100 ± 3,000 trials per observer. Spatiotemporal Probe. Each stereo frame lasted 57 ms and was rendered by eight refresh frames at 140 Hz; these eight frames were sent alternately to the two eyes (the refresh rate per eye was, therefore, 70 Hz) using ferroelectric stereo goggles (Cambridge Research Systems) so that each eye received four identical frames per stereo frame. The target bar only lasted one stereo frame and was embedded within a spatiotemporal noise probe lasting seven stereo frames (total of 400 ms) and consisting of 10 vertical bars (Fig. 1 J and K). We adopted a Gaussian noise source of SD = 2.6 cd/m2 except for observers S3 and S6, who required that the intensity be reduced to 1.7 cd/m2 to ensure that the stimulus would not exceed the operating range of the monitor (background luminance was 35 cd/m2). The intensity of the target bar was adjusted individually for each observer to yield threshold performance, resulting in monocular luminance values for the target bar of 18 ± 6 cd/m2 across observers. Each noise probe was generated independently on each trial for each eye. Immediately before and after the appearance of the probe, observers were asked to maintain fused fixation on a Nonius marker (Fig. 
1L) that remained on the screen at all times except for the 400 ms during which the probe was shown. The top and bottom RDS strips appeared only in concomitance with the probe, whereas the middle RDS strip was always present to aid stable fusion.

Derivation of Perceptual Filters. We use $n^e_{[q,z]}$ to denote the spatiotemporal noise sample associated with a far- (q = 0) or near-target (q = 1) stimulus and an incorrect (z = 0) or correct (z = 1) response by the observer, delivered to either the left (e = 0) or right (e = 1) eye. Each element of a given sample is indexed by $n(x_i, t_j)$ (i.e., the intensity of the corresponding bar at spatial location $x_i$ and temporal location $t_j$). We estimated monocular first-order filters as $p^e_1 = \langle n^e_{[1,1]} \rangle + \langle n^e_{[0,0]} \rangle - \langle n^e_{[1,0]} \rangle - \langle n^e_{[0,1]} \rangle$, where $\langle \rangle$ is the average across trials of the indexed type (11), and monocular second-order filters as $p^e_2 = \mathrm{var}(n^e_{[1,1]}) + \mathrm{var}(n^e_{[0,0]}) - \mathrm{var}(n^e_{[1,0]}) - \mathrm{var}(n^e_{[0,1]})$, where var() is the variance across trials (10). In Fig. 3 A and B, we plot $p_1$ and $p_2$ (average of the two eyes), where $p_1 = p^0_1 - p^1_1$ (same for $p_2$). To increase statistical power for all filter estimates, we also exploited the evident (and expected) odd symmetry across space: we replaced each filter $p$ with $p - \overleftrightarrow{p}$, where $\overleftrightarrow{p}$ is $p$ after flipping the dimension of space ($\overleftrightarrow{p}(x_i, t_j) = p(-x_i, t_j)$). To compute binocular filters, we first mapped $n^0_{[q,z]}$ and $n^1_{[q,z]}$ into their space–space outer product $N_{[q,z]}(x_i, x_j, t_k) = n^0_{[q,z]}(x_i, t_k) \times n^1_{[q,z]}(x_j, t_k)$ (i.e., for each time point, we com-
PNAS Early Edition | 5 of 6

NEUROSCIENCE

first-order perceptual filter is an extensively blurred image of the underlying neuronal filter (10) from which it would be prohibitively difficult to expose any temporal dynamics. Even if the readout process is nonlinear (e.g., a maximum operation across time), it is not obvious that the underlying front-end filter function can be exposed effectively (10), particularly when the target signal is temporally extended (7, 14). For the above-mentioned reasons, our experimental design adopted a target signal that was restricted to a brief period within the noise probe (Fig. 1I) in an effort to maximize our ability to resolve the dynamics of the underlying filter. We also restricted the dimensionality of the noise probe to one spatial dimension (rather than two) to increase the reliability of our measurements. Our simulations indicate that the presence of a nonlinear binocular integration stage leads to temporally localized monocular first-order filters (Fig. 5G), possibly contributing an additional factor that may have increased the opportunity for resolving significant dynamic changes. By combining the design choices just detailed with second-order analysis of the perceptual system (10, 12), we were able to expose a clear coarse to fine structure consisting of two processes occurring in temporal succession (one coarse and one fine). We emphasize that our results are fully consistent with the results of Mareschal et al. (7) in the sense that our first-order analysis, like their analysis, did not expose any dynamic change in the preferred spatial frequency of the perceptual filter. Furthermore, it is not clear that our results contradict their overall conclusions. It is certainly the case that we report the presence of coarse to fine behavioral signatures, whereas Mareschal et al.
(7) did not, but it is less clear how the combined linear–nonlinear structure that we exposed here may relate to the response properties of individual neurons as measured by existing electrophysiological studies: single neurons display dynamic tuning in relation to their first-order filters (4, 5, 8) and not a combined first-order plus second-order representation (or at least, there is no evidence in relation to the latter possibility). In other words, it is not clear that, if we were to insert an electrode into the circuit in Fig. 5K, the resulting measurements would conform to the measurements reported in the electrophysiological literature (Fig. S5 has a closer examination of this issue and examples of when similarities may or may not be expected). The question of whether the specific coarse to fine dynamic properties reported for single neurons are somehow related to the specific coarse to fine properties reported for behavior here is, therefore, still open (7). Based on the results detailed here, we can state that human pattern vision, similar to individual neurons in visual cortex (8), is characterized by a coarse to fine architecture at the functional level. Whether and how this behavioral architecture may be related to the physiological properties of single neurons are questions that will require further investigation, possibly using more invasive techniques (e.g., awake behaving monkey electrophysiology combined with psychophysical reverse correlation) (17).

puted the outer product matrix between the noise trace for the left eye and the noise trace for the right eye) (Fig. 2 C–I). We then computed the full first- and second-order binocular filters $P^*_1$ and $P^*_2$ using the same equations used for $p^e_1$ and $p^e_2$, except that we substituted $N_{[q,z]}$ for $n^e_{[q,z]}$. The $P^*$ filters are 3D; we reduced their dimensionality by taking diagonal averages across the two dimensions of space (white solid trace in Fig. 2J), an operation equivalent to collapsing these two dimensions into the one dimension of disparity (8). The resulting $p^*_1$ and $p^*_2$ filters were 2D and directly comparable with $p_1$ and $p_2$ (Fig. 3).
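As a minimal illustration (our own sketch, not the author's code; array sizes and the random noise are arbitrary stand-ins for the real trial-sorted data), the first- and second-order filter estimates and the binocular outer product described in Derivation of Perceptual Filters can be written as:

```python
import numpy as np

# n[q][z] holds noise samples (trials x space x time) for target q
# (0 = far, 1 = near) and response z (0 = incorrect, 1 = correct), one eye.
rng = np.random.default_rng(0)
n = {q: {z: rng.normal(size=(500, 10, 7)) for z in (0, 1)} for q in (0, 1)}

def first_order(n):
    # p1 = <n[1,1]> + <n[0,0]> - <n[1,0]> - <n[0,1]>  (trial averages)
    return (n[1][1].mean(0) + n[0][0].mean(0)
            - n[1][0].mean(0) - n[0][1].mean(0))

def second_order(n):
    # p2 = var(n[1,1]) + var(n[0,0]) - var(n[1,0]) - var(n[0,1])
    return (n[1][1].var(0) + n[0][0].var(0)
            - n[1][0].var(0) - n[0][1].var(0))

p1, p2 = first_order(n), second_order(n)

# Binocular analysis: space-space outer product per time point,
# N(xi, xj, tk) = n_left(xi, tk) * n_right(xj, tk).
left, right = rng.normal(size=(10, 7)), rng.normal(size=(10, 7))
N = np.einsum('it,jt->ijt', left, right)
print(p1.shape, p2.shape, N.shape)   # (10, 7) (10, 7) (10, 10, 7)
```

With noise-only input (no target signal actually embedded), both filter estimates hover around zero, as expected for this estimator.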

Scalar Metrics. The time centroid was computed as $t \cdot \tilde{m}$, where $t$ is the vector of time points (seven values from −200 to 200 ms), $\tilde{m}$ is the normalized squared marginal of the filter across space, and $\cdot$ is the inner product. $\tilde{m}$ is treated as a probability distribution over time and was computed as follows: from each filter $p$, we derived a 1D temporal profile by first inverting the sign of all values to one side of the zero space point (e.g., for the filter in Fig. 3A, this would be the left half) and then averaging across the dimension of space; to ensure that all values were positive (necessary to treat the profile as a distribution), we squared each value of the resulting vector and normalized to a sum of one. The late/early log energy ratio was $\log[\rho(p_{\mathrm{late}})/\rho(p_{\mathrm{early}})]$, where $\rho()$ is rms, $p_{\mathrm{late}}$ is the portion of the filter corresponding to time values > 0, and $p_{\mathrm{early}}$ is the portion corresponding to time values < 0. The spatial frequency centroid was obtained by first extracting the marginal average across time for each filter; from the resulting 1D vector, we computed the power spectrum and applied the same centroid calculation detailed earlier (i.e., $f \cdot \hat{m}$, where $f$ is the vector of sampled spatial frequencies and $\hat{m}$ was obtained by simply normalizing the power spectrum to a sum of one without prior squaring, which was not necessary because the spectrum is already positive).

Models. All models conformed to the neuron–antineuron scheme (18). The stimulus was processed separately by a near- and a far-preferring unit. The output of the latter was subtracted from the output of the former; if this difference was greater than zero, the model would respond near, and otherwise it would respond far. For each model, we describe only the near-

1. Morgan MJ (2011) Features and the 'primal sketch.' Vision Res 51:738–753.
2. Marr D, Poggio T (1979) A computational theory of human stereo vision. Proc R Soc Lond B Biol Sci 204:301–328.
3.
Anderson CH, Van Essen DC (1987) Shifter circuits: A computational strategy for dynamic aspects of visual processing. Proc Natl Acad Sci USA 84:6297–6301.
4. Bredfeldt CE, Ringach DL (2002) Dynamics of spatial frequency tuning in macaque V1. J Neurosci 22:1976–1984.
5. Mazer JA, Vinje WE, McDermott J, Schiller PH, Gallant JL (2002) Spatial frequency and orientation tuning dynamics in area V1. Proc Natl Acad Sci USA 99:1645–1650.
6. Menz MD, Freeman RD (2003) Stereoscopic depth processing in the visual cortex: A coarse-to-fine mechanism. Nat Neurosci 6:59–65.
7. Mareschal I, Dakin SC, Bex PJ (2006) Dynamic properties of orientation discrimination assessed by using classification images. Proc Natl Acad Sci USA 103:5131–5136.
8. Menz MD, Freeman RD (2004) Temporal dynamics of binocular disparity processing in the central visual pathway. J Neurophysiol 91:1782–1793.
9. Neri P, Levi DM (2006) Receptive versus perceptive fields from the reverse-correlation viewpoint. Vision Res 46:2465–2474.
10. Neri P (2010) Stochastic characterization of small-scale algorithms for human sensory processing. Chaos 20:045118.
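The time centroid and late/early energy ratio defined in Scalar Metrics above can be sketched as follows; this is our own implementation under stated assumptions (a toy filter, and the convention that the sign-inverted half is the left one), not the author's code. The spatial frequency centroid follows the same centroid recipe applied to the normalized power spectrum.

```python
import numpy as np

t = np.linspace(-200, 200, 7)                 # time points (ms)

def time_centroid(p, t):
    # invert sign on one side of the zero space point, average over space,
    # square and normalize into a distribution over time, then take t . m
    half = p.shape[0] // 2
    folded = p.copy()
    folded[:half] *= -1
    m = folded.mean(axis=0) ** 2
    m /= m.sum()
    return t @ m

def late_early_log_ratio(p, t):
    # log of rms energy at time > 0 over rms energy at time < 0
    rms = lambda x: np.sqrt((x ** 2).mean())
    return np.log(rms(p[:, t > 0]) / rms(p[:, t < 0]))

p = np.random.default_rng(1).normal(size=(10, 7))   # toy space x time filter
print(time_centroid(p, t), late_early_log_ratio(p, t))
```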

6 of 6 | www.pnas.org/cgi/doi/10.1073/pnas.1101246108

preferring unit; the far-preferring unit was the same except that it involved far filters instead of near ones. For the model in Fig. 5A, the two input stimuli $s_0$ and $s_1$ (containing both target and noise) delivered to the two eyes were template-matched to the two monocular images $h_0$ and $h_1$ corresponding to the near target (indicated by yellow outlines in Fig. 5A) (i.e., the output of the near-preferring unit was simply $s_0 \cdot h_0 + s_1 \cdot h_1$). For the stimuli we used, this is an ideal detection strategy (13). For the model in Fig. 5F, the near-preferring unit responded by summing over space and time the spatiotemporal matrix resulting from $(s_0 * h_0 + s_1 * h_1)^2$, where $*$ is spatiotemporal convolution; this model is closely related to the disparity energy model (15). For the model in Fig. 5K, the input to each eye was convolved not only with the filter $h$ but also with a spatially and temporally broader filter $g$ (indicated by the green outline in Fig. 5K); importantly, the output of this convolution was squared, thus introducing an early second-order nonlinearity before binocular combination. The output of this squared convolution was delayed by two stereo frames (114 ms) and used to modulate the output of the convolution with $h$ through divisive normalization (indicated by red symbols in Fig. 5K). The output for the left eye can therefore be written as $(s_0 * h_0)/[k + |(s_0 * g_0)^2|_\tau]$, where $k$ was a constant set to 30% of the average output from the term added to it within brackets; except for the temporal delay [indicated by $|\,.\,|_\tau$ (e.g., $|p(x_i, t_j)|_\tau = p(x_i, t_j - \tau)$, where $\tau$ is the delay value)], this expression represents a standard implementation of divisive normalization (19). A similar expression was applied to the right eye, and the outputs from the two eyes were then combined just as in the previous model to obtain the output of the near-preferring unit.
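The delayed divisive normalization stage of the Fig. 5K model can be sketched for one eye as follows. This is our own toy version under stated assumptions: the filter shapes, stimulus size, and purely spatial convolution are illustrative simplifications, not the paper's actual spatiotemporal filters.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.normal(size=(10, 7))                  # stimulus: space x time
h = np.zeros(10); h[4:6] = 1.0                # fine-scale filter (toy)
g = np.ones(10) / 10                          # coarse, broader filter (toy)
tau = 2                                       # delay: two stereo frames

def sconv(s, f):
    # spatial convolution applied at every time point ("same" size output)
    return np.stack([np.convolve(s[:, j], f, mode='same')
                     for j in range(s.shape[1])], axis=1)

def delay(x, tau):
    # shift along time by tau frames, zero-padding the first tau frames
    out = np.zeros_like(x)
    out[:, tau:] = x[:, :-tau]
    return out

fine = sconv(s, h)                            # s * h
norm = delay(sconv(s, g) ** 2, tau)           # (s * g)^2, delayed by tau
k = 0.3 * norm.mean()                         # 30% of mean normalizing term
out = fine / (k + norm)                       # delayed divisive normalization
print(out.shape)                              # (10, 7)
```

The squaring before the delay is what makes the normalizing signal a second-order (energy-like) quantity, as stated in the text.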
These models were challenged with stimuli like those used in the psychophysical experiments to derive simulated perceptual filters (Fig. 5). We adjusted target intensity to yield threshold performance; values were 0.7, 3, and 7 (for the three models, respectively) in units of noise SD (notice that the value for the third model is similar to the human threshold value). We simulated 100 iterations of 50,000 trials each.

ACKNOWLEDGMENTS. I thank two anonymous reviewers for useful comments. This work was supported by a Royal Society University Research Fellowship and a Medical Research Council New Investigator Research Grant.

11. Ahumada AJ, Jr (2002) Classification image weights and internal noise level estimation. J Vis 2:121–131.
12. Neri P, Heeger DJ (2002) Spatiotemporal mechanisms for detecting and identifying image features in human vision. Nat Neurosci 5:812–816.
13. Green DM, Swets JA (1966) Signal Detection Theory and Psychophysics (Wiley, New York).
14. Neri P, Levi D (2008) Temporal dynamics of directional selectivity in human vision. J Vis 8:22.1–22.11.
15. Ohzawa I, DeAngelis GC, Freeman RD (1990) Stereoscopic depth discrimination in the visual cortex: Neurons ideally suited as disparity detectors. Science 249:1037–1041.
16. Heeger DJ, Simoncelli EP, Movshon JA (1996) Computational models of cortical visual processing. Proc Natl Acad Sci USA 93:623–627.
17. Nienborg H, Cumming BG (2009) Decision-related activity in sensory neurons reflects more than a neuron's causal effect. Nature 459:89–92.
18. Britten KH, Shadlen MN, Newsome WT, Movshon JA (1992) The analysis of visual motion: A comparison of neuronal and psychophysical performance. J Neurosci 12:4745–4765.
19. Schwartz O, Simoncelli EP (2001) Natural signal statistics and sensory gain control. Nat Neurosci 4:819–825.


Supporting Information

Neri 10.1073/pnas.1101246108

SI Materials and Methods

Observer Selection. We found that not all observers could perform the task detailed in Materials and Methods, and therefore, we were forced to exclude some of them. The criterion for exclusion was rigorous and relied on a preliminary testing session during which prospective observers were shown increasingly difficult versions of the stimulus until its configuration conformed to the one detailed in Materials and Methods. Their ability to perform the task was assessed at each stage, and only observers who succeeded in performing above chance for all configurations were retained. More specifically, observers were initially presented with a stimulus containing all three strips and a target bar (without any added noise) that remained on the screen at all times. They were asked to describe the stimulus (i.e., which region was near, which region was far, and whether the target bar was at the same depth as the top or bottom plane). After it was so determined that their stereovision was intact, we presented this stimulus for a duration of 800 ms over 100 trials and asked them

to perform the task. During this stage, the target bar lasted for the entire 800 ms. If they performed above chance for two to three blocks, we reduced stimulus duration to 400 ms and repeated testing. If they passed this stage, we reduced the duration of the target bar to 57 ms (one stereo frame) and repeated testing. If they passed this stage, we added noise while keeping the intensity of the target bar well above noise SD. If they passed this stage, they were retained for the whole study, and we proceeded to determine their threshold target intensity using a two-up and one-down staircase procedure. In practice, we found that observers fell into one of two categories: those who could perform the task right away and those who found it virtually impossible. The latter group failed to perform above chance in the early stages of the preliminary session, and therefore, the choice as to whether a given observer was adequate for the study was invariably straightforward. Using the above criterion, we were able to retain 6 of 11 observers that we tested.
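The staircase mentioned above (intensity reduced after two consecutive correct responses, raised after each error) can be sketched as follows. This is our own toy simulation: the psychometric function, step size, starting intensity, and trial count are illustrative assumptions, not the paper's values.

```python
import random

random.seed(3)

def p_correct(intensity):
    # toy psychometric function for a simulated observer
    return 0.5 + 0.5 * min(intensity / 10.0, 1.0)

intensity, step, correct_run = 10.0, 1.0, 0
track = []
for _ in range(400):
    correct = random.random() < p_correct(intensity)
    if correct:
        correct_run += 1
        if correct_run == 2:            # two correct in a row -> harder
            intensity = max(intensity - step, step)
            correct_run = 0
    else:                               # any error -> easier
        intensity += step
        correct_run = 0
    track.append(intensity)

print(sum(track[-100:]) / 100)          # rough threshold intensity estimate
```

This update rule converges near the intensity yielding ~71% correct, which is why such staircases are standard for placing observers at threshold.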


Fig. S1. Comparison of monocular filter amplitude between the two eyes. A plots first-order monocular filter signal-to-noise ratio (SNR) for the left eye on the x axis vs. the right eye on the y axis (error bars show ±1 SEM). SNR was computed as follows (1, 2): $\mathrm{SNR}(p_1) = \log\left[\Phi(p_1)\,\frac{n_{[1]} n_{[0]}}{2w(n_{[1]} + n_{[0]})}\right]$, where n[1] is the number of correct trials, n[0] is the number of incorrect trials, w is the variance of the external noise source, and Φ is the mean of squares ($\Phi(A) = d^{-1}\sum_{i,j} a^2_{i,j}$ across the d elements $a_{i,j}$ of matrix A). This metric equals zero for a filter containing solely noise. SNR values are significantly greater than zero for both left (points lie to the right of the vertical dashed line at P < 0.005) and right eyes (points lie above the horizontal dashed line at P < 0.001), and they do not differ between the two eyes (points lie on the solid unity line at P = 0.53). This lack of intereye SNR difference, however, may simply result from some observers being biased to one eye and the other observers being biased to the other eye. To address this possibility, B shows that the lack of intereye SNR difference holds for each observer individually. Error bars show 95% bootstrap intervals for the log ratio between the SNR value on the x axis in A and the value on the y axis; they all span the zero equality point (horizontal gray line). Different symbols in both A and B refer to different observers.
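The SNR metric can be sketched as follows; this is our own reading of the (partially garbled) caption formula, so treat the exact normalizing constant as an assumption rather than the author's verified implementation.

```python
import numpy as np

# SNR(p1) = log[ Phi(p1) * n1*n0 / (2*w*(n1 + n0)) ], Phi = mean of squares.
def snr(p, n_correct, n_incorrect, noise_var):
    phi = np.mean(p ** 2)                 # mean of squared filter elements
    return np.log(phi * n_correct * n_incorrect
                  / (2 * noise_var * (n_correct + n_incorrect)))

p = np.random.default_rng(4).normal(size=(10, 7))   # toy filter
print(snr(p, 700, 300, 1.0))
```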

1. Murray RF, Bennett PJ, Sekuler AB (2002) Optimal methods for calculating classification images: Weighted sums. J Vis 2:79–104.
2. Neri P (2010) Stochastic characterization of small-scale algorithms for human sensory processing. Chaos 20:045118.

Neri www.pnas.org/cgi/content/short/1101246108



Fig. S2. Both first- and second-order binocular filters show anisotropic structure consistent with stereoscopic processing. A shows the equivalent of Fig. 2J for the second-order (as opposed to first-order) binocular filter. The oriented structure is similar (compare with Fig. 2J): the filter modulates more along the positive diagonal (marginal average is shown by the white trace) than the negative diagonal (yellow trace). We quantified this anisotropy as $\log\left(\frac{A \cdot B}{A \cdot C}\right)$, where the matrices A, B, and C are shown by the correspondingly labeled panels. This metric equals zero for an isotropic filter and is greater than zero for filters displaying anisotropic structure consistent with stereoscopic integration of eye signals. D plots this binocular anisotropy index for both first- (x axis) and second-order (y axis) filters. Both filter orders are significantly positive (data points fall to the right of the vertical dashed line at P < 0.001 and above the horizontal dashed line at P < 0.02). Interestingly, the two quantities are negatively correlated (r = −0.85, significant at P < 0.05; the gray oval is tilted to align with the best-fit line and is positioned at the center of mass, with parallel-to-line and orthogonal-to-line widths equal to the SDs of the data across the two axes). Error bars in D show ±1 SEM; different symbols refer to different observers.
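An anisotropy index of this kind can be sketched as follows. This is an illustrative version in the spirit of Fig. S2, comparing modulation along the positive vs. negative diagonal of a space-space filter; the diagonal-band masks and the toy filter are our own assumptions, not the paper's exact A, B, and C matrices.

```python
import numpy as np

def diagonal_energy(F, k_range, flip=False):
    # rms of the mean values along diagonals near the main diagonal;
    # flip=True measures the orthogonal (negative) diagonal instead
    G = F[:, ::-1] if flip else F
    vals = [np.diagonal(G, offset=k).mean() for k in k_range]
    return np.sqrt(np.mean(np.square(vals)))

# toy space-space filter: a ridge along the positive diagonal
F = (np.add.outer(np.arange(10), -np.arange(10)) == 0).astype(float)
k_range = range(-2, 3)
index = np.log(diagonal_energy(F, k_range)
               / diagonal_energy(F, k_range, flip=True))
print(index > 0)    # a diagonal ridge yields a positive index
```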


Fig. S3. Individual observer data. Each quadruplet refers to an individual observer and is plotted using the same conventions adopted for Fig. 3.



Fig. S4. Correlated vs. anticorrelated perspective. The distinction between correlated and anticorrelated signals has been motivated by evidence that they are handled differently at the level of biological binocular systems (1). A, D, G, and J show the equivalent of Fig. 2J, except that they were computed after setting negative values within the space–space outer product N[q,z] (Materials and Methods) to zero [i.e., only correlated (same sign) pixel pairs were retained for analysis]. B, E, H, and K show results from the same analysis but after setting the positive values to zero [i.e., only anticorrelated (opposite sign) pixel pairs were retained]. C, F, I, and L show marginal averages equivalent to the white trace in Fig. 2J, black when computed from A, D, G, and J, and orange when computed from B, E, H, and K. Shading shows ±1 SEM. A, B, and C show simulated results for the energy model in Fig. 5F. D, E, and F show results for the same energy model but after replacing the fine-scale convolution filter (indicated by orange outline in Fig. 5F) with a coarser filter; this filter resembled the coarse filter in Fig. 5K (indicated by the green outline in Fig. 5K) except that it only contained the positive (bright) modulation to ease comparison with the fine-scale filter. G, H, and I show results for the two-stage model in Fig. 5K with a coarse filter that only contained the positive modulation (to ease comparison with the previous model). J, K, and L show human data (aggregate observer). Surface plots from A to I show simulated Z scores (same plotting conventions used in Fig. 5). Surfaces in J and K are plotted to the same conventions as Fig. 2J. The x axis in L can be interpreted to reflect retinal disparity; this panel disregards the temporal evolution of the process (data are averaged across time). The human data plotted in this panel show that processing of correlated and anticorrelated pixel pairs impacts the system differently depending on disparity range. 
For disparities within ~0.25°, the two signal classes impact the system in the manner predicted by the energy model (C); for disparities beyond this range, they have opposite effects, which is contrary to the prediction of the energy model (but accounted for by the two-stage model in I).
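The correlated/anticorrelated split described in this caption (keeping only same-sign or only opposite-sign pixel pairs within the space-space outer product N) can be sketched as follows; this is our own minimal version with arbitrary noise in place of the real trial data.

```python
import numpy as np

rng = np.random.default_rng(5)
left, right = rng.normal(size=(10, 7)), rng.normal(size=(10, 7))
N = np.einsum('it,jt->ijt', left, right)      # outer product per time point

N_corr = np.where(N > 0, N, 0.0)              # same-sign (correlated) pairs
N_anti = np.where(N < 0, N, 0.0)              # opposite-sign (anticorrelated)

assert np.allclose(N_corr + N_anti, N)        # the two parts tile N exactly
print(N_corr.min() >= 0, N_anti.max() <= 0)
```

Running the filter analysis separately on `N_corr` and `N_anti` is what produces the paired panels (black vs. orange traces) compared in the figure.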

1. Cumming BG, Parker AJ (1997) Responses of primary visual cortical neurons to binocular disparity without depth perception. Nature 389:280–283.


Fig. S5. Simulated results of hypothetical electrophysiological recordings at different sites within the model circuit in Fig. 5K. We only simulate the monocular circuit. The following minor modifications were necessary to translate the psychophysical simulation into its electrophysiological counterpart. (i) Only noise was used as stimulus (without any added target signal), which is customary in single-neuron experiments. (ii) Input stimulation was not broken into trials, but rather, the stimulus consisted of a long stream of spatiotemporal noise (indicated by fading edges of the input noise in the figure), again as customary in electrophysiological experiments. (iii) The fine-scale filter (immediately below the input noise stimulus in the figure) presented a negative flank adjacent to the positive peak (rather than the positive peak alone as in Fig. 5K), a more plausible configuration for the receptive field of a single neuron. (iv) Because spatial frequency (SF) shift effects in single neurons have been measured over a timescale of 20–50 ms (1), it was necessary to increase the temporal resolution of the simulations to 10 ms (rather than the ~60 ms used in the psychophysical experiments). (v) As a consequence of the increased temporal resolution, it was necessary to up-sample the convolution filters, which we then rendered as linearly decreasing functions over time (rather than boxcar functions as in Fig. 5K) to coarsely capture the typical response time course of single neurons. (vi) The output of the circuit was not converted into a binary decision, but rather kept as a continuous variable time-locked to the stimulus so that we applied reverse correlation using the standard stimulus-triggered average (STA) procedure. Different colors refer to different recording sites indicated by the electrode icons. 
The surface plot connected to the electrode line shows the corresponding STA filter, whereas the plot immediately to the right of the STA surface shows the SF centroid value over time for the STA surface, with the solid horizontal line indicating the value for a filter containing solely noise, the vertical dashed line marking the 30- to 90-ms range (within which most electrophysiological SF shift effects have been observed), and the shaded region indicating ±1 SD across 200 model iterations (5,000 stimulus time points per iteration). STA filter modulations show simulated Z scores (similar to the plotting conventions used for Fig. 5). The blue recording site monitors the output of the convolution with the fine-scale filter; the corresponding SF centroid value is initially within noise baseline level, increases to a significantly higher SF, and eventually returns to baseline. The yellow recording site monitors the output of the convolution with the coarse-scale filter (before squaring); the corresponding SF centroid value is initially within noise baseline level, decreases to a significantly lower SF, and eventually returns to baseline. The magenta recording site monitors the output of the same stage, but after squaring; the associated STA filter is featureless, and SF centroid values remain within baseline. The red recording site monitors the summed output from the yellow and blue sites; the corresponding SF centroid value is initially within noise baseline level, decreases to a significantly lower SF, then increases to a significantly higher SF, and eventually returns to baseline. This shift from low to high SF values resembles the coarse to fine effect reported for single neurons (1, 2). A similar result is obtained for the green recording site, which monitors the summed output of the yellow recording site and the output of the fine-scale convolution (blue recording site) after the application of delayed divisive normalization (τ and ÷).
The output of the circuit is monitored by the black recording site.
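The stimulus-triggered average (STA) procedure used for these simulated recordings can be sketched as follows. This is our own toy version: the filter, latency, number of lags, and stimulus size are illustrative assumptions, not the simulation parameters of Fig. S5.

```python
import numpy as np

rng = np.random.default_rng(6)
T, X, lags = 5000, 10, 8
stim = rng.normal(size=(T, X))                # long spatiotemporal noise stream
h = np.zeros(X); h[4] = 1.0                   # toy spatial filter
resp = np.roll(stim @ h, 3)                   # linear response, 3-frame latency
resp[:3] = 0.0                                # discard wrap-around samples

# STA(lag, x) = response-weighted average of the stimulus at position x,
# `lag` frames before each response sample.
sta = np.zeros((lags, X))
for lag in range(lags):
    sta[lag] = resp[lags:] @ stim[lags - lag:T - lag] / resp[lags:].size

print(np.unravel_index(np.abs(sta).argmax(), sta.shape))   # peak at (3, 4)
```

The STA recovers the underlying filter (here, position 4 at the 3-frame latency), which is why tracking its SF centroid over lags exposes the coarse to fine shift described above.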

1. Bredfeldt CE, Ringach DL (2002) Dynamics of spatial frequency tuning in macaque V1. J Neurosci 22:1976–1984.
2. Mazer JA, Vinje WE, McDermott J, Schiller PH, Gallant JL (2002) Spatial frequency and orientation tuning dynamics in area V1. Proc Natl Acad Sci USA 99:1645–1650.

