Optical Engineering 51(3), 037008 (March 2012)

Computational stereoscopic zoom

Seungkyu Lee, Hwasup Lim, James D. K. Kim, Chang Yeong Kim
Samsung Advanced Institute of Technology, Kiheung-Gu, Yong-In, Republic of Korea
E-mail: [email protected]

Abstract. Optical zoom lenses mounted on a stereo color camera magnify the left and right two-dimensional (2-D) images by increasing the focal length. However, without adjusting the baseline distance, the optical zoom distorts three-dimensional (3-D) perception, because it magnifies the projected 2-D images rather than the original 3-D object. We propose a computational approach to stereoscopic zoom that magnifies stereo images without 3-D distortion. We computationally manipulate the baseline distance and convergence angle between the left and right images by synthesizing novel-view stereo images based on depth information. We suggest a volume-predicted bidirectional occlusion inpainting method for novel view synthesis. The original color image is warped to the novel view determined by the adjusted baseline and convergence angle. The rear volume of each foreground object is predicted, and the foreground portion of each occlusion region is identified. Then we apply our inpainting method to fill in the foreground and background, respectively. Experimental results show that the proposed inpainting method removes the cardboard effect, which significantly decreases the perceptual quality of the synthesized novel-view image but has never been addressed in the literature. Finally, the 3-D object presented by the stereo images is magnified by the proposed stereoscopic zoom method without 3-D distortion. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.OE.51.3.037008]

Subject terms: zoom; stereoscopic; depth; inpainting; depth-image-based rendering. Paper 111215 received Sep. 29, 2011; revised manuscript received Jan. 9, 2012; accepted for publication Jan. 10, 2012; published online Apr. 3, 2012.

1 Introduction
Recent revolutions in three-dimensional (3-D) technology have introduced us to various 3-D experiences, not just in a movie theater but also at home with consumer electronics such as 3-D displays, digital cameras, and mobile phones. The rapid advance of 3-D display technology also brings forth demand for capturing and processing tools for 3-D content creation. Nowadays, a stereo camera is most commonly used, for example, in 3-D film production. A zoom lens mounted on such a conventional stereo camera magnifies the two-dimensional (2-D) image projected onto the left and right color sensors, respectively. Does the 3-D reconstruction using the two magnified 2-D images correctly deliver magnified 3-D information? An optical zoom lens cannot handle depth information correctly in the conventional zoom framework. A stereo camera with two separate zoom lenses introduces a 3-D distortion into the magnified 3-D scene (Fig. 1). This is because the 2-D optical zoom magnifies the images projected onto the camera image plane, not the original 3-D object. In order to deal with the problem correctly, we have to change the baseline distance and convergence angle simultaneously according to the change of focal length (in 2-D zoom), as shown in Fig. 2. This can be achieved by mechanical adjustment of the baseline distance and convergence angle, controlling the stereo camera rig synchronized with the zoom lens (Fig. 2). However, this mechanical stereoscopic zoom is limited by the maximum camera size. For instance, in order to perform a 20× zoom of an object at a far distance, we have to increase the baseline distance up to around 20 m, which is impractical in most capturing scenarios.


We conclude that the current stereo rig and optical zoom lens do not provide a correct stereoscopic zoom function. In this paper, we propose a computational stereoscopic zoom method in which we change the baseline and convergence angle by a novel view synthesis algorithm using color and depth images (Fig. 3). Depth information can be obtained from either a stereo vision technique or a calibrated depth camera. For robust and accurate novel view synthesis, we suggest bidirectional occlusion inpainting (Fig. 4). For a practical and efficient 3-D capturing configuration, we employ an image-processing technique to synthesize as many high-quality novel views as possible from the given original stereo views. For novel view synthesis, occlusion is the most critical issue governing the quality of 3-D imaging. Completing both the color and depth of the occlusion region is challenging and an ongoing research problem related to various research fields such as image processing, machine learning, and human cognitive science. Occlusion inpainting is usually evaluated by a quantitative measure like peak signal-to-noise ratio (PSNR) as well as a qualitative measure like a mean opinion score (MOS) test. Previous work such as Oh et al.1 performs occlusion inpainting from the background only to avoid subjective artifacts and unnatural inpainting results. They use only 2-D color information, which does not maintain a consistent 3-D geometry in the synthesized view. As a result, foreground objects become unrealistically thinner than they are supposed to be, due to the missing data from the rear volume of the objects.2 The cardboard effect refers to such an artifact: the rear volume of any foreground object is ignored, and the object in a synthesized view becomes a thin cardboard cutout (Fig. 5).


Fig. 1 2-D optical zoom: without a baseline and convergence angle change, we cannot obtain a correctly magnified 3-D object.

Fig. 4 Overall framework of the proposed bidirectional occlusion inpainting algorithm for novel view synthesis.

Fig. 5 Cardboard effect of unidirectional occlusion inpainting.

Fig. 2 Stereoscopic zoom with baseline and convergence angle change.

Fig. 3 Computational stereoscopic zoom.

Fehn3 introduces a depth-image-based rendering framework to synthesize multiview videos for 3D-TV. Choi et al.4 propose a real-time novel view synthesis algorithm using an interpolation technique based on side reference views. However, they only synthesize new views in between the reference views. Therefore, the reference views inherently constrain the field of view of the set of synthesized images.

Do et al.5 interpolate new views from reference views and inpaint occluded regions using only the background. Liu et al.6 also use two reference views to fill the occlusion holes of synthesized views. Schmeing and Jiang7 perform occlusion inpainting using occluded-region information derived from temporal consistency in the video sequence, which is inapplicable to static images. Yamei et al.8 propose a depth-based view synthesis method where an additional color image is required. Zitnick et al.,9 Muller et al.,10 and Smolic et al.11 introduce video-based view interpolation methods where a two-layer representation inspired by layered depth images is adopted for high-quality video-based rendering. View extrapolation has also been tried in prior work. Wang et al.12 propose a texture and depth inpainting method from stereo images. In their framework, disparity-color consistency is considered; however, this does not guarantee consistent inpainting results over different synthesized views. Zhang and Tam13 use a depth image for stereoscopic image generation where occlusion holes are filled by depth smoothing, which distorts texture. Zhang et al.14 average neighborhood pixels to fill holes, which produces undesired texture artifacts. Fan and Chi15 introduce a depth-preprocessing technique to avoid holes in occlusion regions in a synthesized image. However, this method distorts the original depth as well as the color.

2 Computational Stereoscopic Zoom
As shown in Fig. 3, we can adjust the baseline and convergence angle of the stereo views according to the assigned focal length by the computational novel view synthesis method. In Fig. 6, d and θ are known from the current configuration of the stereo camera. Once we apply an optical zoom by changing the focal length of each lens, the ratio between α and β is also known. Now we calculate ϕ and d_warp in Fig. 6 to perform a novel stereo view synthesis by adjusting the baseline distance and convergence angle; we thereby obtain a computational stereoscopic zoom.


Fig. 6 Computational stereoscopic zoom.

From Fig. 6, $\tan\theta = d/(\alpha+\beta)$ and $\tan\phi = d/\beta$. Then we can calculate $\phi$ from the following equation:

$$\phi = \arctan\!\left(\frac{\alpha+\beta}{\beta}\,\tan\theta\right) = \arctan(K \cdot \tan\theta), \qquad (1)$$

where $K = (\alpha+\beta)/\beta$. Next, from $\tan\phi = (d + d_{\mathrm{warp}})/(\alpha+\beta)$, we derive $d_{\mathrm{warp}}$ as

$$d_{\mathrm{warp}} = d\left(\frac{\tan\phi}{\tan\theta} - 1\right). \qquad (2)$$
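As a concrete illustration of Eqs. (1) and (2), the following minimal sketch computes the adjusted convergence angle and the additional baseline (Python; the function and argument names are illustrative, not from the paper):

    import math

    def adjusted_view_geometry(d, theta, K):
        """Compute the adjusted convergence angle phi and the baseline increment
        d_warp for the computational stereoscopic zoom.
        d     : distance d in Fig. 6
        theta : original convergence angle in radians, with tan(theta) = d / (alpha + beta)
        K     : ratio (alpha + beta) / beta, known once the optical zoom is applied"""
        phi = math.atan(K * math.tan(theta))                     # Eq. (1)
        d_warp = d * (math.tan(phi) / math.tan(theta) - 1.0)     # Eq. (2)
        return phi, d_warp

    # Example: theta = 5 deg and K = 3 give the new convergence angle and added baseline.
    phi, d_warp = adjusted_view_geometry(d=40.0, theta=math.radians(5.0), K=3.0)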

3 Stereo View Synthesis
Having the magnified left and right images and ϕ and d_warp in Fig. 6, we are ready for novel stereo view synthesis based on depth information. We assume we have a calibrated depth camera for each color image to obtain reliable depth information. Given a color and depth image pair of an original view, color pixels are warped into a new image viewpoint based on their respective depths. This reveals an occluded region that cannot be seen in the original view (Fig. 7). In order to complete the synthesized view images, we propose a novel bidirectional inpainting for realistic occlusion completion, free from the cardboard effect. Our method addresses the problem by predicting and completing the rear volume using the corresponding front volume. Based on the predicted rear volume, the foreground portion of the occlusion regions is identified, and the occluded pixels are bidirectionally inpainted from both foreground and background neighbors.
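As an illustration of the depth-based warping step described above, a minimal forward-warping sketch follows; it assumes a purely horizontal viewpoint shift with disparity = focal · shift / depth and resolves collisions with a z-buffer. All names are illustrative, not from the paper:

    import numpy as np

    def forward_warp(color, depth, shift, focal):
        """Warp a color image to a horizontally shifted viewpoint using per-pixel depth.
        Pixels mapping to the same target location are resolved by keeping the one
        closest to the camera; locations where no pixel lands remain occlusion holes."""
        h, w = depth.shape
        warped = np.zeros_like(color)
        warped_depth = np.full((h, w), np.inf)
        hole_mask = np.ones((h, w), dtype=bool)          # True where the occlusion is revealed
        for y in range(h):
            for x in range(w):
                d = depth[y, x]
                if d <= 0:
                    continue
                xt = int(round(x + focal * shift / d))   # horizontal disparity
                if 0 <= xt < w and d < warped_depth[y, xt]:
                    warped[y, xt] = color[y, x]
                    warped_depth[y, xt] = d
                    hole_mask[y, xt] = False
        return warped, warped_depth, hole_mask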

Fig. 7 3-D warping based on a set of color and depth images.

3.1 Boundary Cropping
Boundary cropping after 3-D warping is a preprocessing step (Fig. 8). Usually the color transition at an edge region is not distinctive enough to offer a clear separation between foreground and background regions. The foreground boundary has remnant pixels of background colors and vice versa. Even though morphological erosion at occlusion boundaries may eliminate the unexpected pixels, it is not applicable to the irregular remnants of real-world images. We propose an irregular version of morphological erosion: a boundary cropping based on the color histogram of a local patch. At each boundary pixel, we compare the boundary pixel color with the color histogram of the local region in order to eliminate any dissimilar (remnant) pixel from the boundary:

$$\min_n \left| I(i,j) - I_{\mathrm{mode}}(n) \right| > \alpha, \qquad (3)$$

where $I(i,j)$ is the boundary pixel color at $(i,j)$ and $I_{\mathrm{mode}}(n)$ is the color of the $n$'th peak of the color histogram within the local patch. If the boundary pixel color is farther than α from every local mode of the color histogram, we decide the boundary pixel is a remnant and remove it. We can adjust the amount of remnant elimination by α. Figure 8(a) shows the result of boundary cropping, where most remnants at both the foreground and background are removed. Figure 8(b) shows that inpainting after boundary cropping makes a clean boundary. One observation is that both boundary erosion and cropping make the foreground thinner when we adopt a unidirectional inpainting method, because the region removed from the foreground is recovered from the background. Our bidirectional inpainting recovers the cropped regions of the foreground and background from their respective neighbors and is free from this artifact.
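A minimal sketch of the remnant test in Eq. (3), using the strongest bins of the local-patch histogram as the local modes; it assumes a single-channel (grayscale) image for brevity, whereas the paper operates on color, and the threshold and patch handling are illustrative:

    import numpy as np

    def is_remnant(patch, boundary_value, alpha=30.0, n_modes=3, bins=32):
        """Return True if a boundary pixel is a remnant, i.e., its value is farther
        than alpha from every local mode of the patch histogram (Eq. 3)."""
        hist, edges = np.histogram(patch.ravel(), bins=bins, range=(0, 256))
        mode_bins = np.argsort(hist)[-n_modes:]                  # strongest bins as local modes
        mode_values = 0.5 * (edges[mode_bins] + edges[mode_bins + 1])
        return np.min(np.abs(boundary_value - mode_values)) > alpha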

Fig. 8 Boundary cropping: (a) before and after boundary cropping; (b) sample inpainting results before and after boundary cropping.


3.2 Boundary Labeling
The label along the occlusion boundary, i.e., whether each boundary pixel belongs to the foreground or the background, plays an important role in determining the correct inpainting direction. Our observation is that the depth gradient toward the foreground and the direction toward the occlusion region point in the same direction along the background boundary. On the other hand, they point in opposite directions along the foreground boundary. The occlusion direction is obtained by computing the gradient of the binary occlusion map, where the occlusion region is filled with the value 1. Thus, the sign of the inner product between these two vectors labels the boundary (a minimal sketch of this labeling rule is given below). We employ a Markov random field (MRF) framework with the data term defined by the inner product and a label smoothness term. Figure 9 shows sample foreground and background labeling results by the proposed labeling method.16

3.3 Volume Prediction
The volume prediction step on the 3-D depth image detects the foreground and background portions of the occlusion region in the synthesized view. Figure 10 conceptually illustrates how conventional unidirectional occlusion inpainting distorts 3-D information. On the other hand, inpainting after volume prediction allows us to keep the 3-D volume in the occlusion region. The predicted and added depth volume corresponds to the missing rear surface of an object. Based on the warping information, we can transfer the predicted volume in the original view to the synthesized view. As a result, we can determine the foreground and background portions of the occlusion region. Instead of a global rear volume prediction, we propose a practical local volume prediction method, since a clear global foreground-background separation hardly exists. We only determine the locally relative foreground and background regions based on the boundary labels and predict the minimum amount of local rear volume needed to fill in the occlusions. Different methods can be applied for the volume prediction, such as uniform prediction and symmetric prediction.
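The labeling rule of Sec. 3.2 can be sketched as follows; the MRF smoothness term and its optimization are omitted, the depth map is assumed to store smaller values for closer (foreground) pixels, and all names are illustrative:

    import numpy as np

    def label_occlusion_boundary(depth, occlusion_mask, boundary_pixels):
        """Label occlusion-boundary pixels as foreground or background by the sign of
        the inner product between the depth gradient toward the foreground (negative
        depth gradient) and the gradient of the binary occlusion map."""
        gy_occ, gx_occ = np.gradient(occlusion_mask.astype(float))   # points toward the occlusion
        gy_d, gx_d = np.gradient(-depth.astype(float))               # points toward the foreground
        labels = {}
        for (y, x) in boundary_pixels:
            inner = gx_occ[y, x] * gx_d[y, x] + gy_occ[y, x] * gy_d[y, x]
            labels[(y, x)] = 'background' if inner > 0 else 'foreground'
        return labels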

Fig. 10 Uni/bidirectional occlusion inpainting. (a) Unidirectional. (b) Bidirectional.

Fig. 11 Uniform volume prediction before and after warping: black dots represent foreground pixels, and white dots represent pixels on the predicted rear surface of the foreground.

3.3.1 Uniform prediction
Uniform prediction is one of the simplest schemes, where the rear volume of an object is assumed to be flat, occupying the same amount of volume as the front volume. For each foreground pixel $P_i$, the corresponding warped pixel is $\bar{P}_i = P_i + \alpha \cdot D_i$, where $D_i$ represents the depth of the pixel. Then the predicted rear pixels $P_{-i}$ have an identical depth and are located behind the foreground pixels as follows:

$$\bar{P}_{-1} = P_{-1} + \alpha \cdot D_{-1} = P_0 + \alpha \cdot (2D_0 - D_1), \qquad (4)$$

where $P_{-1} = P_0$ and $D_{-1} = D_0 - (D_1 - D_0) = 2D_0 - D_1$ (Fig. 11). Uniform prediction works well with a narrow occlusion region, where only a small portion of the rear volume needs to be predicted. However, this assumption does not always hold, especially when the local volume corresponds to only a small portion of the whole rear volume.

3.3.2 Symmetric prediction
Symmetric prediction is another simple but better scheme, where the rear volume is predicted to be bilaterally symmetric with the front volume. We define

$$P_{-1} = P_1, \quad P_{-2} = P_2, \quad D_{-1} = D_0 - (D_1 - D_0) = 2D_0 - D_1, \quad D_{-2} = D_0 - (D_2 - D_0) = 2D_0 - D_2.$$

Then we can predict the occluded foreground pixels as follows (Fig. 12):

Fig. 9 Occlusion boundary labeling result. (a) Original color image. (b) Boundary labels of the warped original image (red = foreground).


Fig. 12 Symmetric volume prediction before and after warping.

Fig. 13 Bidirectional inpainting.

Fig. 15 Bidirectional occlusion inpainting.

$$\bar{P}_{-1} = P_{-1} + \alpha \cdot D_{-1} = P_1 + \alpha \cdot (2D_0 - D_1), \qquad (5)$$

$$\bar{P}_{-2} = P_{-2} + \alpha \cdot D_{-2} = P_2 + \alpha \cdot (2D_0 - D_2). \qquad (6)$$
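A minimal sketch of the two local rear-volume prediction schemes of Eqs. (4) to (6), written for a 1-D run of foreground pixels next to an occlusion boundary; P[i] and D[i] denote the position and depth of foreground pixel P_i as in Figs. 11 and 12, alpha is the warping scale, and the generalization of Eq. (4) to n rear pixels is an assumption made for illustration:

    def predict_rear_volume(P, D, alpha, n, symmetric=True):
        """Predict warped positions of the occluded rear pixels P_{-1..-n} behind the
        foreground boundary pixel P_0 using uniform or symmetric prediction."""
        predicted = []
        for i in range(1, n + 1):
            D_rear = 2 * D[0] - D[i]                   # D_{-i} = D_0 - (D_i - D_0)
            P_rear = P[i] if symmetric else P[0]       # symmetric: P_{-i} = P_i; uniform: P_{-i} = P_0
            predicted.append(P_rear + alpha * D_rear)  # warped position, Eqs. (4)-(6)
        return predicted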

We empirically find that it gives a more realistic prediction result in the local region than uniform prediction. We use symmetric prediction in our experiments.

3.4 Bidirectional Inpainting
Based on the volume prediction results on the 3-D depth points, bidirectional inpainting is proposed, in which foreground and background inpainting are performed separately so that the predicted 3-D geometry of the image is maintained after the occlusion inpainting step. The estimated rear volume disclosed in a synthesized image is filled from foreground neighbors first. The remaining background portion of the occlusion is then inpainted from background neighbors, as illustrated in Fig. 13. More specifically, we use a modified exemplar-based inpainting.17 In our occlusion inpainting, both color and depth similarities are considered to find the best matching sample. Figure 14 shows that the depth helps avoid choosing an inappropriate sample patch in the exemplar-based inpainting method.
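A minimal sketch of the color-plus-depth patch matching used to pick the best exemplar; the filling order and the copy step of the exemplar-based method17 are omitted, and the weight lam balancing the two terms is an illustrative parameter, not from the paper:

    import numpy as np

    def patch_cost(target_color, target_depth, cand_color, cand_depth, known, lam=0.5):
        """Sum of squared differences over the already-known pixels of the target patch,
        combining color and depth terms; the candidate with the lowest cost is copied
        into the occlusion region."""
        diff_c = target_color.astype(float) - cand_color.astype(float)
        diff_d = target_depth.astype(float) - cand_depth.astype(float)
        color_term = np.sum((diff_c ** 2)[known])
        depth_term = np.sum((diff_d ** 2)[known])
        return color_term + lam * depth_term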

4 Experimental Results
We quantitatively and qualitatively evaluate our framework on the Middlebury data set, which consists of 21 subjects with multiview images. Figure 15 presents samples of our experimental results, showing that our bidirectional inpainting keeps the overall volume of the foreground object, while the previous unidirectional inpainting method1 loses the rear volume of the foreground object. Each subject of the data set has seven different views of color images, with depth images provided for two of them (the second and sixth views). We change the baseline distance between the original and synthesized views. Experimental results of bidirectional occlusion inpainting are summarized in Tables 1 and 2. Table 1 compares the accuracy over the whole region of the synthesized images. Table 2 compares the accuracy of the inpainted region only, to better demonstrate the performance gain of the inpainting method. More importantly, the proposed method provides a more realistic occlusion inpainting result without distorting the foreground object.

Table 1 Synthesized stereo image accuracy (PSNR [dB]).

Baseline   Oh et al.1 + exemplar17   Proposed + exemplar17   Gain
40 mm      34.95                     35.48                   0.51
80 mm      33.06                     33.45                   0.38
120 mm     31.54                     32.13                   0.59
160 mm     30.45                     30.90                   0.45
200 mm     28.87                     29.28                   0.41
Overall    31.77                     32.24                   0.47

Table 2 Occlusion region only accuracy (PSNR [dB]).

Baseline   Oh et al.1 + exemplar17   Proposed + exemplar17   Gain
40 mm      16.70                     17.49                   0.80
80 mm      16.03                     16.55                   0.51
120 mm     15.30                     16.36                   1.06
160 mm     15.03                     15.76                   0.73
200 mm     14.31                     15.09                   0.78
Overall    15.47                     16.25                   0.78
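The PSNR values in Tables 1 and 2 compare each synthesized view against a reference view, either over the whole image or over the occluded pixels only. A minimal masked-PSNR sketch (assuming 8-bit images; the function name is illustrative):

    import numpy as np

    def masked_psnr(synthesized, ground_truth, mask=None, peak=255.0):
        """PSNR in dB over the masked region, or over the whole image if mask is None."""
        err = (synthesized.astype(float) - ground_truth.astype(float)) ** 2
        if mask is not None:
            err = err[mask]
        mse = np.mean(err)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)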

Fig. 14 Color-plus-depth inpainting: the sample patch is selected based on both color and depth similarity, preventing the unrealistic inpainting result produced by color-only inpainting (blue circle).


Fig. 16 Computational stereoscopic zoom results (5×): both the stereo images and the disparity are magnified correctly. From left to right: original stereo, 2-D zoom, computational stereoscopic zoom.

Figure 16 shows the final results of our computational stereoscopic zoom. The first column shows the overlapped original stereo images at a 40-mm baseline distance. Convergence is adjusted on the background region of the test images. The second column shows 5× 2-D zoom results, compared with ours in the third column. In the 2-D zoom results, the disparity of each object is not magnified enough to describe the underlying 3-D information. As a result, the stereo images do not contain much 3-D information. On the other hand, our computational stereoscopic zoom magnifies the disparity of the 3-D objects. Note that we enforce the background of each image to hold zero disparity by computationally adjusting the convergence angle, i.e., shifting one image toward the other. For example, in the third example of the 2-D zoom, the background as well as the white head of the doll presents almost zero disparity, while our method convincingly magnifies the disparity of the doll. Overall, we find that the conventional 2-D zoom magnifies the projected 2-D image, but the depth information represented by the disparity in the stereo images remains unchanged or is distorted. On the other hand, the proposed computational stereoscopic zoom magnifies depth while keeping the relative distance between background and foreground objects in 3-D space.

5 Conclusion
In this paper, we propose a computational stereoscopic zoom based on our novel view synthesis method, adjusting the baseline and convergence angle computationally. The proposed method correctly magnifies the 3-D scene without depth distortion. We also propose bidirectional occlusion inpainting for realistic novel view synthesis. Unlike previous methods, our method works in 3-D space, presenting superior performance to previous inpainting methods. The proposed inpainting method even works in the case where only foreground occlusion occurs, for example, a thin object that is warped and fully overlapped onto the background. The computational stereoscopic zoom results show that the magnified stereo images have correct disparity magnification.

Acknowledgment
The authors thank Dr. Hyunjung Shim for technical discussion and proofreading.

References

1. K.-J. Oh, S. Yea, and Y.-S. Ho, "Hole-filling method using depth based in-painting for view synthesis in free viewpoint television (FTV) and 3-D video," in Picture Coding Symposium (PCS), Nagoya, Japan, pp. 1–4 (May 2009).
2. K. Shimono et al., "Removing the cardboard effect in stereoscopic images using smoothed depth maps," Proc. SPIE 7524, 75241C-1–75241C-8 (2010).
3. C. Fehn, "Depth-image-based rendering (DIBR), compression and transmission for a new approach on 3D-TV," Proc. SPIE 5291, 93–104 (2004).
4. J.-Y. Choi et al., "Real-time view synthesis system with multi-texture structure of GPU," in International Conf. on Consumer Electronics (ICCE), IEEE, Malaysia, pp. 1–4 (Jan 2010).
5. L. Do, S. Zinger, and P. H. N. de With, "Quality improving techniques for free-viewpoint DIBR," Proc. SPIE 7524, 75240I-1–75240I-4 (2010).
6. L. Zhan-wei et al., "Arbitrary view generation based on DIBR," in International Symposium on Intelligent Signal Processing and Communication Systems, IEEE, Xiamen, China, pp. 168–171 (Nov 2007).
7. M. Schmeing and X. Jiang, "Depth image based rendering: a faithful approach for the disocclusion problem," in 3DTV-Conf., IEEE, Tampere, Finland, pp. 1–4 (Jun 2010).
8. F. Yamei et al., "Depth-image based view synthesis for three-dimensional television," in IEEE Conf. on Industrial Electronics and Applications, IEEE, Xian, China, pp. 2428–2431 (May 2009).
9. C. L. Zitnick et al., "High-quality video view interpolation using a layered representation," ACM Trans. Graph. 23(3), 600–608 (2004).
10. K. Muller et al., "Coding and intermediate view synthesis of multiview video plus depth," in IEEE International Conf. on Image Processing, pp. 741–744 (Nov 2009).
11. A. Smolic et al., "Intermediate view interpolation based on multiview video plus depth for advanced 3-D video systems," in IEEE International Conf. on Image Processing, IEEE, San Diego, USA, pp. 2448–2451 (Oct 2008).
12. L. Wang et al., "Stereoscopic inpainting: joint color and depth completion from stereo images," in IEEE Conf. on Computer Vision and Pattern Recognition, IEEE, Anchorage, USA, pp. 1–8 (June 2008).
13. L. Zhang and W. Tam, "Stereoscopic image generation based on depth images for 3D TV," IEEE Trans. Broadcast. 51(2), 191–199 (2005).
14. L. Zhang, W. Tam, and D. Wang, "Stereoscopic image generation based on depth images," in International Conf. on Image Processing, IEEE, Singapore, Vol. 5, pp. 2993–2996 (2004).
15. Y.-C. Fan and T.-C. Chi, "The novel non-hole-filling approach of depth image based rendering," in 3DTV Conf., IEEE, Istanbul, Turkey, pp. 325–328 (May 2008).
16. H. Lim et al., "Bi-layer inpainting for novel view synthesis," in International Conf. on Image Processing, IEEE, Brussels, Belgium (2011).
17. A. Criminisi, P. Perez, and K. Toyama, "Region filling and object removal by exemplar-based image inpainting," IEEE Trans. Image Process. 13(9), 1200–1212 (2004).

Seungkyu Lee received his PhD in computer science and engineering from Penn State University, U.S., in 2009. He was a research engineer at the Korea Broadcasting System Technical Research Institute, where he carried out research on HD image processing, MPEG4-AVC, and the standardization of Terrestrial Digital Mobile Broadcasting. He is currently a principal research scientist at the Advanced Media Lab, Samsung Advanced Institute of Technology. His research interests include color and depth image processing, symmetry-based computer vision, and 3-D modeling and reconstruction.


Hwasup Lim received his PhD in electrical engineering from Penn State University, U.S. He is currently a principal research scientist, Advanced Media Lab., Samsung Advanced Institute of Technology. His research interests include 3-D modeling and reconstruction, color/depth image processing, and depth image based rendering (DIBR).

Chang Yeong Kim received his MS and PhD degrees from the Korea Advanced Institute of Science and Technology in 1987 and 1996, respectively. He has been the head of the Advanced Media Lab at the Samsung Advanced Institute of Technology since 2010. He has been a Samsung Electronics fellow since 2006 and is an IS&T honorary member. His research interests include 3-D image processing, color processing, and medical imaging.

James D. K. Kim received his BS and MS degrees from Yonsei University in 1993 and 1995, respectively. He has been the director of the 3D Mixed Reality Group of the Advanced Media Lab at the Samsung Advanced Institute of Technology since 2007. He is a Samsung Research Master in the field of 3-D graphics and received an honorary doctorate degree in 2010. His research interests include 3-D image processing, graphics, and augmented reality.

