High-Resolution Image Reconstruction from Rotated and Translated Low-Resolution Images with Multisensors

You-Wei Wen,1 Michael K. Ng,2 Wai-Ki Ching3

1 Department of Mathematics, The University of Hong Kong, Hong Kong, and Faculty of Applied Mathematics, Guangdong University of Technology, P. R. China

2 Department of Mathematics, The University of Hong Kong, Hong Kong

3 Department of Mathematics, The University of Hong Kong, Hong Kong

Received 30 January 2004; accepted 10 May 2004

Correspondence to: You-Wei Wen; e-mail: [email protected]

Grant sponsors: Mr. You-Wei Wen's research was supported in part by the Chinese Academy of Sciences Key Project of Knowledge Innovation KZCX1-SW-18 and by Guangdong Natural Science Foundation Grant No. 032475. Professor Ng's research was supported in part by Research Grants Council Grant Nos. 7130/02P and 7046/03P and by HKU CRCG Grant Nos. 10203907, 10204437, and 10203501.

ABSTRACT: We extend the multisensor work by Bose and Boo (1998) and consider perturbations of the displacement errors that are due to both translation and rotation. A warping process is introduced to obtain the ideal low-resolution image, that is, the image located at the exact horizontal and vertical subpixel shifts. In this approach, the problem of high-resolution image reconstruction is turned into a problem of image restoration, and the system becomes spatially invariant, rather than spatially variant as in the original problem. An efficient algorithm is presented. Experimental results show that the proposed methods are quite effective and perform better than the bilinear image interpolation method. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 14, 75–83, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20010

Key words: high resolution; low resolution; reconstruction

I. INTRODUCTION

In this article, we study methods that increase spatial resolution by fusing information from a sequence of images (with partial overlap in successive frames, as in a video sequence). Many applications in image processing require images with high resolution. Because of the high cost of high-precision optics and image sensors, and because of hardware limitations, it is not easy to obtain such images. Therefore, it is important to increase the resolution by image processing technologies. Motivated by the need to improve the resolution of images from Landsat image data, Tsai and Huang (1984) first proposed a formulation to construct a high-resolution image from several down-sampled low-resolution images. Kim et al. (1990) then extended this work to blurred and noisy images by using a weighted least squares formulation. High resolution was modeled as an interpolation problem with nonuniformly sampled data in Irani

and Peleg (1993) and Sauer and Allebach (1987); algorithms such as projection onto convex sets (POCS) and back-projection (BP) methods were also proposed. Recently, Hardie et al. (1997) and Chan et al. (1998) considered conjugate gradient (CG) methods for Tikhonov-regularized super-resolution. In order to speed up the convergence rate, preconditioned conjugate gradient (PCG) methods were proposed (Chan et al., 1998; Lin et al., 2004; Ng and Sze, 2000; Ng et al., 2000). Hardie et al. (1998) considered estimating a high-resolution image from a sequence of undersampled, rotated, and translationally shifted frames. In their study, the warping process was assumed to be performed on the high-resolution image. The literature on high-resolution video image reconstruction can be found in Duhamel and Maitre (1989), Elad and Feuer (1997), Elad and Hel-Or (2001), Lertrattanapanich and Bose (2002), and Park et al. (2002).

The first step in comprehensively analyzing the high-resolution image reconstruction problem is to model the image acquisition scheme. The need for model accuracy is undeniable in the attainment of high resolution, along with the design of an algorithm whose robust implementation will produce the desired quality in the presence of uncertain model parameters. Multiple down-sampled images of a scene are often obtained by using multiple identical image sensors shifted relative to each other by subpixel displacements, or by using one sensor with several captures. Bose and Boo (1998) considered the problem of reconstructing a high-resolution image from multiple undersampled, shifted, degraded frames with subpixel displacement errors. The low-resolution images result from blurring and undersampling operations performed on the high-resolution image, and can be expressed as

\[
f^{(l_1,l_2)} = D^{(l_1,l_2)} H^{(l_1,l_2)}(\delta)\, f + n^{(l_1,l_2)}, \qquad (1)
\]

where f^{(l_1,l_2)} is the observed low-resolution image, f is the desired high-resolution image, n^{(l_1,l_2)} is the additive noise, D^{(l_1,l_2)} is a down-sampling matrix, H^{(l_1,l_2)}(δ) is the blur matrix, and δ is the vector of displacement errors. As perfect subpixel displacements are

practically impossible, each low-resolution image differs from the others in the blur matrix, which is a function of the displacement errors. Therefore the blur functions are spatially variant. The solution for f is constructed by applying the maximum a posteriori (MAP) regularization technique proposed by Bose and Boo (1998) and Ng and Yip (2001). Recently, Chan et al. (2003) looked at the high-resolution image reconstruction problem from the wavelet point of view. Bose et al. (1994), Ching et al. (2003), and Ng et al. (2002) considered the registration errors in the system matrix H^{(l_1,l_2)}(δ) and proposed the total least squares method to minimize such errors.

One disadvantage of the Bose and Boo (1998) model is that the perturbations of displacement are due to pure translation, which makes it difficult to handle the case in which there is rotation. In this article, we extend the multisensor work of Bose and Boo and consider perturbations of displacement that are generated not only by translation but also by rotation. One challenge in including rotation in the model is to perform the integration over a rotated support in order to obtain the pixel value. We notice that the pixel position moves away from the ideal position because of the perturbations of displacement. If the displacement errors are known, the pixel value at the ideal position can be computed by the bilinear interpolation method. Thus, we avoid performing the integration directly over the rotated support. One advantage of obtaining the pixel value at the ideal position is that we can turn the problem of high-resolution image reconstruction into a problem of image restoration. Moreover, the system becomes spatially invariant, rather than spatially variant as in Bose and Boo's (1998) model. Thus we can solve the minimization problem effectively by using discrete cosine transforms or fast Fourier transforms.

This article is organized as follows. In Section II, we give a mathematical formulation of the high-resolution image reconstruction problem. In Section III, a method to reconstruct a high-resolution image is presented. In Section IV, we give some numerical examples to illustrate the effectiveness of the proposed methods.

II. THE MATHEMATICAL MODEL

In this section, we present the mathematical model for the high-resolution image reconstruction problem.

A. The Ideal Model. For ease of presentation, here i (or m_1, l_1) and j (or m_2, l_2), respectively, correspond to vertical and horizontal indices. Assume that f(x, y) denotes the continuous high-resolution image in the focal plane coordinate system (x, y). Let the sampling base interval be P_1 x P_2, so that a low-resolution image of a given scene can be obtained from a sensor with N_1 x N_2 pixels. The aim is to construct a high-resolution image of the same scene by using an array of q_1 x q_2 low-resolution sensors. For simplicity, we consider only the case q_1 = q_2 = q. Then the size of the high-resolution image is M_1 x M_2, where M_1 = qN_1 and M_2 = qN_2. To have sufficient information to resolve the high-resolution image, there must be subpixel displacements between the image frames. In the ideal case, the q x q CCD image sensor arrays are shifted from each other by a value proportional to P_1/q x P_2/q. Thus, for l_1, l_2 = 0, 1, ..., q - 1 with (l_1, l_2) ≠ (0, 0), the horizontal and vertical displacements of the (l_1, l_2)-th sensor array with respect to the (0, 0)-th reference sensor array are modeled as (P_1/q)l_1 and (P_2/q)l_2, respectively. The observed low-resolution image f^{(l_1,l_2)} for the (l_1, l_2)-th sensor is modeled by


Figure 1. Sequence of low-resolution images and construction of the observed high-resolution image in the ideal model.

\[
f^{(l_1,l_2)}_{m_1,m_2} = \frac{1}{P_1 P_2}
\int_{P_1(m_1-1/2)+(P_1/q)l_1}^{P_1(m_1+1/2)+(P_1/q)l_1}
\int_{P_2(m_2-1/2)+(P_2/q)l_2}^{P_2(m_2+1/2)+(P_2/q)l_2}
f(x, y)\, dx\, dy + n^{(l_1,l_2)}_{m_1,m_2}, \qquad (2)
\]

where n^{(l_1,l_2)}_{m_1,m_2} is the additive noise in the (l_1, l_2)-th image frame.

Next we consider the relationship between the low-resolution image and the high-resolution image. As the low-resolution image is a shifted, down-sampled version of the high-resolution image, the idea of interspersing the low-resolution images to form a hypothetical high-resolution image is to assign

\[
f^{(l_1,l_2)}_{m_1,m_2} = f\big(q(m_1 - 1) + l_1,\; q(m_2 - 1) + l_2\big). \qquad (3)
\]
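As a minimal illustration of this interleaving (our own sketch, assuming 0-based array indices and a dictionary of frames keyed by (l_1, l_2); none of these names come from the paper):

```python
import numpy as np

def interleave_frames(frames, q):
    """Place q*q low-resolution frames on the high-resolution grid.

    frames: dict mapping (l1, l2) -> N1-by-N2 array f^{(l1,l2)}.
    Following Eq. (3) with 0-based indices, pixel (m1, m2) of frame
    (l1, l2) is assigned to position (q*m1 + l1, q*m2 + l2).
    """
    N1, N2 = frames[(0, 0)].shape
    g = np.zeros((q * N1, q * N2))
    for (l1, l2), frame in frames.items():
        g[l1::q, l2::q] = frame
    return g
```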

Figure 1 shows how to construct a 4 x 4 observed high-resolution image g from four low-resolution images {f^{(s,t)}}_{s,t=0}^{1}, where each f^{(s,t)} has 2 x 2 pixels. Let f^{(l_1,l_2)}, f, and n^{(l_1,l_2)} denote the vectors obtained by a column-by-column stacking of the (l_1, l_2)-th low-resolution image frame, the original image, and the noise, respectively. The relationship between the low-resolution image, the observed high-resolution image, and the original high-resolution image can be expressed as follows:

\[
f^{(l_1,l_2)} = D^{(l_1,l_2)} H f + n^{(l_1,l_2)}, \qquad (4)
\]

where D^{(l_1,l_2)} is the 2D down-sampling matrix and H is the blur matrix corresponding to the aspect ratio of the low-resolution image to the high-resolution image. Let g be the observed high-resolution image (the blurred high-resolution image); the relationship between the observed high-resolution image and the original high-resolution image can be expressed as follows:

\[
g = H f. \qquad (5)
\]

Thus, the low-resolution images result from blurring and down-sampling operations performed on the high-resolution image, and they are also corrupted by additive noise. The down-sampling matrix D^{(l_1,l_2)}, which generates aliased low-resolution images from the blurred high-resolution image, is a diagonal matrix. The diagonal elements are equal to 1 if the corresponding component of f^{(l_1,l_2)} comes from the (l_1, l_2)-th image sensor, and zero otherwise. Blurring can be caused by the point spread function of the low-resolution imaging device, which is usually modeled as a spatial averaging operator (Bose and Boo, 1998). The blur matrix H may be a Block-Toeplitz-Toeplitz-Block (BTTB) matrix, a Block-Circulant-Circulant-Block (BCCB) matrix, or a Block-Toeplitz-plus-Hankel-Toeplitz-plus-Hankel-Block (BTHTHB) matrix under the assumptions of the zero boundary condition, the periodic boundary condition, and the Neumann boundary condition, respectively (Ng and Bose, 2003).
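The ideal acquisition model of Eqs. (4)-(5) can be sketched in code as follows. This is a minimal illustration of ours, not the authors' implementation: the blur is taken to be a q x q spatial average (consistent with the averaging point spread function mentioned above), and the boundary handling and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ideal_low_res_frames(f_hr, q, noise_std=0.0, rng=None):
    """Simulate the q*q ideal low-resolution frames from a high-resolution image.

    f_hr: M1-by-M2 high-resolution image with M1 = q*N1, M2 = q*N2.
    The blur H is modeled as a q-by-q spatial average; D^{(l1,l2)} keeps
    every q-th pixel starting at offset (l1, l2).
    """
    rng = np.random.default_rng() if rng is None else rng
    g = uniform_filter(f_hr, size=q, mode="reflect")  # g = H f, cf. Eq. (5)
    frames = {}
    for l1 in range(q):
        for l2 in range(q):
            lr = g[l1::q, l2::q].copy()               # D^{(l1,l2)} g
            if noise_std > 0:
                lr += noise_std * rng.standard_normal(lr.shape)
            frames[(l1, l2)] = lr                     # cf. Eq. (4)
    return frames
```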

B. Rotational and Translational Model. In the previous subsection, we considered the ideal case of the observed low-resolution degraded image model and assumed that each low-resolution image has an exact horizontal and vertical shift. In practice, motion occurs during image acquisition, and there can be small perturbations around the ideal subpixel locations (Bose and Boo, 1998). Here the small perturbations are modeled as the sum of a translation and a rotation. In general, this information is not known beforehand, but it can be obtained by estimating the motion between the observed low-resolution image with rotation and translation and the low-resolution image without translation and rotation. In the perturbation case, the horizontal and vertical displacements of the (l_1, l_2)-th sensor array with respect to the (0, 0)-th reference sensor array are modeled as

\[
\delta^{(l_1,l_2,x)} = \frac{P_1}{q}\big(l_1 + \varepsilon^{(l_1,l_2,x)}\big)
\qquad \text{and} \qquad
\delta^{(l_1,l_2,y)} = \frac{P_2}{q}\big(l_2 + \varepsilon^{(l_1,l_2,y)}\big),
\]

respectively, where \varepsilon^{(l_1,l_2,x)} and \varepsilon^{(l_1,l_2,y)} denote the normalized horizontal and vertical displacement errors. The observed low-resolution image is written as

\[
\bar f^{(l_1,l_2)}_{m_1,m_2} = \frac{1}{P_1 P_2}
\int_{P_1(m_1-1/2)+\delta^{(l_1,l_2,x)}}^{P_1(m_1+1/2)+\delta^{(l_1,l_2,x)}}
\int_{P_2(m_2-1/2)+\delta^{(l_1,l_2,y)}}^{P_2(m_2+1/2)+\delta^{(l_1,l_2,y)}}
f(x, y)\, dx\, dy + \bar n^{(l_1,l_2)}_{m_1,m_2}. \qquad (6)
\]

When only the translation is considered, the relationship between the observed low-resolution image and the original high-resolution image can be expressed as

\[
\bar f^{(l_1,l_2)} = D^{(l_1,l_2)} H^{(l_1,l_2)}\big(\delta^{(l_1,l_2,x)}, \delta^{(l_1,l_2,y)}\big)\, f + \bar n^{(l_1,l_2)}, \qquad (7)
\]

where H^{(l_1,l_2)}(δ^{(l_1,l_2,x)}, δ^{(l_1,l_2,y)}) is the blur matrix of the (l_1, l_2)-th image frame. We note that the blur matrix in such high-resolution image reconstruction becomes spatially variant. The whole blur matrix is made up of the blur matrices from each image frame, i.e.,

\[
H(\delta) = \sum_{l_1,l_2} D^{(l_1,l_2)} H^{(l_1,l_2)}\big(\delta^{(l_1,l_2,x)}, \delta^{(l_1,l_2,y)}\big).
\]

One disadvantage is that the blur matrix is spatially variant; H(δ) has the same structure as H^{(l_1,l_2)}(δ^{(l_1,l_2,x)}, δ^{(l_1,l_2,y)}), but with some entries perturbed. It is a near-BTHTHB matrix, but it can no longer be diagonalized by the discrete cosine transform. In the next subsection, we introduce the warping process, which maps the image with perturbed displacements to the perfect subpixel displacement alignment of sensor (l_1, l_2). Another disadvantage of the above model is that rotation is not included. The displacement errors may also depend on the pixel position, i.e., δ^{(l_1,l_2,x)} and δ^{(l_1,l_2,y)} are functions of the pixel position (m_1, m_2). To handle the situation in which the displacement errors include translation and rotation, we extend Bose and Boo's (1998) model. We assume that the small perturbations are represented by a translation and a rotation. Let

\[
\gamma^{(l_1,l_2,x)}_{m_1,m_2} = \frac{\delta^{(l_1,l_2,x)}_{m_1,m_2} - \frac{P_1}{q} l_1}{P_1}
\qquad \text{and} \qquad
\gamma^{(l_1,l_2,y)}_{m_1,m_2} = \frac{\delta^{(l_1,l_2,y)}_{m_1,m_2} - \frac{P_2}{q} l_2}{P_2};
\]

then γ^{(l_1,l_2,x)}_{m_1,m_2} and γ^{(l_1,l_2,y)}_{m_1,m_2} are affine transforms of (m_1, m_2):

\[
\begin{pmatrix} \gamma^{(l_1,l_2,x)}_{m_1,m_2} \\[2pt] \gamma^{(l_1,l_2,y)}_{m_1,m_2} \end{pmatrix}
=
\begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix}
\begin{pmatrix} m_1 \\ m_2 \end{pmatrix}
+
\begin{pmatrix} d^{(l_1,l_2,x)} \\ d^{(l_1,l_2,y)} \end{pmatrix} \qquad (8)
\]

with

\[
a_1 = a_4 = \cos\theta^{(l_1,l_2)} - 1, \qquad a_2 = -\sin\theta^{(l_1,l_2)}, \qquad a_3 = \sin\theta^{(l_1,l_2)},
\]

where d^{(l_1,l_2,x)} and d^{(l_1,l_2,y)} are the translations and θ^{(l_1,l_2)} is the rotation angle. We note that

\[
\bar f^{(l_1,l_2)}_{m_1,m_2} = \frac{1}{P_1 P_2}
\int_{P_1\left(m_1+\gamma^{(l_1,l_2,x)}_{m_1,m_2}-1/2\right)+P_1 l_1/q}^{P_1\left(m_1+\gamma^{(l_1,l_2,x)}_{m_1,m_2}+1/2\right)+P_1 l_1/q}
\int_{P_2\left(m_2+\gamma^{(l_1,l_2,y)}_{m_1,m_2}-1/2\right)+P_2 l_2/q}^{P_2\left(m_2+\gamma^{(l_1,l_2,y)}_{m_1,m_2}+1/2\right)+P_2 l_2/q}
f(x, y)\, dx\, dy + n^{(l_1,l_2)}_{m_1,m_2}
= f^{(l_1,l_2)}_{\,m_1+\gamma^{(l_1,l_2,x)}_{m_1,m_2},\; m_2+\gamma^{(l_1,l_2,y)}_{m_1,m_2}} + n^{(l_1,l_2)}_{m_1,m_2}.
\]

In the above equation, the integration support is not rotated according to the angle θ. However, if the angle is small enough, this integration support can be used as an approximation of the rotated version. We denote by f^{(l_1,l_2)} the low-resolution image without translation and rotation, and by \bar f^{(l_1,l_2)} the low-resolution image with translation and rotation. Let T^{(l_1,l_2)} be the geometric warp performed on the image f^{(l_1,l_2)}; then the relationship between f^{(l_1,l_2)} and \bar f^{(l_1,l_2)} can be expressed as follows:

\[
\bar f^{(l_1,l_2)} = T^{(l_1,l_2)} f^{(l_1,l_2)}. \qquad (9)
\]
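For illustration, the per-pixel displacement-error fields of Eq. (8) can be evaluated directly from the motion parameters; the following sketch (the function name and the 1-based indexing convention are ours) does exactly that:

```python
import numpy as np

def displacement_errors(theta, dx, dy, N1, N2):
    """Evaluate Eq. (8) at every pixel (m1, m2).

    Returns two N1-by-N2 arrays with
      gamma_x = (cos(theta) - 1)*m1 - sin(theta)*m2 + dx
      gamma_y =  sin(theta)*m1 + (cos(theta) - 1)*m2 + dy,
    using 1-based pixel indices m1, m2 as in the text.
    """
    m1, m2 = np.meshgrid(np.arange(1, N1 + 1),
                         np.arange(1, N2 + 1), indexing="ij")
    a = np.cos(theta) - 1.0
    b = np.sin(theta)
    gamma_x = a * m1 - b * m2 + dx
    gamma_y = b * m1 + a * m2 + dy
    return gamma_x, gamma_y
```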

C. The Geometric Warp Matrix. The geometric warp matrix T^{(l_1,l_2)} describes the translation and rotation between the observed low-resolution image and the ideal low-resolution image. In this subsection, we consider the structure of the geometric warp matrix. Let f denote the image vector for an image of size N_1 x N_2, let T denote the geometric warp matrix of size N_1N_2 x N_1N_2, and let g be the aliased version of f under the effect of the geometric warp matrix T, i.e., g = Tf. We know the image f, and every pixel g_{i,j} originates from f_{i,j} with the known horizontal and vertical displacements δ^x_{i,j} and δ^y_{i,j}, respectively, i.e., g_{i,j} = f_{i+δ^x_{i,j}, j+δ^y_{i,j}}. The geometric warp matrix T and the image g are unknown. The pixel value g_{i,j} (i = 1, ..., N_1; j = 1, ..., N_2) originates from f_{i,j}, and its pixel position may move in four possible directions: southeast [δ^x_{i,j}, δ^y_{i,j} ≥ 0; see Fig. 2(a)], northwest [δ^x_{i,j}, δ^y_{i,j} ≤ 0; see Fig. 2(b)], northeast [δ^x_{i,j} ≥ 0, δ^y_{i,j} ≤ 0; see Fig. 2(c)], and southwest [δ^x_{i,j} ≤ 0, δ^y_{i,j} ≥ 0; see Fig. 2(d)]. For simplicity, we only consider the case of Figure 2(a). The value g_{i,j} can be obtained through the bilinear interpolation method, which is


specified in the MPEG standard (1993), from four neighboring pixel values, for i, j being integers and 0 ≤ δ^x_{i,j}, δ^y_{i,j} < 1, in the following formula:

\[
g_{i,j} = (1-\delta^x_{i,j})(1-\delta^y_{i,j})\, f_{i,j}
+ (1-\delta^x_{i,j})\,\delta^y_{i,j}\, f_{i+1,j}
+ \delta^x_{i,j}(1-\delta^y_{i,j})\, f_{i,j+1}
+ \delta^x_{i,j}\,\delta^y_{i,j}\, f_{i+1,j+1}. \qquad (10)
\]

Figure 2. The relationship between the original pixel and the displacement pixel.

We note that the calculation of g_{i,j} in the above equation involves points outside the scene. For example, g_{N_2,1} requires the values f_{N_2+1,1} and f_{N_2+1,2}, which are unknown. To solve this problem, one often resorts to a boundary condition on the image f. Here we visualize the structure of the geometric warp matrix based on different boundary conditions:

\[
T =
\begin{pmatrix}
A_{1,1} & A_{1,2} & & & \\
 & A_{2,2} & A_{2,3} & & \\
 & & \ddots & \ddots & \\
 & & & A_{N_1-1,N_1-1} & A_{N_1-1,N_1} \\
A_{N_1,1} & \cdots & & A_{N_1,N_1-1} & A_{N_1,N_1}
\end{pmatrix}.
\]

Under the zero boundary condition, A_{N_1,1} and A_{N_1,N_1-1} are zero matrices, and A_{j,j} and A_{j,j+1} are given by

\[
A_{j,j} =
\begin{pmatrix}
a_{1,j} & b_{1,j} & & \\
 & a_{2,j} & \ddots & \\
 & & \ddots & b_{N_2-1,j} \\
 & & & a_{N_2,j}
\end{pmatrix}
\qquad \text{and} \qquad
A_{j,j+1} =
\begin{pmatrix}
c_{1,j} & d_{1,j} & & \\
 & c_{2,j} & \ddots & \\
 & & \ddots & d_{N_2-1,j} \\
 & & & c_{N_2,j}
\end{pmatrix},
\]

where

\[
a_{i,j} = (1-\delta^x_{i,j})(1-\delta^y_{i,j}), \qquad
b_{i,j} = (1-\delta^x_{i,j})\,\delta^y_{i,j}, \qquad
c_{i,j} = \delta^x_{i,j}(1-\delta^y_{i,j}), \qquad
d_{i,j} = \delta^x_{i,j}\,\delta^y_{i,j},
\]

for i = 1, 2, ..., N_2 and j = 1, 2, ..., N_1.

Under the periodic boundary condition, A_{N_1,N_1-1} is a zero matrix and A_{N_1,1} = A_{N_1,N_1+1}; A_{j,j} and A_{j,j+1}, respectively, are given by

\[
A_{j,j} =
\begin{pmatrix}
a_{1,j} & b_{1,j} & & \\
 & a_{2,j} & \ddots & \\
 & & \ddots & b_{N_2-1,j} \\
b_{N_2,j} & & & a_{N_2,j}
\end{pmatrix}
\qquad \text{and} \qquad
A_{j,j+1} =
\begin{pmatrix}
c_{1,j} & d_{1,j} & & \\
 & c_{2,j} & \ddots & \\
 & & \ddots & d_{N_2-1,j} \\
d_{N_2,j} & & & c_{N_2,j}
\end{pmatrix}.
\]

Under the Neumann boundary condition, A_{N_1,1} is a zero matrix and A_{N_1,N_1-1} = A_{N_1,N_1+1}; A_{j,j} and A_{j,j+1} are given by

\[
A_{j,j} =
\begin{pmatrix}
a_{1,j} & b_{1,j} & & \\
 & a_{2,j} & \ddots & \\
 & & \ddots & b_{N_2-1,j} \\
 & & & a_{N_2,j}+b_{N_2,j}
\end{pmatrix}
\qquad \text{and} \qquad
A_{j,j+1} =
\begin{pmatrix}
c_{1,j} & d_{1,j} & & \\
 & c_{2,j} & \ddots & \\
 & & \ddots & d_{N_2-1,j} \\
 & & & c_{N_2,j}+d_{N_2,j}
\end{pmatrix}.
\]
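Instead of assembling T explicitly, the warp of Eq. (10) can also be applied matrix-free. The sketch below handles the southeast case of Figure 2(a) under the zero boundary condition; it is an illustrative implementation of ours, not code from the paper:

```python
import numpy as np

def apply_warp(f, delta_x, delta_y):
    """Apply g_{i,j} = a f_{i,j} + b f_{i+1,j} + c f_{i,j+1} + d f_{i+1,j+1}
    (Eq. (10)) for 0 <= delta < 1, with the zero boundary condition
    (values outside the scene treated as zero)."""
    a = (1 - delta_x) * (1 - delta_y)
    b = (1 - delta_x) * delta_y
    c = delta_x * (1 - delta_y)
    d = delta_x * delta_y

    # shifted copies of f with zero padding outside the image
    f_down = np.zeros_like(f);  f_down[:-1, :] = f[1:, :]      # f_{i+1,j}
    f_right = np.zeros_like(f); f_right[:, :-1] = f[:, 1:]     # f_{i,j+1}
    f_diag = np.zeros_like(f);  f_diag[:-1, :-1] = f[1:, 1:]   # f_{i+1,j+1}

    return a * f + b * f_down + c * f_right + d * f_diag
```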

In general, the geometric warp matrix does not have any specific structure. However, if the rotation angle is equal to zero, the displacement errors δ^x_{i,j} and δ^y_{i,j} are constant. In this case, the geometric warp matrix T is a BTTB matrix under the zero boundary condition, a BCCB matrix under the periodic boundary condition, or a BTHTHB matrix under the Neumann boundary condition.

III. HIGH-RESOLUTION IMAGE RECONSTRUCTION ALGORITHMS

In this article we assume that the blur matrix H and the down-sampling matrix D^{(l_1,l_2)} are known, and that the geometric warp matrix T^{(l_1,l_2)} is unknown. The geometric warp matrix T^{(l_1,l_2)} can be obtained by motion estimation algorithms. In the next subsection, we give the method to obtain the translation and rotation parameters and the down-sampling matrix. The reconstruction methods are then presented.

A. Motion Estimation. Accurate estimation of the displacement errors of a motion image relative to a reference image is important in image registration. Many motion estimation schemes, such as block matching algorithms (de Haan and Biezen, 1994; Girod, 1993), pel-based estimation methods (Netravali and Robbins, 1979; Nosratinia and Orchard, 1993), the optical flow approach (Singh, 1991), and frequency-domain methods (Chen et al., 2003; Ziegler, 1990), have been proposed over the years. Here we focus on the optical flow approach. The geometric warping process is modeled as translation and rotation only. The modeling parameters are not known in

advance but can be obtained by estimating the motion between the observed low-resolution image \bar f^{(l_1,l_2)} obtained by (7) and the ideal low-resolution image f^{(l_1,l_2)} obtained by (4). According to the above discussion, the observed low-resolution image for the (l_1, l_2)-th image can be expressed as

\[
\bar f^{(l_1,l_2)}_{m_1,m_2} = f^{(l_1,l_2)}_{\,m_1+\gamma^{(l_1,l_2,x)}_{m_1,m_2},\; m_2+\gamma^{(l_1,l_2,y)}_{m_1,m_2}} + n^{(l_1,l_2)}_{m_1,m_2}. \qquad (11)
\]

Here γ^{(l_1,l_2,x)}_{m_1,m_2} and γ^{(l_1,l_2,y)}_{m_1,m_2} are the horizontal and vertical displacement errors. In Bergen et al. (1992), γ^{(l_1,l_2,x)}_{m_1,m_2} and γ^{(l_1,l_2,y)}_{m_1,m_2} were expressed in the affine form:

\[
\begin{pmatrix} \gamma^{(l_1,l_2,x)}_{m_1,m_2} \\[2pt] \gamma^{(l_1,l_2,y)}_{m_1,m_2} \end{pmatrix}
=
\begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix}
\begin{pmatrix} m_1 \\ m_2 \end{pmatrix}
+
\begin{pmatrix} d^{(l_1,l_2,x)} \\ d^{(l_1,l_2,y)} \end{pmatrix}. \qquad (12)
\]

Here, we take into account the translation and rotation, and we have

\[
a_1 = a_4 = \cos\theta^{(l_1,l_2)} - 1, \qquad a_2 = -\sin\theta^{(l_1,l_2)}, \qquad a_3 = \sin\theta^{(l_1,l_2)}.
\]

Next we have to estimate θ^{(l_1,l_2)}, d^{(l_1,l_2,x)}, and d^{(l_1,l_2,y)}. To simplify the notation, we abbreviate them as θ, d^x, and d^y. For very small θ, we have the approximations sin θ ≈ θ and cos θ ≈ 1. Thus, we obtain the approximate displacement errors:

\[
\gamma^{(l_1,l_2,x)}_{m_1,m_2} \approx -m_2\theta + d^x
\qquad \text{and} \qquad
\gamma^{(l_1,l_2,y)}_{m_1,m_2} \approx m_1\theta + d^y.
\]

For very small values of θ, d^x, and d^y, the right-hand side of (11) can be expanded by the Taylor formula:

\[
f^{(l_1,l_2)}_{\,m_1+\gamma^{(l_1,l_2,x)}_{m_1,m_2},\; m_2+\gamma^{(l_1,l_2,y)}_{m_1,m_2}}
\approx f^{(l_1,l_2)}_{\,m_1 - m_2\theta + d^x,\; m_2 + m_1\theta + d^y}
\approx f^{(l_1,l_2)}_{m_1,m_2}
+ \frac{\Delta f^{(l_1,l_2)}_{m_1,m_2}}{\Delta x}\,(d^x - m_2\theta)
+ \frac{\Delta f^{(l_1,l_2)}_{m_1,m_2}}{\Delta y}\,(d^y + m_1\theta). \qquad (13)
\]

Let

\[
f_t = \bar f^{(l_1,l_2)}_{m_1,m_2} - f^{(l_1,l_2)}_{m_1,m_2}, \qquad
f_x = \frac{\Delta f^{(l_1,l_2)}}{\Delta x}, \qquad
f_y = \frac{\Delta f^{(l_1,l_2)}}{\Delta y}.
\]

The motion parameters can be estimated by solving the following minimization problem:

\[
(\theta, d^x, d^y) = \arg\min_{\theta, d^x, d^y}
\sum_{m_1,m_2} \big( f_x (d^x - m_2\theta) + f_y (d^y + m_1\theta) - f_t \big)^2 .
\]

We denote g = m_1 f_y - m_2 f_x and

\[
r(\theta, d^x, d^y) = \sum_{m_1,m_2} \big( f_x d^x + f_y d^y + g\theta - f_t \big)^2 .
\]

To solve this minimization problem, we set the first derivatives of the function r(θ, d^x, d^y) to zero. Then the problem becomes finding the solution of the following linear equation:

\[
\begin{pmatrix}
\sum_{m_1,m_2} g^2 & \sum_{m_1,m_2} g f_x & \sum_{m_1,m_2} g f_y \\
\sum_{m_1,m_2} g f_x & \sum_{m_1,m_2} f_x^2 & \sum_{m_1,m_2} f_x f_y \\
\sum_{m_1,m_2} g f_y & \sum_{m_1,m_2} f_x f_y & \sum_{m_1,m_2} f_y^2
\end{pmatrix}
\begin{pmatrix} \theta \\ d^x \\ d^y \end{pmatrix}
=
\begin{pmatrix}
\sum_{m_1,m_2} g f_t \\ \sum_{m_1,m_2} f_x f_t \\ \sum_{m_1,m_2} f_y f_t
\end{pmatrix}. \qquad (14)
\]
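A direct way to implement this estimator is to accumulate the sums in (14) over all pixels and solve the resulting 3 x 3 system. The sketch below does this with finite-difference gradients standing in for Δf/Δx and Δf/Δy (an assumption on our part); the function name is ours:

```python
import numpy as np

def estimate_motion(f_ref, f_obs):
    """Estimate (theta, dx, dy) by solving the 3x3 system of Eq. (14).

    f_ref : ideal (reference) low-resolution frame f^{(l1,l2)}
    f_obs : observed frame  \\bar f^{(l1,l2)}
    """
    N1, N2 = f_ref.shape
    m1, m2 = np.meshgrid(np.arange(1, N1 + 1),
                         np.arange(1, N2 + 1), indexing="ij")
    fx, fy = np.gradient(f_ref)   # finite-difference approximations of the gradients
    ft = f_obs - f_ref
    g = m1 * fy - m2 * fx         # g = m1*f_y - m2*f_x

    A = np.array([[np.sum(g * g),  np.sum(g * fx),  np.sum(g * fy)],
                  [np.sum(g * fx), np.sum(fx * fx), np.sum(fx * fy)],
                  [np.sum(g * fy), np.sum(fx * fy), np.sum(fy * fy)]])
    rhs = np.array([np.sum(g * ft), np.sum(fx * ft), np.sum(fy * ft)])
    theta, dx, dy = np.linalg.solve(A, rhs)
    return theta, dx, dy
```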

In practice, we need to determine which low-resolution image is obtained from the (l_1, l_2)-th image frame with respect to the reference image frame. For example, in Figure 3, is the square image (■) obtained from the (0, 1)-th image frame, the (1, 0)-th image frame, or the (1, 1)-th image frame? There are several methods for mapping the low-resolution images to the high-resolution image grid. Our method is to choose a low-resolution image randomly as the reference image frame and then estimate the horizontal and vertical shifts of the other image frames relative to the reference image frame. According to their relative positions, we can decide how to locate the low-resolution images in the high-resolution lattice. For example, in Figure 3, we choose the square image (■) as the reference image frame; the rotation and translation parameters for the circle image (●), diamond image (◆), and triangle image (▲) are (0.0001, -0.450, 0.050), (-0.0001, -0.535, 0.550), and (0, 0.005, 0.525), respectively. Based on these estimates, we reset the circle image (●) as the reference image frame, and the square image (■), diamond image (◆), and triangle image (▲) become the (0, 1)-th, the (1, 0)-th, and the (1, 1)-th image frames, respectively.

Figure 3. Map the low-resolution images to the high-resolution image.

B. Image Reconstruction for the Ideal Model. The high-resolution image reconstruction problem is to find a solution f satisfying Eq. (4) for l_1 = 0, 1, ..., q - 1 and l_2 = 0, 1, ..., q - 1. This can be done by constructing a least squares problem with regularization:

\[
f = \arg\min_f \sum_{l_1,l_2} \| D^{(l_1,l_2)} H f - f^{(l_1,l_2)} \|_2^2 + \alpha \| L f \|_2^2 , \qquad (15)
\]

where L is the regularization operator (the first-order finite difference operator) and α is the regularization parameter. The solution of (15) is expected to keep \sum_{l_1,l_2} \| D^{(l_1,l_2)} H f - f^{(l_1,l_2)} \|_2^2 small and is stabilized through the penalty term α\|Lf\|_2^2. The minimization problem (15) is equivalent to the linear system:

\[
\Big( \sum_{l_1,l_2} H^t \big(D^{(l_1,l_2)}\big)^t D^{(l_1,l_2)} H + \alpha L^t L \Big) f
= \sum_{l_1,l_2} H^t \big(D^{(l_1,l_2)}\big)^t f^{(l_1,l_2)}. \qquad (16)
\]

Equation (16) can be expressed in a simpler form:

\[
\Big[ H^t \Big( \sum_{l_1,l_2} \big(D^{(l_1,l_2)}\big)^t D^{(l_1,l_2)} \Big) H + \alpha L^t L \Big] f
= H^t \sum_{l_1,l_2} \big(D^{(l_1,l_2)}\big)^t f^{(l_1,l_2)}. \qquad (17)
\]

It is easy to note that if we have sufficient shifted low-resolution images, then the matrix \sum_{l_1,l_2} (D^{(l_1,l_2)})^t D^{(l_1,l_2)} is an identity matrix, i.e.,

\[
\sum_{l_1,l_2} \big(D^{(l_1,l_2)}\big)^t D^{(l_1,l_2)} = I.
\]

Therefore, the high-resolution image reconstruction problem is equivalent to the image restoration problem. When the Neumann boundary condition is applied to the images and the regularization operator, the matrices H^tH and L^tL have a BTHTHB structure and can always be diagonalized by the discrete cosine transform matrix (see, e.g., Ng and Bose, 2003). It follows that the desired high-resolution image can be obtained very efficiently.

Figure 5. The original image (a), the bilinear interpolation image (b), the cubic interpolation image (c), Model I (d), and Model II (e), respectively.

C. Image Reconstruction for the Rotation and Translation Model. As translation and rotation occur during the image acquisition, the ideal low-resolution image f^{(l_1,l_2)} is not available; thus reconstruction formula (17) cannot be applied. If the ideal low-resolution image, which sits on the high-resolution image grid without displacement, were available, then the high-resolution image could be obtained by solving (17). The observed low-resolution image is the aliased version of the ideal low-resolution image with translation and rotation, and the relationship between the observed low-resolution images and the ideal low-resolution images is given by \bar f^{(l_1,l_2)} = T^{(l_1,l_2)} f^{(l_1,l_2)}. Therefore, the ideal low-resolution images can be obtained by

\[
f^{(l_1,l_2)} = \big(T^{(l_1,l_2)}\big)^{-1} \bar f^{(l_1,l_2)}. \qquad (18)
\]

However, it is not necessary to compute the inverse of T^{(l_1,l_2)} explicitly. In our current implementation, there is no need to compute the geometric warp matrix T^{(l_1,l_2)} at all. In fact, the pixel position (m_1, m_2) in f^{(l_1,l_2)} undergoes a horizontal displacement of γ^{(l_1,l_2,x)}_{m_1,m_2} and a vertical displacement of γ^{(l_1,l_2,y)}_{m_1,m_2}, respectively, to form the pixel position (m_1, m_2) in \bar f^{(l_1,l_2)}. When the pixel position (m_1, m_2) in \bar f^{(l_1,l_2)} is shifted by the horizontal displacement -γ^{(l_1,l_2,x)}_{m_1,m_2} and the vertical displacement -γ^{(l_1,l_2,y)}_{m_1,m_2}, respectively, it corresponds to the pixel position (m_1, m_2) in f^{(l_1,l_2)}; i.e., from

\[
\bar f^{(l_1,l_2)}_{m_1,m_2} = f^{(l_1,l_2)}_{\,m_1+\gamma^{(l_1,l_2,x)}_{m_1,m_2},\; m_2+\gamma^{(l_1,l_2,y)}_{m_1,m_2}}
\]

we obtain

\[
f^{(l_1,l_2)}_{m_1,m_2} = \bar f^{(l_1,l_2)}_{\,m_1-\gamma^{(l_1,l_2,x)}_{m_1,m_2},\; m_2-\gamma^{(l_1,l_2,y)}_{m_1,m_2}}.
\]

Thus, the perfect subpixel displacement image f^{(l_1,l_2)} (which has no displacement error) can be restored from the actual shifted and rotated image \bar f^{(l_1,l_2)} (the image obtained by taking translation and rotation into account) by using the interpolation scheme. Therefore, the problem of reconstructing the high-resolution image is transformed into the problem of finding the solution of the following linear system:

\[
\big(H^t H + \alpha L^t L\big) f = H^t \sum_{l_1,l_2} \big(D^{(l_1,l_2)}\big)^t f^{(l_1,l_2)}.
\]
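In code, the resampling step above (shifting by -γ^x, -γ^y with bilinear interpolation) can be sketched with SciPy's map_coordinates; the boundary mode and the names below are our assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def restore_ideal_frame(f_obs, gamma_x, gamma_y):
    """Recover f^{(l1,l2)}_{m1,m2} = \\bar f^{(l1,l2)}_{m1-gamma_x, m2-gamma_y}
    by bilinear interpolation (order=1)."""
    N1, N2 = f_obs.shape
    m1, m2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
    coords = np.stack([m1 - gamma_x, m2 - gamma_y])
    return map_coordinates(f_obs, coords, order=1, mode="nearest")
```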

We note that H and L are BTHTHB matrices; thus the left-hand side of the above formula can be diagonalized by the discrete cosine transform. The algorithm reads:

Algorithm

1. Interpolate the reference image frame to the high-resolution grid using the bilinear interpolation method and obtain the initial high-resolution image f_0^H.

2. For l_1, l_2 = 1, 2, ..., q - 1, compute the approximate low-resolution images

\[
f^{(l_1,l_2)} = D^{(l_1,l_2)} H f_0^H, \qquad
f_t = \bar f^{(l_1,l_2)} - f^{(l_1,l_2)}, \qquad
g_x = \frac{\Delta f^{(l_1,l_2)}}{\Delta x}, \qquad
g_y = \frac{\Delta f^{(l_1,l_2)}}{\Delta y},
\]

and estimate the motion parameters (θ, d^x, d^y) using (14).

3. Compute the displacement errors γ^{(l_1,l_2,x)}_{m_1,m_2} and γ^{(l_1,l_2,y)}_{m_1,m_2} using

\[
\gamma^{(l_1,l_2,x)}_{m_1,m_2} = m_1(\cos\theta - 1) - m_2\sin\theta + d^x, \qquad
\gamma^{(l_1,l_2,y)}_{m_1,m_2} = m_1\sin\theta + m_2(\cos\theta - 1) + d^y.
\]

4. Reconstruct the ideal low-resolution image f^{(l_1,l_2)} from \bar f^{(l_1,l_2)} using

\[
f^{(l_1,l_2)}_{m_1,m_2} = \bar f^{(l_1,l_2)}_{\,m_1-\gamma^{(l_1,l_2,x)}_{m_1,m_2},\; m_2-\gamma^{(l_1,l_2,y)}_{m_1,m_2}}.
\]

5. Solve the linear system

\[
\big(H^t H + \alpha L^t L\big) f = H^t \sum_{l_1,l_2} \big(D^{(l_1,l_2)}\big)^t f^{(l_1,l_2)}
\]

by using the discrete cosine transforms.
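Step 5 exploits the fact that, under the Neumann boundary condition, the operator H^tH + αL^tL is diagonalized by the two-dimensional discrete cosine transform, so its eigenvalues can be read off from the transform of its action on a unit impulse. The following is a minimal matrix-free sketch of ours under that assumption (apply_A is any user-supplied, DCT-diagonalizable operator):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_solve(apply_A, rhs):
    """Solve A f = rhs when A is diagonalized by the 2D DCT (Neumann case).

    apply_A : function mapping an M1-by-M2 array to A applied to it
              (here A stands for H^t H + alpha L^t L)
    rhs     : M1-by-M2 right-hand side
    """
    e = np.zeros_like(rhs)
    e[0, 0] = 1.0                                    # unit impulse
    denom = dctn(e, type=2, norm="ortho")
    eigvals = dctn(apply_A(e), type=2, norm="ortho") / denom
    f_hat = dctn(rhs, type=2, norm="ortho") / eigvals
    return idctn(f_hat, type=2, norm="ortho")
```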

IV. EXPERIMENTAL RESULTS

In this section, we implement the proposed method to reconstruct high-resolution images. In the tests, the Neumann boundary condition is applied and the regularization matrix is defined as

\[
R = I_{M_1} \otimes
\begin{pmatrix}
1 & -1 & & & \\
-1 & 2 & -1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & -1 & 2 & -1 \\
 & & & -1 & 1
\end{pmatrix}_{M_2}
+
\begin{pmatrix}
1 & -1 & & & \\
-1 & 2 & -1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & -1 & 2 & -1 \\
 & & & -1 & 1
\end{pmatrix}_{M_1}
\otimes I_{M_2}.
\]

The peak signal-to-noise ratio (PSNR) and the relative error (RE) are used to evaluate the proposed methods (Lagendijk and Biemond, 1991). They are defined, respectively, by

\[
\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{M_1 M_2}\|f - \hat f\|_2^2} \qquad (19)
\]

and

\[
\mathrm{RE} = \frac{\|f - \hat f\|_2}{\|f\|_2}, \qquad (20)
\]

where f and \hat f denote the original and estimated images, respectively, and M_1 x M_2 is the size of the high-resolution images.
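For reference, the two quality measures can be computed directly from their definitions (a minimal sketch of ours, assuming 8-bit images for the 255 peak value):

```python
import numpy as np

def psnr(f, f_hat):
    """Peak signal-to-noise ratio as in Eq. (19), for 8-bit images."""
    mse = np.mean((f - f_hat) ** 2)   # (1/(M1*M2)) * ||f - f_hat||^2
    return 10.0 * np.log10(255.0 ** 2 / mse)

def relative_error(f, f_hat):
    """Relative error as in Eq. (20)."""
    return np.linalg.norm(f - f_hat) / np.linalg.norm(f)
```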

Four 64 x 64 low-resolution images, shown in Figure 4, were generated from the 128 x 128 high-resolution image in Figure 5(a). Gaussian white noise with a signal-to-noise ratio (SNR) of 30 dB is added to the low-resolution images. In the tests, the horizontal and vertical perturbation displacements are chosen randomly between -0.1 and 0.1, and the rotation angle is chosen randomly in radians between -0.02 and 0.02. The motion parameters are estimated by using (13). There are small perturbations around the ideal subpixel locations for the four images. We choose the first image as the reference image frame; the motion parameter of this image is (0, 0, 0), and the parameters of the other images are estimated relative to this image. The bilinear interpolation and the cubic interpolation are shown in Figures 5(b) and (c), respectively. Figures 5(d) and (e) are reconstructed by using the pure translation model (Model I) and the translation and rotation model (Model II), respectively. It is clear that our approaches perform considerably better than the bilinear interpolation and the cubic interpolation. In particular, we note that the numbers on the clock are vague in the interpolation images but legible in Figures 5(d) and (e).

Figure 4. The observed low-resolution images with true motion parameters of (0, 0, 0), (0.0001, 0.0300, -0.0400), (-0.0150, -0.0500, 0.0500), and (-0.0030, -0.0200, 0.0300), respectively.

Table I. Comparison among the bilinear method, cubic method, pure translation model (Model I), and translation and rotation model (Model II) when the actual displacement errors include both translation and rotation.

SNR (dB) | Bilinear PSNR | Bilinear RE | Cubic PSNR | Cubic RE | Model I PSNR | Model I RE | Model II PSNR | Model II RE
30 | 25.36 | 0.0746 | 26.18 | 0.0679 | 26.82 | 0.0631 | 27.30 | 0.0597
40 | 25.42 | 0.0741 | 26.26 | 0.0673 | 26.98 | 0.0619 | 27.48 | 0.0584
50 | 25.43 | 0.0740 | 26.27 | 0.0672 | 27.00 | 0.0618 | 27.50 | 0.0583

Table II. The true and the estimated motion parameters.

Frame | True | Estimated for Model I | Estimated for Model II
2 | (0.0001, 0.0300, -0.0400) | (0.0000, 0.0344, -0.0395) | (0.0001, 0.0358, -0.0425)
3 | (-0.0150, -0.0500, 0.0500) | (0.0000, 0.0344, -0.0395) | (-0.0150, -0.0672, 0.0655)
4 | (-0.0030, -0.0200, 0.0300) | (0.0000, -0.1130, 0.1209) | (-0.0028, -0.0265, 0.0434)

Figure 6. The low-resolution images of frames 6, 7, 8, and 9.


The PSNR and RE values are listed in Table I. Note that the PSNR values improve by about 0.5 dB when we consider displacement errors due to both translation and rotation. Table II presents the true and the estimated motion parameters. It is interesting to note that both models can estimate the motion parameters approximately when the rotation angle is very small. If the rotation angle becomes large, then the estimated motion parameters are not accurate when we only consider the translation displacement errors.

An image sequence of 12 low-resolution images is used in another experiment. The low-resolution images are generated by using (4) with l_1 = l_2 = 0. Additional noise is added, resulting in a blurred SNR of 30 dB for each frame. Figure 6 shows the low-resolution images of frames 6, 7, 8, and 9. We use these low-resolution images to reconstruct the high-resolution image of frame 8. The true motion parameters are unknown. The PSNRs are 24.46 dB, 24.74 dB, 23.29 dB, and 25.40 dB for bilinear interpolation, cubic interpolation, Model I, and Model II, respectively. The original image and the reconstructed images are presented in Figure 7. We note that considering only the displacement errors generated by translation is not enough to reconstruct the high-resolution image: the result shows that the PSNR value for Model I is less than that for the interpolation methods. If we consider both translation and rotation, the PSNR improves by 0.94 dB. The PSNRs for the other frames are presented in Figure 8.

Figure 7. Reconstructed high-resolution image of Frame 8. The original image (a), the bilinear interpolation image (b), the cubic interpolation image (c), Model I (d), and Model II (e).

Figure 8. Frame no. versus PSNR.

REFERENCES

J. Bergen, P. Anandan, K. Hanna, and R. Hingorani, Hierarchical model based motion estimation, Proc Second European Conf Computer Vision, Springer-Verlag, 1992, pp. 237–252.

N. Bose and K. Boo, High resolution image reconstruction with multisensors, Int J Imaging Syst Technol 9 (1998), 294–304.

N. Bose, H. Kim, and B. Zhou, Performance analysis of the TLS algorithm for image reconstruction from a sequence of undersampled noisy and blurred frames, Int Conf Image Process (94), Austin, TX, November, 1994, pp. 571–575.

CCITT Recommendation MPEG-1, Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s, ISO/IEC 11172, Geneva, Switzerland, 1993.

R. Chan, T. Chan, M. Ng, W. Tang, and C. Wang, Preconditioned iterative methods for high-resolution image reconstruction with multisensors, Proc SPIE, Symp Adv Signal Process: Algorithms, Architectures and Implementations, Vol. 3461, San Diego, CA, July, 1998, pp. 348–357.

R. Chan, T. Chan, L. Shen, and Z. Shen, Wavelet algorithms for high resolution image reconstruction, SIAM J Sci Comput 4(24) (2003), 1408–1432.

J. Chen, U. Koc, and K. Liu, Design of digital video coding systems: A complete compressed domain approach, Marcel Dekker, New York, 2002.

W. Ching, M. Ng, K. Sze, and A. Yau, Superresolution image reconstruction from blurred observations by multisensors, Int J Imaging Syst Technol 13(3) (2003), 153–160.

G. de Haan and W. Biezen, Sub-pixel motion estimation with 3-D recursive search block-matching, Signal Process: Image Commun 6(3) (1994), 229–239.

W. Duhamel and H. Maitre, Multi-channel high resolution blind image restoration, Proc IEEE ICASSP 99, AZ, March, 1999, pp. 3229–3232.

M. Elad and A. Feuer, Restoration of single super-resolution image from several blurred, noisy and down-sampled measured images, IEEE Trans Image Process 6(12) (1997), 1646–1658.


M. Elad and Y. Hel-Or, A fast super-resolution reconstruction algorithm for pure translational motion and common space invariant blur, IEEE Trans Image Process 10(8) (2001), 1187–1193.

B. Girod, Motion-compensating prediction with fractional accuracy, IEEE Trans Commun 41(4) (1993), 604 – 612. R. Hardie, K. Barnard, and E. Armstrong, Joint MAP registration and high-resolution image estimation using a sequence of undersampled images, IEEE Trans Image Process 6(12) (1997), 1621–1633. R. Hardie, K. Barnard, E. Armstrong, and E. Watson, High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system, Opt Eng 37(1) (1998), 247–260. M. Irani and S. Peleg, Improving resolution by image registration, CVGIP: Graph Models, Image Process 53 (1993), 324 –335. S. Kim, N. Bose, and H. Valenzuela, Recursive reconstruction of high resolution image from noisy undersampled multiframes, IEEE Trans Acoust Speech, Signal Process 38 (1990), 1013–1027.

M. Ng, J. Koo, and N. Bose, Constrained total least-squares computations for high-resolution image reconstruction with multisensors, Int J Imaging Syst Technol 12(1) (2002), 35– 42. M. Ng and K. Sze, Preconditioned iterative methods for superresolution image reconstruction with multisensors. In Luk R, Ed. Proc SPIE Symp Adv Signal Process: Algorithms, Architectures and Implementations, Vol. 4116, pp. 396 – 405. M. Ng and A. Yip, A fast MAP algorithm for high-resolution image reconstruction with multisensors, Multidim Syst Signal Process 12 (2001), 143– 164. N. Nguyen, P. Milanfar, and G. Golub, A computational efficient superresolution image reconstruction algorithm, IEEE Trans Image Process 10 (2001), 573–583.

R. Lagendijk and J. Biemond, Iterative identification and restoration of images, Kluwer, New York, 1991.

A. Nosratinia, and M. Orchard, Discrete formulation of Pel-recursive motion compensation with recursive least squares updates, IEEE Int Conf Acoustics, Speech, Signal Process, Minneapolis, MN, 1993, pp. 229 –232.

S. Lertrattanapanich and N. Bose, High resolution image formation from low resolution frames using Delaunay triangulation, IEEE Trans Image Process 11(12) (2002), 1427–1441.

M. Park, E. Lee, J. Park, M. Kang, and J. Kim, DCT-based high-resolution image reconstruction considering the inaccurate sub-pixel motion information, Opt Eng 41(2) (2002), 370 –380.

F. Lin, W. Ching, and M. Ng, Preconditioning regularized least squares problems arising from high-resolution image reconstruction from low-resolution frames, Linear Algebra and its Applications, In Press, Corrected Proof, available online 25 March 2004. A. Netravali and J. Robbins, Motion compensated television coding—Part I, Bell Syst Tech J 58 (1979), 631– 670. M. Ng and N. Bose, Mathematical analysis of super-resolution methodology, IEEE Signal Process Mag 20(3) (2003), 62–74. M. Ng, R. Chan, and A. Yip, Cosine transform preconditioners for high resolution image reconstruction, Lin Alg Appls 316 (2000), 89 –104.

K. Sauer and J. Allebach, Iterative reconstruction of band-limited images from non-uniformly spaced samples, IEEE Trans Circuits Syst 34 (1987), 1497–1505. A. Singh, Optic flow computation—A unified perspective, IEEE Computer Society Press, Los Alamitos, CA, 1991. R. Tsai and T. Huang, Multiframe image resolution and registration, Adv Comput Vision Image Process 1 (1984), 317–339. M. Ziegler, Hierarchical motion estimation using the phase correlation method in 140Mbit/s HDTV-coding, Signal Processing of HDTV II, Proceedings of the Third International Workshop on HDTV, 31 August–1 September, 1989, edited by L. Chiariglione, Turin, Italy, pp. 131–137.
