Iris Recognition Algorithms Based on Texture Analysis

Richard Yew Fatt Ng
Computer Vision and Intelligent Systems (CVIS) Group
Universiti Tunku Abdul Rahman, Malaysia
[email protected]

Yong Haur Tay
Computer Vision and Intelligent Systems (CVIS) Group
Universiti Tunku Abdul Rahman, Malaysia
[email protected]

Kai Ming Mok
Computer Vision and Intelligent Systems (CVIS) Group
Universiti Tunku Abdul Rahman, Malaysia
[email protected]

Abstract

Iris recognition has become a popular research topic in recent years due to its reliability and nearly perfect recognition rates. An iris recognition system has three main stages: image preprocessing, feature extraction and template matching. An innovative method is proposed to extract iris features based on texture analysis. The iris texture is analyzed to capture its discriminating frequency information. Specific filters with different center frequencies are applied to three different zones to extract the iris texture, and different weightings are given to each zone depending on its contribution to the recognition. The encoded binary templates are compact in size and avoid exposing the individual iris images, which makes them suitable for implementing iris recognition devices using a DSP (Digital Signal Processor). The proposed method was evaluated on the CASIA iris image database version 1.0 [1]. Experimental results show that the proposed approach achieves a high accuracy of 98.62%.

1. Introduction

Biometric identification is an emerging technology that has gained increasing attention in recent years. The iris has distinct phase information which spans about 249 degrees of freedom [2,3]. This advantage makes iris recognition one of the most accurate and reliable biometric identification methods. The three main stages of an iris recognition system are image preprocessing, feature extraction and template matching. The iris image needs to be preprocessed to obtain the useful iris region.



Image preprocessing is divided into three steps: iris localization, iris normalization and image enhancement. Iris localization detects the inner and outer boundaries of the iris; eyelids and eyelashes that may cover the iris region are detected and removed. Iris normalization converts the iris image from Cartesian coordinates to polar coordinates. The iris image has low contrast and non-uniform illumination caused by the position of the light source; these factors are compensated by applying local histogram equalization.

This paper proposes an innovative method for the feature extraction and template matching stages. The iris region is divided into three zones according to the characteristics of the iris texture. The texture in each zone is analyzed in terms of its discriminating frequency information, and Log Gabor filters with different center frequencies are chosen accordingly to extract the most significant texture features. Different weightings are selected for each zone based on its contribution to the recognition. The encoded binary templates are compact in size and prevent the visibility of individual iris images. The templates can be stored and processed effectively using DSP technology.

The iris segmentation method is presented in the next section. Section 3 discusses the iris normalization and image enhancement algorithms. Sections 4 and 5 describe the proposed feature extraction and template matching stages. Experimental results and discussions are presented in Section 6, and the conclusion is drawn in the last section.

2. Iris segmentation

This section discusses the iris segmentation method. It includes iris inner and outer boundary localization, upper and lower eyelid detection, and eyelash, reflection and pupil noise removal algorithms.

2.1. Iris inner boundary localization

Since the pupil is a dark circular region, it is easy to detect inside an eye image. First, the pupil is detected using a thresholding operation: an appropriate threshold is selected to generate a binary image that contains the pupil only. A morphological operator is applied to the binary image to remove the reflection inside the pupil region and other dark spots caused by eyelashes. Since the inner boundary of the iris can be approximately modeled as a circle, the circular Hough transform is used to localize it [4]. An edge detector is applied to the binary image to generate the edge map, after a Gaussian filter has smoothed the image to select the proper scale of edge analysis. The center coordinates and radius of the circle with the maximum number of edge points are taken as the pupil center and the iris inner boundary, respectively.
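As an illustration of this step, the sketch below (assuming an OpenCV/NumPy environment) thresholds the eye image, cleans the binary map with morphological operators and runs a circular Hough transform; the threshold value, kernel size and radius range are illustrative assumptions rather than values reported in the paper.

```python
# Sketch of pupil (inner boundary) localization: threshold, morphological
# clean-up, circular Hough transform. Threshold, kernel size and radius range
# are illustrative assumptions, not values from the paper.
import cv2
import numpy as np

def locate_pupil(eye_gray, thresh_val=70, min_radius=20, max_radius=80):
    # The pupil is the darkest roughly circular region in the eye image.
    _, binary = cv2.threshold(eye_gray, thresh_val, 255, cv2.THRESH_BINARY_INV)

    # Morphological closing/opening removes the specular reflection inside the
    # pupil and small dark spots caused by eyelashes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    # Smooth, then run the circular Hough transform; the strongest circle is
    # taken as the pupil center and iris inner boundary.
    blurred = cv2.GaussianBlur(binary, (5, 5), 0)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=15,
                               minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return None
    xc, yc, r = np.round(circles[0, 0]).astype(int)
    return int(xc), int(yc), int(r)
```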

2.2. Iris outer boundary localization

To locate the iris outer boundary, the proposed method selects two search regions that include the outer iris boundary, with the pupil center taken as the origin. Each search region is a sector extending radially from the pupil boundary to a maximum radius. The intensities along each radius in the two search regions are summed. The right and left iris boundaries are located where the difference between the summed intensities of two outer radii and two inner radii is maximum.

Figure 1. (a) Right and left search regions of the iris image. (b) Iris inner and outer boundary localization.
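The outer-boundary search described above can be sketched as follows for one sector; the sector width, radial step and starting margin are assumptions for illustration, and the same routine can be run on a mirrored sector (angles around 180 degrees) for the left boundary.

```python
# Sketch of the outer boundary search in one (right-hand) sector: intensities
# are summed along circular arcs of increasing radius, and the radius with the
# largest jump between two outer and two inner arc sums is returned.
import numpy as np

def outer_boundary_radius(eye_gray, xc, yc, r_pupil, r_max,
                          angles=np.deg2rad(np.arange(-30, 31))):
    h, w = eye_gray.shape
    arc_sums = []
    for r in range(r_pupil + 5, r_max):
        xs = np.clip((xc + r * np.cos(angles)).astype(int), 0, w - 1)
        ys = np.clip((yc + r * np.sin(angles)).astype(int), 0, h - 1)
        arc_sums.append(float(eye_gray[ys, xs].sum()))
    arc_sums = np.array(arc_sums)

    # Difference between the sums over two outer arcs and two inner arcs.
    diff = (arc_sums[2:-1] + arc_sums[3:]) - (arc_sums[:-3] + arc_sums[1:-2])
    return r_pupil + 5 + int(np.argmax(diff)) + 2
```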

2.3. Upper and lower eyelid detection

Two search regions are selected to detect the upper and lower eyelids. The pupil center and the iris inner and outer boundaries are used as references to select the two search regions.

Figure 2. (a) Upper and lower search regions of the iris image. (b) Upper and lower eyelid detection.

Sobel edge detection is applied to the search regions to detect the eyelids. To reduce false edges caused by eyelashes, the Sobel kernel is tuned to the horizontal direction. After the edge detection step, the edge image is generated and the eyelids are detected using the linear Hough transform.
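A possible realization of this eyelid detection step, assuming OpenCV is available; the edge threshold and Hough accumulator threshold are illustrative values, not taken from the paper.

```python
# Sketch of eyelid detection in one search region: a Sobel filter tuned to
# horizontal edges suppresses the mostly vertical eyelash edges, and a linear
# Hough transform fits the eyelid boundary. Both thresholds are illustrative.
import cv2
import numpy as np

def detect_eyelid_line(search_region_gray, edge_thresh=60, hough_thresh=40):
    # The dy derivative emphasises horizontal edges such as the eyelid contour.
    sobel_y = cv2.Sobel(search_region_gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = (np.abs(sobel_y) > edge_thresh).astype(np.uint8) * 255

    # Linear Hough transform; the strongest line is taken as the eyelid.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, hough_thresh)
    if lines is None:
        return None
    rho, theta = lines[0][0]
    return float(rho), float(theta)  # line: x*cos(theta) + y*sin(theta) = rho
```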

2.4. Eyelash, reflection and pupil noise removal

The eyelashes and pupil noise are observed to have low intensity values, so a simple thresholding technique segments them accurately. Reflection regions are characterized by high intensity values close to 255; a high threshold value separates the reflection noise.
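A minimal sketch of the corresponding noise-mask generation; the two threshold values are assumptions for illustration.

```python
# Sketch of the noise mask: very dark pixels (eyelashes, pupil remnants) and
# near-saturated pixels (specular reflections) are marked invalid.
import numpy as np

def noise_mask(normalized_iris, low_thresh=50, high_thresh=240):
    dark = normalized_iris < low_thresh      # eyelashes and pupil noise
    bright = normalized_iris > high_thresh   # reflection noise
    return ~(dark | bright)                  # True where the pixel is usable
```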

Figure 3. Normalized iris image with pupil, eyelash and reflection noise.

3. Normalization and enhancement



The iris may be captured at different sizes depending on the imaging distance, and the radial size of the pupil changes with illumination. The iris region therefore needs to be normalized to compensate for these variations. Figure 3 shows the iris image after normalization. Normalization remaps each pixel in the localized iris region from Cartesian coordinates to polar coordinates, and the non-concentric polar representation is mapped to a rectangular block of fixed size.

Figure 4. Normalization process.

$r \in [R_p, R_n(\theta)], \quad \theta \in [0, 2\pi]$

$x_i = x_c + r\cos(\theta) \quad (1)$

$y_i = y_c + r\sin(\theta) \quad (2)$

where $(x_i, y_i)$ denotes the Cartesian coordinates of a point inside the iris region corresponding to the polar coordinates $(r, \theta)$, and $(x_c, y_c)$ and $R_p$ are the center coordinates and radius of the pupil, respectively. $R_n(\theta)$ is the distance from the pupil center to the iris outer boundary, which is a function of $\theta$ and can be calculated using Equations (3) and (4).

Figure 5. Geometry representation for iris normalization. $(x_d, y_d)$ and $R_s$ denote the center coordinates and radius of the iris, respectively.



$d = \sqrt{(x_d - x_c)^2 + (y_d - y_c)^2} \quad (3)$

$R_n(\theta) = \sqrt{R_s^2 - (d\sin\theta)^2} + d\cos\theta \quad (4)$
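A small sketch of the rubber-sheet normalization defined by Equations (1)-(4); the 64x512 output resolution and the handling of the angle origin in Equation (4) are simplifying assumptions, not values from the paper.

```python
# Sketch of the rubber-sheet normalization of Equations (1)-(4): each pixel of
# the localized iris is remapped onto a fixed-size polar grid.
import numpy as np

def normalize_iris(eye_gray, xc, yc, r_pupil, xd, yd, r_iris,
                   radial_res=64, angular_res=512):
    h, w = eye_gray.shape
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)

    # Eq. (3): offset between pupil center (xc, yc) and iris center (xd, yd).
    d = np.hypot(xd - xc, yd - yc)
    # Eq. (4): distance from the pupil center to the outer boundary at theta.
    rn = (np.sqrt(np.maximum(r_iris ** 2 - (d * np.sin(thetas)) ** 2, 0.0))
          + d * np.cos(thetas))

    out = np.zeros((radial_res, angular_res), dtype=eye_gray.dtype)
    for j, theta in enumerate(thetas):
        # Eqs. (1)-(2): sample radially from R_p to R_n(theta).
        radii = np.linspace(r_pupil, rn[j], radial_res)
        xs = np.clip((xc + radii * np.cos(theta)).astype(int), 0, w - 1)
        ys = np.clip((yc + radii * np.sin(theta)).astype(int), 0, h - 1)
        out[:, j] = eye_gray[ys, xs]
    return out
```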

The normalized iris image has low contrast and non-uniform illumination caused by the position of the light source, so it needs to be enhanced to compensate for these factors. Local histogram equalization is applied to the normalized iris image to reduce the effect of non-uniform illumination and to obtain a well-distributed texture image.
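The paper applies local histogram equalization to the normalized image; a tile-based adaptive histogram equalization (OpenCV's CLAHE) is one common way to approximate such local equalization, sketched below with assumed parameters.

```python
# One possible approximation of the local histogram equalization step using
# OpenCV's CLAHE; the clip limit and tile size are assumed values.
import cv2

def enhance_iris(normalized_iris_u8):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(normalized_iris_u8)
```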

Figure 6. Enhanced iris image.


4. Feature extraction

Figure 7. (a) Three zones of the iris image. (b) Pupillary zone, collarette boundary and ciliary zone of the iris.

The iris region is divided into three zones according to the characteristics of the iris texture. Zones Z1, Z2 and Z3 are illustrated in Figures 6 and 7. The iris consists of two major regions: the pupillary zone and the ciliary zone. Zone Z1 is the pupillary zone, which contains abundant texture information. The collarette boundary is the part of the iris separating the pupillary zone from the ciliary zone; it usually appears in the middle zone Z2. Zone Z3 corresponds to the ciliary zone, which has flat texture.

The iris has an abundance of unique texture features, especially inside the pupillary zone. Based on texture analysis, the local spatial pattern of the iris consists of orientation and frequency information, and the frequency information is discriminating in recognizing irises from different people [7]. A 1D Log Gabor filter is used to extract the frequency information that represents the iris texture. A Log Gabor filter has a Gaussian transfer function on a logarithmic frequency scale [5]. It is strictly band-pass, which removes the DC component caused by background brightness. On the linear frequency scale, the 1D Log Gabor filter has the transfer function

$G(w) = \exp\left( \frac{-(\log(w/w_0))^2}{2(\log(k/w_0))^2} \right) \quad (5)$

where $w_0$ denotes the filter's center frequency and $k$ determines the bandwidth of the filter. The ratio $k/w_0$ is kept constant to generate filters with a consistent shape [6]. The 2D normalized iris image is first decomposed into a set of 1D intensity signals, which are convolved with the 1D Log Gabor filter in Equation (5). 1D intensity signals are used because the information density is highest in the angular direction, which corresponds to the horizontal rows of the normalized iris image [7].
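A sketch of the 1D Log Gabor filtering of one row of the normalized image, following Equation (5): the filter is constructed in the frequency domain and applied by multiplication with the row's spectrum. The center frequency and the $k/w_0$ ratio below are illustrative assumptions.

```python
# Sketch of filtering one row of the normalized iris with the 1D Log Gabor
# filter of Equation (5), built in the frequency domain.
import numpy as np

def log_gabor_response(row, w0=1.0 / 18.0, k_over_w0=0.5):
    n = len(row)
    freqs = np.fft.fftfreq(n)                  # signed normalized frequencies
    g = np.zeros(n)
    pos = freqs > 0
    g[pos] = np.exp(-(np.log(freqs[pos] / w0) ** 2)
                    / (2.0 * np.log(k_over_w0) ** 2))
    # The zero (DC) and negative frequencies are suppressed, so the filter is
    # strictly band-pass and the inverse FFT yields a complex response whose
    # phase is later quantized into the two-bit code.
    return np.fft.ifft(np.fft.fft(row.astype(np.float64)) * g)
```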

The iris features are extracted based on texture analysis. The inner zone Z1 is observed to contain the finest iris texture; its rapid variations indicate high-frequency components, which are extracted using a Log Gabor filter with a high center frequency $w_0$. The middle zone Z2 has larger blocks of texture due to the presence of the collarette boundary and is processed with a filter of lower center frequency. The flattest texture appears in the outer zone Z3; since it contains mostly low-frequency components, the coarsest filter with the lowest center frequency is used to capture the local details of this zone.

After convolution, a series of complex (real and imaginary) values is generated. The phase information is quantized into the four quadrants of the complex plane, and each quadrant is represented with two bits, so each pixel in the normalized image is demodulated into a two-bit code in the template. The phase information is extracted because it provides the discriminating information for recognizing irises from different people and does not depend on extraneous factors such as illumination and imaging contrast.
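The two-bit phase quantization can be sketched as follows, taking the complex filter response as input.

```python
# Sketch of the two-bit phase quantization: the signs of the real and
# imaginary parts select one of the four quadrants of the complex plane.
import numpy as np

def encode_phase(complex_response):
    bit_real = np.real(complex_response) >= 0
    bit_imag = np.imag(complex_response) >= 0
    # Two bits per pixel of the normalized image.
    return np.stack([bit_real, bit_imag], axis=-1)
```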

5. Template matching

The Hamming distance is defined as the measure of dissimilarity between two binary templates [2,3]. A value of zero represents a perfect match, while two completely independent templates give a Hamming distance close to 0.5. A threshold is set to decide whether two templates are from the same iris or from different irises. The fractional Hamming distance is the sum of the exclusive-OR between the two templates divided by the total number of bits. Masking templates are used in the calculation to exclude the noise regions: only bits that correspond to '1' bits in both masking templates are used, as expressed in Equation (6).

$HD = \frac{\lVert (\text{templateA} \otimes \text{templateB}) \cap \text{maskA} \cap \text{maskB} \rVert}{\lVert \text{maskA} \cap \text{maskB} \rVert} \quad (6)$

The total Hamming distance is the weighted sum of the Hamming distances computed from the three zones:

$THD = \alpha \, HD_1 + \beta \, HD_2 + \gamma \, HD_3 \quad (7)$

where $HD_i$, $i = 1, 2, 3$, denotes the Hamming distance between the two templates computed from zones Z1, Z2 and Z3, and $\alpha$, $\beta$ and $\gamma$ are the corresponding weightings. The weightings must satisfy the condition in Equation (8):

$\alpha + \beta + \gamma = 1 \quad (8)$

$\alpha$, $\beta$ and $\gamma$ are assigned decreasing weightings because the inner zone provides more texture information than the outer zone: zone Z1 contains the most significant texture features for recognition, while the outer zone has less discriminating information and is often occluded by eyelids and eyelashes.

To account for rotational variance, one of the two templates is shifted left and right bit-wise during matching. Each bit shift corresponds to a rotation of the iris by an angle determined by the angular resolution. A set of Hamming distances is computed from the successive shifts, and the lowest value is chosen as the best match between the two templates. Finally, a threshold is applied to decide whether the templates belong to the same iris or to different irises.
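Putting the matching stage together, the sketch below computes the masked fractional Hamming distance of Equation (6) per zone, combines the zones with the weights of Equations (7)-(8), and shifts one template bit-wise to handle rotation; the function names, weight values and shift range are assumptions for illustration.

```python
# Sketch of the matching stage: masked fractional Hamming distance per zone
# (Eq. 6), weighted total (Eqs. 7-8), and bit-wise shifting along the angular
# axis for rotation tolerance. Templates and masks are boolean arrays of the
# same shape (the noise mask is replicated across the two phase bits).
import numpy as np

def masked_hamming(code_a, code_b, mask_a, mask_b):
    valid = mask_a & mask_b
    n_valid = valid.sum()
    if n_valid == 0:
        return 0.5                       # no usable bits in common
    return float(((code_a ^ code_b) & valid).sum()) / float(n_valid)

def total_hamming(codes_a, codes_b, masks_a, masks_b, weights=(0.5, 0.3, 0.2)):
    # Per-zone templates and masks for zones Z1, Z2 and Z3; weights are assumed.
    return sum(w * masked_hamming(a, b, ma, mb)
               for w, a, b, ma, mb in zip(weights, codes_a, codes_b,
                                          masks_a, masks_b))

def best_match_distance(codes_a, codes_b, masks_a, masks_b, max_shift=8):
    # Shift template B left/right along the angular (column) axis and keep the
    # lowest total Hamming distance as the best match.
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted_codes = [np.roll(c, s, axis=1) for c in codes_b]
        shifted_masks = [np.roll(m, s, axis=1) for m in masks_b]
        best = min(best, total_hamming(codes_a, shifted_codes,
                                       masks_a, shifted_masks))
    return best
```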



    1

6. Experimental results and discussions

6.1. Experimental results

The proposed algorithm was evaluated on the CASIA iris image database version 1.0 [1], which contains 756 iris images from 108 different irises. For each eye, 7 images were captured in two sessions. The resolution of each iris image is 320×280 pixels.



Figure 8. ROC curve (FAR versus FRR) for the iris recognition results; the equal error rate (EER) is 1.38%.

The ROC curve is plotted to measure the recognition accuracy. From the experimental results, the algorithm achieves an overall accuracy of 98.62%. The result is not perfect because of the low quality of some of the iris images: the iris region may be heavily occluded by eyelids and eyelashes, or noticeably distorted by pupil dilation and constriction. Some iris images are defocused or motion blurred, as shown in Figure 9. An image quality assessment step is needed to select clear, high-quality images.

Figure 9. (a) An occluded eye. (b) A defocused eye. (c) A motion blurred eye.


6.2. Discussions

The binary templates are encoded by quantizing the phase information, which is discriminating for recognizing irises from different people. The templates contain no amplitude information of the irises, so the actual iris images cannot be reconstructed from them; the use of binary templates therefore avoids exposing the individual iris images.

Furthermore, the compact templates are encoded in binary format and can be stored and processed efficiently using DSP technology. Thousands of comparisons between different templates can be computed within one second. Because the matching is computationally efficient, it is suitable for comparing millions of templates in large databases. These advantages make binary templates suitable for implementing iris recognition devices based on DSP technology.

7. Conclusions

An innovative iris recognition algorithm based on texture analysis has been presented in this paper. The iris region is divided into three zones according to the characteristics of the iris texture, and specific filters with different center frequencies are defined to extract the frequency information from each zone. Different weightings are given to each zone depending on its contribution to the recognition. The encoded binary templates are small in size and avoid exposing the actual iris images, which makes them suitable for implementing iris recognition devices using DSP technology. The experimental results show that the proposed iris recognition algorithm is effective, achieving a recognition rate of 98.62%.

8. Acknowledgements

The authors would like to thank the Institute of Automation, Chinese Academy of Sciences for providing the CASIA iris image database [1]. This research is partially funded by the Malaysia MOSTI ScienceFund 01-02-11-SF0019.


9. References

[1] “CASIA Iris Image Database,” http://www.sinobiometrics.com/, 2007.

[2] J. Daugman, “High Confidence Visual Recognition of Persons by a Test of Statistical Independence,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, 1993.

[3] J. Daugman, “How Iris Recognition Works,” IEEE Trans. Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, 2004.

[4] R.P. Wildes, “Iris Recognition: An Emerging Biometric Technology,” Proceedings of the IEEE, vol. 85, pp. 1348-1363, 1997.

[5] D. Field, “Relations between the Statistics of Natural Images and the Response Properties of Cortical Cells,” Journal of the Optical Society of America, 1987.

[6] X. Yuan and P. Shi, “Efficient Iris Recognition System Based on Iris Anatomical Structure,” IEICE Electronics Express, vol. 4, no. 17, pp. 555-560, 2007.

[7] L. Ma, T. Tan, Y. Wang, and D. Zhang, “Personal Identification Based on Iris Texture Analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1519-1533, 2003.
