E-ISSN: 2583-2468

Research Article


Applied Science and Engineering Journal for Advanced Research

2022 Volume 1 Number 2 March
Publisher: www.singhpublication.com

IRIS Recognition Segmentation Signal Processing System

Kamboj P1*
DOI: 10.54741/asejar.1.2.4

1* Preeti Kamboj, M.Tech Scholar, Department of Electronics and Communication Engineering, UIET, Punjab University, Chandigarh, India.

Iris recognition has gained a lot of attention in both research and practical applications over the last decade because of the increasing focus on security. Due to its high accuracy, reliability, and uniqueness, iris recognition is becoming increasingly popular in the fields of access control, electronic commerce, and border security. In comparison to other biometric modalities, the iris is the most powerful feature for verification (1:1) and identification (1:N) because the human iris remains unchanged throughout the lifespan. Segmentation, normalization, feature extraction, and template matching are the four main stages of iris recognition. The preprocessing techniques used by different researchers for iris segmentation are the focus of this review paper.

Keywords: active contour technique, canny edge detection, iris recognition, segmentation

Corresponding Author: Preeti Kamboj, M.Tech Scholar, Department of Electronics and Communication Engineering, UIET, Punjab University, Chandigarh, India.
How to Cite this Article: Kamboj P. IRIS Recognition Segmentation Signal Processing System. Appl. Sci. Eng. J. Adv. Res. 2022;1(2):17-20.
Available From: https://asejar.singhpublication.com/index.php/ojs/article/view/17

Manuscript Received: 2022-03-01 | Review Round 1: 2022-03-10 | Accepted: 2022-03-22
Conflict of Interest: None | Funding: Nil | Ethical Approval: Yes | Plagiarism X-checker: 12.22

© 2022 by Kamboj P and published by Singh Publication. This is an Open Access article licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/.

Introduction

A biometric identification system uses some kind of unique feature or characteristic to identify an individual [1]. Biometric identification based on iris patterns is called "iris recognition" and is a form of automatic biometric identification [2]. Because of its unique characteristics, such as ridges, rings, furrows, freckles, and complex patterns, and thus a high degree of randomness [3], the demand for iris recognition is growing daily in various fields such as access control and security at border areas. Iris patterns differ from person to person; even the two eyes of the same person are unique, as are the irises of identical twins. Human identification based on the iris can be highly reliable and accurate because of its uniqueness, its stability over the course of a person's entire life, the difficulty of forging it, and the protection provided by iris recognition.

Researchers Flom and Safir introduced the idea of the iris as a biometric in 1987. The iris recognition field was flooded with new algorithms in the following years, including those developed by researchers such as John Daugman, K. W. Bowyer, Wildes, and A. K. Jain. The iris is a part of the eye that is visible to the outside world. Image segmentation, normalization, feature extraction, and template matching are the steps in iris recognition, as depicted in Fig. 1.

Figure 1: The stages of the iris recognition system (asejar_17_01.JPG)

IRIS Segmentation

After acquiring an iris image, the next step in iris recognition is image segmentation, in which the iris region is isolated from the digital image of the eye.

To generate templates, the locations of the iris's outer and inner boundaries must be known. Researchers have used a variety of methods to segment the iris. Here are a few examples:

Conventional Methods

These methods use edge detection operators such as the Canny edge detector and the Hough transform to identify the iris borders. Although the outer and inner boundaries of an iris appear to be circular, they are not perfectly so, and these methods therefore do not always segment the region accurately [8].

The following are a few examples of traditional iris localization methods:

Integro Differential Operator [8]

Iris recognition systems frequently employ this segmentation technique developed by Daugman. It identifies the iris boundaries and the upper and lower eyelids by searching for the maximum, over candidate centres and radii, of the smoothed radial derivative of the contour integral of image intensity taken along concentric circles. Faster computation is possible because it uses only the first derivative of the image data. The integro-differential operator is given by equation (1).

\max_{(r, x_0, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2\pi r}\, ds \right| \qquad (1)

where I(x, y) is the image, ds is the circular arc element of radius r, (x_0, y_0) are the candidate centre coordinates, the symbol * denotes convolution, and G_\sigma(r) is a Gaussian smoothing function.
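A minimal sketch of this search in Python/NumPy follows; the candidate centre list, the radius range, and the one-dimensional Gaussian used for G_\sigma(r) are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circular_mean(img, x0, y0, r, n=360):
    """Mean intensity along the circle of radius r centred at (x0, y0)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, candidate_centres, radii, sigma=2.0):
    """Return the (x0, y0, r) that maximises the Gaussian-smoothed radial
    derivative of the circular contour integral (Daugman-style search)."""
    best, best_score = None, -np.inf
    for (x0, y0) in candidate_centres:
        means = np.array([circular_mean(img, x0, y0, r) for r in radii])
        # d/dr of the contour integral, then smoothing by G_sigma(r)
        score = gaussian_filter1d(np.gradient(means), sigma)
        idx = int(np.argmax(np.abs(score)))
        if abs(score[idx]) > best_score:
            best_score, best = abs(score[idx]), (x0, y0, radii[idx])
    return best
```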

Canny Edge Detector

The Canny edge detector is used to detect the iris borders. It reveals the eye's major edges, making it simple to pinpoint the iris boundary, and it first smooths the image to remove noise before the edges are extracted. Although it performs fairly complex calculations, it takes little time and memory to run and is a very straightforward approach.
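For illustration, an edge map of this kind can be obtained with OpenCV as sketched below; the file name and the two hysteresis thresholds are assumptions chosen for the example.

```python
import cv2

# Load the eye image in grayscale (file name is illustrative)
eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

# Smooth first to suppress noise, then apply Canny hysteresis thresholding
blurred = cv2.GaussianBlur(eye, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("eye_edges.png", edges)
```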

Circular Hough Transform

The pupil and iris boundaries can be deduced using this standard and powerful computer vision algorithm. First, the initial derivatives of the intensity values in the eye image are calculated and thresholded to obtain an edge map.


Any circle can be defined by its centre coordinates (x_c, y_c) and radius r; every edge point (x, y) that the circle passes through satisfies equation (2):

(x - x_c)^2 + (y - y_c)^2 = r^2 \qquad (2)

Each edge point votes for the circle parameters consistent with it, and the parameter combination with the most votes gives the boundary. The processing speed is slow when the amount of computation is large [10].
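A typical way to run this search with OpenCV's built-in circular Hough transform is sketched below; the file name and all parameter values are example assumptions that need tuning for a given data set.

```python
import cv2
import numpy as np

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name
blurred = cv2.medianBlur(eye, 5)

# Detect candidate circles for the pupil and iris boundaries
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
    param1=150, param2=40, minRadius=20, maxRadius=120,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Draw each detected boundary on the image for inspection
        cv2.circle(eye, (int(x), int(y)), int(r), 255, 1)
cv2.imwrite("eye_circles.png", eye)
```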

K-means Clustering

Using this technique, the eye image is divided into three distinct regions based on intensity. The first region contains the iris together with the pupil and the eyelashes. The second region, with high intensity values, contains the sclera and luminance reflections. The third region lies between these two in intensity and corresponds to the skin. The occluding upper eyelid is then handled with an arc so that the iris does not lose any of its useful area.
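A minimal sketch of such a three-cluster intensity grouping with OpenCV's k-means is shown below; the file name, the termination criteria, and the relabelling by brightness are assumptions made for the example.

```python
import numpy as np
import cv2

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name
pixels = eye.reshape(-1, 1).astype(np.float32)

# Cluster pixel intensities into three groups (iris/pupil/eyelashes,
# sclera/reflections, and skin)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centres = cv2.kmeans(pixels, 3, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)

# Relabel the clusters so that 0 is the darkest and 2 the brightest region
order = np.argsort(centres.ravel())
segmented = np.zeros_like(labels)
for new_label, old_label in enumerate(order):
    segmented[labels == old_label] = new_label
segmented = segmented.reshape(eye.shape)
```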

Active Contour Models

In this family of algorithms, the contours are treated as deformable boundaries rather than circles, which improves the localization of the iris from the sclera and the pupil. The active contour methods are listed below. Unlike traditional edge detection, they work by bending an initial shape toward an object's boundary, evolving a curve around that object. The energy function that defines an active contour model is a weighted combination of internal forces derived from the snake's shape and external forces derived from the image, and this energy is minimised.

The energy function is defined in equation (3):

E_{snake} = \int_{0}^{1} \left[ E_{int}(V(s)) + E_{image}(V(s)) + E_{con}(V(s)) \right] ds \qquad (3)

Where

E_{int}(V(s)): represents the internal energy of the spline due to bending.

E_{image}(V(s)): represents the image forces.

E_{con}(V(s)): represents the external constraint forces.
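A minimal snake-fitting sketch using scikit-image's active_contour is given below; the file name, the initial circle's centre and radius, and the weighting parameters are illustrative assumptions.

```python
import numpy as np
from skimage import io, filters, segmentation

eye = io.imread("eye.png", as_gray=True)   # illustrative file name
smoothed = filters.gaussian(eye, sigma=2)

# Hypothetical initial contour: a circle around a rough iris location, given
# as (row, col) points; in practice this comes from a coarse detector.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([120 + 80 * np.sin(s), 160 + 80 * np.cos(s)])

# Evolve the contour by minimising the snake energy (internal + image terms)
snake = segmentation.active_contour(smoothed, init,
                                    alpha=0.015, beta=10, gamma=0.001)
```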

Active Contour without Edges

In gradient-based active contours, the stopping function built from the discrete image gradient is never exactly zero on the edges, so the curve may pass through the boundary. To address this issue, a new active contour model has been developed: the active contour without edges model (also known as the Chan-Vese model). Eyelashes and corneal reflections have no effect on this model's ability to accurately locate the pupil.
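scikit-image provides an implementation of this region-based model; a minimal sketch follows, with the file name and the weighting parameters as assumptions.

```python
from skimage import io, img_as_float, segmentation

# Load the eye image as grayscale floats (file name is illustrative)
eye = img_as_float(io.imread("eye.png", as_gray=True))

# Region-based level-set segmentation with no edge/stopping function: the
# contour separates the dark pupil from its surroundings even where the
# gradient along the boundary is weak.
mask = segmentation.chan_vese(eye, mu=0.25, lambda1=1.0, lambda2=1.0, tol=1e-3)
```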

Gradient Vector Flow (GVF) Snake

The outer iris boundary is more difficult to locate than the inner iris boundary. Because of the weak intensity difference across the iris-sclera boundary, the active contour without edges model cannot segment it correctly. The GVF snake addresses this issue by defining the gradient vector flow field as a new external force field.
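The GVF field itself can be computed by iteratively diffusing the gradient of an edge map; a minimal NumPy sketch is given below (the parameter values and the simple explicit iteration scheme are assumptions for illustration).

```python
import numpy as np
from scipy.ndimage import laplace

def gradient_vector_flow(edge_map, mu=0.2, iterations=200, dt=0.2):
    """Sketch of a GVF external force field: the edge-map gradient is diffused
    so that the force still points toward the weak iris/sclera edge even far
    away from it."""
    f = (edge_map - edge_map.min()) / (edge_map.max() - edge_map.min() + 1e-8)
    fy, fx = np.gradient(f)          # gradient of the normalised edge map
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2
    for _ in range(iterations):
        # Explicit gradient descent on the GVF energy:
        # smoothness (Laplacian) term plus data-fidelity term
        u += dt * (mu * laplace(u) - mag2 * (u - fx))
        v += dt * (mu * laplace(v) - mag2 * (v - fy))
    return u, v
```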

Statistical Learning Methods

In this three-step iris segmentation process, the coarse iris centre and radius, then the coarse iris boundaries, and finally the fine iris boundaries and centres are calculated using the least median of squares and linear basis functions. The approach also includes image quality evaluation so that low-quality images, such as those that are out of focus or that fail to segment, can be rejected.

Least Median of Square Differential Operator (LMedSDO)

It is a method by which the coarse iris centre and the radii of the iris's inner and outer boundaries are calculated. It is more robust than the IDO (integro-differential operator) at the cost of more computation. It is given by equation (4).

asejar_17_05.JPG
Where med stands for median
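The exact operator is the one in equation (4); purely as an illustration of the robustness idea, the sketch below replaces the angular mean used in the integro-differential search with a median of per-angle radial derivatives, so that occluded samples influence the boundary score less. This is an interpretation, not the paper's formulation.

```python
import numpy as np

def median_radial_score(img, x0, y0, radii, n=360):
    """Illustrative robust boundary search: for each radius, take the median
    over angles of a central-difference radial derivative (interpretation of
    the least-median idea, not the exact equation (4))."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    scores = []
    for r in radii:
        rings = []
        for dr in (-1, 1):  # sample just inside and just outside radius r
            xs = np.clip((x0 + (r + dr) * np.cos(theta)).astype(int),
                         0, img.shape[1] - 1)
            ys = np.clip((y0 + (r + dr) * np.sin(theta)).astype(int),
                         0, img.shape[0] - 1)
            rings.append(img[ys, xs].astype(float))
        scores.append(np.median(rings[1] - rings[0]) / 2.0)
    return radii[int(np.argmax(np.abs(np.array(scores))))]
```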

Linear Basis Function

It is used to find the coarse inner and outer boundaries of the iris. It is preferred because it uses trigonometric basis functions with k = 1, 2, which provide fast and flexible computation. It is given by equation (5):

Where
asejar_17_06.JPG
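Under the assumption that the boundary radius is modelled as a low-order trigonometric series in the polar angle (equation (5) gives the paper's exact basis), a least-squares fit can be sketched as follows.

```python
import numpy as np

def fit_trig_boundary(theta, r, k_max=2):
    """Least-squares fit of the boundary radius r(theta) with a constant term
    plus cos/sin terms up to order k_max (assumed form of the linear basis
    function model). Returns the coefficients and the fitted radii."""
    columns = [np.ones_like(theta)]
    for k in range(1, k_max + 1):
        columns += [np.cos(k * theta), np.sin(k * theta)]
    A = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coeffs, A @ coeffs
```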

RANSAC (Random Sample Consensus)

Occlusion by eyelids, eyelashes, and specular reflections is a common cause of outlier boundary points in an image. The procedure consists of the following steps:

  • The Linear Basis Function model is fitted to all of the coarse boundary points.
  • Boundary points that lie too far from the fitted boundary are rejected as outliers.
  • The Linear Basis Function model is fitted to the remaining coarse boundary points.
  • The procedure returns to step 1, fitting the Linear Basis Function model to the retained boundary points.
  • This loop can be repeated as many times as necessary. The resulting inner and outer curves give the fine boundary points (xi, yi), and the fine iris centres are the weight centres of the inner and outer boundaries, given by equations (6) and (7). A sketch of this refinement loop follows the equations below.

x_c = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad (6), \qquad y_c = \frac{1}{N} \sum_{i=1}^{N} y_i \qquad (7)

Where

(x_c, y_c) are the iris centres and (x_i, y_i) are the coordinates of the N final boundary points given by the fine boundary curves in the original iris image.
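A compact sketch of the reject-and-refit loop described above, reusing the fit_trig_boundary helper from the previous sketch (the tolerance and the number of rounds are assumptions), is:

```python
import numpy as np

def refine_boundary(theta, r, n_rounds=3, tol=3.0, k_max=2):
    """Illustrative reject-and-refit loop: fit the trigonometric boundary
    model, drop points whose radial residual exceeds tol pixels, and refit
    on the inliers. Coordinates are relative to the coarse centre here."""
    keep = np.ones_like(r, dtype=bool)
    columns = [np.ones_like(theta)]
    for k in range(1, k_max + 1):
        columns += [np.cos(k * theta), np.sin(k * theta)]
    A = np.column_stack(columns)
    for _ in range(n_rounds):
        coeffs, _ = fit_trig_boundary(theta[keep], r[keep], k_max)
        residuals = np.abs(A @ coeffs - r)   # evaluate the fit on all points
        keep = residuals < tol
    # Fine centre as the weight centre (centroid) of the retained points
    x, y = r * np.cos(theta), r * np.sin(theta)
    return keep, (x[keep].mean(), y[keep].mean())
```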

Radial Suppression Edge Detection

A non-separable wavelet transform is used to locate the iris's deformed inner and outer boundaries.

The radial suppression edge maps eliminate the eyelash and iris-texture edges, so the iris boundaries can be located accurately regardless of whether the pupil is non-circular or elliptical; the reported accuracy of 99.75 percent is better than that of previous methods.

A thresholding technique is then used to remove the edges that are not connected to each other.

Conclusion

Various techniques for iris segmentation have been proposed by different researchers over time, with varying levels of segmentation accuracy.

The long-term stability of the iris makes it an ideal biometric for use in places such as airports, harbours, and research labs. Because of its effectiveness and precision, iris recognition has gained a lot of traction in recent years.

References

1. S. Prabhakar, S. Pankanti, & A. K. Jain. (2003). Biometric recognition: Security and privacy concerns. IEEE Security and Privacy, pp. 33-42.

2. Khalid A. Darabkh, Raed T. Al-Zubi, Mariam T. Jaludi, & Hind Al-Kurdi. (2014). An efficient method for feature extraction of human iris patterns. IEEE.

3. Shaaban A. Sahmoud, & Ibrahim S. Abuhaiba. (2013). An efficient iris segmentation method in unconstrained environments. Pattern Recognition, 46(12), 3174-3185.

4. Richard P. Wildes. (1997). Iris recognition: An emerging biometric technology. Proceedings of the IEEE, 85(9).

5. K. W. Bowyer, K. Hollingsworth, & P. J. Flynn. (2008). Image understanding for iris biometrics: A survey. Computer Vision and Image Understanding, 110(2), 281-307.