IRIS-BASED PERSONAL IDENTIFICATION THROUGH DEEP LEARNING METHODS

1 R. Rami Reddy, 2 Dr. T. Manikumar
1 PG Research Scholar, 2 Assistant Professor
Master of Computer Applications, Madanapalle Institute of Technology and Science, Madanapalle, Chittoor, Andhra Pradesh, India.

Abstract: One of the most important modules in a computer program is the one that handles user security [2] [5]. It has been shown that simple passwords cannot guarantee a high level of security and are easily broken by hackers [1] [3]. Another well-known approach is biometric analysis. In recent years, increased interest has been seen in the iris as a biometric trait, owing to the high efficiency and accuracy this measurable feature can guarantee [15] [13]. The results of this interest can be seen in the literature, where many different methods have been proposed by different authors [14] [9]. In this paper, the authors introduce their own iris-based identification algorithm [6]. For classification, the algorithm uses a CNN (MobileNet)-based transfer learning model together with artificial neural networks. Immediately after classification, segmentation is performed on the classified output and the iris region is extracted. Experiments show that satisfactory results can be achieved with the proposed approach [4] [11].

Index Terms - Iris-based human identity recognition, CNN, transfer learning, image segmentation, artificial neural networks.

I. INTRODUCTION
The problem of weak password-based security has a well-known solution: biometrics, the science of identifying (or verifying) a person on the basis of his or her measurable characteristics (e.g., fingerprints, iris, retina, keystroke dynamics) [7] [8]. These traits can be divided into three main groups - physical (connected with our bodies and appropriate measurements), behavioral (traits we can learn, e.g., a signature), or a combination of physical and behavioral features at the same time (e.g., voice) [12] [13]. We can conclude that in a computer program with a biometrics-based security system the user does not have to provide any additional passwords, since his or her own measurable features act as the password [16] [20]. Various tests and studies show that the iris is one of the traits that can ensure a high level of recognition accuracy and efficiency [22] [30]. The iris contains more than 250 unique features [26], each of which is used to describe who you are (as an element of a feature vector) [25]. In the literature it has also been proved that such vectors differ completely between the left and right eye of the same person, and this holds even for twins - each twin has different irises (completely different vectors). Most importantly, the iris is really hard to spoof. In the literature we can find only a few research papers that provide evidence of a successfully completed spoofing attempt [31]. However, it should be mentioned that these works targeted systems using iris-based biometrics alone [29], i.e., solutions that do not check iris liveness and are therefore at risk of print attacks (with a printed iris image) [9]. On the other hand, the iris is awkward to capture - it is really difficult to collect a high-quality iris sample without special devices [10].
In some cases, even the help of an experienced ophthalmologist is needed to complete the procedure [19]. Of course, iris samples can also be collected with modern smartphones (e.g., Apple iPhone 12 Max or Samsung Galaxy S20+) equipped with high-quality cameras [20] [22]; however, a second person is still needed. If we want to collect such images ourselves, we can use special sensors available on the market, but their prices are really high and some of them require special lighting conditions to obtain accurate, high-quality images [17]. An important part of this work is also linked to the assessment process used for quality assurance [28]. Initially, the authors used the Scrum method to find a step-by-step way of increasing the algorithm's accuracy [24]. In each phase we assessed the quality of the solution created; this was an important indicator of whether progress had been made or not [27] [32]. Another issue to consider during the design of iris-based security systems is the prevention of fraud [30]. What is often observed in biometric systems is better recognition of printed images than of actual samples, and this is strongly associated with iris-based systems [23]. This problem has been examined in detail: printed iris images, the use of contact lenses, and a combination of both can significantly increase the risk of the system being fooled [21]. All tests in that work were performed on the WVU iris database. In addition, the authors introduced a new approach to preventing such attacks using a deep convolutional neural network [6] [33].

II. PROPOSED METHOD
In the proposed method we perform classification for iris-based human identity recognition using a Convolutional Neural Network (CNN), a deep learning method, as an image-analysis-based approach. Proper classification is important for reliable identification, and it is achieved by the proposed method, in which the images are classified using the CNN algorithm. Once the classification is done, the iris part is segmented. The block diagram of the proposed method is shown below, and a code-level sketch of the pipeline is given at the end of Section III.
Figure-2.1: Block Diagram

2.1 ADVANTAGES
• Accurate classification
• Less complexity
• High performance

III. MODULES
3.1 System
3.2 User

3.1 System:
3.1.1 Create Dataset: The dataset containing the left- and right-eye images to be classified is split into training and testing sets with a test size of 20-30%.
3.1.2 Pre-processing: The images are resized and reshaped into the appropriate format to train our model.
3.1.3 Training: The pre-processed training dataset is used to train our model using the CNN algorithm.
3.1.4 Classification: The model outputs the classified images, indicating whether each one is a left or right eye.
3.1.5 Segmentation: Once the classification is done, the iris part is segmented.

3.2 User:
3.2.1 View training accuracy: The user can check the accuracy of the trained algorithm.
3.2.2 Upload Image: The user uploads an image that needs to be classified.
3.2.3 View Results: The classified and segmented image results are viewed by the user.
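The sketch below illustrates how the System modules above could be implemented with PyTorch/torchvision (PyTorch is cited in [28]; torchvision 0.13+ is assumed) together with OpenCV. It is only a minimal example under stated assumptions: the folder layout (data/train with "left" and "right" sub-folders), the hyper-parameters, and the Hough-circle segmentation are illustrative stand-ins, not the authors' exact implementation.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models
from PIL import Image
import cv2
import numpy as np

# 3.1.2 Pre-processing: resize and normalise images to the input format expected by MobileNet.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# 3.1.1 Create Dataset: left/right eye images organised in class folders and already
# split into training and testing sets (assumed layout: data/train/left, data/train/right, ...).
train_ds = datasets.ImageFolder("data/train", transform=preprocess)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

# 3.1.3 Training: fine-tune a MobileNetV2 pre-trained on ImageNet (transfer learning),
# replacing the final layer with a two-class head (left eye / right eye).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):                              # number of epochs is an assumption
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# 3.1.4 Classification: predict whether an uploaded image shows a left or a right eye.
def classify(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        pred = model(img).argmax(dim=1).item()
    return train_ds.classes[pred]                    # "left" or "right"

# 3.1.5 Segmentation: isolate the iris region after classification. A Hough circle
# transform is used here purely as a simple stand-in for the segmentation step.
def segment_iris(path):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=30, minRadius=20, maxRadius=120)
    mask = np.zeros_like(gray)
    if circles is not None:
        x, y, r = [int(v) for v in np.around(circles[0, 0])]
        cv2.circle(mask, (x, y), r, 255, -1)         # keep only the circular iris region
    return cv2.bitwise_and(gray, gray, mask=mask)

Swapping mobilenet_v2 for another pre-trained backbone, or the Hough transform for a learned segmentation network, would leave the overall pipeline described in the block diagram unchanged.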
IV. ARCHITECTURE
Figure-4.1: Architecture

V. RESULTS AND OUTPUTS
5.1 Home: In our project, we classify the eye images and segment the iris.
Figure 5.1: Home
5.2 About Project: Here the user gets a brief idea about the project.
Figure 5.2: About Project
5.3 Upload Image: Here the images that are to be classified can be uploaded.
Figure 5.3: Image Uploading
5.4 Classified output: The classified output is displayed.
Figure 5.4: Model choosing

VI. CONCLUSION
In this paper, we propose a deep learning framework for iris recognition based on fine-tuning a convolutional model pre-trained on ImageNet. This framework works for other biometric recognition problems as well, and is especially useful in situations where only a few labeled images are available for each class. We applied the proposed framework to the well-known IIT Delhi iris database and achieved promising results, surpassing previous methods on this database, while training the models with very few original images per class. We also introduced a visualization approach to find the most important regions used during iris recognition [34].

VII. REFERENCES
[1] Marasco, Emanuela, and Arun Ross. "A survey on antispoofing schemes for fingerprint recognition systems." ACM Computing Surveys (CSUR) 47.2 (2015): 28.
[2] Minaee, Shervin, and AmirAli Abdolrashidi. "Highly accurate palmprint recognition using statistical and wavelet features." Signal Processing and Signal Processing Education Workshop (SP/SPE), IEEE, 2015.
[3] Bowyer, Kevin W., and Mark J. Burge, eds. "Handbook of Iris Recognition." London, UK: Springer, 2016.
[4] Ding, Changxing, and Dacheng Tao. "Robust face recognition via multimodal deep face representation." IEEE Transactions on Multimedia 17.11 (2015): 2049-2058.
[5] Minaee, S., A. Abdolrashidi, and Y. Wang. "Face recognition using scattering convolutional network." Signal Processing in Medicine and Biology Symposium (SPMB), IEEE, 2017.
[6] Kumar, A., and A. Passi. "Comparison and combination of iris matchers for reliable personal authentication." Pattern Recognition, vol. 43, no. 3, pp. 1016-1026, Mar. 2010.
[7] Farouk, R. M. "Iris recognition based on elastic graph matching and Gabor wavelets." Computer Vision and Image Understanding, Elsevier, 115.8: 1239-1244, 2011.
[8] Belcher, C., and Y. Du. "Region-based SIFT approach to iris recognition." Optics and Lasers in Engineering, Elsevier, 47.1: 139-147, 2009.
[9] Umer, S., B. C. Dhara, and Bhabatosh Chanda. "Iris recognition using multiscale morphologic features." Pattern Recognition Letters 65: 67-74, 2015.
[10] Minaee, S., A. Abdolrashidi, and Y. Wang. "Iris recognition using scattering transform and textural features." Signal Processing and Signal Processing Education Workshop (SP/SPE), IEEE, 2015.
[11] LeCun, Yann, et al. "Gradient-based learning applied to document recognition." Proceedings of the IEEE: 2278-2324, 1998.
[12] Krizhevsky, A., I. Sutskever, and G. E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems, 2012.
[13] He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[14] Badrinarayanan, Vijay, Alex Kendall, and Roberto Cipolla. "SegNet: A deep convolutional encoder-decoder architecture for image segmentation." IEEE Transactions on Pattern Analysis and Machine Intelligence 39.12: 2481-2495, 2017.
[15] Ren, S., K. He, R. Girshick, and J. Sun. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in Neural Information Processing Systems, 2015.
[16] Dong, Chao, et al. "Learning a deep convolutional network for image super-resolution." European Conference on Computer Vision, Springer, Cham, 2014.
[17] Minaee, Shervin, and Amirali Abdolrashidi. "Deep-Emotion: Facial expression recognition using attentional convolutional network." arXiv preprint arXiv:1902.01019, 2019.
[18] Sun, Yi, et al. "Deep learning face representation by joint identification-verification." NIPS, 2014.
[19] Minaee, Shervin, et al. "MTBI identification from diffusion MR images using bag of adversarial visual features." IEEE Transactions on Medical Imaging, 2019.
[20] Minaee, Shervin, et al. "A deep unsupervised learning approach toward MTBI identification using diffusion MRI." Engineering in Medicine and Biology Society (EMBC), IEEE, 2018.
[21] Kim, Yoon. "Convolutional neural networks for sentence classification." Conference on Empirical Methods in Natural Language Processing, 2014.
[22] Severyn, A., and A. Moschitti. "Learning to rank short text pairs with convolutional deep neural networks." SIGIR Conference on Research and Development in Information Retrieval, ACM, 2015.
[23] Minaee, S., and Z. Liu. "Automatic question-answering using a deep similarity neural network." Global Conference on Signal and Information Processing, IEEE, 2017.
[24] Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. "Neural machine translation by jointly learning to align and translate." arXiv preprint arXiv:1409.0473, 2014.
[25] Razavian, A. S., H. Azizpour, et al. "CNN features off-the-shelf: an astounding baseline for recognition." IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014.
[26] Minaee, S., A. Abdolrashidi, and Y. Wang. "An experimental study of deep convolutional features for iris recognition." Signal Processing in Medicine and Biology Symposium (SPMB), IEEE, 2016.
[27] Deng, Jia, et al. "ImageNet: A large-scale hierarchical image database." IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2009.
[28] https://pytorch.org/
[29] Kumar, Ajay, and Arun Passi. "Comparison and combination of iris matchers for reliable personal authentication." Pattern Recognition, vol. 43, no. 3, pp. 1016-1026, Mar. 2010.
[30] https://www4.comp.polyu.edu.hk/~csajaykr/IITD/DatabaseIris.htm
[31] Zeiler, M., and R. Fergus. "Visualizing and understanding convolutional networks." European Conference on Computer Vision, Springer, Cham, 2014.
[32] Manikumar, T., et al. "Automated test data generation for branch testing using incremental genetic algorithm." Sadhana - Academy Proceedings in Engineering Sciences, Springer, September 2016, Volume 41, Issue 9, pp. 959-976.
[33] Manikumar, T., et al. "A buffered genetic algorithm for automated branch coverage in software testing." Journal of Information Science and Engineering, Institute of Information Science, March 2019, Vol. 35, No. 2, pp. 245-273.
[34] Manikumar, T., et al. "Multimodal biometric system using deep learning techniques." International Journal for Innovative Research in Multidisciplinary Field, ISSN: 2455-0620, December 2020, Vol. 6, Issue 12, pp. 95-99.