The document proposes a secure personal identification system based on human retina recognition. The system uses retinal vascular patterns, which are unique to each individual, for identification. It consists of three stages: 1) preprocessing to extract the vascular pattern from retinal images, 2) feature extraction to identify feature points like endings and bifurcations, and represent them as vectors, and 3) matching input images to templates by calculating distances between feature vectors. Experimental results on two retinal image databases achieved over 97% accuracy, demonstrating the potential of the proposed system for high-security identification.
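The paper's exact distance measure and acceptance threshold are not given in this summary; as a minimal sketch (assuming Euclidean distance between feature vectors and a hypothetical threshold), template matching in stage 3 could look like:

```python
import numpy as np

def match_template(query_vec, templates, threshold=0.5):
    """Return the index of the closest enrolled template, or None if
    no template is within the acceptance threshold."""
    # Euclidean distance from the query feature vector to each template
    dists = np.linalg.norm(templates - query_vec, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else None

# Toy enrolment database: one feature vector per person
templates = np.array([[0.1, 0.9, 0.3],
                      [0.8, 0.2, 0.5]])
print(match_template(np.array([0.12, 0.88, 0.31]), templates))  # closest: person 0
```

Rejecting queries whose nearest template is still beyond the threshold is what makes the system an identification system rather than a pure nearest-neighbour classifier.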
Agenda:
Introduction
Supercomputers for Scientific Research
Covid-19 Tracking and Prediction
Covid-19 Research and Diagnosis
Use Case 1 NLP and BERT to answer scientific questions
Use Case 2 Covid-19 Data Lake and Platform
COVID-19 detection from scarce chest X-Ray image data using few-shot deep lea..., by Shruti Jadon

In the current COVID-19 pandemic situation, there is an urgent need to screen infected patients quickly and accurately. Using deep learning models trained on chest X-ray images can become an efficient method for screening COVID-19 patients in these situations. Deep learning approaches are already widely used in the medical community. However, they require a large amount of data to be accurate.
This document is a 36-page bachelor's thesis written by Duc Minh Luong Nguyen titled "Detect COVID-19 from Chest X-Ray images using Deep Learning". The thesis was submitted to Metropolia University of Applied Sciences in May 2020. It aims to build a deep convolutional neural network to detect COVID-19 using only chest X-ray images. The model achieves an accuracy of 93% at detecting COVID-19 patients versus healthy patients, despite being trained on a small dataset of 115 images for each class.
This thesis aims to develop deep learning models to detect COVID-19 pneumonia in chest X-ray images. The author trains two models: 1) A binary classifier to distinguish COVID-19 pneumonia from non-COVID cases, which classifies all test cases correctly. 2) A four-class classifier to identify COVID-19, viral pneumonia, bacterial pneumonia, and normal cases, which achieves an average accuracy of 93% on the test set. Gradient-weighted Class Activation Mapping is used to interpret the four-class model and finds it can focus on patchy areas characteristic of COVID-19 pneumonia to make accurate predictions.
The adaptive mechanisms include the following AI paradigms that exhibit an ability to learn or adapt to new environments:
Swarm Intelligence (SI),
Artificial Neural Networks (ANN),
Evolutionary Computation (EC),
Artificial Immune Systems (AIS), and
Fuzzy Systems (FS).
This document is a research project submission for a Master's in Data Analytics. It proposes using a convolutional neural network to classify multiple types of skin lesions from dermoscopic images. Specifically, it will use a modified VGG19 network, replacing the classification layers with global average pooling, dropout, and two fully connected layers with softmax activation. The model will be tested on the ISIC 2018 dataset containing over 10,000 images across 7 lesion classes, after preprocessing the images. The goal is to assist dermatologists in identifying lesion types.
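The project's own code is not reproduced here; as a rough numpy sketch of the described replacement head (global average pooling, dropout, then two fully connected layers with softmax), with hidden width, ReLU activation, and inverted dropout as assumptions not stated in the summary:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classifier_head(feature_maps, W1, b1, W2, b2, drop_p=0.5, train=False):
    """Forward pass of the replacement head:
    global average pooling -> dropout -> FC -> FC + softmax."""
    x = feature_maps.mean(axis=(0, 1))            # GAP: (H, W, C) -> (C,)
    if train:                                     # inverted dropout at train time only
        mask = rng.random(x.shape) >= drop_p
        x = x * mask / (1.0 - drop_p)
    h = np.maximum(0.0, x @ W1 + b1)              # first FC layer with ReLU
    return softmax(h @ W2 + b2)                   # second FC layer, softmax over classes

# Shapes matching VGG19's final conv block (7x7x512) and the 7 lesion classes
fmap = rng.random((7, 7, 512))
W1, b1 = rng.standard_normal((512, 256)) * 0.01, np.zeros(256)
W2, b2 = rng.standard_normal((256, 7)) * 0.01, np.zeros(7)
probs = classifier_head(fmap, W1, b1, W2, b2)
print(probs.shape)  # (7,) class probabilities
```

Global average pooling replaces VGG19's large flatten-plus-dense stack, which sharply reduces parameters and overfitting risk on a ~10,000-image dataset.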
The document provides background information on machine learning and discusses its application to predicting COVID-19. It outlines the objectives of developing a machine learning model to predict whether a patient has COVID-19 based on their clinical information and identifying influential features. The document describes conducting a literature review and experiment to determine the most suitable machine learning techniques and influential features. It also defines the scope of the thesis and provides an outline of the following chapters.
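The thesis's method for identifying influential features is not specified in this summary; one common, model-agnostic approach is permutation importance, sketched below with a hypothetical toy model (the real model and clinical features are assumptions here):

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Drop in accuracy when each feature column is shuffled: a larger
    drop means the model relies on that feature more."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)               # accuracy on intact data
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])                 # destroy feature j's signal
            drops[j] += base - np.mean(predict(Xp) == y)
    return drops / n_repeats

# Toy model that only uses feature 0 (say, a "fever" indicator)
predict = lambda X: (X[:, 0] > 0.5).astype(int)
rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)
imp = permutation_importance(predict, X, y)       # imp[0] dominates
```

Because the toy model ignores features 1 and 2, shuffling them changes nothing, so their importance is exactly zero.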
IRJET - A Survey on Medical Image Interpretation for Predicting Pneumonia, by IRJET Journal
This document summarizes research on using machine learning and deep learning techniques to interpret medical images and predict pneumonia. It first discusses how medical image analysis is an active field for machine learning. It then reviews several related studies on using convolutional neural networks (CNNs) and transfer learning to classify chest x-rays and detect pneumonia. Specifically, it examines research on developing CNN models for pneumonia classification and using pre-trained CNN architectures like VGG16, VGG19, and ResNet with transfer learning. The document concludes that computer-aided diagnosis systems using deep learning can provide accurate predictions to assist radiologists in pneumonia diagnosis from chest x-rays.
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
The document compares the performance of different machine learning models for detecting COVID-19 from CT scans, including single models like SVM, NB, MLP, CNN and ensemble models like AdaBoost and GBDT. Based on accuracy, precision, recall, F1-score and MCC metrics, the SVM model achieved the best performance with an accuracy of 99.2%, followed by CNN and AdaBoost. While MLP, NB and GBDT showed lower performance, CNN had the advantage of automatically detecting important image features.
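The paper's evaluation code is not given; as an illustrative sketch, the five reported metrics can all be computed from the confusion-matrix counts of a binary classifier:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1 and Matthews correlation
    coefficient (MCC) from binary labels and predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1, "mcc": mcc}

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

MCC is the most informative single number here when classes are imbalanced, which is why COVID-19 screening papers often report it alongside accuracy.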
Prospects of Deep Learning in Medical Imaging, by Godswill Egegwu
A SEMINAR Presentation on the Prospects of Deep Learning in Medical Imaging Presented to the Department of Computer Science, Nasarawa State Polytechnic, Lafia.
By: Egegwu, Godswill
08166643792
http://facebook.com/godswill.egegwu
http://egegwugodswill.name.ng
Chest X-ray Pneumonia Classification with Deep Learning, by BaoTramDuong2
This document discusses using deep learning models to classify chest x-ray images as either normal or pneumonia. It obtained a dataset of over 5,800 pediatric chest x-rays from a Chinese hospital. Various deep learning models were explored, including multilayer perceptrons, convolutional neural networks, and transfer learning with VGG16, which achieved 92% validation accuracy. The document recommends future work such as distinguishing between viral and bacterial pneumonia and combining models with SVM. It also discusses recommendations to reduce childhood pneumonia prevalence.
Developed a project with three colleagues for pneumonia detection from chest X-ray images using a convolutional neural network. Used a confusion matrix, recall, and precision to check model performance on the test data.
The document discusses the potential applications of deep learning in healthcare. It begins by explaining that deep learning models can improve accuracy of diagnosis, prognosis, and risk prediction by analyzing large datasets. It then discusses how deep learning can optimize hospital processes like resource allocation and patient flow by early and accurate prediction of diseases. Finally, it mentions that deep learning can help identify patient subgroups for personalized and precision medicine approaches.
A New Algorithm for Fully Automatic Brain Tumor Segmentation with 3-D Convolu..., by Christopher Mehdi Elamri
This document describes a new algorithm for fully automatic brain tumor segmentation using 3D convolutional neural networks. The algorithm uses 3D convolutional filters to preserve spatial information, and a high-bias CNN architecture to increase effective data size and reduce model variance. On a dataset of 274 brain MR images, the algorithm achieved a median Dice score of 89% for whole tumor segmentation, significantly outperforming past methods. This demonstrates the effectiveness of generalizing low-bias high-variance methods like CNNs to learn from medium-sized datasets.
Deep learning applications in medical image analysis: brain tumor, by Venkat Projects
The tremendous success of machine learning algorithms at image recognition tasks in recent years intersects with a time of dramatically increased use of electronic medical records and diagnostic imaging. This review introduces machine learning algorithms as applied to medical image analysis, focusing on convolutional neural networks and emphasizing clinical aspects of the field. The advantage of machine learning in an era of medical big data is that significant hierarchical relationships within the data can be discovered algorithmically without laborious hand-crafting of features. We cover key research areas and applications of medical image classification, localization, detection, segmentation, and registration. We conclude by discussing research obstacles, emerging trends, and possible future directions.
https://imatge.upc.edu/web/publications/video-retrieval-specific-persons-specific-locations
This thesis explores good practices for improving the detection of specific people in specific places. An approach combining recurrent and convolutional neural networks was considered for face detection. However, other more conventional methods were also tested, with the best results obtained by a deformable part model approach. A CNN is also used to obtain face feature vectors, and an approach to query expansion was developed to aid face recognition. Furthermore, in order to evaluate the different configurations on our non-labelled dataset, a user interface was used to annotate the images and measure the precision of the system. Finally, different fusion and normalization strategies were explored with the aim of combining the scores obtained from face recognition with those obtained from place recognition.
This talk will cover various medical applications of deep learning including tumor segmentation in histology slides, MRI, CT, and X-Ray data. Also, more complicated tasks such as cell counting where the challenge is to count how many objects are in an image. It will also cover generative adversarial networks and how they can be used for medical applications. This presentation is accessible to non-doctors and non-computer scientists.
The document discusses using a U-Net convolutional neural network to automatically segment brain tumors in MRI images. It aims to eliminate the need for domain expertise by using deep learning to extract hierarchical features. The U-Net model is trained on the BRATS 2017 dataset and is able to segment tumors with 5% higher accuracy than previous methods, as measured by the Dice similarity coefficient. The system could be expanded to analyze additional MRI modalities and further improve automated tumor detection.
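The Dice similarity coefficient used to score the U-Net is a standard overlap measure; a minimal numpy sketch (the BRATS evaluation code itself is not shown in the summary):

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation
    masks: 2*|A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
true = np.array([[1, 0, 0],
                 [0, 1, 1]])
print(dice_score(pred, true))  # 2*2 / (3 + 3), roughly 0.667
```

Unlike pixel accuracy, Dice is insensitive to the large background region, which is why segmentation papers report it.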
Techniques of Brain Cancer Detection from MRI using Machine Learning, by IRJET Journal
The document discusses techniques for detecting brain cancer from MRI scans using machine learning. It first provides background on brain tumors and MRI. It then outlines the cancer detection process, including pre-processing the MRI data, segmenting the images, extracting features, and classifying tumors using techniques like CNNs, SVMs, MLP, and Naive Bayes. The document reviews related work applying these techniques and compares their results, finding accuracy can be improved with larger, higher resolution datasets.
The document describes a study that aims to detect brain tumors and edema in MRI images using MATLAB. It discusses how MRI is commonly used to identify brain anomalies. The proposed methodology uses basic image processing techniques in MATLAB, including preprocessing, enhancement, segmentation, and morphological operations to detect and segment tumors and edema. The final output highlights the boundaries between tumors and edema superimposed on the original MRI image to aid physicians in diagnosis and surgical planning.
Brain tumour segmentation based on local independent projection based classif..., by eSAT Journals
This document summarizes a research paper on brain tumour segmentation using local independent projection based classification. The proposed method uses MRI images and consists of four main steps: preprocessing using median filtering, feature extraction using patches, tumour segmentation using local independent projection classification, and post processing to analyze the tumour region. Local independent projection classification treats segmentation as a classification problem and uses local anchor embedding and softmax regression to improve performance. The method was able to classify tumour and edema regions and calculate the tumour area and perimeter pixels.
Brain Tumor Detection and Classification using Adaptive Boosting, by IRJET Journal
1. The document describes a system for detecting and classifying brain tumors using MRI images.
2. The system uses techniques like preprocessing, segmentation using k-means clustering, feature extraction with discrete wavelet transform and principal component analysis for dimension reduction, and classification with decision trees and adaptive boosting.
3. Adaptive boosting combines multiple weak learners or decision trees into a strong classifier and focuses on misclassified examples to improve accuracy, achieving 100% accuracy for tumor detection and classification in the system.
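The PCA dimension-reduction step in the pipeline above can be sketched in a few lines of numpy via the SVD of the centred feature matrix (the feature count of 32 below is a placeholder, not a value from the paper):

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors X (n_samples x n_features) onto the
    top-k principal components via SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)                        # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # scores in the top-k subspace

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 32))                 # e.g. 32 wavelet features per image
Z = pca_reduce(X, 8)
print(Z.shape)  # (100, 8)
```

Reducing the wavelet feature vectors this way keeps only the directions of highest variance, which shrinks the input that the boosted decision trees must handle.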
Ghaziabad, India - Early Detection of Various Types of Skin Cancer Using Deep..., by Vidit Goyal
This presentation discusses using convolutional neural networks (CNNs) for intelligent skin cancer detection. CNNs can help address issues like limited doctor availability in rural areas and the high cost of cancer detection. Research shows CNNs can achieve 91% accurate cancer diagnosis compared to 79% for experienced physicians. The presentation then explains how CNNs work, discussing concepts like convolutional layers, pooling layers, activation functions, and transfer learning. It describes applying a CNN model trained on ImageNet to a skin cancer dataset in order to recognize 3 common cancer types. The goal is developing an automatic early-stage cancer detection system using CNNs and cloud computing to reduce human effort and costs while improving accuracy.
Twirls, whirls, spins and turns: the science of dizziness, by Rebeccawilliams98
The document summarizes an experiment that tested whether participating in sports helps people cope with dizziness. Volunteers with and without sports experience were spun in a chair and then tested on tasks like walking a straight line and catching/throwing a ball. The results showed that volunteers involved in sports like camogie, hockey and athletics reported lower dizziness ratings and performed better on the tasks compared to the non-athlete control group. This suggests regularly participating in sports may help the body learn to cope with dizziness and maintain balance.
Denis is an award winning, Okanagan based designer of spectacular, luxury homes. A select group of projects showcasing his work is featured on this site.
The documentary The Undateables uses a mix of narration, observation, and interviews to tell the stories of people who have disabilities and find it difficult to date. It follows a linear, chronological structure so viewers can easily understand what is happening. Close-ups are used to show subjects' emotions, like loneliness and happiness, and make the audience feel empathy. The setting changes from homes to dates as people's situations improve. Costumes are casual but people dress up for dates to make an effort. Music and voiceovers reinforce the message and elicit sympathy from viewers.
This document appears to be an assignment for a graphic design student named Christina Sewell. The assignment involves designing a book sleeve, which is completed in three parts: book cover research, designing draft and final versions of the front, back and spine of the book sleeve, and printing the finished book sleeve. Samples of Christina's book cover research and draft/final designs are included in the document.
The document outlines a project to identify and describe fish models displayed in plaques at the MDC gallery. There were 28 fish models received as a gift, but only 21 unique descriptions were needed as some were duplicates. The project faced challenges in providing succinct yet factual descriptions that were understandable to visitors who would only briefly view the display, within space constraints. Resources like field guides and government websites were used to aid in identification and learn key details about the fish to include in the descriptions.
Сергей Переслегин.
Презентация с семинара "Онтологический верстак" 21 сентября 2014 года.
Тема: Категория сложности.
Анонс мероприятия: http://sociosoft.ru/news/OV_21_sent
The document discusses developing a minimum viable product Collabrify client app for Windows Phone 8 to allow students to instantly share data using Google Protocol Buffers. It would enable creating collaborative sessions within any WP8 app. The client was modeled after an existing Java client which caused issues around server URLs, data formatting, and communication that were stumbling blocks. Next steps involve finishing the rest of the API calls, adding threading, and integrating the client into an IMLC app.
Count nouns refer to things that can be counted, can be both singular and plural, and use articles like "a" or "the" when indefinite or definite. Non-count nouns refer to things that generally cannot be counted, like substances, and have no plural form. They use "some" instead of articles when indefinite. Some examples of count nouns given are "books" and "students" and of non-count nouns are "water", "rice", and "education". The document provides a table contrasting properties of count and non-count nouns and advises determining which type a noun is based on whether it refers to something that can be easily counted.
The document summarizes the results of a questionnaire given to 10 people to help determine the target audience for a film trailer. Most respondents were female aged 18-25 who watch movies weekly and enjoy horror films. Trailers and online/social media were the most influential and common ways to find new movie releases. Magazines were not widely read. The results indicate targeting audiences on social media aged 18-25 who enjoy horror and are interested in film trailers.
Orion International Academy aims to prepare international students for success in college and a global society through its international programs. It offers a college preparatory curriculum with advanced courses, highly individualized support, and experienced faculty. Students can attend winter and summer programs to experience American schools and culture alongside American peers through activities like creative writing, chess, painting and more. Typical school days include classes, lunch, enrichment activities, and sports. Housing is available through dorms or homestays. Excursions around California are also provided. The document promotes that attending Orion sets students up for educational and career success.
Regress and Progress! An econometric characterization of the short-run relati...Matheus Albergaria
1. The paper uses structural vector autoregression (SVAR) models to examine the empirical validity of real business cycle (RBC) models based on technology shocks using Brazilian data.
2. The results cast doubt on some predictions of RBC models. Specifically, the estimated conditional correlations between labor input and productivity measures are negative for technology shocks and positive for non-technology shocks, whereas RBC models predict the opposite.
3. The labor input also displays a negative response to technology shocks over business cycles in the estimates, which challenges implications of RBC models. However, the authors note that the results do not definitively reject RBC models, but could stimulate new theoretical and empirical work.
This document outlines Christina Sewell's process for creating a logo for a new organizational name called Advofemme. It begins with brainstorming potential name components, presents initial sketches and versions of the logo, and concludes with the finalized logo incorporating color and typography choices intended to represent strength and femininity.
This document discusses existing media used to target teenage audiences, including magazines, websites, and social media. It analyzes the We Love Pop and Top of the Pops magazines' websites, noting features like previews of magazine content, videos, competitions, and newsletters. It also briefly examines the magazines' Facebook pages and how they are used to share new issues and updates. The goal is to attract teenage audiences across different media platforms.
Η Αντισεισμική προστασία και ενίσχυση των κατασκευών ως μοχλός ανάπτυξηςGeorge Tsiamtsiakiris
Άρθρο με αφορμή το σεισμό της Κεφαλλονιάς και τις συνέπειες που είχε σε ανθρώπους, κτίρια και υποδομές. Τεχνική ανάλυση του σεισμού σε συνδυασμό με προτάσεις για την ενίσχυση των κτιρίων αλλά και για ένα νέο τρόπο ανάπτυξης της οικονομίας. Δημοσίευση στο περιοδικό "τα νέα των κατασκευαστών" τεύχος 84.
Article about earthquakes in Greece: "The role of earthquake Resistant Design of structures as an important factor in the growth of a country"
Retinal Vessels Segmentation Using Supervised Classifiers for Identification ...IOSR Journals
The risk of cardio vascular diseases can be identified by measuring the retinal blood vessel. The
identification of wrong blood vessel may result in wrong clinical diagnosis. This proposed system addresses the
problem of identifying the true vessel by vascular structure segmentation. In this proposed model the segmented
vascular structure is modelled as a vessel segment graph and the true vessels are identified by using supervised
classifier approach. This paper proposes a post processing step in diagnose cardiovascular diseases which can
be identified by tracking a true vessel from the optimal forest in the graph given a set on constraints.
The main cause of eye diseases in the working human is Diabetic retinopathy. Eye disease can
be prevented if detects early. The extraction of blood vessels from retinal images is an essential and challenging
task in medical diagnosis and analysis. This paper describes the effective and efficient extraction of blood
vessels from retinal image by using Kirsch’s templates. The Kirsch’s edge operators detect the edges using eight
filters, generated by the compass rotation mechanism. The method is used to automatic detection of landmark
features of the fundus, such as the optic disc, fovea and blood vessels.
Abstract:—The main cause of eye diseases in the working human is Diabetic retinopathy. Eye disease can
be prevented if detects early. The extraction of blood vessels from retinal images is an essential and challenging
task in medical diagnosis and analysis. This paper describes the effective and efficient extraction of blood
vessels from retinal image by using Kirsch’s templates. The Kirsch’s edge operators detect the edges using eight
filters, generated by the compass rotation mechanism. The method is used to automatic detection of landmark
features of the fundus, such as the optic disc, fovea and blood vessels.
Keywords: —Diabetic retinopathy, Retinal image, Oculist
This document proposes fusing eye vein and finger vein biometrics for multimodal authentication. It extracts features from eye vein and finger vein images separately, then concatenates the feature vectors. Experimental results on public databases show this technique achieves more accurate identity verification than single biometrics, with lower false rejection and acceptance rates. The fused template provides better discrimination than individual features.
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Biometric Iris Recognition Based on Hybrid Techniqueijsc
This document presents a study on implementing an iris recognition system using a hybrid technique. The system utilizes several image processing and machine learning techniques. It begins with preprocessing the iris image, including capturing, resizing and converting to grayscale. Histogram equalization is then used for enhancement. Two-dimensional discrete wavelet transform (2D DWT) is applied for feature extraction. Various edge detection algorithms including Canny, Prewitt, Roberts and Sobel are used to detect iris boundaries. The features are then stored in a vector for classification. The system is tested on different iris images and analysis shows 2D DWT and Canny edge detection provide adequate results for feature extraction and iris recognition.
Biometric Iris Recognition Based on Hybrid Technique ijsc
Iris Recognition is one of the important biometric recognition systems that identify people based on their eyes and iris. In this paper the iris recognition algorithm is implemented via histogram equalization and wavelet techniques. In this paper the iris recognition approach is implemented via many steps, these steps are concentrated on image capturing, enhancement and identification. Different types of edge detection mechanisms; Canny scheme, Prewitt scheme, Roberts scheme and Sobel scheme are used to detect iris boundaries in the eyes digital image. The implemented system gives adequate results via different types of iris images.
This document discusses automatic detection of blood vessels in digital retinal images using computer vision and image processing (CVIP) tools. It begins with an overview of eye diseases like diabetic retinopathy and glaucoma that can be detected by observing blood vessels in retinal images. It then describes 6 common approaches to blood vessel extraction, including pattern recognition, model-based, tracking-based, and neural network approaches. The document outlines the methods used in the study, including preprocessing retinal images, extracting blood vessels using tools like filters, and postprocessing the results. It provides examples of blood vessel extraction and suggests areas for future work, such as developing techniques to better detect minor blood vessels and separate blood vessels from other structures.
This document discusses various soft computing techniques for iris recognition, specifically focusing on two neural network approaches: Competitive neural network Learning Vector Quantization (LVQ) and Adaptive Resonance Associative Map (ARAM). It provides an overview of iris recognition as a biometric method, summarizes preprocessing steps like localization, segmentation, and normalization of iris images. It also describes feature extraction and matching steps. Finally, it defines artificial neural networks and discusses how LVQ and ARAM can be used for pattern matching in iris recognition applications.
This document presents a student's proposal for a human retina identification system using biometric technology. The proposal discusses how the unique patterns of blood vessels in the retina can be used to identify individuals with high accuracy. The proposed system will involve segmenting retinal images to extract features like branch points and endpoints, and then storing these features as templates to compare new images against for matching. The student believes this technology provides strong security but also has disadvantages like intrusiveness and high costs that need to be addressed.
IRJET- Retinal Blood Vessel Tree Segmentation using Fast Marching MethodIRJET Journal
1) The document describes a study that used the Fast Marching Method (FMM) to segment retinal blood vessels from fundus images.
2) The FMM algorithm was validated using two public datasets, DRIVE and STARE, achieving segmentation accuracy of 93% on DRIVE images within 5-10 minutes and 90% accuracy on STARE images within 15 minutes.
3) By comparing FMM to other techniques like matched filters, the results showed FMM performance was close to higher resolution methods and overcame some other techniques.
Detection of Macular Edema by using Various Techniques of Feature Extraction ...IRJET Journal
This document presents a review of techniques for automatically detecting diabetic retinopathy by analyzing color fundus images. Diabetic retinopathy occurs when blood vessels in the retina are damaged from diabetes, and can lead to vision loss if left untreated. The document discusses existing work on feature extraction and classification methods for detecting signs of diabetic retinopathy like exudates and macular edema. It proposes a new method that focuses on extracting texture features from the region around the macula in order to accurately detect high-risk macular edema cases.
IRJET- Detection of White Blood Sample Cells using CNNIRJET Journal
This document presents a study that uses a convolutional neural network (CNN) to classify four types of white blood cells (WBCs) from microscope images of blood samples. The CNN model achieved 81% accuracy on a dataset of 15,000 labeled cell images. The CNN framework segments individual cells from images and extracts features to classify each cell as one of four types: neutrophils, lymphocytes, eosinophils, or monocytes. This automated classification approach using deep learning techniques could help diagnose blood-related diseases by reducing the time and expertise required for manual classification of cells under a microscope.
IRJET- Detection of White Blood Sample Cells using CNNIRJET Journal
This document presents a study that uses a convolutional neural network (CNN) to classify four types of white blood cells (WBCs) from microscope images of blood samples. The CNN model achieved 81% accuracy on a dataset of 15,000 labeled cell images. The CNN framework extracts features from the raw pixel data and classifies each cell image as one of the four WBC types. This automated classification approach reduces errors compared to traditional machine learning models and manual inspection by experts. The CNN architecture and training process are described along with experimental results demonstrating the model's performance on the WBC classification task.
This document presents a new iris segmentation method for iris recognition systems. The proposed method uses Canny edge detection and Hough transform to locate the iris boundary after finding the pupil boundary using image gray levels. Experiments on the CASIA iris image database of 756 images show the method can accurately detect the iris boundary in 99.2% of images. This is an improvement over other existing segmentation techniques. The key steps of the proposed method are preprocessing, segmentation using Canny edge detection and Hough transform, normalization using the rubber sheet model, feature encoding with Gabor wavelets, and matching with Hamming distance.
A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an...IJTET Journal
Sclera and finger print vein fusion is a new biometric approach for uniquely identifying humans. First, Sclera vein is identified and refined using image enhancement techniques. Then Y shape feature extraction algorithm is used to obtain Y shape pattern which are then fused with finger vein pattern. Second, Finger vein pattern is obtained using CCD camera by passing infrared light through the finger. The obtained image is then enhanced. A line shape feature extraction algorithm is used to get line patterns from enhanced finger vein image. Finally Sclera vein image pattern and Finger vein image pattern were combined to get the final fused image. The image thus obtained can be used to uniquely identify a person. The proposed multimodal system will produce accurate results as it combines two main traits of an individual. Therefore, it can be used in human identification and authentication systems.
Human recognition system based on retina vascular network characteristicscbnaikodi
This document proposes a human recognition system based on retina vascular network characteristics. The system uses fundus images of a person's retina as input. It performs pre-processing such as histogram equalization and edge detection using techniques like Sobel and Prewitt filters. It then compares the processed retina image to images stored in a database. If a matching image is found, the person is authorized, otherwise they are unauthorized. The system can accurately authenticate individuals and has applications in security environments like banking, military, and government that require high security.
Lung Cancer Detection using Convolutional Neural NetworkIRJET Journal
The document presents a study on detecting lung cancer using convolutional neural networks. Specifically, it uses the YOLO framework to accurately detect lung tumors and their locations in CT images. The proposed system first collects CT images and pre-processes them before training a YOLO object detection model. The trained model is then used to detect and localize tumors in test images and provide classification. Evaluation shows the model can successfully pinpoint tumors attached to blood vessels and distinguish between different types of lung cancer. The authors aim to improve the model through expanding the dataset and exploring updated deep learning techniques.
Retinal recognition uses the unique pattern of blood vessels in the retina to identify individuals. It is considered the most reliable biometric since the retina develops randomly and is difficult to alter. However, retinal scanners are invasive, expensive, and not widely accepted. They work by capturing an image of the retina using infrared light and extracting over 400 data points to create a template for identification. Factors like eye movement, distance from the lens, or a dirty lens can cause errors in scanning.
Segmentation of Blood Vessels and Optic Disc in Retinal Imagesresearchinventy
Retinal image analysis is increasingly prominent as a non-intrusive diagnosis method in modern ophthalmology. In this paper, we present a novel method to segment blood vessels and optic disc in the fundus retinal images. The method could be used to support non-intrusive diagnosis in modern ophthalmology since the morphology of the blood vessel and the optic disc is an important indicator for diseases like diabetic retinopathy, glaucoma and hypertension. Our method takes as first step the extraction of the retina vascular tree using the graph cut technique. The blood vessel information is then used to estimate the location of the optic disc. The optic disc segmentation is performed using two alternative methods. The Markov Random Field (MRF) image reconstruction method segments the optic disc by removing vessels from the optic disc region and the Compensation Factor method segments the optic disc using prior local intensity knowledge of the vessels. The proposed method is tested on three public data sets, DIARETDB1, DRIVE and STARE. The results and comparison with alternative methods show that our method achieved exceptional performance in segmenting the blood vessel and optic disc.
Global Situational Awareness of A.I. and where its headedvikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be un-leashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Predictably Improve Your B2B Tech Company's Performance by Leveraging DataKiwi Creative
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
State of Artificial intelligence Report 2023kuntobimo2016
Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines.
We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.
The State of AI Report is now in its sixth year. Consider this report as a compilation of the most interesting things we’ve seen with a goal of triggering an informed conversation about the state of AI and its implication for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Industry: Areas of commercial application for AI and its business impact.
Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
Predictions: What we believe will happen in the next 12 months and a 2022 performance review to keep us honest.
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will come present about related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs.This meetup was formerly Milvus Meetup, and is sponsored by Zilliz maintainers of Milvus.
The Ipsos - AI - Monitor 2024 Report.pdfSocial Samosa
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
Fig. 3. Flow diagram of proposed retinal recognition system
fication is very rare in literature. Landmark-based methods for retinal registration were presented in [12]-[13], using vessel bifurcations and crossover points as feature points. Another retinal registration method, based on the location of the optic disc, was presented in [14]. A cross-correlation-coefficient-based retinal identification was performed in [15]: the input image is first registered and then matched by correlating the vascular patterns. Bevilacqua et al. [16] proposed a vascular bifurcation and crossover point based system for retinal authentication, in which the comparison is performed by means of an accumulation matrix.
In this paper, we present a new method for secure personal identification based on the human retina. The main contribution of this research is the generation of a database for retinal recognition; such databases are very rare, and only one is available online. The paper includes other contributions as well, such as accurate segmentation of the vascular pattern, reliable extraction of feature points, and formulation of feature vectors for the vascular feature points.
The remainder of the paper is arranged as follows: Section II presents the proposed system and explains all three stages in detail. The experimental results are given in Section III, followed by the conclusion in the last section.
II. PROPOSED SYSTEM
Biometric-based security systems are a reliable source of security for highly sensitive areas. The human retina contains blood vessels which form a unique pattern that is almost impossible to forge in a false individual. The proposed system consists of three stages: preprocessing, feature extraction and vascular matching. In preprocessing, the system performs blood vessel enhancement and segmentation to extract the vascular pattern from the digital retinal image. It then extracts and validates the feature points from the vascular pattern, just like fingerprint minutiae points, and forms a feature vector for each point. In the last stage, the system performs vascular matching using Mahalanobis distances. Figure 3 shows the complete flow diagram of the proposed system. Like any other biometric system, our system broadly consists of two modules, i.e. enrollment and identification. The enrollment module captures retinal images from a number of persons and saves their feature vectors in the database through offline processing. The identification module performs online processing: it captures the retinal image of the person to be identified and matches his/her feature vector with those already saved in the database to establish the person's true identity.
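As a rough sketch, the enrollment/identification split can be expressed as follows. This is an illustrative outline only, not the authors' code: the class and method names are hypothetical, and a plain Euclidean nearest-template comparison stands in for the Mahalanobis-based Feature Distance used in the actual matching stage.

```python
import numpy as np

class RetinaIdentifier:
    # Minimal sketch of the two modules: enrollment stores feature vectors
    # offline; identification matches a probe against the stored templates.
    def __init__(self, threshold):
        self.db = {}          # person id -> list of feature vectors
        self.threshold = threshold

    def enroll(self, person_id, feature_vectors):
        # Offline processing: save the person's feature vectors.
        self.db[person_id] = [np.asarray(v, float) for v in feature_vectors]

    def identify(self, probe_vectors):
        # Online processing: score each enrolled person by counting probe
        # vectors whose nearest template lies within the threshold.
        best_id, best_score = None, 0
        for pid, templates in self.db.items():
            score = sum(
                1 for q in probe_vectors
                if min(np.linalg.norm(np.asarray(q, float) - t)
                       for t in templates) <= self.threshold)
            if score > best_score:
                best_id, best_score = pid, score
        return best_id
```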
A. Preprocessing
The basis of retinal recognition is the vascular pattern, which is unique among different human beings. It is therefore important to accurately extract the complete vascular structure from the input retinal image. The first phase of our proposed system is preprocessing, in which the system extracts the vascular pattern using a Gabor wavelet and a multilayered thresholding technique. Preprocessing also removes noise and extra regions from the input fundus image [17].
1) Blood Vessel Enhancement: In the proposed system, we have used a Gabor wavelet with a fixed scale value of 4, selected empirically, and a step of 10° in orientation.
2013 IEEE Symposium on Industrial Electronics & Applications (ISIEA2013), September 22-25, 2013, Kuching, Malaysia
91
3. Fig. 4. Flow diagram for proposed multilayered thresholding technique for blood vessel segmentation
This leads to 18 different wavelet responses, out of which we select the maximum wavelet response for each pixel [18].
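The enhancement step above can be sketched as follows, assuming a real-valued Gabor kernel swept over 18 orientations in 10° steps. Apart from the 18 orientations, the parameter values (kernel size, sigma, wavelength) are illustrative guesses, and the parametrization is a standard Gabor form rather than the exact filter of [18].

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    # Real-valued Gabor kernel at orientation theta (radians).
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return g * np.cos(2 * np.pi * xr / lambd + psi)

def max_gabor_response(img, n_orient=18, ksize=15, sigma=4.0, lambd=8.0):
    # Filter the image at n_orient orientations (10 degree steps) and keep
    # the maximum response per pixel, as in the enhancement stage.
    responses = [convolve(img.astype(float),
                          gabor_kernel(ksize, sigma, t, lambd))
                 for t in np.linspace(0, np.pi, n_orient, endpoint=False)]
    return np.max(responses, axis=0)
```

Taking the per-pixel maximum over orientations makes the response insensitive to the local vessel direction, which is why a single enhanced map suffices for thresholding.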
2) Blood vessel segmentation: The wavelet transformation enhances the blood vessels by giving high responses for vascular areas, but thick vessels still produce higher wavelet responses than thin vessels, so it is hard to find one optimal threshold value for accurate vascular extraction without a supervised algorithm. This is of great importance especially in the case of proliferative diabetic retinopathy (PDR), as new abnormal blood vessels are normally very thin. We presented a recursive supervised multilayered thresholding-based method for accurate vascular segmentation [18]. Figure 4 shows the flow diagram of the proposed multilayered thresholding technique for vascular pattern segmentation.
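The paper defers the details of the multilayered thresholding to [18], but one plausible reading of a recursive, layer-by-layer scheme is sketched below: thresholds are relaxed in stages, and weaker pixels are kept only when their connected component touches vessels already accepted at a stricter layer. The function and its parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import label

def multilayer_threshold(response, thresholds):
    # Start from the most confident (highest) threshold and relax it layer
    # by layer; newly passing pixels survive only when connected to vessels
    # accepted at a stricter layer. Isolated weak responses are discarded.
    vessels = response >= thresholds[0]
    for t in sorted(thresholds[1:], reverse=True):
        candidate = response >= t
        lbl, n = label(candidate)
        keep = np.zeros_like(vessels)
        for k in range(1, n + 1):
            comp = lbl == k
            if (comp & vessels).any():
                keep |= comp
        vessels = keep
    return vessels
```

The connectivity check is what lets thin vessel tails (weak response, but attached to a strong trunk) survive while isolated noise at the same intensity is rejected.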
The extracted vascular pattern consists of blood vessels of variable thickness. In order to make the width of the blood vessels equal to a single pixel, we apply a morphological thinning operation [19]. Figure 5 shows the step-by-step output of the preprocessing stage.
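The paper does not name the thinning algorithm used in [19]; the classic Zhang-Suen iterative thinning is one common choice and is sketched here for illustration.

```python
import numpy as np

def zhang_suen_thin(binary):
    # Zhang-Suen thinning: iteratively peel boundary pixels in two
    # sub-steps until every vessel is one pixel wide.
    # `binary` is a 0/1 array; border pixels are left untouched.
    img = binary.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for i in range(1, img.shape[0] - 1):
                for j in range(1, img.shape[1] - 1):
                    if img[i, j] == 0:
                        continue
                    # Eight neighbours p2..p9, clockwise from north.
                    p = [img[i-1, j], img[i-1, j+1], img[i, j+1],
                         img[i+1, j+1], img[i+1, j], img[i+1, j-1],
                         img[i, j-1], img[i-1, j-1]]
                    b = sum(p)                      # non-zero neighbours
                    if not (2 <= b <= 6):
                        continue
                    # Number of 0 -> 1 transitions around the point.
                    a = sum(p[k] == 0 and p[(k+1) % 8] == 1 for k in range(8))
                    if a != 1:
                        continue
                    if step == 0:
                        if p[0]*p[2]*p[4] != 0 or p[2]*p[4]*p[6] != 0:
                            continue
                    else:
                        if p[0]*p[2]*p[6] != 0 or p[0]*p[4]*p[6] != 0:
                            continue
                    to_delete.append((i, j))
            for i, j in to_delete:
                img[i, j] = 0
                changed = True
    return img
```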
Fig. 5. Preprocessing: a) Green channel retinal image; b) Enhanced blood vessels; c) Segmented blood vessels; d) Thinned blood vessels
B. Feature Extraction
The preprocessing phase extracts the thinned vascular structure from the input retinal image. The next phase is feature extraction.
1) Feature Points Extraction: The main features of the vascular pattern are vessel ending and bifurcation points (Fig. 6), just like fingerprint minutiae points [1].
Fig. 6. Structures of vascular features; i.e. vessel ending and bifurcations
The system uses the crossing number method (Eq. 5) to extract the vascular endings and bifurcations [20]; E(p) = 3 and E(p) = 1 correspond to vessel bifurcations and endings, respectively.
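Equation 5 is not reproduced in this excerpt; the standard crossing number is E(p) = (1/2) Σ_k |val(p_k) − val(p_{k+1})| over the eight neighbours of p taken in circular order. A sketch under that assumption:

```python
import numpy as np

def crossing_number(skel, i, j):
    # Eight neighbours of (i, j) in clockwise order, with the first value
    # repeated at the end so consecutive pairs wrap around.
    n = [skel[i-1, j], skel[i-1, j+1], skel[i, j+1], skel[i+1, j+1],
         skel[i+1, j], skel[i+1, j-1], skel[i, j-1], skel[i-1, j-1]]
    n.append(n[0])
    return sum(abs(int(n[k]) - int(n[k+1])) for k in range(8)) // 2

def feature_points(skel):
    # Classify each skeleton pixel: E(p) == 1 -> vessel ending,
    # E(p) == 3 -> bifurcation (E(p) == 2 is an ordinary vessel pixel).
    endings, bifurcations = [], []
    for i in range(1, skel.shape[0] - 1):
        for j in range(1, skel.shape[1] - 1):
            if skel[i, j]:
                e = crossing_number(skel, i, j)
                if e == 1:
                    endings.append((i, j))
                elif e == 3:
                    bifurcations.append((i, j))
    return endings, bifurcations
```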
2) Feature Points Validation: The feature points obtained in the feature extraction phase need to be validated before the matching process. In order to eliminate false endings and bifurcations due to spurs and small breaks, we apply a windowing technique. We take a window of size 9 × 9 with all initial values equal to 0 and place the candidate feature point, i.e. ending or bifurcation, at the center of the window, initializing the center point with a value of −1. A vessel ending has only one connected branch, so we assign 1 to all connected pixels and count 0-to-1 transitions while moving clockwise along the boundary of the window; the count should be equal to 1 for a valid vessel ending. We repeat the same procedure for vessel bifurcations and count the transitions: if the count is equal to 3, it is a valid bifurcation. Figure 7 shows true vessel endings and bifurcations and the validation process for both.
Fig. 7. Feature validation: a) Vessel ending; b) Validation of vessel endings; c) Vessel bifurcation; d) Validation of vessel bifurcation
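The windowing validation can be sketched as below, following the description above (9 × 9 window, clockwise boundary traversal, counting 0-to-1 transitions of the branch connected to the candidate point). The flood fill stands in for the paper's pixel-labelling step, the function name is hypothetical, and the candidate is assumed to lie at least 4 pixels from the image border.

```python
import numpy as np
from collections import deque

def validate_point(skel, i, j, expected):
    # Flood-fill the branch pixels connected to the candidate (i, j) inside
    # a 9x9 window, then count 0 -> 1 transitions of filled pixels along a
    # clockwise walk of the window boundary. A valid ending yields 1
    # transition; a valid bifurcation yields 3.
    half = 4
    win = skel[i-half:i+half+1, j-half:j+half+1].astype(int)
    filled = np.zeros_like(win)
    q = deque([(half, half)])
    filled[half, half] = 1
    while q:
        r, c = q.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < 9 and 0 <= cc < 9 and win[rr, cc] \
                        and not filled[rr, cc]:
                    filled[rr, cc] = 1
                    q.append((rr, cc))
    # Clockwise boundary: top row, right column, bottom row, left column.
    ring = ([filled[0, c] for c in range(9)] +
            [filled[r, 8] for r in range(1, 9)] +
            [filled[8, c] for c in range(7, -1, -1)] +
            [filled[r, 0] for r in range(7, 0, -1)])
    transitions = sum(ring[k] == 0 and ring[(k+1) % len(ring)] == 1
                      for k in range(len(ring)))
    return transitions == expected
```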
3) Feature Set Formation: Once the feature points are extracted and validated, the system forms a feature vector for each candidate feature point by calculating its distances and relative angles to its four nearest feature points. The proposed system represents each feature point with a feature set of the form < φ11, φ12, φ13, φ14, d11, d12, d13, d14 >, where φxy and dxy are the relative angle and distance between two feature points x and y, respectively. The relative angles make the matching process rotation invariant. Figure 8 shows the formation of the feature vector for a feature point.
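A sketch of the feature set formation, assuming the angles are measured relative to the nearest neighbour to obtain rotation invariance (the paper does not specify the reference direction, so that choice and the function name are assumptions):

```python
import numpy as np

def feature_vector(center, points):
    # Build <phi_1..phi_4, d_1..d_4> for `center` from its four nearest
    # neighbouring feature points.
    c = np.asarray(center, dtype=float)
    others = np.asarray([p for p in points if tuple(p) != tuple(center)],
                        dtype=float)
    dists = np.linalg.norm(others - c, axis=1)
    order = np.argsort(dists)[:4]        # four nearest neighbours
    nearest, d = others[order], dists[order]
    # Absolute bearings (row, col convention), then angles relative to the
    # nearest neighbour so the vector is rotation invariant.
    angles = np.arctan2(nearest[:, 0] - c[0], nearest[:, 1] - c[1])
    phi = (angles - angles[0]) % (2 * np.pi)
    return np.concatenate([phi, d])
```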
C. Feature Matching for Retinal Identification
Feature vectors for the different retinal images are stored in a database. The last stage identifies the input retinal image by matching the feature vectors of the query image with those stored in the database. The proposed system computes a Feature Distance (FD) by calculating the Mahalanobis distance [21] between the feature vectors of all feature points in the query image and the templates stored in the database. The matching phase finally computes a score value that represents how similar two retinal images are: it counts the number of feature vectors from the database that match the query image feature vectors on the basis of FD.
Fig. 8. Four nearest feature points and their relative orientations for the center feature point
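A minimal sketch of this matching stage, assuming the inverse covariance matrix `inv_cov` has been estimated beforehand from the pooled feature vectors and that `fd_threshold` is a hypothetical tuning parameter for declaring two feature vectors matched:

```python
import numpy as np

def mahalanobis(u, v, inv_cov):
    """Mahalanobis distance between two feature vectors."""
    diff = u - v
    return float(np.sqrt(diff @ inv_cov @ diff))

def match_score(query_vecs, template_vecs, inv_cov, fd_threshold):
    """Score = number of query feature vectors whose smallest Feature
    Distance (FD) to any template vector falls below the threshold."""
    score = 0
    for q in query_vecs:
        fds = [mahalanobis(q, t, inv_cov) for t in template_vecs]
        if min(fds) < fd_threshold:
            score += 1
    return score
```

With the identity matrix as `inv_cov`, the FD reduces to the Euclidean distance, which makes the behaviour easy to check on toy vectors.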
III. EXPERIMENTAL RESULTS
A biometric person recognition system requires thorough testing and validation. Compared to fingerprint or face recognition, only a few databases for retinal recognition are publicly available. VARIA [23] is the only database built specifically for retinal recognition systems; it includes 233 retinal images of 139 different persons at a resolution of 768 × 584. To further evaluate the proposed retina identification system, we built our own database with the help of the Armed Forces Institute of Ophthalmology (AFIO), Pakistan, and named it the Retinal Identification DataBase (RIDB). RIDB was collected from 20 different individuals with 5 images per person; overall it contains 100 images at a resolution of 1504 × 1000. To check the validity of person identification, the system was tested on both retinal image databases, and the results are shown in Table I.
TABLE I
RECOGNITION RATE OF PROPOSED METHOD
Database   Total images   Correctly recognized   Wrongly recognized   Recognition rate (%)
VARIA      233            232                    1                    99.57
RIDB       100            97                     3                    97
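The recognition rates in Table I follow directly from the counts:

```python
def recognition_rate(correct, total):
    """Recognition rate (%) = correctly recognized / total images."""
    return 100.0 * correct / total

varia = round(recognition_rate(232, 233), 2)   # 99.57
ridb = recognition_rate(97, 100)               # 97.0
```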
2013 IEEE Symposium on Industrial Electronics & Applications (ISIEA2013), September 22-25, 2013, Kuching, Malaysia
In order to further check the validity of the proposed system, we used the false acceptance rate (FAR), false rejection rate (FRR), equal error rate (EER) and recognition rate as performance merits; they are defined as follows:
• False Acceptance Rate (FAR) is the rate at which individuals are incorrectly accepted. It measures the number of cases in which the input vector is matched to a non-matching vector present in the database.
• False Rejection Rate (FRR) is the rate at which individuals are incorrectly rejected. It measures the number of cases in which the input vector is not matched to its actual vector present in the database.
• Receiver Operating Characteristic (ROC) is a curve that shows the relation between FAR and FRR; using the ROC we can trade off between these two parameters. ROC curves are also used to compute the area under the curve (AUC).
• Equal Error Rate (EER) is the rate at which FAR and FRR have the same value. The EER can easily be read from the ROC curve. In general, the device with the lowest EER is the most accurate.
• Recognition Rate (RR) is computed by running the system on test data and is the number of persons who are correctly classified.
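These definitions can be expressed over sets of genuine and impostor comparison scores; the sketch below assumes the convention from the matching stage that a higher score means greater similarity, and the score arrays are purely illustrative:

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor comparisons accepted (score >= t).
    FRR: fraction of genuine comparisons rejected (score < t)."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    far = float(np.mean(impostor >= threshold))
    frr = float(np.mean(genuine < threshold))
    return far, frr
```

Sweeping the threshold over its range traces out the FAR and FRR curves from which the ROC and the EER are obtained.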
In biometrics there is always a trade-off between FAR and FRR, and their error rates depend on the threshold value. Lowering the threshold reduces the FRR but increases the FAR, until we reach the zero-FRR point where the FRR equals 0. In the same way, zero FAR is the point where the FAR equals 0, which is reached by increasing the threshold. We computed FAR and FRR against different threshold values and determined the zero-FRR, EER and zero-FAR points. Table II summarizes the FAR and FRR values against different threshold values.
TABLE II
FAR AND FRR VALUES VS THRESHOLD
Threshold   FAR    FRR
0.24        0.41   0
0.30        0.33   0.01
0.50        0.09   0.03
0.54        0.05   0.05
0.60        0.02   0.09
0.65        0      0.10
Figure 9 shows the FAR and FRR curves for the proposed system, with the values of FAR, FRR and EER highlighted at different thresholds. At a threshold value of 0.542, FAR and FRR meet at the EER, which is 0.0557 for the proposed system.
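Using the FAR and FRR values reproduced from Table II, the EER operating point can be located programmatically; on this coarse grid the closest crossing is at threshold 0.54 with FAR = FRR = 0.05, consistent with the values of 0.542 and 0.0557 obtained from the finer-grained curves:

```python
import numpy as np

# Threshold, FAR and FRR values reproduced from Table II
thr = np.array([0.24, 0.30, 0.50, 0.54, 0.60, 0.65])
far = np.array([0.41, 0.33, 0.09, 0.05, 0.02, 0.00])
frr = np.array([0.00, 0.01, 0.03, 0.05, 0.09, 0.10])

# The EER is the operating point where FAR equals FRR; on a coarse
# grid, take the threshold where |FAR - FRR| is smallest
i = int(np.argmin(np.abs(far - frr)))
eer_threshold = float(thr[i])          # 0.54
eer = float((far[i] + frr[i]) / 2.0)   # 0.05
```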
Fig. 9. FAR vs FRR curves for the proposed system at different threshold values
IV. CONCLUSION
Automated person identification is very important for improving the security level, especially in highly sensitive areas. The human retina provides a reliable source for a biometric system that is almost impossible to forge. In this paper we presented an automated system for person identification based on the human retina. We proposed a three stage algorithm consisting of preprocessing, feature extraction and vascular matching. The first stage enhanced and segmented the blood vessels using wavelets and multilayered thresholding. To use the vascular pattern for matching, the proposed system used vessel endings and bifurcations as feature points and formed translation and rotation invariant feature vectors based on relative angles and distances. Matching is performed using the Mahalanobis distance, and the results demonstrate the validity of the proposed system.
REFERENCES
[1] S. Prabhakar, S. Pankanti, and A. K. Jain, “Biometric recognition:
Security and privacy concerns”, IEEE Security and Privacy, vol. 1, No.
2, pp. 33-42, 2003.
[2] Ravi Das, “Retinal recognition Biometric technology in practice”,
Keesing Journal of Documents & Identity, vol. 22, pp. 11-14, 2007.
[3] Simon, C., Goldstein, I., “A New Scientific Method of Identification”,
New York State Journal of Medicine, vol. 35, No. 18, pp. 901-906, 1935.
[4] Tower, P., “The Fundus Oculi in Monozygotic Twins: Report of Six Pairs
of Identical Twins”, Archives of Ophthalmology 54, pp. 225-239, 1955.
[5] J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever and B. van
Ginneken, “Ridge-based vessel segmentation in color images of the
retina”, IEEE Trans. Med. Imag., vol. 23, No. 4, pp. 501-509, 2004.
[6] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar, H. F. Jelinek and M.
J. Cree., “Retinal vessel segmentation using the 2-D gabor wavelet and
supervised classification”, IEEE Trans. on Med. Imag, vol. 2, No. 9,
1214-1222, 2006
[7] G. G. Yen and W.-F. Leong, “A sorting system for hierarchical grading of diabetic fundus images: A preliminary study”, IEEE Trans. Inf. Technol. Biomed., vol. 12, No. 1, pp. 118-130, 2008.
[8] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum:
“Detection of blood vessels in retinal images using two-dimensional
matched filters”. IEEE Trans. Med. Imag. 8(3), 263-269,1989.
[9] M. U. Akram, S. Khalid, S. A Khan, “Identification and Classification
of Microaneurysms for Early Detection of Diabetic Retinopathy”, Pattern
Recognition (Elsevier), Vol 46, No.1, 107-116, 2013
[10] D. Marín, A. Aquino, M. E. Gegúndez-Arias, and J. M. Bravo, “A New Supervised Method for Blood Vessel Segmentation in Retinal Images by Using Gray-Level and Moment Invariants-Based Features”, IEEE Transactions on Medical Imaging, vol. 30, no. 1, 2011.
[11] A. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating blood vessels
in retinal images by piecewise threshold probing of a matched filter
response”. IEEE Trans. Med. Imag. 19(3), 203-210,2000.
[12] M. Ortega, M.G. Penedo, J. Rouco, N. Barreira, and M.J. Carreira,
“Retinal verification using a feature points-based biometric pattern”,
EURASIP J. Adv. Signal Process, pp. 1-13, 2009.
[13] Z.W. Xu, X.X. Guo, X.Y. Hu, X. Chen, and Z.X. Wang, “The identification and recognition based on point for blood vessel of ocular fundus”, In Proc. ICB 2006, LNCS 3832, pp. 770-776, 2006.
[14] H. Farzin, H. Abrishami-Moghaddam, and M.-S. Moin, “A novel retinal
identification system”, EURASIP J. Adv. Signal Process, pp. 1-10, 2008.
[15] K. Fukuta, T. Nakagawa, Y. Hayashi, Y. Hatanaka, T. Hara, H. Fujita,
“Personal Identification Based on Blood Vessels of Retinal Fundus
Images”, Proc. of SPIE 6914, 2008.
[16] V. Bevilacqua, L. Cariello, D. Columbo, D. Daleno, M. D. Fabiano,
M. Giannini, G. Mastronardi, M. Castellano, “Retinal Fundus Biometric
Analysis for Personal Identifications”, ICIC LNAI 5227, pp. 1229-1237,
2008.
[17] A. Tariq, M. U. Akram, “An Automated System for Colored Retinal
Image Background and Noise Segmentation”, IEEE Symposium on
Industrial Electronics and Applications (ISIEA 2010), pp. 405-409, 2010.
[18] M. U. Akram and S. A. Khan, “Multilayered Thresholding Based Blood
Vessel Segmentation for Screening of Diabetic Retinopathy”, Engineering
with Computers (EWCO), Vol. 29, No. 2, pp. 165-173, 2013.
[19] R. C. Gonzalez and R. E. Woods, “Digital image processing”, Prentice
hall, second edition, 2002.
[20] A. K. Jain, R. Bolle, and S. Pankanti, “Biometrics: Personal Identification in a Networked Society”, Springer-Verlag New York Inc., 1999.
[21] R. O. Duda, P. E. Hart, and D. G. Stork, “Pattern Classification”, New
York, Wiley, 2001.
[22] A. Hoover, M. Goldbaum, “Locating the optic nerve in a retinal image
using the fuzzy convergence of the blood vessels”, IEEE Trans. on
Medical Imaging, vol 22, No. 8, pp. 951-958, 2003.
[23] VARIA,“Varpa retinal images for authentication”,
http://www.varpa.es/varia.html.