The document discusses object recognition in computer vision. It begins with an overview of object recognition, describing it as the task of finding and identifying objects in images. It then discusses several specific applications of object recognition, including fingerprint recognition and license plate recognition. Fingerprint recognition involves extracting features called minutiae from fingerprint images, which are ridge endings and bifurcations. License plate recognition uses an ALPR system to segment character images, normalize them, and recognize the characters.
2. Outline
• Introduction (Computer Vision)
• History
• Human Vision Vs. Computer Vision
• Main Goal of Computer Vision
• Significance of Computer Vision
• Connections to other Disciplines
• Key Stages in Digital Image Processing
• Object Recognition
• What is Object Recognition?
• What is Pattern Recognition?
• Approaches
• Applications
• Main Components
• Gender Example
3. Outline
• Fingerprint Recognition
• Definition
• Fingerprint Matching Using Ridge Endings and Bifurcations
• Fingerprint Image
• Binarization
• Thinning
• Minutiae Extraction
• Car Number Plate Recognition
• What is an ALPR System?
• ALPR Procedure
• Characters Recognition
• Characters Segmentation
• Normalization of Characters
• New Innovations in Object Recognition
• References
4. Brief History of Computer Vision
• 1966: Minsky assigns computer vision as an undergraduate summer project
• 1960s: interpretation of synthetic worlds
• 1970s: some progress on interpreting selected images
• 1980s: ANNs come and go; shift toward geometry and increased mathematical rigor
• 1990s: face recognition; statistical analysis in vogue
• 2000s: broader recognition; large annotated datasets available; video processing starts
• 2030s: robot uprising?
(Image credits: Guzman '68; Ohta & Kanade '78; Turk and Pentland '91)
5. Human Vision
Vision is the process of discovering what is present in the world, and where it is, by looking.
Computational algorithms are implemented in a massive network of neurons; they obtain their inputs from the retina and produce as output an "understanding" of the scene in view.
But what does it mean to "understand" the scene? What algorithms and data representations are used by the brain?
6. Computer Vision
Computer Vision is the study of the analysis of pictures and videos in order to achieve results similar to those achieved by humans.
Analogously, given a set of TV cameras: what computer architectures, data structures, and algorithms should we use to create a machine that can "see" as we do?
7. Human Vision vs. Computer Vision
What we see vs. what a computer sees.
8. Main Goal of Computer Vision
Every picture tells a story!
The goal: write computer programs that can interpret images.
11. What is Digital Image Processing?
• The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes.
• Low-level process. Input: image; output: image. Examples: noise removal, image sharpening.
• Mid-level process. Input: image; output: attributes. Examples: object recognition, segmentation.
• High-level process. Input: attributes; output: understanding. Examples: scene understanding, autonomous navigation.
12. Key Stages in Digital Image Processing
Problem Domain → Image Acquisition → Image Enhancement → Image Restoration → Colour Image Processing → Image Compression → Morphological Processing → Segmentation → Representation & Description → Object Recognition
22. What is Object Recognition?
• The last step in image processing
• It is the task of finding and identifying objects in an image or video sequence
Like human understanding, it includes:
• Detection of separate objects
• Description of their geometry and positions in 3D
• Classification as being one of a known class
• Identification of the particular instance
• Understanding of spatial relationships between objects
28. Scene and context categorization/Understanding
• outdoor
• city
• …
Slide credit Fei-Fei, Fergus, Torralba CVPR07 Short Course
29. Learning and Adaptation
• Supervised learning
– A teacher provides a category label or cost for
each pattern in the training set
• Unsupervised learning
– The system forms clusters or “natural groupings”
of the input patterns
31. What is Pattern Recognition?
• A pattern is an object, process or event that can be given a
name.
• A pattern class (or category) is a set of patterns sharing
common attributes and usually originating from the same
source.
• During recognition (or classification) given objects are
assigned to prescribed classes.
• A classifier is a machine which performs classification.
"The assignment of a physical object or event to one of several prespecified categories" -- Duda & Hart
33. Components of Pattern Recognition (Cont'd)
• Data acquisition and sensing
• Pre-processing: removal of noise in data; isolation of patterns of interest from the background.
• Feature extraction: finding a new representation in terms of features (better for further processing).
34. Components of Pattern Recognition (Cont'd)
• Model learning and estimation: learning a mapping between features and pattern groups.
• Classification: using learned models to assign a pattern to a predefined category.
• Post-processing: evaluation of confidence in decisions; exploitation of context to improve performance.
36. Pattern Representation
• A pattern is represented by a set of d features, or attributes, viewed as a d-dimensional feature vector:
x = (x1, x2, …, xd)^T
37. Basic Concepts
Feature vector x = (x1, x2, …, xn)^T
- A vector of observations (measurements).
- x is a point in the feature space X.
Hidden state y ∈ Y
- Cannot be directly measured.
- Patterns with equal hidden state belong to the same class.
Task
- To design a classifier (decision rule) q: X → Y which decides about a hidden state based on an observation.
38. Feature Extraction
Task: to extract features which are good for classification.
Good features:
• Objects from the same class have similar feature values.
• Objects from different classes have different feature values.
"Good" features vs. "bad" features.
39. Feature Extraction Methods
Feature extraction maps an observation vector x = (x1, …, xn)^T to a feature vector m = (m1, …, mk)^T through functions φ1, …, φk; feature selection instead keeps a subset of the original measurements.
The problem can be expressed as optimization of the parameters θ of the feature extractor φ(θ).
Supervised methods: the objective function is a criterion of separability (discriminability) of labeled examples, e.g., linear discriminant analysis (LDA).
Unsupervised methods: a lower-dimensional representation which preserves important characteristics of the input data is sought, e.g., principal component analysis (PCA).
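As an illustration of the unsupervised route, a minimal PCA feature extractor can be sketched with NumPy. The toy data and the choice of k = 2 components are assumptions for illustration, not from the slides:

```python
import numpy as np

def pca_extract(X, k):
    """Project n-dimensional samples (rows of X) onto the k principal
    components, i.e. the directions of largest variance."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # k leading directions
    return Xc @ top                          # k-dimensional features

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                # 100 toy samples, 3 measurements
features = pca_extract(X, 2)                 # 100 samples, 2 features each
```

The first output column carries at least as much variance as the second, since the components are sorted by eigenvalue.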
40. Classifier
A classifier partitions the feature space X into class-labeled regions X1, X2, …, X|Y| such that
X = X1 ∪ X2 ∪ … ∪ X|Y| and Xi ∩ Xj = ∅ for i ≠ j.
Classification consists of determining to which region a feature vector x belongs.
Borders between decision regions are called decision boundaries.
41. Representation of a Classifier
A classifier is typically represented as a set of discriminant functions
fi(x): X → R, i = 1, …, |Y|.
The classifier assigns a feature vector x to the i-th class if fi(x) > fj(x) for all j ≠ i.
Schematically: feature vector x → discriminant functions f1(x), f2(x), …, f|Y|(x) → max → class identifier y.
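The discriminant-function rule can be sketched in a few lines of Python; the three linear discriminants below are made-up examples for illustration, not from the slides:

```python
import numpy as np

def classify(x, discriminants):
    """Assign x to the class whose discriminant function is largest.
    `discriminants` is a list of callables f_i(x), one per class."""
    scores = [f(x) for f in discriminants]
    return int(np.argmax(scores))

# Hypothetical linear discriminants f_i(x) for a 3-class problem
fs = [lambda x: x[0] - x[1],
      lambda x: x[1],
      lambda x: -x[0]]

label = classify(np.array([2.0, 0.5]), fs)   # f_0 = 1.5 is the maximum here
```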
42. Block Diagram
Both definitions may be depicted by the following block diagram:
Object → Pattern → [Feature extraction] → Features → [Classification] → Class / Category
The process consists of two major operations: feature extraction and classification.
43. Example: Gender
Assume an algorithm to recognize the gender of a student in a university, where the available input is several features of the students (of course, the gender itself cannot be one of the features).
The student to be classified is the Object, the genders (Male or Female) are the Classes, and the input which refers to the student is the Pattern.
44. What is a Feature?
A feature is a scalar x which quantitatively describes a property of the Object.
Example: possible features of a student:
• Number of eyes: x ∈ {0, 1, 2}
• Hair color: x ∈ {0 = Black, 1 = Blond, 2 = Red, …}
• Wears glasses or not: x ∈ {0, 1}
• Hair length [cm]: x ∈ [0..100]
• Shoe size [U.S.]: x ∈ {3.5, 4, 4.5, …, 14}
• Height [cm]: x ∈ [40..240]
• Weight [kg]: x ∈ [30..600]
45. What is Feature Extraction?
"When we have two or more classes, feature extraction consists of choosing those features which are most effective for preserving class separability." (Fukunaga, p. 441)
Assume we choose the shoe size of the student as a feature. The selection is heuristic and seems reasonable.
46. What is a Pattern? (Slide credit: Alon Slapak)
A pattern is an N-tuple X (vector) of N scalars xi, i ∈ [1, N], which are called the Features.
The conventional form of a pattern is:
X = (x1, x2, …, xN)^T, X ∈ V
where V is known as the Feature Space, and N is the dimension of V.
47. Possible patterns for the gender problem
We can use the shoe size alone: X = (shoe size).
We can combine the height and the weight: X = (height, weight)^T.
We can even combine the height, the weight and the shoe size to be on the safe side: X = (height, weight, shoe size)^T.
Or, we can use them all: X = (# of eyes, hair color, glasses, hair length, height, weight, shoe size)^T.
48. Example
Assume we are using the height and the weight of each of the students in the university as a pattern.
The height and the weight are both features, which span a feature space V of dimension 2. Each student is characterized by a vector of two features: (height, weight).
Since the male students and the female students differ from each other in height and weight, we expect to obtain two separated clusters.
[Scatter plot, height [cm] vs. weight [kg]: each student is represented as a point in the feature space; patterns of male students are depicted in blue, and those of female students in red.]
49. What is a Class?
"A class is a set of patterns that share some common properties." (Wang, p. 10)
In our example, the Male students and the Female students are two classes of objects, each sharing a common gender.
50. What is Classification?
Classification is a mathematical function or algorithm which assigns a pattern to one of the classes.
Example: we can draw a line between the two clusters in the gender example, and every student will be classified as female or male according to this line.
[Scatter plot, height [cm] vs. weight [kg], with a separating line between the Males and Females clusters.]
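The "draw a line" rule can be written as a linear decision function. The coefficients below are invented illustration values, not fitted to real student data:

```python
def classify_student(height_cm, weight_kg):
    """Toy linear classifier for the gender example: points on one side of
    the line score positive ('male'), the other side negative ('female').
    The weights 0.6, 0.4 and the offset 130 are illustrative assumptions."""
    score = 0.6 * height_cm + 0.4 * weight_kg - 130.0
    return "male" if score > 0 else "female"
```

A taller, heavier sample such as (185 cm, 85 kg) lands on the positive side of this particular line, while (155 cm, 45 kg) lands on the negative side.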
51. Clusters Separation
Misclassifications are a consequence of the separation of the clusters. The separation of clusters is quantified using two major methods:
1. Mathematically: there are several separation criteria.
2. "Intuitively": by the overlap of the clusters.
[Plots: separable clusters, almost separable clusters, non-separable clusters.]
52. Classification Quality
WARNING! Although the idea is well illustrated, it is a bad habit to judge classification quality by the visual representation of clusters.
The classification quality strongly depends on the cluster separation; the cluster separation strongly depends on the feature selection; therefore feature selection is of paramount importance to classification quality.
54. Fingerprint
The most popular biometric used to authenticate a person is the fingerprint, which is unique and permanent throughout a person's life.
A fingerprint is the pattern of ridges and valleys.
The ridges have characteristic points, called minutiae: the ridge ending and the ridge bifurcation.
A ridge ending is defined as the point where a ridge ends abruptly.
A ridge bifurcation is defined as the point where a ridge forks into branch ridges.
57. Fingerprint Recognition
Fingerprint recognition or fingerprint authentication refers to the method of verifying a match between two human fingerprints.
Fingerprint recognition techniques have the advantage of using low-cost, standard capture devices.
However, recognition of the fingerprint becomes a complex computer vision problem, especially when dealing with noisy and low-quality images.
Minutia matching is widely used for fingerprint recognition; minutiae are classified as ridge endings and ridge bifurcations.
59. Fingerprint Image
• The input fingerprint image is a grayscale image of a person's fingerprint, with intensity values ranging from 0 to 255.
• A number of methods are used to acquire fingerprints.
• The inked impression method remains the most popular one.
• Inkless fingerprint scanners are also available.
61. Binarization
Binarization is used to convert a grayscale image into a binary image by fixing a threshold value.
Pixel values above the threshold are set to '1' and pixel values below the threshold are set to '0'.
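A minimal sketch of this thresholding step with NumPy; the threshold value 128 is an assumed default, not specified in the slides:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Convert a grayscale image (values 0..255) to a binary image:
    pixels above the threshold become 1, all others become 0."""
    return (gray > threshold).astype(np.uint8)
```

For example, `binarize(np.array([[0, 200], [128, 255]]))` maps the two bright pixels to 1 and the rest to 0.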
63. Thinning
The binarized image is thinned using a block filter to reduce the thickness of all ridge lines to a single-pixel width, so that minutiae points can be extracted effectively.
Thinning does not change the location of minutiae points compared to the original fingerprint.
65. Minutiae Extraction
Classification of ridge-end and ridge-bifurcation points is done by creating a matrix around each candidate point.
The Crossing Number is used to locate the minutiae points in the fingerprint image.
The Crossing Number is defined as half of the sum of the differences between the intensity values of pairs of adjacent pixels around a point.
66. Classifying points by Crossing Number:
• If the Crossing Number is 1, the minutia point is classified as a termination.
• If the Crossing Number is 2, the point is classified as a normal ridge point.
• If the Crossing Number is 3 or greater, the point is classified as a bifurcation.
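The crossing-number rule can be sketched as follows, assuming a thinned binary skeleton with ridge pixels equal to 1 (a simplified illustration, not the authors' implementation):

```python
import numpy as np

def crossing_number(skel, r, c):
    """Half the sum of absolute differences between consecutive pixels in the
    8-neighborhood of (r, c), walked in a closed clockwise cycle."""
    nbrs = [skel[r-1, c-1], skel[r-1, c], skel[r-1, c+1],
            skel[r,   c+1], skel[r+1, c+1], skel[r+1, c],
            skel[r+1, c-1], skel[r,   c-1]]
    nbrs.append(nbrs[0])  # close the cycle
    return sum(abs(int(a) - int(b)) for a, b in zip(nbrs, nbrs[1:])) // 2

def extract_minutiae(skel):
    """Return (row, col) lists of ridge endings (CN == 1) and
    bifurcations (CN >= 3); CN == 2 is a normal ridge point."""
    endings, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skel[r, c] != 1:
                continue
            cn = crossing_number(skel, r, c)
            if cn == 1:
                endings.append((r, c))
            elif cn >= 3:
                bifurcations.append((r, c))
    return endings, bifurcations

skel = np.zeros((5, 5), dtype=int)
skel[2, 1:4] = 1                      # a short horizontal ridge
endings, bifurcations = extract_minutiae(skel)
```

On this tiny ridge, the two end pixels come back with crossing number 1 (terminations) and the middle pixel with 2 (normal ridge), so no bifurcations are reported.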
70. Image Acquisition
• Iimage.jpg: the input image acquired from the reader.
• Timage.jpg: the template image retrieved from the database.
71. Computation of Points
After the detection of minutiae points, the matching algorithm needs to calculate the total number of available points in the fingerprint image, counting each type separately.
To perform this computation, two counter variables are used: one for ridge-end points and one for bifurcation points.
73. Location Detection of Points
Each minutia point in the fingerprint image has a specific location.
The location information of each point is important to store for further matching of fingerprints.
The location of every point in the digital image is given by its pixel position, so it can be recorded and stored separately for both ridge-end and bifurcation points.
75. Amount and Location Matching
In the previous steps, all the required information about the points was computed and stored.
In the matching step, the algorithm compares the computed values with the stored values.
The algorithm first compares the combined counts of ridge-end and bifurcation points with the stored data.
If the counts match, the algorithm then compares the locations of the ridge points with the stored location data.
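The two-stage comparison (counts first, then locations) can be sketched as follows; the pixel tolerance and the data layout are assumptions for illustration, not the paper's actual matcher:

```python
def match_fingerprints(input_minutiae, template_minutiae, tol=2):
    """Hypothetical two-stage matcher: each argument is a pair of lists
    (ridge_end_points, bifurcation_points) of (row, col) tuples."""
    in_ends, in_bifs = input_minutiae
    tp_ends, tp_bifs = template_minutiae
    # Stage 1: the amounts of each minutia type must agree
    if (len(in_ends), len(in_bifs)) != (len(tp_ends), len(tp_bifs)):
        return False
    # Stage 2: every input point must have a template point within tolerance
    def close(p, q):
        return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol
    return (all(any(close(p, q) for q in tp_ends) for p in in_ends) and
            all(any(close(p, q) for q in tp_bifs) for p in in_bifs))
```

A count mismatch rejects immediately, before any location comparison, which mirrors the ordering described on the slide.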
79. What is an ALPR System?
Some toll road requirements we encounter:
• 99.9% image capture
• 99% overall plate read accuracy
• 99% OCR accuracy on 90% of captured plate images (number and state)
80. The procedure is based on extraction of the plate region, segmentation of the plate characters, and recognition of the characters.
[Figure: recognized characters]
82. Characters Segmentation
• In the segmentation of plate characters, the car number plate is segmented into its constituent parts to obtain its characters individually. The image is filtered to remove unwanted spots and noise.
• Dilation of the image separates the characters from each other.
[Figure: segmented plate number]
83. Separating the Plate Characters
This is done by finding the starting and end points of characters in the horizontal direction.
[Figure: characters separated individually]
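Finding the start and end points of characters in the horizontal direction can be sketched as a column scan; representing the plate as a binary array with character pixels equal to 1 is an assumption for illustration:

```python
import numpy as np

def segment_characters(binary_plate):
    """Return (start, end) column ranges of characters, found by scanning
    for runs of columns that contain at least one foreground pixel."""
    col_has_ink = binary_plate.any(axis=0)   # one flag per column
    segments, start = [], None
    for c, ink in enumerate(col_has_ink):
        if ink and start is None:
            start = c                        # a character starts here
        elif not ink and start is not None:
            segments.append((start, c))      # character ends (exclusive)
            start = None
    if start is not None:                    # character touching right edge
        segments.append((start, len(col_has_ink)))
    return segments
```

On a plate with ink in columns 1-2 and column 5, the scan reports the two ranges `(1, 3)` and `(5, 6)`.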
84. Normalization of Characters
Normalization refines each character into a block containing no extra white space (pixels) on any of the four sides of the character. This is sometimes called contrast stretching.
Afterwards, each character should be equal in size.
85. Below is an example of a normalized character, where the character fills all four sides.
[Figure: normalized character]
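A sketch of this normalization step: crop the blank border, then resize each character to a fixed block. The 16x16 output size and the nearest-neighbor sampling are assumptions for illustration, not from the slides:

```python
import numpy as np

def normalize_character(char_img, out_h=16, out_w=16):
    """Trim surrounding blank rows/columns, then resample the remaining
    block to a fixed size with simple nearest-neighbor index selection
    (a real ALPR system would use a library resampler)."""
    rows = np.flatnonzero(char_img.any(axis=1))   # rows containing ink
    cols = np.flatnonzero(char_img.any(axis=0))   # columns containing ink
    cropped = char_img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    h, w = cropped.shape
    r_idx = (np.arange(out_h) * h) // out_h       # nearest source rows
    c_idx = (np.arange(out_w) * w) // out_w       # nearest source columns
    return cropped[np.ix_(r_idx, c_idx)]
```

After cropping, the character touches all four sides of the block, matching the normalized example above.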
89. Supermarket Scanner Recognizes Objects Without Barcodes
Uses object recognition to identify foods at the supermarket checkout line.
The technology uses a camera that compares the food being scanned to a large, expandable database of products.
The camera filters out background "noise" in its picture, so that it only sees objects held close to its lens against a neutral black background.
The technology recognizes supermarket items at checkout without requiring a bar code, making bar codes obsolete for checkout purposes.
It uses proprietary pattern recognition technology and claims it can operate at high speeds.
This object recognition system requires a database that contains information about the items in the supermarket.
The system claims to be able to make very precise identification of produce.
91. Google Patents New Object Recognition Technology, Likely Has Plans to Use It with YouTube
It's known as "automatic large-scale video object recognition."
It can actually recognize the difference between a variety of objects, not just human faces.
After recognizing an object, it labels it with certain tags; a special object-name repository is involved.
This database would hold at least 50,000 object names, plus information and shapes that allow for easy identification.
93. Android Eye
Android Eye is an advanced object recognition app. Take a picture of any object, and Android Eye will tell you what it is.
Take a picture of a car, and Android Eye will tell you the make and model. Take a picture of a foreign t-shirt label, and Android Eye will tell you the brand and where the shirt is from. Take a picture of a tree, a ball, a person: the results are endless.
This is the new version; it works very well, particularly with vehicles, products, brands, and well-known "things". It also guesses celebrity names.
Software that does this is usually only available to government agencies and research facilities.
95. A Google Glass App Knows What You're Looking At
An app for Google's wearable computer Glass can recognize objects in front of a person wearing the device.
Google has shown that the camera integrated into Google Glass, the company's head-worn computer, can capture some striking video. They built an app that uses that camera to recognize what a person is looking at.
The app was built at an employee hack session held by the company this month to experiment with ways to demonstrate their new image recognition service.
The app can either work on photos taken by a person wearing Glass, or constantly grab images from the device's camera. Those are sent to the cloud or a nearby computer for processing by AlchemyAPI's image recognition software. The software sends back its best guess at what it sees, and then Glass will display, or speak, the verdict.
Computer vision is the process of extracting useful information from digital images: finding objects of interest in images, determining properties of objects (size, shape, color), and recognizing objects. It is also known as machine vision, robot vision, computational vision, or image understanding.