This document discusses support vector machines (SVMs) for pattern classification. It begins with an introduction to SVMs, noting that they construct a hyperplane to maximize the margin of separation between positive and negative examples. It then covers finding the optimal hyperplane for linearly separable and nonseparable patterns, including allowing some errors in classification. The document discusses solving the optimization problem using quadratic programming and Lagrange multipliers. It also introduces the kernel trick for applying SVMs to non-linear decision boundaries using a kernel function to map data to a higher-dimensional feature space. Examples are provided of applying SVMs to the XOR problem and computer experiments classifying a double moon dataset.
Neural Networks: Support Vector machines
1. CHAPTER 06
SUPPORT VECTOR MACHINES
CSC445: Neural Networks
Prof. Dr. Mostafa Gadal-Haqq M. Mostafa
Computer Science Department
Faculty of Computer & Information Sciences
AIN SHAMS UNIVERSITY
(some of the figures in this presentation are copyrighted to Pearson Education, Inc.)
2. ASU-CSC445: Neural Networks Prof. Dr. Mostafa Gadal-Haqq
Outline
Introduction
Optimal Hyperplane for Linearly Separable Patterns
Quadratic Optimization for Finding the Optimal Hyperplane
Optimal Hyperplane for Nonseparable Patterns
Underlying Philosophy of SVM for Pattern Classification
The SVM Viewed as a Kernel Machine
The XOR Problem
Computer Experiment
3. Introduction
The main idea of SVMs may be summed up as follows:
"Given a set of training samples, the SVM constructs a hyperplane as the decision surface in such a way that the margin of separation between positive and negative examples is maximized."
4. Linearly Separable Patterns
The SVM is a binary learning machine.
Binary classification is the task of separating classes in feature space by a hyperplane:
w^T x + b = 0, with w^T x + b > 0 on one side and w^T x + b < 0 on the other.
The decision function is g(x) = w^T x + b.
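As a small illustration, the decision rule above can be sketched in Python. The weight vector w and bias b here are made-up values for illustration, not taken from the slides:

```python
import numpy as np

# Hypothetical weights and bias, chosen only for illustration.
w = np.array([2.0, -1.0])
b = -0.5

def g(x):
    # Decision function g(x) = w^T x + b
    return w @ x + b

# The sign of g(x) assigns the class.
x_pos = np.array([1.0, 0.0])   # g = 1.5  -> class +1
x_neg = np.array([0.0, 1.0])   # g = -1.5 -> class -1
print(int(np.sign(g(x_pos))), int(np.sign(g(x_neg))))   # 1 -1
```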
5. Linearly Separable Patterns
Which of the possible linear separators is optimal?
6. Optimal Decision Boundary
The optimal decision boundary is the one that maximizes the margin ρ.
(Figure: the margin ρ and the distance r from a point to the boundary.)
8. The Margin
Any point x can be written as x = x_p + r (w / ||w||), where x_p is the projection of x onto the decision surface and r is the algebraic distance of x from it.
Then g(x) = w^T x + b = w^T x_p + b + r ||w||.
Since g(x_p) = w^T x_p + b = 0, we get g(x) = r ||w||, and therefore r = g(x) / ||w||.
9. The Margin
For the support vectors, g(x) = w^T x + b = ±1 for d = ±1, so that:
r = g(x) / ||w|| = 1/||w|| if d = +1, and r = -1/||w|| if d = -1.
The hyperplanes w^T x + b = +1 and w^T x + b = -1 bound the region of separation around w^T x + b = 0.
Then the margin is given as:
ρ = 2r = 2 / ||w||
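The distance and margin formulas are easy to check numerically. The values of w and b below are assumed examples, not from the slides:

```python
import numpy as np

# Assumed example values (not from the slides): ||w|| = 5.
w = np.array([3.0, 4.0])
b = 1.0

def algebraic_distance(x):
    # r = g(x) / ||w||
    return (w @ x + b) / np.linalg.norm(w)

# Margin between the hyperplanes w^T x + b = +1 and w^T x + b = -1:
rho = 2.0 / np.linalg.norm(w)
print(rho)   # 0.4
```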
10. Optimal Decision Boundary
Let {x_1, ..., x_N} be our data set and let d_i ∈ {+1, -1} be the class label of x_i.
The decision boundary should classify all points correctly.
That is, we have a constrained optimization problem:
Maximize ρ = 2r = 2 / ||w||, or equivalently minimize ||w||,
subject to d_i (w^T x_i + b) ≥ 1 for all i.
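The constraints can be checked directly for a candidate hyperplane. The data set below is a toy example assumed for illustration:

```python
import numpy as np

def feasible(w, b, X, d):
    # Check the constraints d_i (w^T x_i + b) >= 1 for every training point.
    return bool(np.all(d * (X @ w + b) >= 1.0 - 1e-12))

# Toy separable data (assumed for illustration).
X = np.array([[2.0, 2.0], [-2.0, -2.0]])
d = np.array([1.0, -1.0])
print(feasible(np.array([0.25, 0.25]), 0.0, X, d))   # True
print(feasible(np.array([0.1, 0.1]), 0.0, X, d))     # False: functional margin < 1
```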
11. The Optimization Problem
Introduce Lagrange multipliers α_i ≥ 0.
That is, the Lagrange function
J(w, b, α) = (1/2)||w||^2 - Σ_{i=1}^{N} α_i [d_i (w^T x_i + b) - 1]
is to be minimized with respect to w and b, i.e.,
∂J(w, b, α)/∂w = 0 and ∂J(w, b, α)/∂b = 0.
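The Lagrange function J(w, b, α) = ½||w||² − Σ α_i [d_i(w^T x_i + b) − 1] can be evaluated numerically; on a toy data set (values assumed for illustration), at the optimum the bracketed constraint terms vanish for the support vectors, leaving J = ½||w||²:

```python
import numpy as np

def lagrangian(w, b, alpha, X, d):
    # J(w, b, alpha) = 1/2 ||w||^2 - sum_i alpha_i [d_i (w^T x_i + b) - 1]
    return 0.5 * (w @ w) - alpha @ (d * (X @ w + b) - 1.0)

# Toy data and (hypothetical) optimal values, assumed for illustration.
X = np.array([[2.0, 2.0], [-2.0, -2.0]])
d = np.array([1.0, -1.0])
w_opt = np.array([0.25, 0.25])
alpha = np.array([0.0625, 0.0625])
print(lagrangian(w_opt, 0.0, alpha, X, d))   # 0.0625, i.e. 1/2 ||w||^2
```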
12. Solving the Optimization Problem
We need to optimize a quadratic function subject to linear constraints.
The solution involves constructing a dual problem in which a Lagrange multiplier α_i is associated with every constraint in the primal problem:
Find α_1, ..., α_N such that
Q(α) = Σ_{i=1}^{N} α_i - (1/2) Σ_i Σ_j α_i α_j d_i d_j x_i^T x_j
is maximized, subject to
(1) Σ_{i=1}^{N} α_i d_i = 0
(2) α_i ≥ 0 for all i.
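The dual objective Q(α) is straightforward to evaluate. The data set and the α vector below are a toy example assumed for illustration (in general a quadratic programming solver finds the maximizing α):

```python
import numpy as np

# Toy separable data (assumed): two points per class.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
d = np.array([1.0, 1.0, -1.0, -1.0])

def dual_objective(alpha):
    # Q(alpha) = sum_i alpha_i - 1/2 sum_i sum_j alpha_i alpha_j d_i d_j x_i^T x_j
    G = (d[:, None] * X) @ (d[:, None] * X).T   # G_ij = d_i d_j x_i^T x_j
    return alpha.sum() - 0.5 * alpha @ G @ alpha

# This alpha satisfies both dual constraints, checked below.
alpha = np.array([0.0625, 0.0, 0.0625, 0.0])
print(abs(alpha @ d) < 1e-12, bool(np.all(alpha >= 0)))   # constraints (1) and (2)
print(dual_objective(alpha))   # 0.0625
```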
13. The Optimization Problem
The solution has the form:
w = Σ_{i=1}^{N} α_i d_i x_i, and b = d_i - w^T x_i for any x_i such that α_i ≠ 0.
Each non-zero α_i indicates that the corresponding x_i is a support vector.
The classifying function then has the form:
g(x) = Σ_{i=1}^{N} α_i d_i x_i^T x + b
Notice that it relies on an inner product between the test point x and the support vectors x_i.
Also keep in mind that solving the optimization problem involved computing the inner products x_i^T x_j between all training points!
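Recovering w and b from the multipliers and classifying a new point can be sketched as follows. The support vectors and α values are hypothetical, assumed already solved:

```python
import numpy as np

# Toy support vectors with hypothetical (assumed solved) multipliers.
X = np.array([[2.0, 2.0], [-2.0, -2.0]])
d = np.array([1.0, -1.0])
alpha = np.array([0.0625, 0.0625])   # both non-zero -> both are support vectors

w = (alpha * d) @ X          # w = sum_i alpha_i d_i x_i
b = d[0] - w @ X[0]          # from d_i (w^T x_i + b) = 1 on a support vector

def g(x):
    # g(x) = sum_i alpha_i d_i x_i^T x + b -- only inner products with the
    # support vectors are needed.
    return (alpha * d) @ (X @ x) + b

print(int(np.sign(g(np.array([1.0, 3.0])))))   # 1
```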
14. The Optimization Problem
Support vectors are the samples with non-zero α_i.
(Figure: a two-class example in which only α_1 = 0.8, α_6 = 1.4, and α_8 = 0.6 are non-zero; α_2 = α_3 = α_4 = α_5 = α_7 = α_9 = α_10 = 0, so x_1, x_6, and x_8 are the support vectors.)
15. Optimal Hyperplane for Nonseparable Patterns
Figure 6.3 Soft-margin hyperplane. (a) Data point x_i (belonging to class C1, represented by a small square) falls inside the region of separation, but on the correct side of the decision surface. (b) Data point x_i (belonging to class C2, represented by a small circle) falls on the wrong side of the decision surface.
16. Optimal Hyperplane for Nonseparable Patterns
We allow an "error" ξ_i (a slack variable) in the classification of each point.
17. Soft Margin Hyperplane
The old formulation:
Find w and b such that Φ(w) = (1/2) w^T w is minimized,
subject to d_i (w^T x_i + b) ≥ 1 for all (x_i, d_i).
The new formulation, incorporating the slack variables ξ_i:
Find w and b such that Φ(w) = (1/2) w^T w + C Σ_i ξ_i is minimized,
subject to d_i (w^T x_i + b) ≥ 1 - ξ_i and ξ_i ≥ 0 for all i.
The parameter C can be viewed as a way to control overfitting.
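The soft-margin objective can be sketched directly, with each slack ξ_i computed as the amount by which a point violates its margin constraint. The toy data set below is assumed for illustration:

```python
import numpy as np

def soft_margin_objective(w, b, X, d, C):
    # Phi(w) = 1/2 w^T w + C sum_i xi_i, where the slack needed by each point is
    # xi_i = max(0, 1 - d_i (w^T x_i + b))
    xi = np.maximum(0.0, 1.0 - d * (X @ w + b))
    return 0.5 * (w @ w) + C * xi.sum(), xi

# Toy data (assumed) in which the third point violates the margin.
X = np.array([[2.0, 0.0], [-2.0, 0.0], [0.5, 0.0]])
d = np.array([1.0, -1.0, 1.0])
obj, xi = soft_margin_objective(np.array([1.0, 0.0]), 0.0, X, d, C=1.0)
print(xi)    # [0.  0.  0.5]
print(obj)   # 1.0
```

A larger C penalizes slack more heavily, pushing the solution toward fewer margin violations at the cost of a smaller margin.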
18. Soft Margin Hyperplane
Again, the x_i with non-zero α_i will be the support vectors.
The solution to the dual problem is:
w = Σ_i α_i d_i x_i, and b = d_i (1 - ξ_i) - w^T x_i for any support vector x_i.
19. Extension to Non-linear Decision Boundary
Key idea: transform the x_i to a higher-dimensional space.
Input space: the space of the x_i.
Feature space: the "kernel" space of φ(x_i).
(Figure: points in the input space are mapped by φ(.) into the feature space, where they become linearly separable.)
20. Kernel Trick
The linear classifier relies on the inner product between vectors:
K(x_i, x_j) = x_i^T x_j
If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes:
K(x_i, x_j) = φ(x_i)^T φ(x_j)
A kernel function is a function that corresponds to an inner product in some feature space.
K(x_i, x_j) needs to satisfy a technical condition (Mercer's condition) in order for φ(.) to exist.
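Two common kernels can be sketched as plain functions; the particular parameter values (degree, c, gamma) are illustrative assumptions, not values from the slides:

```python
import numpy as np

def poly_kernel(xi, xj, degree=2, c=1.0):
    # (c + xi^T xj)^degree -- inner product in an implicit polynomial feature space
    return (c + xi @ xj) ** degree

def rbf_kernel(xi, xj, gamma=0.5):
    # exp(-gamma ||xi - xj||^2) -- inner product in an infinite-dimensional space
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

x = np.array([1.0, 2.0])
y = np.array([0.0, 1.0])
print(poly_kernel(x, y))   # (1 + 2)^2 = 9.0
print(rbf_kernel(x, x))    # 1.0
```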
21. Mercer's Theorem
The matrix K = [k(x_i, x_j)] for all i, j has to be non-negative definite (positive semidefinite), that is, it must satisfy:
a^T K a ≥ 0 for all a.
Kernel functions that satisfy Mercer's condition include the polynomial and radial-basis function kernels.
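On any finite sample, the Mercer condition a^T K a ≥ 0 is equivalent to the symmetric Gram matrix K having no negative eigenvalues, which we can check numerically (the sample points here are randomly generated for illustration):

```python
import numpy as np

def gram_matrix(X, kernel):
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

def satisfies_mercer_on_sample(K, tol=1e-10):
    # a^T K a >= 0 for all a  <=>  all eigenvalues of the symmetric K are >= 0.
    return bool(np.all(np.linalg.eigvalsh(K) >= -tol))

# Random sample points (assumed) and the degree-2 polynomial kernel.
X = np.random.default_rng(0).normal(size=(5, 2))
K = gram_matrix(X, lambda a, b: (1.0 + a @ b) ** 2)
print(satisfies_mercer_on_sample(K))   # True
```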
22. The SVM Viewed as a Kernel Machine
Figure 6.5 Architecture of a support vector machine, using a radial-basis function network.
23. The XOR Problem
For two-dimensional vectors x = [x1 x2], define the following kernel:
k(x, x_i) = (1 + x^T x_i)^2
We need to show that K(x_i, x_j) = φ(x_i)^T φ(x_j):
K(x_i, x_j) = (1 + x_i^T x_j)^2
= 1 + x_i1^2 x_j1^2 + 2 x_i1 x_j1 x_i2 x_j2 + x_i2^2 x_j2^2 + 2 x_i1 x_j1 + 2 x_i2 x_j2
= φ(x_i)^T φ(x_j),
where
φ(x) = [1, x1^2, √2 x1 x2, x2^2, √2 x1, √2 x2]^T
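The algebraic identity above is easy to verify numerically on the four XOR points:

```python
import numpy as np

def phi(x):
    # phi(x) = [1, x1^2, sqrt(2) x1 x2, x2^2, sqrt(2) x1, sqrt(2) x2]
    x1, x2 = x
    s = np.sqrt(2.0)
    return np.array([1.0, x1**2, s * x1 * x2, x2**2, s * x1, s * x2])

def k(x, y):
    # k(x, y) = (1 + x^T y)^2
    return (1.0 + x @ y) ** 2

# Check k(x_i, x_j) = phi(x_i)^T phi(x_j) on all pairs of XOR points.
pts = [np.array(p, dtype=float) for p in [(-1, -1), (-1, 1), (1, -1), (1, 1)]]
ok = all(abs(k(a, b) - phi(a) @ phi(b)) < 1e-12 for a in pts for b in pts)
print(ok)   # True
```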
24. The XOR Problem
This yields the optimal hyperplane:
-x1 x2 = 0
Figure 6.6 (a) Polynomial machine for solving the XOR problem. (b) Induced images in the feature space due to the four data points of the XOR problem: (-1, -1), (-1, +1), (+1, -1), and (+1, +1).
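The decision surface −x1 x2 = 0 can be checked against the XOR truth table (labels as in Haykin's example: like-signed inputs map to −1, unlike-signed to +1):

```python
import numpy as np

def g(x):
    # Decision function from the slide: g(x) = -x1 * x2
    return -x[0] * x[1]

# XOR labels: like-signed inputs -> -1, unlike-signed -> +1.
labels = {(-1, -1): -1, (-1, 1): 1, (1, -1): 1, (1, 1): -1}
ok = all(np.sign(g(p)) == y for p, y in labels.items())
print(ok)   # True
```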
25. Conclusion
The SVM is a useful alternative to neural networks.
Two key concepts of the SVM: maximizing the margin and the kernel trick.
Much active research is taking place in areas related to SVMs.
Many SVM implementations are available on the web for you to try on your data set!
26. Computer Experiment
Figure 6.7 Experiment on SVM for the double moon of Fig. 1.8 with distance d = -6.
27. Computer Experiment
Figure 6.8 Experiment on SVM for the double moon of Fig. 1.8 with distance d = -6.5.
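A sketch of a double-moon data generator in the spirit of these experiments (the function name and parameterization are assumptions; Haykin's exact generator may differ). A negative distance, as in Figs. 6.7 and 6.8, makes the two moons overlap, so no linear separator exists and a kernel SVM (e.g. with an RBF kernel) is needed:

```python
import numpy as np

def double_moon(n, radius=10.0, width=6.0, distance=1.0, seed=0):
    # Upper moon: half-ring centered at the origin; lower moon: mirrored
    # half-ring shifted right by `radius` and down by `distance`.
    rng = np.random.default_rng(seed)
    r = rng.uniform(radius - width / 2.0, radius + width / 2.0, size=n)
    theta = rng.uniform(0.0, np.pi, size=n)
    upper = np.c_[r * np.cos(theta), r * np.sin(theta)]
    lower = np.c_[r * np.cos(theta) + radius, -r * np.sin(theta) - distance]
    X = np.vstack([upper, lower])
    y = np.r_[np.ones(n), -np.ones(n)]
    return X, y

# distance = -6 reproduces the overlapping configuration of Fig. 6.7.
X, y = double_moon(200, distance=-6.0)
print(X.shape, y.shape)   # (400, 2) (400,)
```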