This document provides an overview of linear classifiers and support vector machines (SVMs) for text classification. It explains that SVMs find the separating hyperplane that maximizes the margin between classes, that the hyperplane is determined by the support vectors, and that training reduces to a quadratic programming problem with a dual formulation. It covers soft-margin classification with slack variables, the kernel trick for non-linear classification in higher-dimensional feature spaces, evaluation on the Reuters-21578 collection with per-class precision and recall and micro- vs. macro-averaging, and practical advice for building real-world text classifiers.
1. Introduction to Information Retrieval
1
Linear classifiers: Which Hyperplane?
Lots of possible solutions for a, b, c.
Some methods find a separating hyperplane, but not the optimal one [according to some criterion of expected goodness]
E.g., perceptron
Support Vector Machine (SVM) finds an optimal* solution.
Maximizes the distance between the hyperplane and the “difficult points” close to the decision boundary
One intuition: if there are no points near the decision surface, then there are no very uncertain classification decisions
This line represents the decision boundary: ax + by − c = 0
2. Introduction to Information Retrieval
2
Another intuition
If you have to place a fat separator between classes, you have fewer choices, and so the capacity of the model has been decreased
Sec. 15.1
3. Introduction to Information Retrieval
3
Support Vector Machine (SVM)
[Figure: maximum-margin separator with its support vectors, versus an alternative separator with a narrower margin]
SVMs maximize the margin around the separating hyperplane.
A.k.a. large margin classifiers
The decision function is fully specified by a subset of training samples, the support vectors.
Solving SVMs is a quadratic programming problem
Seen by many as the most successful current text classification method*
*but other discriminative methods often perform very similarly
Sec. 15.1
4. Introduction to Information Retrieval
4
Maximum Margin: Formalization
w: decision hyperplane normal vector
xi: data point i
yi: class of data point i (+1 or −1) NB: Not 1/0
Classifier is: f(xi) = sign(wTxi + b)
Functional margin of xi is: yi (wTxi + b)
But note that we can increase this margin simply by scaling w, b….
Functional margin of the dataset is twice the minimum functional margin for any point
The factor of 2 comes from measuring the whole width of the margin
Sec. 15.1
5. Introduction to Information Retrieval
5
Geometric Margin
Distance from an example x to the separator is r = y (wTx + b) / |w|
Examples closest to the hyperplane are support vectors.
Margin ρ of the separator is the width of separation between support vectors of the classes.
Derivation of finding r:
The dotted line x′ − x is perpendicular to the decision boundary, so it is parallel to w.
The unit vector is w/|w|, so the line is r w/|w|.
x′ = x − y r w/|w|.
x′ satisfies wTx′ + b = 0.
So wT(x − y r w/|w|) + b = 0
Recall that |w| = sqrt(wTw).
So wTx − y r |w| + b = 0
Solving for r gives: r = y (wTx + b) / |w|
Sec. 15.1
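As a small numeric aside (not part of the original slides), the two margins defined above are easy to compute directly; the values for w, b, and x below are illustrative only.

```python
# A tiny sketch of the functional margin y (wTx + b) and the geometric margin
# r = y (wTx + b) / |w|, using made-up values.
import numpy as np

w = np.array([3.0, 4.0])   # decision hyperplane normal vector, |w| = 5
b = -2.0
x = np.array([2.0, 1.0])   # a data point
y = 1                      # its class, +1 or -1

functional_margin = y * (np.dot(w, x) + b)
geometric_margin = functional_margin / np.linalg.norm(w)

print(functional_margin)   # 8.0: can be inflated just by rescaling w and b
print(geometric_margin)    # 1.6: invariant to rescaling of (w, b)
```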
6. Introduction to Information Retrieval
6
Linear SVM Mathematically
The linearly separable case
Assume that all data is at least distance 1 from the hyperplane; then the following two constraints follow for a training set {(xi, yi)}:
wTxi + b ≥ 1 if yi = 1
wTxi + b ≤ −1 if yi = −1
For support vectors, the inequality becomes an equality
Then, since each example’s distance from the hyperplane is r = y (wTx + b) / |w|
The margin is: ρ = 2 / |w|
Sec. 15.1
7. Introduction to Information Retrieval
7
Linear Support Vector Machine (SVM)
Hyperplane: wT x + b = 0
Extra scale constraint: mini=1,…,n |wTxi + b| = 1
This implies: wT(xa − xb) = 2
ρ = ||xa − xb||2 = 2/||w||2
[Figure: hyperplane wT x + b = 0, with wTxa + b = 1 and wTxb + b = −1 bounding the margin ρ]
Sec. 15.1
8. Introduction to Information Retrieval
8
Linear SVMs Mathematically (cont.)
Then we can formulate the quadratic optimization problem:
Find w and b such that ρ = 2/|w| is maximized; and for all {(xi, yi)}: wTxi + b ≥ 1 if yi = 1; wTxi + b ≤ −1 if yi = −1
A better formulation (min |w| = max 1/|w|):
Find w and b such that Φ(w) = ½ wTw is minimized; and for all {(xi, yi)}: yi (wTxi + b) ≥ 1
Sec. 15.1
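The primal formulation above is an ordinary quadratic program, so a generic QP solver can handle small instances. The sketch below is illustrative only: it assumes the cvxpy package (not named in the slides) and a linearly separable toy dataset.

```python
# A minimal sketch of the hard-margin primal QP: minimize 1/2 wTw subject to
# yi (wTxi + b) >= 1 for all i.
import numpy as np
import cvxpy as cp

X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.0]])  # toy points
y = np.array([1.0, 1.0, -1.0, -1.0])                                # labels in {+1, -1}

n, d = X.shape
w = cp.Variable(d)
b = cp.Variable()

objective = cp.Minimize(0.5 * cp.sum_squares(w))
constraints = [cp.multiply(y, X @ w + b) >= 1]
cp.Problem(objective, constraints).solve()

print("w =", w.value, "b =", b.value)           # the separating hyperplane
print("margin =", 2 / np.linalg.norm(w.value))  # rho = 2 / |w|
```

In practice dedicated SVM solvers (working on the dual, as the next slide describes) are used instead of a general-purpose QP solver.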
9. Introduction to Information Retrieval
9
Solving the Optimization Problem
This is now optimizing a quadratic function subject to linear constraints
Quadratic optimization problems are a well-known class of mathematical programming problems, and many (intricate) algorithms exist for solving them (with many special ones built for SVMs)
The solution involves constructing a dual problem where a Lagrange multiplier αi is associated with every constraint in the primal problem:
Primal: Find w and b such that Φ(w) = ½ wTw is minimized; and for all {(xi, yi)}: yi (wTxi + b) ≥ 1
Dual: Find α1…αN such that Q(α) = Σαi − ½ ΣΣ αiαjyiyj xiTxj is maximized and
(1) Σαiyi = 0
(2) αi ≥ 0 for all αi
Sec. 15.1
10. Introduction to Information Retrieval
10
The Optimization Problem Solution
The solution has the form:
w = Σαiyixi
b = yk − wTxk for any xk such that αk ≠ 0
Each non-zero αi indicates that the corresponding xi is a support vector.
Then the classifying function will have the form:
f(x) = Σαiyi xiTx + b
Notice that it relies on an inner product between the test point x and the support vectors xi
We will return to this later.
Also keep in mind that solving the optimization problem involved computing the inner products xiTxj between all pairs of training points.
Sec. 15.1
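For a concrete look at this solution form, the sketch below uses scikit-learn (an assumption; the slides do not name a library). A linear-kernel SVC exposes the dual solution directly: support_vectors_ holds the xi with nonzero αi, dual_coef_ holds αi yi, and intercept_ holds b, so f(x) = Σαiyi xiTx + b can be recomputed by hand.

```python
# Recomputing f(x) = sum_i alpha_i y_i (x_i . x) + b from a fitted linear SVM.
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.0]])  # toy data
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates a hard margin

alpha_y = clf.dual_coef_[0]        # alpha_i * y_i, one entry per support vector
sv = clf.support_vectors_          # the support vectors x_i
b = clf.intercept_[0]

x_new = np.array([1.0, 0.5])
score = np.dot(alpha_y, sv @ x_new) + b          # inner products with the SVs only
print(score, clf.decision_function([x_new])[0])  # the two scores should agree
```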
11. Introduction to Information Retrieval
11
Soft Margin Classification
If the training data is not linearly separable, slack variables ξi can be added to allow misclassification of difficult or noisy examples.
Allow some errors
Let some points be moved to where they belong, at a cost
Still, try to minimize training set errors, and to place the hyperplane “far” from each class (large margin)
[Figure: separator with two misclassified points whose slacks are ξi and ξj]
Sec. 15.2.1
12. Introduction to Information Retrieval
12
Soft Margin Classification – Mathematically
The old formulation:
Find w and b such that Φ(w) = ½ wTw is minimized and for all {(xi, yi)}: yi (wTxi + b) ≥ 1
The new formulation incorporating slack variables:
Find w and b such that Φ(w) = ½ wTw + C Σξi is minimized and for all {(xi, yi)}: yi (wTxi + b) ≥ 1 − ξi and ξi ≥ 0 for all i
Parameter C can be viewed as a way to control overfitting
A regularization term
Sec. 15.2.1
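A small numeric sketch of the soft-margin objective may help here (illustrative values, not from the slides). At the optimum each slack equals ξi = max(0, 1 − yi (wTxi + b)), so the objective can be evaluated directly for a candidate (w, b); larger C penalizes margin violations more heavily, smaller C tolerates more errors.

```python
# Phi(w, b) = 1/2 w.w + C * sum_i xi_i, with xi_i = max(0, 1 - y_i (w.x_i + b)).
import numpy as np

def soft_margin_objective(w, b, X, y, C):
    slacks = np.maximum(0.0, 1.0 - y * (X @ w + b))  # zero for points outside the margin
    return 0.5 * np.dot(w, w) + C * slacks.sum()

X = np.array([[2.0, 2.0], [0.5, 0.4], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0])
w, b = np.array([1.0, 1.0]), -1.0
for C in (0.1, 1.0, 10.0):
    print(C, soft_margin_objective(w, b, X, y, C))   # objective grows with C for fixed slack
```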
13. Introduction to Information Retrieval
13
Soft Margin Classification – Solution
The dual problem for soft margin classification:
Find α1…αN such that Q(α) = Σαi − ½ ΣΣ αiαjyiyj xiTxj is maximized and
(1) Σαiyi = 0
(2) 0 ≤ αi ≤ C for all αi
Neither slack variables ξi nor their Lagrange multipliers appear in the dual problem!
Again, xi with non-zero αi will be support vectors.
Solution to the dual problem is:
w = Σαiyixi
b = yk(1 − ξk) − wTxk where k = argmaxk′ αk′
f(x) = Σαiyi xiTx + b
w is not needed explicitly for classification!
Sec. 15.2.1
14. Introduction to Information Retrieval
14
Classification with SVMs
Given a new point x, we can score its projection onto the hyperplane normal:
I.e., compute score: wTx + b = Σαiyi xiTx + b
Decide class based on whether < or > 0
Can set confidence threshold t.
Score > t: yes
Score < −t: no
Else: don’t know
[Figure: score axis marked −1, 0, 1]
Sec. 15.1
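The thresholded decision rule above is straightforward to express in code; this tiny sketch uses illustrative values for w, b, and t.

```python
# Return "yes" / "no" / "don't know" from the SVM score w.x + b and a threshold t.
import numpy as np

def decide(w, b, x, t=1.0):
    score = np.dot(w, x) + b
    if score > t:
        return "yes"
    if score < -t:
        return "no"
    return "don't know"

w, b = np.array([0.8, -0.3]), 0.1
print(decide(w, b, np.array([3.0, 0.0])))   # score 2.5, well above t -> "yes"
print(decide(w, b, np.array([0.2, 0.5])))   # score 0.11, inside the band -> "don't know"
```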
15. Introduction to Information Retrieval
15
Linear SVMs: Summary
The classifier is a separating hyperplane.
The most “important” training points are the support vectors; they define the hyperplane.
Quadratic optimization algorithms can identify which training points xi are support vectors with non-zero Lagrange multipliers αi.
Both in the dual formulation of the problem and in the solution, training points appear only inside inner products:
Find α1…αN such that Q(α) = Σαi − ½ ΣΣ αiαjyiyj xiTxj is maximized and
(1) Σαiyi = 0
(2) 0 ≤ αi ≤ C for all αi
f(x) = Σαiyi xiTx + b
Sec. 15.2.1
16. Introduction to Information Retrieval
16
Non-linear SVMs
Datasets that are linearly separable (with some noise) work out great:
But what are we going to do if the dataset is just too hard?
How about … mapping data to a higher-dimensional space:
[Figure: 1-D data on the x axis that is not linearly separable becomes separable after mapping x → (x, x²)]
Sec. 15.2.3
17. Introduction to Information Retrieval
17
Non-linear SVMs: Feature spaces
General idea: the original feature space can always
be mapped to some higher-dimensional feature
space where the training set is separable:
Φ: x → φ(x)
Sec. 15.2.3
18. Introduction to Information Retrieval
18
The “Kernel Trick”
The linear classifier relies on an inner product between vectors K(xi,xj) = xiTxj
If every datapoint is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes: K(xi,xj) = φ(xi)Tφ(xj)
A kernel function is some function that corresponds to an inner product in some expanded feature space.
Example:
2-dimensional vectors x = [x1 x2]; let K(xi,xj) = (1 + xiTxj)²
Need to show that K(xi,xj) = φ(xi)Tφ(xj):
K(xi,xj) = (1 + xiTxj)² = 1 + xi1²xj1² + 2 xi1xj1xi2xj2 + xi2²xj2² + 2 xi1xj1 + 2 xi2xj2
= [1  xi1²  √2 xi1xi2  xi2²  √2 xi1  √2 xi2]T [1  xj1²  √2 xj1xj2  xj2²  √2 xj1  √2 xj2]
= φ(xi)Tφ(xj), where φ(x) = [1  x1²  √2 x1x2  x2²  √2 x1  √2 x2]
Sec. 15.2.3
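The algebra above can be checked numerically; the sketch below (illustrative, not from the slides) confirms that the quadratic kernel equals the explicit inner product in the 6-dimensional feature space.

```python
# Check that K(x, z) = (1 + x.z)^2 equals phi(x).phi(z) for
# phi(x) = [1, x1^2, sqrt(2) x1 x2, x2^2, sqrt(2) x1, sqrt(2) x2].
import numpy as np

def K(x, z):
    return (1.0 + np.dot(x, z)) ** 2

def phi(x):
    x1, x2 = x
    r2 = np.sqrt(2.0)
    return np.array([1.0, x1**2, r2 * x1 * x2, x2**2, r2 * x1, r2 * x2])

x = np.array([0.7, -1.2])
z = np.array([2.0, 0.5])
print(K(x, z), np.dot(phi(x), phi(z)))  # both print the same value
```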
19. Introduction to Information Retrieval
19
Kernels
Why use kernels?
Make non-separable problem separable.
Map data into better representational space
Common kernels
Linear
Polynomial K(x,z) = (1+xTz)d
Gives feature conjunctions
Radial basis function (infinite dimensional space)
Haven’t been very useful in text classification
Sec. 15.2.3
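As a brief illustration of the kernels listed above, the sketch below assumes scikit-learn (not named in the slides) and a tiny XOR-like dataset that no linear separator can handle; the polynomial kernel is parameterized to match the (1 + xTz)^d form from the previous slide.

```python
# Linear, polynomial, and RBF kernels on data that is not linearly separable.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])  # XOR-like toy data
y = np.array([0, 0, 1, 1])

for clf in (
    SVC(kernel="linear"),
    SVC(kernel="poly", degree=2, coef0=1, gamma=1.0),  # exactly (1 + x.z)^2
    SVC(kernel="rbf", gamma=1.0),
):
    clf.fit(X, y)
    print(clf.kernel, clf.score(X, y))  # non-linear kernels can fit XOR; linear cannot
```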
20. Introduction to Information Retrieval
20
Evaluation: Classic Reuters-21578 Data Set
Most (over)used data set
21578 documents
9603 training, 3299 test articles (ModApte/Lewis split)
118 categories
An article can be in more than one category
Learn 118 binary category distinctions
Average document: about 90 types, 200 tokens
Average number of classes assigned: 1.24 for docs with at least one category
Only about 10 out of 118 categories are large
Common categories (#train, #test):
• Earn (2877, 1087)
• Acquisitions (1650, 179)
• Money-fx (538, 179)
• Grain (433, 149)
• Crude (389, 189)
• Trade (369, 119)
• Interest (347, 131)
• Ship (197, 89)
• Wheat (212, 71)
• Corn (182, 56)
21. Introduction to Information Retrieval
21
Reuters Text Categorization data set
(Reuters-21578) document
<REUTERS TOPICS="YES" LEWISSPLIT="TRAIN" CGISPLIT="TRAINING-SET"
OLDID="12981" NEWID="798">
<DATE> 2-MAR-1987 16:51:43.42</DATE>
<TOPICS><D>livestock</D><D>hog</D></TOPICS>
<TITLE>AMERICAN PORK CONGRESS KICKS OFF TOMORROW</TITLE>
<DATELINE> CHICAGO, March 2 - </DATELINE><BODY>The American Pork Congress
kicks off tomorrow, March 3, in Indianapolis with 160 of the nations pork producers from 44
member states determining industry positions on a number of issues, according to the National Pork
Producers Council, NPPC.
Delegates to the three day Congress will be considering 26 resolutions concerning various issues,
including the future direction of farm policy and the tax law as it applies to the agriculture sector.
The delegates will also debate whether to endorse concepts of a national PRV (pseudorabies virus)
control and eradication program, the NPPC said.
A large trade show, in conjunction with the congress, will feature the latest in technology in all
areas of the industry, the NPPC added. Reuter
</BODY></TEXT></REUTERS>
Sec. 15.2.4
22. Introduction to Information Retrieval
22
Per class evaluation measures
Recall: Fraction of docs in class i classified correctly: cii / Σj cij
Precision: Fraction of docs assigned class i that are actually about class i: cii / Σj cji
Accuracy: (1 − error rate) Fraction of docs classified correctly: Σi cii / Σi Σj cij
(Here cij is the number of docs actually in class i that are assigned to class j.)
Sec. 15.2.4
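These per-class measures fall out directly from a confusion matrix; the sketch below uses made-up counts for three classes and follows the cij convention above.

```python
# C[i, j] = number of docs actually in class i that the classifier assigned to class j.
import numpy as np

C = np.array([[53,  5,  2],
              [ 4, 40,  6],
              [ 1,  3, 30]])   # illustrative counts only

recall = np.diag(C) / C.sum(axis=1)      # c_ii / sum_j c_ij
precision = np.diag(C) / C.sum(axis=0)   # c_ii / sum_j c_ji
accuracy = np.trace(C) / C.sum()         # sum_i c_ii / sum_ij c_ij

print("recall   ", recall)
print("precision", precision)
print("accuracy ", accuracy)
```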
23. Introduction to Information Retrieval
23
Micro- vs. Macro-Averaging
If we have more than one class, how do we combine
multiple performance measures into one quantity?
Macroaveraging: Compute performance for each
class, then average.
Microaveraging: Collect decisions for all classes,
compute contingency table, evaluate.
Sec. 15.2.4
24. Introduction to Information Retrieval
24
Micro- vs. Macro-Averaging: Example
Class 1:
                 Truth: yes   Truth: no
Classifier: yes      10           10
Classifier: no       10          970

Class 2:
                 Truth: yes   Truth: no
Classifier: yes      90           10
Classifier: no       10          890

Micro Ave. Table (both classes pooled):
                 Truth: yes   Truth: no
Classifier: yes     100           20
Classifier: no       20         1860

Macroaveraged precision: (0.5 + 0.9)/2 = 0.7
Microaveraged precision: 100/120 = .83
Microaveraged score is dominated by score on common classes
Sec. 15.2.4
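The example above can be reproduced in a few lines; the table values are the ones from the slide, and the code is an illustrative sketch rather than part of the deck.

```python
# Macro- vs. micro-averaged precision from per-class contingency tables.
import numpy as np

# rows = classifier decision (yes, no), columns = truth (yes, no)
class1 = np.array([[10, 10], [10, 970]])
class2 = np.array([[90, 10], [10, 890]])

def precision(table):
    tp, fp = table[0]            # classifier said "yes": true vs. false positives
    return tp / (tp + fp)

macro = (precision(class1) + precision(class2)) / 2
micro = precision(class1 + class2)   # pool the tables first, then compute precision

print(macro, micro)                  # 0.7 and 0.8333...
```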
29. Introduction to Information Retrieval
29
Good practice department:
Make a confusion matrix
In a perfect classification, only the diagonal has non-zero entries
Look at common confusions and how they might be addressed
[Figure: confusion matrix with rows = actual class and columns = class assigned by the classifier; the (i, j) entry, e.g. 53, means 53 of the docs actually in class i were put in class j by the classifier.]
Sec. 15.2.4
30. Introduction to Information Retrieval
30
The Real World
P. Jackson and I. Moulinier. 2002. Natural Language Processing for Online Applications
“There is no question concerning the commercial value of
being able to classify documents automatically by content.
There are myriad potential applications of such a capability
for corporate intranets, government departments, and
Internet publishers”
“Understanding the data is one of the keys to successful
categorization, yet this is an area in which most categorization
tool vendors are extremely weak. Many of the ‘one size fits
all’ tools on the market have not been tested on a wide range
of content types.”
Sec. 15.3
31. Introduction to Information Retrieval
31
The Real World
Gee, I’m building a text classifier for real, now!
What should I do?
How much training data do you have?
None
Very little
Quite a lot
A huge amount and it’s growing
Sec. 15.3.1
32. Introduction to Information Retrieval
32
Manually written rules
No training data, adequate editorial staff?
Never forget the hand-written rules solution!
If (wheat or grain) and not (whole or bread) then
Categorize as grain
In practice, rules get a lot bigger than this
Can also be phrased using tf or tf.idf weights
With careful crafting (human tuning on development
data) performance is high:
Construe: 94% recall, 84% precision over 675 categories
(Hayes and Weinstein 1990)
Amount of work required is huge
Estimate 2 days per class … plus maintenance
Sec. 15.3.1
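The hand-written rule on this slide translates almost directly into code; the sketch below is illustrative (naive whitespace tokenization, single rule), whereas real rule sets, as noted above, get much larger.

```python
# If (wheat or grain) and not (whole or bread) then categorize as GRAIN.
def classify_grain(doc: str) -> bool:
    words = set(doc.lower().split())
    return bool(words & {"wheat", "grain"}) and not (words & {"whole", "bread"})

print(classify_grain("Wheat and grain exports rose sharply"))  # True  -> GRAIN
print(classify_grain("Recipe for whole grain bread"))          # False -> rule blocks it
```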
33. Introduction to Information Retrieval
33
Very little data?
If you’re just doing supervised classification, you
should stick to something high bias
There are theoretical results that Naïve Bayes should do
well in such circumstances (Ng and Jordan 2002 NIPS)
The interesting theoretical answer is to explore semi-
supervised training methods:
Bootstrapping, EM over unlabeled documents, …
The practical answer is to get more labeled data as
soon as you can
How can you insert yourself into a process where humans
will be willing to label data for you??
Sec. 15.3.1
34. Introduction to Information Retrieval
34
A reasonable amount of data?
Perfect!
We can use all our clever classifiers
Roll out the SVM!
But if you are using an SVM/NB etc., you should
probably be prepared with the “hybrid” solution
where there is a Boolean overlay
Or else to use user-interpretable Boolean-like models like
decision trees
Users like to hack, and management likes to be able to
implement quick fixes immediately
Sec. 15.3.1
35. Introduction to Information Retrieval
A huge amount of data?
This is great in theory for doing accurate
classification…
But it could easily mean that expensive methods like
SVMs (train time) or kNN (test time) are quite
impractical
Naïve Bayes can come back into its own again!
Or other advanced methods with linear training/test
complexity, like regularized logistic regression (though
much more expensive to train than Naïve Bayes)
Sec. 15.3.1
36. Introduction to Information Retrieval
Accuracy as a function of data size
With enough data the choice
of classifier may not matter
much, and the best choice
may be unclear
Data: Brill and Banko on
context-sensitive spelling
correction
But the fact that you have to
keep doubling your data to
improve performance is a
little unpleasant
Sec. 15.3.1
37. Introduction to Information Retrieval
How many categories?
A few (well separated ones)?
Easy!
A zillion closely related ones?
Think: Yahoo! Directory, Library of Congress classification,
legal applications
Quickly gets difficult!
Classifier combination is always a useful technique
Voting, bagging, or boosting multiple classifiers
Much literature on hierarchical classification
Mileage fairly unclear, but helps a bit (Tie-Yan Liu et al. 2005)
May need a hybrid automatic/manual solution
Sec. 15.3.2
38. Introduction to Information Retrieval
How can one tweak performance?
Aim to exploit any domain-specific useful features
that give special meanings or that zone the data
E.g., an author byline or mail headers
Aim to collapse things that would be treated as
different but shouldn’t be.
E.g., part numbers, chemical formulas
Does putting in “hacks” help?
You bet!
Feature design and non-linear weighting is very important in the
performance of real-world systems
Sec. 15.3.2
39. Introduction to Information Retrieval
Upweighting
You can get a lot of value by differentially weighting
contributions from different document zones:
That is, you count as two instances of a word when
you see it in, say, the abstract
Upweighting title words helps (Cohen & Singer 1996)
Doubling the weighting on the title words is a good rule of thumb
Upweighting the first sentence of each paragraph helps
(Murata, 1999)
Upweighting sentences that contain title words helps (Ko et al., 2002)
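A minimal sketch of this kind of upweighting when building a bag-of-words representation; the function and the default weight of 2 simply follow the rule of thumb above:

```python
from collections import Counter

def bag_of_words(title, body, title_weight=2):
    """Count each body occurrence once and each title occurrence title_weight times."""
    counts = Counter(body.lower().split())
    for word in title.lower().split():
        counts[word] += title_weight   # the "double the title words" rule of thumb
    return counts
```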
Sec. 15.3.2
40. Introduction to Information Retrieval
Two techniques for zones
1. Have a completely separate set of
features/parameters for different zones like the title
2. Use the same features (pooling/tying their
parameters) across zones, but upweight the
contribution of different zones
Commonly the second method is more successful: it does
not make the data any sparser, yet can still give a very
useful performance boost (see the sketch below)
Which is best is a contingent fact about the data
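A minimal sketch of technique 1, keeping a separate feature space per zone by prefixing terms with their zone (technique 2 corresponds to the upweighting sketch shown earlier); names are illustrative:

```python
from collections import Counter

def zone_separated_features(title, body):
    """Technique 1: a completely separate feature space per zone, via a zone prefix.
    (Technique 2, pooling with zone upweighting, is the bag_of_words sketch above.)"""
    feats = Counter("title:" + w for w in title.lower().split())
    feats.update("body:" + w for w in body.lower().split())
    return feats
```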
Sec. 15.3.2
41. Introduction to Information Retrieval
Text Summarization techniques in text
classification
Text Summarization: Process of extracting key pieces
from text, normally by features on sentences
reflecting position and content
Much of this work can be used to suggest weightings
for terms in text categorization
See: Kolcz, Prabakarmurthi, and Kalita, CIKM 2001: Summarization
as feature selection for text categorization
Categorizing purely with title,
Categorizing with first paragraph only
Categorizing with paragraph with most keywords
Categorizing with first and last paragraphs, etc.
Sec. 15.3.2
42. Introduction to Information Retrieval
Does stemming/lowercasing/… help?
As always, it’s hard to tell, and empirical evaluation is
normally the gold standard
But note that the role of tools like stemming is rather
different for TextCat vs. IR:
For IR, you often want to collapse related forms such as
oxygenate and oxygenation, since all of those documents
will be relevant to a query for oxygenation
For TextCat, with sufficient training data, stemming does
no good. It only helps in compensating for data sparseness
(which can be severe in TextCat applications). Overly
aggressive stemming can easily degrade performance.
Sec. 15.3.2
43. Introduction to Information Retrieval
Measuring Classification
Figures of Merit
Not just accuracy; in the real world, there are
economic measures:
Your choices are:
Do no classification
That has a cost (hard to compute)
Do it all manually
Has an easy-to-compute cost if doing it like that now
Do it all with an automatic classifier
Mistakes have a cost
Do it with a combination of automatic classification and manual
review of uncertain/difficult/"new" cases
Commonly the last method is most cost efficient and is
adopted
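A minimal sketch of that last, combined workflow; the confidence-returning classifier API and the threshold are illustrative assumptions, not a specific system:

```python
def route_document(doc, classifier, threshold=0.9):
    """Auto-accept confident decisions; queue the rest for manual review.
    classifier.predict_with_confidence is an assumed, illustrative API."""
    label, confidence = classifier.predict_with_confidence(doc)
    if confidence >= threshold:
        return ("automatic", label)
    return ("manual review", label)   # a person checks uncertain/difficult/"new" cases
```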
44. Introduction to Information Retrieval
A common problem: Concept Drift
Categories change over time
Example: “president of the united states”
1999: “clinton” is a great feature
2010: “clinton” is a bad feature
One measure of a text classification system is how
well it protects against concept drift.
This favors simpler models like Naïve Bayes
Feature selection can be bad at protecting against
concept drift
45. Introduction to Information Retrieval
Summary
Support vector machines (SVM)
Choose hyperplane based on support vectors
Support vector = “critical” point close to decision boundary
(Degree-1) SVMs are linear classifiers.
Kernels: powerful and elegant way to define similarity metric
Perhaps best performing text classifier
But there are other methods that perform about as well as SVM, such
as regularized logistic regression (Zhang & Oles 2001)
Partly popular due to availability of good software
SVMlight is accurate and fast – and free (for research)
Now lots of good software: libsvm, TinySVM, ….
Comparative evaluation of methods
Real world: exploit domain specific structure!
46. Introduction to Information Retrieval
Resources for today’s lecture
Christopher J. C. Burges. 1998. A Tutorial on Support Vector Machines for Pattern Recognition
S. T. Dumais. 1998. Using SVMs for text categorization, IEEE Intelligent Systems, 13(4)
S. T. Dumais, J. Platt, D. Heckerman and M. Sahami. 1998. Inductive learning algorithms and
representations for text categorization. CIKM ’98, pp. 148-155.
Yiming Yang, Xin Liu. 1999. A re-examination of text categorization methods. 22nd Annual
International SIGIR
Tong Zhang, Frank J. Oles. 2001. Text Categorization Based on Regularized Linear
Classification Methods. Information Retrieval 4(1): 5-31
Trevor Hastie, Robert Tibshirani and Jerome Friedman. Elements of Statistical Learning: Data
Mining, Inference and Prediction. Springer-Verlag, New York.
T. Joachims, Learning to Classify Text using Support Vector Machines. Kluwer, 2002.
Fan Li, Yiming Yang. 2003. A Loss Function Analysis for Classification Methods in Text
Categorization. ICML 2003: 472-479.
Tie-Yan Liu, Yiming Yang, Hao Wan, et al. 2005. Support Vector Machines Classification with
Very Large Scale Taxonomy, SIGKDD Explorations, 7(1): 36-43.
‘Classic’ Reuters-21578 data set: http://www.daviddlewis.com/resources/testcollections/reuters21578/
Ch. 15