The document presents a support vector machine (SVM) model for predicting yarn properties from spinning variables. The SVM model architecture includes modules for data acquisition from a yarn production process, an SVM-based process simulator for model training, and a user interface. Model selection involves choosing appropriate parameters, and experimental results show the SVM model maintains predictive accuracy better than artificial neural network models, particularly for noisy real-world production data. The study demonstrates that SVMs offer a reliable alternative for predicting yarn quality.
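The summary above does not give the original model's variables or parameters; as a rough illustration of the kind of SVM-based process simulator described, the sketch below fits a support vector regression model to made-up spinning data with scikit-learn. The feature names, values, and hyperparameters are assumptions for illustration, not figures from the study.

```python
# Hypothetical sketch of an SVM-based yarn-property predictor (not the paper's actual model).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Assumed spinning variables: [fiber length (mm), fiber fineness (mtex), spindle speed (rpm), twist multiplier]
X = np.array([
    [28.0, 165, 16000, 3.8],
    [29.5, 158, 17000, 4.0],
    [27.0, 172, 15500, 3.6],
    [30.0, 150, 18000, 4.2],
])
y = np.array([14.2, 15.8, 13.1, 16.9])   # assumed yarn tenacity (cN/tex)

# RBF-kernel SVR; C and epsilon would normally be chosen by cross-validation (model selection).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
print(model.predict([[28.5, 160, 16500, 3.9]]))
```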
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document proposes applying boosting techniques to attraction-based demand models that are popular in pricing optimization. It formulates a multinomial likelihood for a semiparametric demand choice model (DCM) where product utility is specified without a fixed functional form. Gradient boosting is used to maximize the likelihood and estimate the nonparametric utility functions. The boosted tree-based approach flexibly models utility as a sum of trees, addressing limitations of existing DCMs like non-stationary demand and nonlinear attribute effects.
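For context, a hedged sketch of the attraction-based (multinomial logit style) share model this summary refers to; the notation is generic rather than copied from the paper.

```latex
% Attraction-based demand: product i's share given utilities u_i(x_i), with u_0 the
% no-purchase option. The boosting approach leaves each u_i nonparametric (a sum of
% regression trees) and maximizes the multinomial log-likelihood below.
P_i = \frac{e^{u_i(x_i)}}{e^{u_0} + \sum_{j=1}^{n} e^{u_j(x_j)}}, \qquad
\ell(u) = \sum_{t} \sum_{i} y_{ti}\,\log P_i(x_t)
```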
Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil... - Alkis Vazacopoulos
The processing of a raw material is a phenomenon that varies in quantity and quality along a specific network, whose logic and logistics transform it into final products. To capture this production framework in a mathematical programming model, a full-space formulation integrating discrete design variables and quantity-quality relations gives rise to large-scale non-convex mixed-integer nonlinear models, which are often difficult to solve. To overcome this problem, we propose a phenomenological decomposition heuristic that solves, in a first stage, the quantity and logic variables in a mixed-integer linear model and, in a second stage, the quantity and quality variables in a nonlinear programming formulation. By considering different fuel demand scenarios, the problem becomes a two-stage stochastic programming model, where nonlinear models for each demand scenario are iteratively restricted by the process design results. Two examples demonstrate the tailor-made decomposition scheme to construct the complex oil-refinery process design in a quantitative manner.
Design and Analysis of Algorithms JNTU Model... - guest3f9c6b
This document contains four sets of questions for a Design and Analysis of Algorithms exam. Each set contains 8 questions related to algorithms and their analysis. The questions cover topics such as performance analysis, matrix multiplication, binary search trees, greedy algorithms, dynamic programming, graph algorithms, NP-completeness, and more. Students must answer any 5 of the 8 questions in each set, which involve explaining concepts, proving statements, writing pseudocode, and analyzing time complexity.
This document compares the performance of genetic algorithms and niching methods for clustering undirected weighted graphs. It discusses how genetic algorithms can converge prematurely on local optima for complex problems like clustering that have many potential solutions. Niching methods like deterministic crowding are introduced to maintain population diversity and allow the search of multiple peaks in parallel. The paper applies genetic algorithms and deterministic crowding to the graph clustering problem and compares their results on test graphs, finding that deterministic crowding is more computationally demanding but provides better optimization.
The document proposes new methods for finding the fuzzy optimal solution to fuzzy transportation problems. It develops fuzzy versions of Vogel's approximation method and the MODI method to solve fuzzy transportation problems without converting them to classical problems. The methods are illustrated through numerical examples and the results are discussed. The key contributions are new algorithms for directly obtaining the fuzzy optimal solution rather than crisp solutions, making the approaches applicable to real-world problems with uncertain parameters.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... - ijceronline
This document presents a method for solving fuzzy assignment problems where costs are represented by linguistic variables and fuzzy numbers. Linguistic variables are used to convert qualitative cost data into quantitative fuzzy numbers. Yager's ranking method is applied to rank the fuzzy numbers, transforming the fuzzy assignment problem into a crisp one. The resulting crisp problem is then solved using the Hungarian method to find the optimal assignment that minimizes total cost. A numerical example demonstrates the approach, showing a fuzzy cost matrix converted to crisp values and solved. The method allows handling assignment problems with imprecise, qualitative cost data using fuzzy logic concepts.
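A minimal sketch of the two-step idea described above (Yager's ranking index to defuzzify triangular fuzzy costs, then the Hungarian method via SciPy). The fuzzy cost matrix below is made up for illustration, not the paper's example.

```python
# Sketch: defuzzify triangular fuzzy costs with Yager's ranking index, then solve
# the resulting crisp assignment problem with the Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def yager_index(tfn):
    """Yager's ranking index of a triangular fuzzy number (a, b, c): the integral over
    alpha of the alpha-cut midpoints, which reduces to (a + 2b + c) / 4."""
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0

# Assumed 3x3 fuzzy cost matrix of triangular fuzzy numbers (a, b, c).
fuzzy_costs = [
    [(2, 4, 6), (5, 7, 9), (1, 3, 5)],
    [(4, 6, 8), (2, 3, 4), (6, 8, 10)],
    [(3, 5, 7), (1, 2, 3), (4, 5, 6)],
]

crisp = np.array([[yager_index(c) for c in row] for row in fuzzy_costs])
rows, cols = linear_sum_assignment(crisp)          # Hungarian method
print(list(zip(rows, cols)), crisp[rows, cols].sum())
```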
This annual planning document outlines the topics, learning objectives, outcomes, and activities for teaching mathematics to Form 5 students over 4 weeks. Week 1 focuses on number bases, with the objective of understanding and using numbers in bases two, eight, and five. Students will learn to state, write, convert, and perform computations on numbers in different bases. Weeks 2-4 cover graphs of functions, with the objective of understanding and using graphs of linear, quadratic, cubic, and reciprocal functions. Students will learn to draw and analyze graphs of various types of functions.
Annual Planning for Mathematics Form 4 2011 - sue sha
This document provides a 3-week annual planning outline for Form 4 mathematics in 2011. It outlines the topics, learning outcomes, and points to note for each week. Week 1 covers significant figures and standard form. Students will learn to round numbers, perform operations, and solve problems involving these concepts. Week 2 focuses on quadratic expressions and equations, including identifying, factorizing, and solving them. Week 3 is about sets, including defining, representing, determining subsets and complements, and comparing sets. The plan provides learning objectives and emphasizes applying the concepts to everyday situations and using calculators when relevant.
This document outlines a mathematics learning plan spanning 26 weeks. It includes the learning areas, outcomes, and notes for each week. The main topics covered are standard form, quadratic expressions/equations, sets, mathematical reasoning, straight lines, statistics, and probability. Learning outcomes focus on understanding key concepts and performing related calculations and problem solving for each topic.
This document discusses variations of the interval linear assignment problem. It begins with an introduction to assignment problems and defines them as problems that assign resources to activities to minimize cost or maximize profit on a one-to-one basis. It then provides the mathematical model for standard assignment problems and discusses variations such as non-square matrices, maximization/minimization objectives, constrained assignments, and alternate optimal solutions. The document also gives examples of managerial applications and provides two numerical examples solving interval linear assignment problems using an interval Hungarian method.
This document discusses endogenous benchmarking of mutual funds using bootstrap data envelopment analysis (DEA) in R. It aims to benchmark funds using multiple outputs, stochastic dominance indicators, and bootstrap analysis for robust evaluation. The study uses DEA with daily return mean and upside potential mean as outputs and return variance as the input to evaluate select sector funds over 6 months. Descriptive statistics of the technical efficiency scores from input-oriented, output-oriented, and graph hyperbolic DEA models are provided. Bootstrapping techniques including naive and smoothed bootstrap, bias correction, and confidence intervals are also introduced.
The document provides examples of linear programming problems and their formulations. It discusses the key components of a linear programming problem including decision variables, objective function, and constraints.
Example 1 describes a manufacturing problem with constraints on grinding and polishing hours. The objective is to maximize profit by determining the optimal production quantities of two models.
Example 2 formulates a production problem with constraints on raw materials and labor hours. The objective is to maximize profit by determining the optimal production quantities of two products.
Example 3 formulates a farming problem with constraints on fertilizer requirements. The objective is to minimize cost by determining the optimal quantities of two fertilizer mixtures to purchase.
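All three examples instantiate the same canonical structure; for reference, the general linear programming form the formulations follow is shown below (standard notation, not taken from the document).

```latex
% Generic LP: choose decision variables x to optimize a linear objective
% subject to linear resource constraints.
\max_{x}\; z = c^{\mathsf T} x
\quad \text{subject to} \quad A x \le b, \;\; x \ge 0
% (for Example 3 the objective is minimized instead and the fertilizer
%  requirements become "greater than or equal to" constraints)
```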
This document provides a set of 10 exercises for students taking a course in Mathematics for Economics and Business. The exercises are based on sections from the course textbook and cover topics like quadratic functions, supply and demand analysis, marginal product of labor, profit maximization, differentiation, matrix operations, systems of equations, and utility theory. Students can earn up to 2 extra points on their final exam grade by completing and submitting the optional exercises by the deadline of June 15th.
This document contains a 10-page machine learning exam consisting of 5 problems testing knowledge of concepts like Naive Bayes, decision trees, neural networks, reinforcement learning, overfitting avoidance, and computational learning theory. The exam is closed book and allows one sheet of notes and a calculator. It will take place on May 6, 2003 in Room 3345 of the Engineering Hall from 7:15-9:15pm.
This document appears to be an exam for a course on image processing. It contains 20 multiple choice questions testing concepts related to image processing techniques. Some of the concepts addressed include image transformations, filtering, restoration, and color space conversions. The questions cover topics such as piecewise linear transformations, monotonic functions, histogram processing methods, distance metrics, and stages of image processing like acquisition and enhancement.
Considering Multiple Instances of Items in Combinatorial Reverse Auctions - Shubhashis Shil
This paper proposes a genetic algorithm (GA) called GAMICRA to solve the winner determination problem in combinatorial reverse auctions when multiple instances of items are considered. GAMICRA modifies the chromosome representation and fitness function to account for multiple items. It includes two procedures, RemoveRedundancy and RemoveEmptiness, to repair infeasible chromosomes by ensuring the number of selected item instances does not exceed or fall below the buyer's requirements. Experimental results demonstrate GAMICRA finds solutions with minimum procurement cost in efficient processing time and does not suffer from inconsistency issues.
11. Performance evaluation of geometric active contour (GAC) and enhanced geom... - Alexander Decker
This document summarizes a study that evaluated the performance of two medical image segmentation models: Geometric Active Contour (GAC) and Enhanced Geometric Active Contour (ENGAC). ENGAC was formulated by combining GAC with Kernel Principal Component Analysis to introduce more shape variability. The models were tested on brain MRI and CT images. ENGAC showed improved segmentation accuracy and ability to extract shapes compared to GAC, demonstrating the effectiveness of combining GAC with KPCA for medical image segmentation.
Performance evaluation of geometric active contour (GAC) and enhanced geometr... - Alexander Decker
The document summarizes a study that evaluates the performance of two medical image segmentation models: the Geometric Active Contour (GAC) model and an Enhanced Geometric Active Contour (ENGAC) model developed by the authors. The ENGAC model uses Kernel Principal Component Analysis to address limitations in the GAC model, such as deviation from object outlines and noise-induced edges. The authors trained both models on medical images and evaluated their segmentation accuracy.
The travel agent is planning a charter trip with three tour package types (Deluxe, Standard, Economy) that differ in flight seating/service, accommodations, meals, and tours. The agent must determine the number of each package type to offer to maximize total profit, subject to constraints on minimum/maximum percentages of each type and maximum Deluxe packages per aircraft. The objective is to maximize total profit calculated as the sum of per-package profits multiplied by the number of packages, minus the fixed aircraft cost.
This document contains 36 multiple choice questions about queuing theory and waiting line models. It covers topics like the characteristics of queuing systems, different types of queuing models (M/M/1, M/D/1, etc.), assumptions of queuing models, and using queuing theory to analyze real world systems. Several questions also provide word problems to test the application of queuing concepts to calculate metrics like average queue length and server utilization. The questions assess understanding of key queuing theory terminology, assumptions, models, and calculations.
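As an illustration of the kind of calculation those word problems ask for, here is a worked M/M/1 example using the standard steady-state formulas; the arrival and service rates are illustrative assumptions, not numbers from the question set.

```python
# Worked M/M/1 example: single server, Poisson arrivals (rate lam), exponential
# service (rate mu), with lam < mu so the queue is stable.
lam, mu = 2.0, 3.0                  # assumed arrival and service rates (per minute)

rho = lam / mu                      # server utilization           -> 0.667
L   = lam / (mu - lam)              # average number in the system -> 2.0
Lq  = lam**2 / (mu * (mu - lam))    # average queue length         -> 1.333
W   = 1 / (mu - lam)                # average time in the system   -> 1.0 minute
Wq  = lam / (mu * (mu - lam))       # average wait in the queue    -> 0.667 minutes

print(f"rho={rho:.3f}, L={L:.3f}, Lq={Lq:.3f}, W={W:.3f}, Wq={Wq:.3f}")
```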
Giapetto's Woodcarving manufactures wooden toys and wants to maximize weekly profit. It can produce soldiers that sell for $27 each, requiring $10 of materials and $14 of labor/overhead, or trains that sell for $21 each, requiring $9 of materials and $10 of labor/overhead. Each week it has 100 finishing hours, 80 carpentry hours, and demand for at most 40 soldiers. The problem is to determine how many of each toy to produce to maximize profit.
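A hedged formulation of this problem as a linear program. The per-toy finishing and carpentry hours (2 finishing + 1 carpentry per soldier, 1 finishing + 1 carpentry per train) come from the standard textbook version of the Giapetto example and are not stated in the summary above.

```python
# Giapetto LP sketch: maximize weekly profit (soldier profit $3 = 27-10-14, train profit $2 = 21-9-10).
from scipy.optimize import linprog

c = [-3.0, -2.0]            # linprog minimizes, so negate the per-unit profits
A_ub = [
    [2.0, 1.0],             # finishing hours (assumed 2 per soldier, 1 per train) <= 100
    [1.0, 1.0],             # carpentry hours (assumed 1 per soldier, 1 per train) <= 80
    [1.0, 0.0],             # soldier demand <= 40
]
b_ub = [100.0, 80.0, 40.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)      # classic solution: 20 soldiers, 60 trains, $180 profit
```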
The document discusses considerations around moving property valuation from paper records to digital/cloud-based systems. It provides examples from Lucas County, Ohio where geographic information systems (GIS) are used to develop valuation models that incorporate spatial variables like neighborhood adjustments. Maps show response surfaces for location adjustments, rent rates, and expense rates across neighborhoods. The models aim to quantify spatial components rationally and explainably to various stakeholders.
The HARVEST Programme evaluates feature detectors and descriptors through indirect and direct benchmarks. Indirect benchmarks measure repeatability and matching scores on the affine covariant testbed to evaluate how features persist across transformations. Direct benchmarks evaluate features on image retrieval tasks using the Oxford 5k dataset to measure real-world performance. VLBenchmarks provides software for easily running these benchmarks and reproducing published results. It allows comparing features and selecting the best for a given application.
This document contains 80 questions related to digital signal and image processing. The questions cover topics such as image transforms, filters, noise, compression, segmentation, and more. Justification is required for some questions, while others involve calculations, derivations or explanations of key concepts. The questions vary in difficulty and mark allocation from 5 to 10 marks. They also specify the exam or year in which the question appeared previously.
ppt on summer training Vardhman's Mahavir Spinning Mills Baddi (weaving devis... - Bharat Rana
The document provides details about the operations at Mahavir Spinning including warping, sizing, and drawing processes. It lists the equipment used such as warping machines, sizing machines, and describes parameters like yarn counts, tensions, and temperatures. The summary also provides an organizational chart and contact details for the heads of different departments at the facility.
The document provides an overview of the basics of spinning, including the key processes and equipment used to transform raw cotton into yarn. It describes the sequential steps of ginning, blow room, carding, drawing, simplex, ring spinning and cone winding. It also defines important spinning concepts like count, draft and twist. The overall spinning process aims to parallelize, attenuate and impart twist to fibers through successive drafting and twisting operations to produce a compact yarn package.
Yarn is composed of fibers that are twisted together. The amount of twist is measured in turns per inch and can be low, medium, or high. Twist direction is indicated by S or Z letters. Natural fiber yarns are made through processes like opening, carding, combing, drawing and roving to align fibers, then ring spinning draws, twists and winds them into yarn. Man-made fibers are extruded through spinnerets as filaments and solidified, then converted into yarns using wet, melt or dry spinning.
Assignment on parameter of different parts of ring frame machine of yarn ii - Partho Biswas
The document discusses key parameters of different parts of a ring frame machine. It describes the functions of the apron, drafting system, ring and traveler. Parameters like roller diameter and pressure, apron and cradle lengths, ring diameter and lift, traveler size and number are discussed in detail for different yarn counts. The ideal twist multiplier for different fiber types and end uses is also covered.
A textile is a flexible woven material consisting of a network of natural or artificial fibres, often referred to as thread or yarn, derived from animal, plant, mineral, or synthetic sources. Some of the chemicals used are hazardous to human health or the environment.
This document discusses principles and machinery for yarn production. It describes how staple fibers from natural or chemical sources are processed through several steps including opening, cleaning, mixing, carding, and drawing to produce slivers and eventually yarns. The ring spinning process is identified as the most common worldwide for producing yarns from short staple fibers up to 40mm in length, suitable for fabrics like woven, knit, and braided textiles. Key machines involved in the production process are described along with their functions at each stage.
Air jet looms use compressed air to propel weft yarn across the warp yarn at rates up to 2850 meters per minute, allowing for multicolor weft insertion of up to 6 colors. Air jet looms have advantages like bidirectional computer communication, automatic pick repair, and controls on weft insertion timing, but their main disadvantage is higher power consumption due to compressed air use.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
The document discusses various spinning techniques, including rotor spinning. It provides a history of rotor spinning, describing its development from early prototypes in the 1950s to widespread commercial use by the 1970s. It explains the basic operational sequence of rotor spinning, which involves feeding a sliver of fibers into a rapidly rotating rotor that separates, compacts, and twists the fibers into yarn. The document compares properties of rotor-spun and ring-spun yarns.
The document discusses various techniques for testing yarn characteristics and quality, including twist, count, strength, and evenness. It describes machines like the automatic twist tester and Uster evenness tester that can accurately measure attributes like twists per meter and coefficient of variation. Maintaining proper tension, reducing friction, and following testing standards are important for obtaining precise yarn assessment. A variety of testing helps ensure high quality from raw materials to finished fabrics.
This document discusses ring spun yarn production. It provides details on the production process including bale management, blow room operations, carding, drawing, combing, roving using a simplex machine, ring spinning, autoconing, heat setting, and packing. Production parameters are given for 24s, 30s, and 40s ring spun yarn as well as 24s and 30s combed yarn. The document provides a comprehensive overview of the ring spinning process from raw cotton to finished yarn.
This document provides an overview of the layout and machinery used in a spinning plant. It describes the key processes including blow room, carding, draw frame, combing, speed frame, ring frame, winding, and conditioning. It lists common machinery manufacturers and provides links to related textile technology Facebook pages and the author's blog.
Textile yarn manufacturing involves several key steps. Fibers are first opened and cleaned through blowroom and carding processes. Drawing further arranges fibers into parallel strands called slivers. Roving attenuates slivers and adds twist. Ring frames then spin roving into yarn using drafts and twist. Combing upgrades raw materials by removing short fibers. The processes work to arrange, draft, and twist fibers into consistent yarns for weaving or other uses.
This document provides an introduction to support vector machines (SVMs). It discusses how SVMs can be used for binary classification, regression, and multi-class problems. SVMs find the optimal separating hyperplane that maximizes the margin between classes. Soft margins allow for misclassified points by introducing slack variables. Kernels are discussed for mapping data into higher dimensional feature spaces to perform linear separation. The document outlines the formulation of SVMs for classification and regression and discusses model selection and different kernel functions.
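For reference, the soft-margin primal formulation alluded to (maximum margin with slack variables); this is the standard textbook statement rather than notation specific to this document.

```latex
% Soft-margin SVM primal: maximize the margin while penalizing violations xi_i;
% C trades off margin width against misclassification.
\min_{w,\,b,\,\xi}\;\; \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_i
\quad \text{s.t.}\quad y_i\,\big(w^{\mathsf T}\phi(x_i) + b\big) \ge 1 - \xi_i,\;\; \xi_i \ge 0
% phi is the (possibly implicit) kernel feature map.
```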
This document discusses the optimal design of CMOS operational amplifiers (op-amps) using geometric programming. It begins by introducing CMOS op-amps and the goal of sizing transistors to meet performance specifications while minimizing area and power. The problem is formulated as a convex optimization that can be solved efficiently. Numerical experiments show the approach finds the globally optimal solution. The accuracy of performance predictions is verified against circuit simulations. Finally, the document provides background on geometric programming and defines the standard form of a geometric program that is used to solve the CMOS op-amp sizing problem.
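The standard form of a geometric program referred to at the end of the summary is, in the usual notation (not copied from the paper):

```latex
% Geometric program in standard form: f_0, ..., f_m are posynomials and
% g_1, ..., g_p are monomials in the positive variables x.
\min_{x > 0}\;\; f_0(x)
\quad \text{s.t.}\quad f_i(x) \le 1,\; i = 1,\dots,m, \qquad g_j(x) = 1,\; j = 1,\dots,p
% A log change of variables turns this into a convex problem, which is why
% globally optimal transistor sizing can be computed efficiently.
```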
A simple framework for contrastive learning of visual representations - Devansh16
Link: https://machine-learning-made-simple.medium.com/learnings-from-simclr-a-framework-contrastive-learning-for-visual-representations-6c145a5d8e99
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
Comments: ICML'2020. Code and pretrained models at this https URL
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)
Cite as: arXiv:2002.05709 [cs.LG] (or arXiv:2002.05709v3 [cs.LG] for this version)
Submission history
From: Ting Chen
[v1] Thu, 13 Feb 2020 18:50:45 UTC (5,093 KB)
[v2] Mon, 30 Mar 2020 15:32:51 UTC (5,047 KB)
[v3] Wed, 1 Jul 2020 00:09:08 UTC (5,829 KB)
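For readers skimming the abstract above, the contrastive objective SimCLR optimizes, the normalized temperature-scaled cross-entropy (NT-Xent) for a positive pair (i, j) of augmented views, is:

```latex
% NT-Xent loss for a positive pair (i, j) within a batch of 2N augmented views;
% sim is cosine similarity and tau is the temperature.
\ell_{i,j} = -\log
\frac{\exp\!\big(\mathrm{sim}(z_i, z_j)/\tau\big)}
     {\sum_{k=1}^{2N} \mathbf{1}_{[k \neq i]} \exp\!\big(\mathrm{sim}(z_i, z_k)/\tau\big)}
```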
System 1 and System 2 were basic early systems for image matching that used color and texture matching. Descriptor-based approaches like SIFT provided more invariance but not perfect invariance. Patch descriptors like SIFT were improved by making them more invariant to lighting changes like color and illumination shifts. The best performance came from combining descriptors with color invariance. Representing images as histograms of visual word occurrences captured patterns in local image patches and allowed measuring similarity between images. Large vocabularies of visual words provided more discriminative power but were costly to compute and store.
This document discusses probabilistic error bounds for order reduction of smooth nonlinear models. It begins with motivation for using reduced order models (ROM) in computationally intensive applications and the need for error metrics. It then provides background on Dixon's theory for probabilistic error bounds, which has mostly been used for linear models. The document outlines snapshot and gradient-based reduction algorithms to reduce the response and parameter interfaces of a model. It defines different types of errors that can occur from reducing these interfaces and discusses propagating the errors across interfaces using Dixon's theory. Numerical tests and results are briefly mentioned along with conclusions.
Propagation of Error Bounds Across Reduction Interfaces - Mohammad
This document summarizes the motivation, background, algorithms, and theory behind developing probabilistic error bounds for order reduction of smooth nonlinear models. It discusses how reduced order models (ROM) play an important role in computationally intensive applications and the need to provide error metrics with ROM predictions. It then describes snapshot and gradient-based reduction algorithms used at the response and parameter interfaces, respectively. It introduces different types of errors that can occur from reducing the response space only, parameter space only, or both spaces simultaneously, and how Dixon's theory can be used to estimate these relative errors.
We approach the screening problem - i.e. detecting which inputs of a computer model significantly impact the output - from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each of the subsets of the $p$ inputs affect the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems that normally are seen as unrelated, have challenging connections since the priors proposed in the literature are specifically designed to have posterior modes in the boundary of the parameter space, hence precluding the application of approximate integration techniques based on e.g. Laplace approximations. We explore several ways of circumventing this difficulty, comparing different methodologies with synthetic examples taken from the literature.
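The quantities the abstract turns on (posterior model probabilities driven by marginal likelihoods) take the standard form below; the notation is generic rather than taken from the talk.

```latex
% Posterior probability of model M_gamma (gamma indexes a subset of the p inputs),
% driven by the marginal likelihood obtained by integrating out the GP parameters theta_gamma.
p(M_\gamma \mid y) \;=\;
\frac{m_\gamma(y)\, p(M_\gamma)}{\sum_{\gamma'} m_{\gamma'}(y)\, p(M_{\gamma'})},
\qquad
m_\gamma(y) \;=\; \int p(y \mid \theta_\gamma, M_\gamma)\, \pi(\theta_\gamma \mid M_\gamma)\, d\theta_\gamma
```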
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/xnor/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-rastegari
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Mohammad Rastegari, Chief Technology Officer at Xnor.ai, presents the "Methods for Creating Efficient Convolutional Neural Networks" tutorial at the May 2019 Embedded Vision Summit.
In the past few years, convolutional neural networks (CNNs) have revolutionized several application domains in AI and computer vision. The biggest challenge with state-of-the-art CNNs is the massive compute demands that prevent these models from being used in many embedded systems and other resource-constrained environments.
In this talk, Rastegari explains and contrasts several recent techniques that enable CNN models with high accuracy to consume very little memory and processor resources. These methods include a variety of algorithmic and optimization approaches to deep learning models. Quantization, sparsification and compact model design are three of the major techniques for efficient CNNs, which are discussed in the context of computer vision applications including detection, recognition and segmentation.
This document provides an overview of support vector machines (SVMs) presented by Eric Xing at CMU. It discusses how SVMs find the optimal decision boundary between two classes by maximizing the margin between them. It introduces the concepts of support vectors, which are the data points that define the decision boundary, and the kernel trick, which allows SVMs to implicitly perform computations in higher-dimensional feature spaces without explicitly computing the feature mappings.
Machine Learning in Speech and Language Processing - butest
This document summarizes a tutorial on machine learning in speech and language processing presented on March 19, 2005 at ICASSP'05 in Philadelphia. The second half of the tutorial focused on kernel and margin-based classifiers, including an overview of statistical learning theory, kernel classifiers such as support vector machines (SVMs), and margin-based classifiers. Examples of applications to speech and natural language processing were provided.
Multivariable Control System Design for Quadruple Tank Process using Quantita... - IDES Editor
This paper focuses on the design of a multivariable controller for the Quadruple Tank Process, a two-input two-output system with large plant uncertainty, using the QFT methodology. In the present work, a new approach using Quantitative Feedback Theory (QFT) is formulated for the design of a robust two-degree-of-freedom controller for the Quadruple Tank Process. The design is done in the frequency domain. This paper presents a design method for a 2 x 2 multiple-input multiple-output system. The plant uncertainties are transformed into equivalent external disturbance sets, and the design problem becomes one of external disturbance attenuation. The objective is to find compensator functions which guarantee that the system performance bounds are satisfied over the range of plant uncertainty. The methodology is successfully applied to design a two-degree-of-freedom compensator for the Quadruple Tank Process.
The document discusses denoising techniques for images captured by single-sensor digital cameras using a color filter array (CFA). It compares principal component analysis (PCA) and independent component analysis (ICA) based denoising of CFA images. PCA and ICA are linear adaptive transforms that can be used to represent image data in a way that better distinguishes signal from noise. The document outlines the PCA and ICA algorithms and discusses how K-means clustering can be used with them. It generates noise to add to a reference image and implements PCA and ICA based denoising in MATLAB. Performance is evaluated using metrics like PSNR, WPSNR, SSIM and correlation coefficient.
This document presents a method for compressed sensing image recovery using adaptive nonlinear filtering. Compressed sensing allows reconstruction of sparse signals from incomplete measurements. It proposes using nonlinear filtering strategies in an iterative framework to avoid image recovery problems. The method initializes parameters, updates bound constraints, applies a nonlinear filter, and checks for convergence. Experimental results show the peak signal-to-noise ratio, CPU time, and recovered image to evaluate performance. The technique provides efficient, stable and fast image recovery from compressed measurements with low computational cost.
Random Matrix Theory and Machine Learning - Part 4 - Fabian Pedregosa
Deep learning models with millions or billions of parameters should overfit according to classical theory, but they do not. The emerging theory of double descent seeks to explain why larger neural networks can generalize well. Random matrix theory provides a tractable framework to model double descent through random feature models, where the number of random features controls model capacity. In the high-dimensional limit, the test error of random feature regression exhibits a double descent shape that can be computed analytically.
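A small, self-contained sketch of the random feature regression setup described (ridge regression on random ReLU features, with the feature count controlling capacity). All dimensions, the noise level, and the regularization strength are assumptions for illustration.

```python
# Random feature ridge regression: sweeping the number of random features past the
# interpolation point (num_features ~ n samples) is where double descent shows up.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20                                   # assumed sample count and input dimension
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.5 * rng.standard_normal(n)
X_test = rng.standard_normal((1000, d))
y_test = X_test @ w_true

def test_error(num_features, lam=1e-6):
    W = rng.standard_normal((d, num_features)) / np.sqrt(d)
    F, F_test = np.maximum(X @ W, 0), np.maximum(X_test @ W, 0)   # random ReLU features
    # Ridge solution; lam -> 0 approximates the min-norm interpolator past num_features == n.
    beta = np.linalg.solve(F.T @ F + lam * np.eye(num_features), F.T @ y)
    return np.mean((F_test @ beta - y_test) ** 2)

for p in [20, 100, 190, 200, 210, 400, 1000]:
    print(p, round(test_error(p), 3))
```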
1) The document discusses production function analysis using stochastic frontier models like the translog and Cobb-Douglas functions.
2) It explains the specifications of the translog and Cobb-Douglas production functions and how they are used to estimate production elasticities and returns to scale.
3) The stochastic frontier approach models production as equal to the deterministic production function plus noise, minus inefficiency, allowing estimation of technical efficiency for each firm.
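In the notation usually used for these models (not taken from this particular document), the Cobb-Douglas and translog stochastic frontiers are:

```latex
% Stochastic frontier: output = deterministic frontier + noise v_i - inefficiency u_i >= 0.
% Cobb-Douglas frontier:
\ln y_i = \beta_0 + \sum_{j} \beta_j \ln x_{ji} + v_i - u_i
% Translog frontier (adds second-order and cross terms; Cobb-Douglas is the special
% case with all beta_{jk} = 0):
\ln y_i = \beta_0 + \sum_{j} \beta_j \ln x_{ji}
        + \tfrac{1}{2}\sum_{j}\sum_{k} \beta_{jk} \ln x_{ji} \ln x_{ki} + v_i - u_i
% Technical efficiency of firm i: TE_i = exp(-u_i).
```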
This document discusses modal analysis of a stepped steel bar using MATLAB and ANSYS. It develops the global stiffness and mass matrices of the bar, determines the lowest natural frequency and mode shape using MATLAB, then verifies the results in ANSYS. The lowest frequency is determined to be 64.9 Hz using MATLAB and 64.212 Hz using ANSYS, showing close agreement between the two simulation methods.
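A minimal sketch of the underlying computation (assemble K and M for a two-element axial bar and solve the generalized eigenvalue problem). The material properties, areas, and lengths here are placeholders, not the paper's stepped-bar data, so the printed frequencies will not match the 64.9 Hz result.

```python
# Modal analysis sketch: K q = omega^2 M q for a stepped bar fixed at the left end,
# modeled with two axial bar elements (consistent mass matrices).
import numpy as np
from scipy.linalg import eigh

E, rho = 200e9, 7850.0                      # assumed steel properties (Pa, kg/m^3)
elements = [(4e-4, 0.5), (2e-4, 0.5)]       # assumed (area m^2, length m) of the two steps

K = np.zeros((3, 3))
M = np.zeros((3, 3))
for e, (A, L) in enumerate(elements):
    ke = (E * A / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    me = (rho * A * L / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    M[np.ix_(idx, idx)] += me

# Fixed at node 0: keep the free DOFs 1 and 2, then solve the generalized eigenproblem.
free = [1, 2]
w2, modes = eigh(K[np.ix_(free, free)], M[np.ix_(free, free)])
print(np.sqrt(w2) / (2 * np.pi))            # natural frequencies in Hz, lowest first
```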
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document summarizes an academic paper presented at the International Conference on Emerging Trends in Engineering and Management in 2014. The paper proposes a design and implementation of an elliptic curve scalar multiplier on a field programmable gate array (FPGA) using the Karatsuba algorithm. It aims to reduce hardware complexity by using a polynomial basis representation of finite fields and projective coordinate representation of elliptic curves. Key mathematical concepts like finite fields, point addition, and point doubling that are important to elliptic curve cryptography are also discussed at a high level.
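The multiplier in the paper works over GF(2^m) polynomials in hardware, but the divide-and-conquer idea of the Karatsuba algorithm is easiest to see on integers; here is a minimal illustrative sketch (not the paper's FPGA design).

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with three recursive half-size multiplications
    instead of four: x*y = a*2^(2h) + ((xh+xl)(yh+yl) - a - d)*2^h + d."""
    if x < 16 or y < 16:                     # base case: small operands multiply directly
        return x * y
    h = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> h, x & ((1 << h) - 1)      # split each operand into high and low halves
    yh, yl = y >> h, y & ((1 << h) - 1)
    a = karatsuba(xh, yh)
    d = karatsuba(xl, yl)
    e = karatsuba(xh + xl, yh + yl) - a - d  # middle term from a single extra multiplication
    return (a << (2 * h)) + (e << h) + d

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```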
Deep learning for molecules, introduction to Chainer Chemistry - Kenta Oono
1) The document introduces machine learning and deep learning techniques for predicting chemical properties, including rule-based approaches versus learning-based approaches using neural message passing algorithms.
2) It discusses several graph neural network models like NFP, GGNN, WeaveNet and SchNet that can be applied to molecular graphs to predict characteristics. These models update atom representations through message passing and graph convolution operations.
3) Chainer Chemistry is introduced as a deep learning framework that can be used with these graph neural network models for chemical property prediction tasks. Examples of tasks include drug discovery and molecular generation.
An Efficient And Safe Framework For Solving Optimization Problems - Lisa Muthukumar
This document describes a new optimization framework called QuadOpt that combines interval analysis techniques with safe linear relaxations to provide rigorous and efficient global optimization. QuadOpt uses consistency techniques from QuadSolver to reduce variable domains and computes a safe lower bound on a linear relaxation of the problem. It performs branch and bound search to rigorously bound the global optimum. Experimental results on test problems show that QuadOpt provides certified solutions with fewer splits than other rigorous methods while being faster than nonsafe solvers.
This document analyzes YouTube's business model. It explains that YouTube and other online video sites represent a new business model for audiovisual content, driven by the change in consumption habits caused by new technologies. It describes how YouTube leverages user participation to improve continuously and to attract an audience different from that of traditional media.
The defense was successful in portraying Michael Jackson favorably to the jury in several ways:
1) They dressed Jackson in ornate costumes that conveyed images of purity, innocence, and humility.
2) Jackson was shown entering the courtroom as if on a red carpet, emphasizing his celebrity status.
3) Jackson appeared vulnerable, childlike, and in declining health during the trial, eliciting sympathy from jurors.
4) Defense attorney Tom Mesereau effectively presented a coherent narrative of Jackson as a victim and portrayed Neverland as a place of refuge, undermining the prosecution's arguments.
Michael Jackson was born in 1958 in Gary, Indiana and rose to fame in the 1960s as the lead singer of The Jackson 5, topping music charts in the 1970s. As a solo artist in the 1980s, his album Thriller broke music records. In the 1990s and 2000s, Jackson faced several legal issues related to child abuse allegations while continuing to release music. He married Lisa Marie Presley and Debbie Rowe and had two children before his death in 2009.
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ... - butest
This document appears to be a list of popular books from various authors. It includes over 150 book titles across many genres such as fiction, non-fiction, memoirs, and novels. The books cover a wide range of topics from politics to cooking to autobiographies.
The prosecution lost the Michael Jackson trial due to several key mistakes and weaknesses in their case:
1) The lead prosecutor, Thomas Sneddon, was too personally invested in the case against Jackson, having pursued him for over a decade without success.
2) Sneddon's opening statement was disorganized and weak, failing to effectively outline the prosecution's case.
3) The accuser's mother was not credible and damaged the prosecution's case through her erratic testimony, history of lies and con artist behavior.
4) Many prosecution witnesses were not credible due to prior lawsuits against Jackson, debts owed to him, or having been fired by him. Several witnesses even took the Fifth Amendment.
Here are three examples of public relations from around the world:
1. The UK government's "Be Clear on Cancer" campaign which aims to raise awareness of cancer symptoms and encourage early diagnosis.
2. Samsung's global brand marketing and sponsorship activities which aim to increase brand awareness and favorability of Samsung products worldwide.
3. The Brazilian government's efforts to improve its international image and relations with other countries through strategic communication and diplomacy.
The three most important functions of public relations are:
1. Media relations because the media is how most organizations reach their key audiences. Strong media relationships are crucial.
2. Writing, because written communication is at the core of public relations and how most information is
Michael Jackson Please Wait... provides biographical information about Michael Jackson including his birthdate, birthplace, parents, height, interests, idols, favorite foods, films, and more. It discusses his background, career highlights including influential albums like Thriller, and films he appeared in such as The Wiz and Moonwalker. The document contains photos and details about Jackson's life and illustrious music career.
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazzbutest
The document discusses the process of manufacturing celebrity and its negative byproducts. It argues that celebrities are rarely the best in their individual pursuits like singing, dancing, etc. but become famous due to being products of a system controlled by wealthy elites. This system stifles opportunities for worthy artists and creates feudalism. The document also asserts that manufactured celebrities should not be viewed as role models due to behaviors like drug abuse and narcissism that result from the celebrity-making process.
Michael Jackson was a child star who rose to fame with the Jackson 5 in the late 1960s and early 1970s. As a solo artist in the 1970s and 1980s, he had immense commercial success with albums like Off the Wall, Thriller, and Bad, which featured hit singles and groundbreaking music videos. However, his career and public image were plagued by controversies related to allegations of child sexual abuse in the 1990s and 2000s. He continued recording and performing but faced ongoing media scrutiny into his private life until his death in 2009.
Social Networks: Twitter Facebook SL - Slide 1butest
The document discusses using social networking tools like Twitter and Facebook in K-12 education. Twitter allows students and teachers to share short updates and can be used to give parents a window into classroom activities. Facebook allows targeted advertising that could be used to promote educational activities. Both tools could help facilitate communication between schools and communities if used properly while managing privacy and security concerns.
Facebook has over 300 million active users who log on daily, and allows brands to create public profile pages to interact with users. Pages are for brands and organizations only, while groups can be made by any user about any topic. Pages do not show admin names and have no limits on fans, while groups display admin names and are limited to 5,000 members. Content on pages should aim to provoke action from subscribers and establish a regular posting schedule using a conversational tone.
Executive Summary Hare Chevrolet is a General Motors dealership ...butest
Hare Chevrolet is a car dealership located in Noblesville, Indiana that has successfully used social media platforms like Twitter, Facebook, and YouTube to create a positive brand image. They invest significant time interacting directly with customers online to foster a sense of community rather than overtly advertising. As a result, Hare Chevrolet has built a large, engaged audience on social media and serves as a model for how brands can use online presences strategically.
Welcome to the Dougherty County Public Library's Facebook and ...butest
This document provides instructions for signing up for Facebook and Twitter accounts. It outlines the sign up process for both platforms, including filling out forms with name, email, password and other details. It describes how the platforms will then search for friends and suggest people to connect with. It also explains how to search for and follow the Dougherty County Public Library page on both Facebook and Twitter once signed up. The document concludes by thanking participants and providing a contact for any additional questions.
Paragon Software announces the release of Paragon NTFS for Mac OS X 8.0, which provides full read and write access to NTFS partitions on Macs. It is the fastest NTFS driver on the market, achieving speeds comparable to native Mac file systems. Paragon NTFS for Mac 8.0 fully supports the latest Mac OS X Snow Leopard operating system in 64-bit mode and allows easy transfer of files between Windows and Mac partitions without additional hardware or software.
This document provides compatibility information for Olympus digital products used with Macintosh OS X. It lists various digital cameras, photo printers, voice recorders, and accessories along with their connection type and any notes on compatibility. Some products require booting into OS 9.1 for software compatibility or do not support devices that need a serial port. Drivers and software are available for download from Olympus and other websites for many products to enable use with OS X.
To use printers managed by the university's Information Technology Services (ITS), students and faculty must install the ITS Remote Printing software on their Mac OS X computer. This allows them to add network printers, log in with their ITS account credentials, and print documents while being charged per page to funds in their pre-paid ITS account. The document provides step-by-step instructions for installing the software, adding a network printer, and printing to that printer from any internet connection on or off campus. It also explains the pay-in-advance printing payment system and how to check printing charges.
The document provides an overview of the Mac OS X user interface for beginners, including descriptions of the desktop, login screen, desktop elements like the dock and hard disk, and how to perform common tasks like opening files and folders. It also addresses frequently asked questions for Windows users switching to Mac OS X, such as where documents are stored, how to save or find documents, and what the equivalent of the C: drive is in Mac OS X. The document concludes with sections on file management tasks like creating and deleting folders, organizing files within applications, using Spotlight search, and an overview of the Dashboard feature.
This document provides a checklist for securing Mac OS X version 10.5, focusing on hardening the operating system, securing user accounts and administrator accounts, enabling file encryption and permissions, implementing intrusion detection, and maintaining password security. It describes the Unix infrastructure and security framework that Mac OS X is built on, leveraging open source software and following the Common Data Security Architecture model. The checklist can be used to audit a system or harden it against security threats.
This document summarizes a course on web design that was piloted in the summer of 2003. The course was a 3 credit course that met 4 times a week for lectures and labs. It covered topics such as XHTML, CSS, JavaScript, Photoshop, and building a basic website. 18 students from various majors enrolled. Student and instructor evaluations found the course to be very successful overall, though some improvements were suggested like ensuring proper software and pairing programming/non-programming students. The document also discusses implications of incorporating web design material into existing computer science curriculums.
19th International Conference on Production Research
AN INTELLIGENT REASONING MODEL FOR YARN MANUFACTURE
Jian-Guo Yang, Fu Zhou, Jing-Zhu Pang, Zhi-Jun Lv
College of Mechanical Engineering, University of DongHua, Ren Min Bei Road 2999, Song Jiang Zone,
Shanghai, P. R. China
Abstract
Although much work has been done to construct prediction models for yarn processing quality, the relation between spinning variables and yarn properties has not been established conclusively so far. Support vector machines (SVMs), based on statistical learning theory, are gaining applications in the areas of machine learning and pattern recognition because of their high accuracy and good generalization capability. This study briefly introduces the SVM regression algorithms and presents an SVM-based system architecture for predicting yarn properties. Model selection, which amounts to a search in hyper-parameter space, is performed with the grid-search method to find suitable parameters. Experimental results have been compared with those of ANN models. The investigation indicates that on small data sets and in real-life production, SVM models are capable of maintaining stable predictive accuracy and are more suitable for the noisy and dynamic spinning process.
Keywords:
Support vector machines, Structure risk minimization, Predictive model, Kernel function, Yarn quality
1 INTRODUCTION
Changing economic and political conditions and the increasing globalisation of the market mean that the textile sector faces ever new challenges. To stay competitive, there is an increasing need for companies to invest in new products. Along the textile chain, innovative technologies and solutions are required to continuously optimize the production process. High quality standards and extensive technical and trade know-how are thus prerequisites to keep abreast of the growing dynamics of the sector [1]. Although much work has been done to construct prediction models for yarn processing quality, the relation between spinning variables and yarn properties has not been established conclusively so far. The increasing quality demands from the spinners make clear the need to explore innovative ways of quality prediction. The widespread use of artificial intelligence (AI) has created a revolution in the domain of quality prediction, for example the application of artificial neural networks (ANN) in textile engineering [2]. This study presents a support vector machine based intelligent predictive model for yarn process quality. The relevant algorithm, model selection and experiments are presented in detail.

2 SVM REGRESSION ALGORITHMS
The main objective of regression is to approximate a function g(x) from a given noisy set of samples G = \{(x_i, y_i)\}_{i=1}^{N} obtained from the function g. The basic idea of support vector machines (SVM) for regression is to map the data x into a high-dimensional feature space via a nonlinear mapping and to perform a linear regression in this feature space:

f(x) = \sum_{i=1}^{D} w_i \phi_i(x) + b    (1)

where w denotes the weight vector, b is a constant known as the "bias", and \{\phi_i(x)\}_{i=1}^{D} are called features. Thus, the problem of nonlinear regression in the lower-dimensional input space is transformed into a linear regression in the high-dimensional feature space. The unknown parameters w and b in Equation (1) are estimated using the training set G. To avoid over-fitting and thereby improve the generalization capability, the following regularized functional, involving the sum of the empirical risk and a complexity term \|w\|^2, is minimized [3]:

R_{reg} = R_{emp} + \lambda \|w\|^2 = \frac{1}{M} \sum_{i=1}^{M} |f(x_i) - y_i|_{\varepsilon} + \lambda \|w\|^2    (2)

where \lambda is a regularization constant and the cost function is defined by

|f(x) - y|_{\varepsilon} = \begin{cases} |f(x) - y| - \varepsilon, & |f(x) - y| \ge \varepsilon \\ 0, & |f(x) - y| < \varepsilon \end{cases}    (3)

which is called Vapnik's "ε-insensitive loss function". It can be shown that the minimizing function has the following form:

f(x, \alpha, \alpha^*) = \sum_{i=1}^{M} (\alpha_i - \alpha_i^*) k(x_i, x) + b    (4)

with \alpha_i \alpha_i^* = 0 and \alpha_i, \alpha_i^* \ge 0, where the kernel function k(x_i, x) describes the dot product in the D-dimensional feature space:

k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle    (5)

It is important to note that the features \phi_j need not be computed; what is needed is only the kernel function, which is very simple and has a known analytical form. The only condition required is that the kernel function has to satisfy Mercer's condition. Some of the most commonly used kernels include the linear, polynomial, radial basis function and sigmoid kernels. Note also that for Vapnik's ε-insensitive loss function, the Lagrange multipliers \alpha_i, \alpha_i^* are sparse, i.e. they take nonzero values after the optimization (2) only if they lie on the boundary, which means that they satisfy the Karush–Kuhn–Tucker (KKT) conditions.
The coefficients \alpha_i, \alpha_i^* are obtained by maximizing the following form:

\max\ R(\alpha^*, \alpha) = -\frac{1}{2} \sum_{i,j=1}^{M} (\alpha_i^* - \alpha_i)(\alpha_j^* - \alpha_j) K(x_i, x_j) - \varepsilon \sum_{i=1}^{M} (\alpha_i^* + \alpha_i) + \sum_{i=1}^{M} y_i (\alpha_i^* - \alpha_i)    (6)

\text{s.t.}\ \sum_{i=1}^{M} (\alpha_i^* - \alpha_i) = 0, \quad 0 \le \alpha_i^*, \alpha_i \le C    (7)

Only a number of the coefficients \alpha_i, \alpha_i^* will be different from zero, and the data points associated with them are called support vectors. The parameters C and ε are free and have to be chosen by the user. Computing b requires a more direct use of the Karush–Kuhn–Tucker conditions that lead to the quadratic programming problem stated above. The key idea is to pick those values for a point x_k on the margin, i.e. with \alpha_k or \alpha_k^* in the open interval (0, C). One x_k would be sufficient, but for stability purposes it is recommended to take the average over all points on the margin. A more detailed description of SVM regression can be found in Refs. [3-6].
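As a concrete illustration of the formulation in Equations (1)-(7), the following is a minimal sketch (not the authors' code) that fits an ε-insensitive support vector regressor with scikit-learn on synthetic data; the data set, parameter values and variable names are assumptions for demonstration only.

```python
# Illustrative sketch of eps-SVR: C and epsilon play the roles of the free
# parameters in Equations (2)-(7); the fitted support vectors are the points
# with nonzero Lagrange multipliers.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))            # noisy sample set G = {(x_i, y_i)}
y = np.sinc(X).ravel() + rng.normal(0.0, 0.1, 200)   # unknown function g plus noise

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)       # eps-insensitive loss, penalty C
model.fit(X, y)

print("support vectors:", model.support_vectors_.shape[0], "of", len(X))
print("prediction at x = 0.5:", model.predict([[0.5]])[0])
```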
3 SVM BASED YARN PREDICTIVE MODEL
3.1 Model Architecture
Considering some salient features of SVM, such as the absence of local minima, the sparseness of the solution and the improved generalization, an SVM-based yarn quality prediction system was proposed (shown in Fig. 1). The system architecture mainly consists of three modules, i.e. data acquisition, reasoning machine and user interface. Among them, the user interface provides friendly interactive operation with the model, including data cleaning, model training, parameter selection and so on. The data acquisition module collects and transforms the various data from the yarn production process into the textile engineering database. The reasoning machines are in nature an SVM-based yarn process simulator, used to train the predictive models and then make real-world process decisions in terms of the different raw material inputs.

[Fig. 1: Yarn Quality Predictive Model Architecture. Raw material and yarn properties flow through the yarn production process, data acquisition, textile engineering database, SVM-based process simulator (reasoning machines) and the user interface for yarn quality prediction.]
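As a rough, hypothetical sketch of how the three modules in Fig. 1 could be wired together in code (the class names and record fields below are illustrative assumptions, not the authors' implementation):

```python
# Hypothetical wiring of the Fig. 1 modules: data acquisition feeds the
# engineering database, the SVM-based reasoning machine trains and predicts,
# and a user-interface layer would call these functions interactively.
import numpy as np
from sklearn.svm import SVR

class DataAcquisition:
    """Collects raw spinning records and turns them into feature/target arrays."""
    def load(self, records):
        X = np.array([r["fibre_and_process_inputs"] for r in records])   # e.g. MFD, CVD, ...
        y = np.array([r["yarn_property"] for r in records])              # e.g. CV%
        return X, y

class ReasoningMachine:
    """SVM-based process simulator: one regressor per predicted yarn property."""
    def __init__(self, C=10.0, gamma="scale"):
        self.model = SVR(kernel="rbf", C=C, gamma=gamma)
    def train(self, X, y):
        self.model.fit(X, y)
    def predict(self, X_new):
        return self.model.predict(X_new)
```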
3.2 Model Selection
In the yarn predictive learning task, an appropriate model and parameter estimation method should be selected to obtain a high level of performance of the learning machine. Lacking a priori information about the accuracy of the y-values, it can be difficult to come up with a reasonable value of ε a priori. Instead, one would rather specify the degree of sparseness and let the algorithm automatically compute ε from the data. This is the idea of ν-SVM, a modification of the original ε-SVM introduced by Schölkopf, Smola, Williamson et al. [6], which was used to construct the yarn predictive model in our study. Under this approach, the parameters usually to be chosen are the following:
- the penalty term C, which determines the trade-off between the complexity of the decision function and the number of training examples misclassified;
- the sparsity parameter ν, set in accordance with the noise in the output values in order to get the highest generalization accuracy;
- the kernel function K(x, y).

According to reference [7], the sparsity parameter ν may usually be chosen in the interval [0.3, 0.6]; here ν = 0.583. The radial basis function (RBF) kernel, given by Equation (8), is used:

K(x, y) = \exp(-\|x - y\|^2 / 2\sigma^2)    (8)

where σ is the width of the RBF kernel. The RBF kernel nonlinearly maps samples into a higher-dimensional space, so, unlike the linear kernel, it can handle the case where the relation between inputs and outputs is nonlinear. In addition, the sigmoid kernel behaves like the RBF kernel for certain parameters. A further reason for using the RBF kernel is the number of hyper-parameters, which influences the complexity of model selection; the polynomial kernel has more hyper-parameters than the RBF kernel. Finally, the RBF kernel has fewer numerical difficulties, a key point being that 0 < K(x, y) ≤ 1, in contrast to polynomial kernels, whose values may go to infinity or zero when the degree is large. Moreover, it is noted that the sigmoid kernel is not valid (i.e. not the inner product of two vectors) under some parameters [4].
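A minimal sketch of how this ν-SVR setup could look in practice, assuming scikit-learn's NuSVR as a stand-in implementation (the training arrays are placeholders): scikit-learn parameterizes the RBF kernel as exp(-γ‖x - y‖²), so γ = 1/(2σ²) relative to Equation (8).

```python
# Sketch (not the paper's code): nu-SVR with nu = 0.583 and the RBF kernel of
# Equation (8); gamma = 1 / (2 * sigma**2) maps sigma onto scikit-learn's form.
from sklearn.svm import NuSVR

sigma, C = 0.973, 1606                     # e.g. the CV% values reported in Table 1
gamma = 1.0 / (2.0 * sigma ** 2)

cv_percent_model = NuSVR(kernel="rbf", nu=0.583, C=C, gamma=gamma)
# cv_percent_model.fit(X_train, y_train)   # fibre/process inputs -> yarn CV%
# y_pred = cv_percent_model.predict(X_test)
```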
3.3 Optimization of Model Parameters
Obviously, in the SVM model there are still two key parameters to be chosen: C and σ. Unfortunately, it is difficult to know beforehand which C and σ are best for a given problem. The goal is to identify a good pair (C, σ) so that the model can accurately predict unknown (i.e. testing) data. Therefore, a common way is to separate the training data into two parts, one of which is treated as unknown when training the model. The prediction accuracy on this held-out data then more precisely reflects the performance on predicting unknown data. This procedure is called cross-validation, and it can furthermore prevent the over-fitting problem. In this study, the regression function was built with a given set of parameters {C, σ}. The performance of the parameter set is measured by the computational risk, here the mean squared error (MSE, see Equation (9)) on the held-out subset. The procedure is repeated p times, so that each subset is used once for testing. Averaging the MSE over the p trials gives an estimate of the expected generalization error for training on sets of size \frac{p-1}{p} \cdot l, where l is the number of training data:

MSE = \frac{1}{pq} \sum_{j=1}^{p} \sum_{i=1}^{q} (y_{ti}^{(j)} - y_{pi}^{(j)})^2    (9)

where q is the number of samples in the tested subset of the training set, and y_{ti}^{(j)} and y_{pi}^{(j)} are the i-th observed and predicted values under the j-th tested subset, respectively. In order to find better pairs of (C, σ), a "grid-search" [8] on C and σ is employed in this work. Firstly, in terms of the possible ranges of the two parameters, C and σ were divided into r pairs; then each pair of parameters was tried using cross-validation, and the one with the best cross-validation accuracy was picked as the optimal parameter set of the model.
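The grid search with cross-validation described above could be sketched as follows (an assumed implementation with scikit-learn, not the authors' code; the parameter grids are placeholders):

```python
# Sketch of the (C, sigma) grid search with p-fold cross-validation scored by
# the MSE of Equation (9); sigma is translated to gamma = 1 / (2 * sigma**2).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import NuSVR

sigmas = np.logspace(-2, 1, 8)
param_grid = {
    "C": np.logspace(0, 4, 9),
    "gamma": 1.0 / (2.0 * sigmas ** 2),
}
search = GridSearchCV(
    NuSVR(kernel="rbf", nu=0.583),
    param_grid,
    cv=5,                                 # p cross-validation subsets
    scoring="neg_mean_squared_error",     # averaged MSE over the p trials
)
# search.fit(X_train, y_train)
# print(search.best_params_, -search.best_score_)
```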
4 THE EXPERIMENTAL STUDY
In this work, a small population (a total of twenty-six different data samples) from real worsted spinning was acquired. To demonstrate the generalization performance of the SVM model, different experiments were completed and compared with ANN models. To keep the problem simple, as in most ANN models [2, 9], some fibre properties and processing information were selected as the SVM model's inputs: mean fibre diameter (MFD, μm), diameter distribution (CVD, %), hauteur (HT, mm), fibre length distribution (CVH, %), short fibre content (SFC, %), yarn count (CT, tex), twist (TW, t.p.m.), draft ratio (DR), spinning speed (SS, r.p.m.) and traveller number (TN). Four yarn properties, namely unevenness (CV%), elongation at break (EB, %), breaking force (BF, cN) and ends-down per 1000 spindle hours (ED), served as the SVM model's outputs.

One of the primary aspects of developing an SVM regression model is the selection of the penalty term C and the width of the RBF kernel parameter σ. To optimize the two parameters, the "grid-search" method above was applied in the present work. In fact, optimizing the model parameters requires an iterative process that continuously shrinks the search area and, as a result, obtains a satisfying solution. Table 1 lists the final search area and the optimal values of the four SVM models, respectively.

After the completion of model development or training, all the models based on SVM (and ANN) were subjected to the unseen testing data set. Statistical parameters such as the correlation coefficient between the actual and predicted values (R), the mean squared error and the mean error % were used to compare the predictive power of the SVM-based and ANN-based models. Results are shown in Table 2. It is observed that for the ANN models the mean error (%) exceeds 10% for three models, the exception being CV%, which remains about 5%, and the correlation coefficient (R) of the CV% and EB models is very low, at 0.76 and 0.67 respectively. For the SVM models, however, the mean error (%) is less than 10% except for ED, which is still high, and the correlation coefficient (R) of all models is improved to more than 0.80. On the other hand, the cases with over 10% error also decrease, from 5 and 6 in the ANN models to 2 and 3 in the SVM models. In fact, among all four yarn properties considered in our work, ends-down per 1000 spindle hours can be affected by different operators and observers [10], and such data often undermine the prediction accuracy of regression models of any kind. Even so, for ED, almost all statistical parameters of the SVM model appear much better than those of the ANN model.
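The comparison statistics in Table 2 could be computed along the following lines (a sketch under the assumption that "mean error %" denotes the mean absolute relative error; not the authors' code):

```python
# Sketch of the Table 2 statistics for one yarn property on the test set:
# correlation coefficient R, mean squared error, mean error %, and the number
# of cases with over 10% error.
import numpy as np
from sklearn.metrics import mean_squared_error

def comparison_stats(y_true, y_pred):
    r = np.corrcoef(y_true, y_pred)[0, 1]              # correlation coefficient R
    mse = mean_squared_error(y_true, y_pred)           # mean squared error
    rel_err = np.abs(y_pred - y_true) / np.abs(y_true)
    mean_err_pct = 100.0 * rel_err.mean()              # mean error %
    cases_over_10 = int(np.sum(rel_err > 0.10))        # cases with over 10% error
    return r, mse, mean_err_pct, cases_over_10
```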
5 CONCLUSIONS
Support vector machines are a new learning-by-example paradigm with many potential applications in science and engineering. The salient features of SVM include the absence of local minima, the sparseness of the solution and the improved generalization. SVMs being a relatively new technique, their applications in textile production have hitherto been quite limited. However, the elegance of the formalism involved and their successful use in diverse science and engineering applications confirm the expectations raised by this appealing learning-from-examples approach. In this study, we presented an SVM model for predicting yarn properties and compared it with a BP neural network model. We have found that, like the ANN model, the SVM model is able to predict with reasonably good accuracy in most cases. A more interesting phenomenon is that with small data sets and real-life production, the predictive power of the ANN models appears to decrease, while the SVM models are still capable of maintaining stable predictive accuracy to some extent. The experimental results indicate that the SVM models are more suitable for the noisy and dynamic spinning process. Of course, like other emerging industrial techniques, applied issues of SVM call for further development and investigation, such as how to design the kernel function and how to set the SVM hyper-parameters (so as to make industrial model development easier). Our research thus far demonstrates that SVMs are able to provide an alternative solution for spinners to predict yarn properties more correctly and reliably.

6 ACKNOWLEDGMENT
This research was supported by the national science foundation and the technology support plan of the People's Republic of China, under contract numbers 70371040 and 2006BAF01A44 respectively.

7 REFERENCES
[1] Renate Esswein, "Knowledge assures quality", International Textile Bulletin, 2004, Vol. 15, No. 2, 17-21.
[2] R. Chattopadhyay and A. Guha, "Artificial Neural Networks: Applications to Textiles", Textile Progress, 2004, Vol. 35, No. 1, 1-42.
[3] V. David Sanchez A., "Advanced Support Vector Machines and Kernel Methods", Neurocomputing, 2003, Vol. 55, No. 3, 5-20.
[4] V. N. Vapnik, 1999, The Nature of Statistical Learning Theory, 2nd ed., Berlin: Springer, 31-188.
[5] B. Schölkopf, C. Burges, and A. Smola, 1999, Advances in Kernel Methods: Support Vector Learning, Cambridge, MA: MIT Press, 5-73.
[6] B. Schölkopf, A. Smola, R. C. Williamson, et al., "New support vector algorithms", Neural Computation, 2000, Vol. 12, No. 4, 1207-1245.
[7] Athanassia Chalimourda, B. Schölkopf, A. Smola, "Experimentally Optimal ν in Support Vector Regression for Different Noise Models and Parameter Settings", IEEE Trans. on Neural Networks, 2004, Vol. 17, No. 2, 127-141.
[8] Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin, A Practical Guide to Support Vector Classification, available at http://www.csie.ntu.edu.tw/~cjlin/paper
[9] Refael B., Lijing W., Xungai W., "Predicting worsted spinning performance with an artificial neural network model", Textile Res. J., 2004, Vol. 74, No. 8, 757-763.
[10] Peter R. Lord, 2003, Handbook of Yarn Production (Technology, Science and Economics), Abington, England: Woodhead Publishing Limited, 95-212.
Table 1: The optimal values of σ and C

Output parameter        Optimal values
CV%                     σ = 0.973,  C = 1606
Elongation at break     σ = 0.016,  C = 14.55
Breaking force          σ = 0.012,  C = 101.19
Ends-down               σ = 0.287,  C = 2.975
Table 2: Comparison of the predictive power of the SVM-based and ANN-based models

                             Predicted value using ANN model       Predicted value using SVM model
Sample No.                   CV%     EB      BF       ED           CV%     EB      BF       ED
W21                          19.32   13.81   113.89   70.41        19.66   12.85   116.24   72.06
W22                          20.52   16.55   61.91    75.78        20.88   12.25   76.87    72.40
W23                          15.62   12.32   153.46   39.40        16.84   15.59   156.57   42.22
W24                          20.66   16.55   61.91    75.78        20.75   12.25   76.87    72.40
W25                          22.60   19.77   47.00    69.84        19.66   12.76   76.86    59.31
W26                          20.70   11.87   66.76    79.22        21.20   12.59   66.62    81.27
Correlation coefficient R    0.76    0.67    0.96     0.88         0.88    0.80    0.99     0.91
Mean squared error           0.01    0.12    0.07     0.03         0.003   0.05    0.01     0.03
Mean error %                 5.73    24.35   13.67    19.99        2.85    9.23    5.52     17.29
Cases with over 10% error    1       6       5        6            0       2       2        3