This document presents a study comparing different regression models for predicting costs based on production levels. It finds that a cubic polynomial regression model provides a better fit than linear regression or the high-low method. The study uses cost and production data from a company to build linear, quadratic, and cubic regression models. It finds the cubic polynomial regression has the highest R-squared value and lowest p-value, indicating it is the best-fitting model. The study concludes that polynomial regression generally provides a better approach for cost prediction than conventional linear regression or the high-low method.
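The model comparison described above can be sketched with NumPy. The production/cost figures below are invented for illustration; they are not the data used in the study, and `np.polyfit` stands in for whatever regression tooling the study used.

```python
# Sketch: comparing linear, quadratic, and cubic least-squares fits for
# cost prediction. The data points are made up for illustration.
import numpy as np

units = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
cost = np.array([260, 410, 520, 600, 670, 760, 900, 1130], dtype=float)

def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

scores = {}
for degree in (1, 2, 3):
    coeffs = np.polyfit(units, cost, degree)   # least-squares polynomial fit
    fitted = np.polyval(coeffs, units)
    scores[degree] = r_squared(cost, fitted)
```

Note that on the same data R-squared can only increase with polynomial degree, which is why the study also reports p-values rather than relying on R-squared alone.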
Transportation Problem with Pentagonal Intuitionistic Fuzzy Numbers Solved Us... (IJERA Editor)
This paper presents a solution methodology for the transportation problem in an intuitionistic fuzzy environment in which costs are represented by pentagonal intuitionistic fuzzy numbers. The transportation problem is a particular class of linear programming problem associated with day-to-day activities in real life. It helps in solving problems on the distribution and transportation of resources from one place to another. The objective is to satisfy the demand at the destinations from the supply constraints at the minimum possible transportation cost. The problem is solved using a ranking technique, the accuracy function for pentagonal intuitionistic fuzzy numbers, together with Russell's method.
This document discusses various types of regression modeling and linear regression. It provides examples of linear regression analysis on fraud data and discusses assessing goodness of fit. It also briefly covers non-linear regression, problem areas like heteroskedasticity and collinearity, and model selection methods. Linear regression is presented geometrically and the assumptions and computations of ordinary least squares regression are explained.
How Business Mathematics Assists Business in Decision Making (Fahad Fu)
The presentation discusses how mathematics, including matrices, coordinate geometry, functions, limits, continuity, differentiation, and maxima and minima, can assist in business decision making. Several group members each presented a mathematics topic with examples of its use, such as using matrices to analyze production elements or differentiation to determine the optimal production level for maximizing profit. The document concludes by solving an example that maximizes profit by finding the production rate at which the total revenue and total cost functions give the largest difference.
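The profit-maximization idea can be shown with hypothetical revenue and cost functions (not the ones from the presentation). With revenue R(x) = 50x and cost C(x) = 0.1x^2 + 10x + 500, profit is P(x) = R(x) - C(x), and setting P'(x) = 40 - 0.2x = 0 gives the optimal production rate x = 200:

```python
# Sketch of profit maximization by differentiation, with made-up functions.
# P(x) = R(x) - C(x); the optimum solves P'(x) = 0.

def profit(x):
    revenue = 50 * x                       # total revenue R(x)
    cost = 0.1 * x ** 2 + 10 * x + 500     # total cost C(x)
    return revenue - cost

best_x = 40 / 0.2                          # root of P'(x) = 40 - 0.2x
# Second derivative P''(x) = -0.2 < 0 confirms this is a maximum.
```

The same first-order condition is equivalently stated as marginal revenue equals marginal cost.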
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER R... (ijaia)
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use 2nd-order polynomials to model shapes within a training set. To estimate the regression models, we need to extract the coefficients which describe the variations for a set of shape classes. Hence, a least squares method is used to estimate such models. We then proceed by training these coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
Expectation Maximization Algorithm with Combinatorial Assumption (Loc Nguyen)
The expectation maximization (EM) algorithm is a popular and powerful mathematical method for parameter estimation when both observed data and hidden data exist. The EM process depends on an implicit relationship between observed data and hidden data, which is specified by a mapping function in traditional EM and by a joint probability density function (PDF) in practical EM. However, the mapping function is vague and impractical, whereas the joint PDF is not easy to define because of the heterogeneity between observed data and hidden data. The research aims to improve the competency of EM by making it more feasible and easier to specify, removing the vagueness. The research therefore proposes the assumption that observed data is a combination of hidden data, realized as an analytic function where data points are numerical. In other words, observed points are assumed to be calculated from hidden points via a regression model. Mathematical computations and proofs indicate the feasibility and clarity of the proposed method, which can be considered an extension of EM.
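For readers unfamiliar with the baseline the abstract builds on, here is a minimal sketch of classical EM: fitting a two-component 1D Gaussian mixture, where the hidden data is the unknown component label of each point. This illustrates generic EM only, not the combinatorial-assumption variant the paper proposes; component variances are held fixed at 1 for brevity.

```python
# Minimal EM for a two-component 1D Gaussian mixture (variances fixed at 1).
import math
import random

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(5, 1) for _ in range(200)]

def pdf(x, mu):
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

mu1, mu2, pi1 = -1.0, 6.0, 0.5                 # crude initial guesses
for _ in range(50):
    # E-step: responsibility of component 1 for each point
    resp = []
    for x in data:
        p1 = pi1 * pdf(x, mu1)
        p2 = (1 - pi1) * pdf(x, mu2)
        resp.append(p1 / (p1 + p2))
    # M-step: re-estimate means and mixing weight
    w1 = sum(resp)
    mu1 = sum(r * x for r, x in zip(resp, data)) / w1
    mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - w1)
    pi1 = w1 / len(data)
```

After a few iterations the estimated means settle near the true component centers 0 and 5.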
Medical Conferences, Pharma Conferences, Engineering Conferences, Science Conferences, Manufacturing Conferences, Social Science Conferences, Business Conferences, Scientific Conferences Malaysia, Thailand, Singapore, Hong Kong, Dubai, Turkey 2014 2015 2016
Global Research & Development Services (GRDS) is a leading academic event organizer, publishing Open Access Journals and conducting several professionally organized international conferences all over the globe annually. GRDS aims to disseminate knowledge and innovation with the help of its International Conferences and open access publications. GRDS International conferences are world-class events which provide a meaningful platform for researchers, students, academicians, institutions, entrepreneurs, industries and practitioners to create, share and disseminate knowledge and innovation and to develop long-lasting network and collaboration.
GRDS is a blend of Open Access Publications and world-wide International Conferences and Academic events. The prime mission of GRDS is to make continuous efforts in transforming the lives of people around the world through education, application of research and innovative ideas.
Global Research & Development Services (GRDS) is also active in the field of Research Funding, Research Consultancy, Training and Workshops along with International Conferences and Open Access Publications.
International Conferences 2014 – 2015
Malaysia Conferences, Thailand Conferences, Singapore Conferences, Hong Kong Conferences, Dubai Conferences, Turkey Conferences, Conference Listing, Conference Alerts
Conditional mixture model for modeling attributed dyadic data (Loc Nguyen)
Dyadic data contains co-occurrences of objects and is often modeled by a finite mixture model, which in turn is learned by the expectation maximization (EM) algorithm. Objects in traditional dyadic data are identified by names, with the drawback that it is impossible to extract implicit valuable knowledge about the objects. In this research, I propose so-called attributed dyadic data (ADD), in which each object has an informative attribute and each co-occurrence of two objects is associated with a value. ADD is flexible and covers most structures and forms of dyadic data. A conditional mixture model (CMM), a variant of the finite mixture model, is applied to learning ADD. Moreover, a significant feature of CMM is that any co-occurrence of two objects is based on some conditional variable. As a result, CMM can predict or estimate co-occurrence values based on a regression model, which extends the applications of ADD and CMM.
Mathematics can assist in decision making through functions, straight lines, and coordinate geometry. Functions show the relationship between variables, like cost and revenue. Straight lines represent linear relationships using slope and the coordinates of points. Coordinate geometry uses the x and y coordinates of points on a plane to calculate distances and midpoints. Together, these mathematical concepts can be used to model real-world scenarios and help evaluate different choices. For example, a town mayor is trying to determine the best location "K" to build a rescue squad based on minimizing the distance to existing houses at coordinates A(2,3) and B(6,-4).
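The mayor's example reduces to two coordinate-geometry formulas. The sketch below computes them for the stated houses A(2, 3) and B(6, -4); placing K at the midpoint is the choice that minimizes the larger of the two distances, since any point's distances to A and B must sum to at least |AB|.

```python
# Distance and midpoint for A(2, 3) and B(6, -4).
import math

A, B = (2, 3), (6, -4)

def distance(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

K = midpoint(A, B)     # (4.0, -0.5)
d = distance(A, B)     # sqrt(4^2 + 7^2) = sqrt(65)
```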
Linear algebra application in linear programming (Lahiru Dilshan)
Linear programming is used to maximize or minimize quantities subject to constraints. It can be applied to problems with any number of variables and constraints, as long as the relationships are linear. Key aspects include defining an objective function to optimize, determining the feasible region where all constraints are satisfied, and finding extreme points where the objective function may be maximized or minimized. An example problem involves determining how to allocate candy mixtures to maximize revenue given constraints on available ingredients. The optimal solution is found at an extreme point within the bounded feasible region.
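The ingredients of the summary above (objective function, feasible region, extreme points) can be sketched with SciPy. The numbers below are invented, not the candy-mixture figures from the slides; `linprog` minimizes, so the maximization objective is negated.

```python
# Sketch: maximize 5x + 4y subject to 6x + 4y <= 24 and x + 2y <= 6,
# with x, y >= 0. The optimum lies at an extreme point of the
# feasible region, here (3, 1.5) with objective value 21.
from scipy.optimize import linprog

res = linprog(c=[-5, -4],                  # negate to maximize
              A_ub=[[6, 4], [1, 2]],
              b_ub=[24, 6],
              bounds=[(0, None), (0, None)])
x, y = res.x
best_value = -res.fun
```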
- The document discusses the comparison between graphical and simplex methods for solving linear programming problems involving maximization.
- It explains that the graphical method is used for problems with two decision variables, while the simplex method can handle any number of decision variables.
- The simplex method converts inequalities into equations by introducing slack or surplus variables, while the graphical method assumes inequalities are equations.
- An example problem is presented and the first two iterations of the simplex method are shown to solve the problem and maximize profit.
Bayesian Estimation for Missing Values in Latin Square Design (inventionjournals)
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews across the whole field of Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
This document provides an overview of linear programming and the graphical method for solving two-variable linear programming problems. It defines linear programming as involving maximizing or minimizing a linear objective function subject to linear constraints. The graphical method is described as using a graph in the first quadrant to find the feasible region defined by the constraints and then determine the optimal solution by evaluating the objective function at the boundary points. An example problem is presented to demonstrate finding the feasible region and optimal solution graphically. Special cases like alternative optima and infeasible/unbounded problems are also mentioned.
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER... (gerogepatton)
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use 2nd-order polynomials to model shapes within a training set. To estimate the regression models, we need to extract the coefficients which describe the variations for a set of shape classes. Hence, a least squares method is used to estimate such models. We then proceed by training these coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document presents a method for solving an assignment problem where the costs are triangular intuitionistic fuzzy numbers rather than certain values. It introduces the concepts of intuitionistic fuzzy sets and triangular intuitionistic fuzzy numbers, and defines operations and a ranking method for comparing them. The paper formulates the intuitionistic fuzzy assignment problem mathematically as an optimization problem that minimizes the total intuitionistic fuzzy cost while satisfying constraints that each job is assigned to exactly one machine. It describes using an intuitionistic fuzzy Hungarian method to solve this type of assignment problem.
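Once the ranking method has reduced each triangular intuitionistic fuzzy cost to a crisp score, what remains is a classical assignment problem, which the Hungarian method solves. The crisp cost matrix below is invented for illustration; SciPy's `linear_sum_assignment` stands in for the paper's intuitionistic fuzzy Hungarian method applied after ranking.

```python
# Sketch of the final step: assignment on a crisp (already-ranked) cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

crisp_cost = np.array([[9, 2, 7],
                       [6, 4, 3],
                       [5, 8, 1]])
rows, cols = linear_sum_assignment(crisp_cost)   # each job to exactly one machine
total = crisp_cost[rows, cols].sum()             # minimum total cost
```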
The Vasicek model is one of the earliest stochastic models for modeling the term structure of interest rates. It represents the movement of interest rates as a function of market risk, time, and the equilibrium value the rate tends to revert to. This document discusses parameter estimation techniques for the Vasicek one-factor model using least squares regression and maximum likelihood estimation on historical interest rate data. It also covers simulating the term structure and pricing zero-coupon bonds under the Vasicek model. The two-factor Vasicek model is introduced as an extension of the one-factor model.
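The least-squares estimation step mentioned above can be sketched on a simulated path of the one-factor model dr = a(b - r)dt + sigma dW. The parameter values and path length below are chosen arbitrarily, not taken from the document; the key point is that the Euler discretization makes r_{t+1} linear in r_t, so ordinary regression recovers a and b.

```python
# Simulate a Vasicek short-rate path, then estimate a and b by OLS.
import numpy as np

rng = np.random.default_rng(42)
a, b, sigma, dt = 0.5, 0.04, 0.01, 1 / 12      # monthly steps, toy values
r = np.empty(5000)
r[0] = 0.03
for t in range(len(r) - 1):                    # Euler discretization
    r[t + 1] = r[t] + a * (b - r[t]) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal()

# Regress r_{t+1} = alpha + beta * r_t; the discretization implies
# beta = 1 - a*dt and alpha = a*b*dt, so we can back out a and b.
X = np.column_stack([np.ones(len(r) - 1), r[:-1]])
alpha, beta = np.linalg.lstsq(X, r[1:], rcond=None)[0]
a_hat = (1 - beta) / dt
b_hat = alpha / (1 - beta)
```

Maximum likelihood estimation gives essentially the same estimates here, since the discretized model is a Gaussian AR(1).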
1) The document discusses linear programming and its graphical solution method. It provides examples of forming linear programming models and using graphs to find the feasible region and optimal solution.
2) A toy manufacturing example is presented and modeled using linear programming with the objective of maximizing weekly profit. The feasible region is graphed and the optimal solution is identified.
3) Another example involving a wood products company is modeled and solved graphically to determine the optimal production mix to maximize profits. Corner points of the feasible region are identified and evaluated to find the optimal solution.
This document appears to be an assignment submission for a financial engineering course. It includes a plagiarism declaration signed by the student, Andrew Hair. The assignment contains 11 questions addressing interest rate derivatives and modeling using the Vasicek model. Code is provided in MATLAB to generate simulations and analyze interest rate data based on the questions.
- The document discusses a correlation analysis between per capita cheese consumption and deaths from bedsheet entanglement using annual data from 2000-2009.
- Computing the correlation coefficient results in a highly statistically significant correlation. However, examining plots of the data reveals the means are trending over time, violating the assumption of constant means.
- This implies the estimates and statistical tests are unreliable and the results may be statistically spurious. To address this, the data can be detrended using auxiliary regressions to remove the trends before reanalyzing the correlation.
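The detrending fix described in the last bullet can be sketched as follows: regress each series on time (the auxiliary regression), keep the residuals, and correlate those. The two series below are synthetic stand-ins that share only a time trend; they are not the cheese/bedsheet data.

```python
# Spurious correlation from shared trends, removed by detrending.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(10, dtype=float)                 # ten annual observations
x = 1.0 + 0.30 * t + rng.normal(0, 0.05, 10)   # trending series 1
y = 2.0 + 0.50 * t + rng.normal(0, 0.05, 10)   # independent trending series 2

naive_r = np.corrcoef(x, y)[0, 1]              # inflated by the shared trend

def detrend(series):
    """Residuals from an auxiliary regression of the series on time."""
    coeffs = np.polyfit(t, series, 1)
    return series - np.polyval(coeffs, t)

true_r = np.corrcoef(detrend(x), detrend(y))[0, 1]
# naive_r is near 1 purely because both series trend upward; true_r
# reflects only the co-movement of the residuals.
```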
On The Numerical Solution of Picard Iteration Method for Fractional Integro-... (DavidIlejimi)
u_0(x) = f(x),   u_1(x) = f(x) + λ ∫_0^x K(x, t) u_0(t) dt.   (14)
1. This document discusses the Picard iteration method for solving fractional integro-differential equations. The fractional derivative is considered in the Caputo sense.
2. The proposed Picard iteration method reduces fractional integro-differential equations to standard integral equations of the second kind.
3. Some test problems are considered to demonstrate the accuracy and convergence of the presented Picard iteration method for solving fractional integro-differential equations. Numerical results show the approach is easy and accurate.
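The iteration idea in points 1-3 can be sketched on a classical (integer-order) Volterra equation of the second kind, u(x) = 1 + ∫_0^x u(t) dt, whose exact solution is e^x; the paper itself treats the fractional (Caputo) analogue, which is not reproduced here.

```python
# Picard iteration u_{k+1}(x) = 1 + integral_0^x u_k(t) dt on a grid,
# with the integral evaluated by the cumulative trapezoidal rule.
import math

def picard(n_iter=12, n_nodes=201, x_max=1.0):
    h = x_max / (n_nodes - 1)
    xs = [i * h for i in range(n_nodes)]
    u = [1.0] * n_nodes                        # u_0(x) = f(x) = 1
    for _ in range(n_iter):
        new_u = []
        integral = 0.0
        prev = u[0]
        for i, x in enumerate(xs):
            if i > 0:                          # trapezoid panel [x_{i-1}, x_i]
                integral += 0.5 * h * (prev + u[i])
                prev = u[i]
            new_u.append(1.0 + integral)
        u = new_u
    return xs, u

xs, u = picard()
err = abs(u[-1] - math.e)                      # u(1) should approach e
```

Each iteration adds one more term of the exponential series (u_1 = 1 + x, u_2 = 1 + x + x^2/2, and so on), so convergence is rapid.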
IRJET - Optimization of 1-Bit ALU using Ternary Logic (IRJET Journal)
This document summarizes a research paper that proposes a novel approach to implementing a 1-bit arithmetic logic unit (ALU) using ternary logic. Ternary logic offers potential advantages over binary logic, including reduced transistor count and hardware. The authors designed a 1-bit ALU using ternary logic gates (T-gates) for ternary arithmetic and logic operations. Simulation results showed the ternary logic ALU design achieved a 25% reduction in transistor usage compared to an equivalent binary logic ALU design. The ternary logic ALU design approach could potentially be extended to multi-bit ALUs for applications where reduced transistor count is important.
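For context on the logic level (not the transistor-level T-gate design the paper actually presents), standard ternary logic over the levels {0, 1, 2} generalizes AND/OR to min/max and the inverter to x -> 2 - x; a one-trit adder then produces a mod-3 sum and a carry:

```python
# Behavioral sketch of ternary logic primitives over levels {0, 1, 2}.
def t_not(a):
    return 2 - a            # ternary inverter

def t_and(a, b):
    return min(a, b)        # ternary AND = minimum

def t_or(a, b):
    return max(a, b)        # ternary OR = maximum

def t_add(a, b):
    """One-trit addition: returns (sum mod 3, carry)."""
    s = a + b
    return s % 3, s // 3
```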
Quantitative Methods for Lawyers - Class #22 - Regression Analysis - Part 1 (Daniel Katz)
This document discusses regression analysis techniques for predicting lawyer hourly rates. It provides an example regression model that estimates rate based on city, firm size, years of experience, practice area, and other independent variables. Graphs and equations are shown to illustrate how regression can be used to model the relationship between a dependent variable (rate) and multiple independent predictors. The document also discusses key regression concepts like the regression coefficient, standard error, and interpreting statistical significance.
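A multiple-regression model of the kind described can be sketched with NumPy. The predictors, coefficients, and data below are invented stand-ins, not the document's estimates; the point is that each fitted coefficient has a direct interpretation, e.g. dollars per hour per extra year of experience.

```python
# OLS fit of rate = b0 + b1*experience + b2*firm_size on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
experience = rng.uniform(1, 30, n)             # years of experience
firm_size = rng.uniform(10, 1000, n)           # lawyers at the firm
rate = 150 + 8 * experience + 0.2 * firm_size + rng.normal(0, 20, n)

X = np.column_stack([np.ones(n), experience, firm_size])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
predicted = X @ beta
# beta[1] estimates the hourly-rate premium per extra year of experience;
# its sign and size, not just overall fit, are what get interpreted.
```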
Heptagonal Fuzzy Numbers by Max Min Method (YogeshIJTSRD)
In this paper, we propose another methodology for solving the fuzzy transportation problem under a fuzzy environment in which transportation costs are taken as heptagonal fuzzy numbers. Fuzzy numbers and fuzzy values are widely used in various fields. Here, we convert the heptagonal fuzzy numbers into crisp values using the range technique and then solve the transportation problem by the max-min method. M. Revathi | K. Nithya "Heptagonal Fuzzy Numbers by Max-Min Method" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-3, April 2021, URL: https://www.ijtsrd.com/papers/ijtsrd38280.pdf Paper URL: https://www.ijtsrd.com/mathemetics/applied-mathamatics/38280/heptagonal-fuzzy-numbers-by-maxmin-method/m-revathi
This document discusses duality in linear programming. It defines the dual problem as another linear program systematically constructed from the original or primal problem, such that the optimal solutions of one provide the optimal solutions of the other. The document provides rules for constructing the dual problem based on whether the primal problem is a maximization or minimization problem. It also gives examples of writing the dual of a primal problem and solving both problems to verify the optimal objective values are equal. Finally, it discusses economic interpretations of duality and the relationship between primal and dual problems and solutions.
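Strong duality, the claim that the optimal objectives of the primal and dual coincide, can be verified numerically on a small example. The numbers below are illustrative, not from the document; for a primal "maximize c'x subject to Ax <= b, x >= 0", the dual is "minimize b'u subject to A'u >= c, u >= 0".

```python
# Solve a primal maximization and its dual and check the objectives match.
from scipy.optimize import linprog

# Primal: maximize 3x + 5y  s.t.  x <= 4, 2y <= 12, 3x + 2y <= 18
primal = linprog(c=[-3, -5],                   # negate to maximize
                 A_ub=[[1, 0], [0, 2], [3, 2]],
                 b_ub=[4, 12, 18],
                 bounds=[(0, None)] * 2)

# Dual: minimize 4u1 + 12u2 + 18u3  s.t.  u1 + 3u3 >= 3, 2u2 + 2u3 >= 5
# (>= constraints written as negated <= constraints for linprog)
dual = linprog(c=[4, 12, 18],
               A_ub=[[-1, 0, -3], [0, -2, -2]],
               b_ub=[-3, -5],
               bounds=[(0, None)] * 3)

primal_opt = -primal.fun
dual_opt = dual.fun                            # equal by strong duality
```

The dual optimum's variables are the shadow prices of the primal constraints, which is the economic interpretation the document discusses.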
A Novel Multiplication Algorithm Based On Euclidean Division For Multiplying ... (Jim Webb)
This document presents a novel multiplication algorithm based on the Euclidean division algorithm (EDA) for multiplying large integers. The algorithm directly applies EDA when the integers are different sizes, and with a modification when they are the same size. It is supported by the property that the product of consecutive Fibonacci numbers is equal to the sum of squares of preceding terms. A Python implementation shows the algorithm can multiply integers with thousands to millions of digits. The algorithm is well-suited for multiplying unequal integers and could have applications in number theory.
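The Fibonacci property the abstract cites is the standard identity F_1^2 + F_2^2 + ... + F_n^2 = F_n * F_{n+1}, which is easy to verify; the multiplication algorithm itself is not reconstructed here, since the abstract gives no further detail.

```python
# Verify the sum-of-squares Fibonacci identity for n = 1..20.
def fib(n):
    """Return [F_0, F_1, ..., F_n]."""
    a, b = 0, 1
    seq = []
    for _ in range(n + 1):
        seq.append(a)
        a, b = b, a + b
    return seq

f = fib(21)
for n in range(1, 21):
    assert sum(x * x for x in f[1:n + 1]) == f[n] * f[n + 1]
```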
Some Engg. Applications of Matrices and Partial Derivatives (SanjaySingh011996)
This document contains a submission by three students to Dr. Sona Raj Mam regarding partial differentiation, matrices and determinants, and eigenvectors and eigenvalues. It provides examples of how these mathematical concepts are applied in fields like engineering. Partial differentiation is used in economics to analyze demand and in image processing for edge detection. Matrices and determinants allow representing linear transformations in graphics software. Eigenvalues and eigenvectors have applications in areas like computer science, smartphone apps, and modeling structures in civil engineering. The document also provides real-world examples and references textbooks and websites for further information.
This document summarizes research on using particle swarm optimization to reconstruct microwave images of two-dimensional dielectric scatterers. It formulates the inverse scattering problem as an optimization problem to find the dielectric parameter distribution that minimizes the difference between measured and simulated scattered field data. Numerical results show that a particle swarm optimization approach can accurately reconstruct the shape and dielectric properties of a test cylindrical scatterer, with lower background reconstruction error than a genetic algorithm approach. The research demonstrates that particle swarm optimization is a suitable technique for high-dimensional microwave imaging problems.
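A minimal particle swarm optimization loop illustrates the search procedure the paper applies; here it merely minimizes the sphere function sum(p_i^2) as a stand-in for the measured-versus-simulated scattered-field misfit, and the inertia/acceleration constants are common textbook values, not the paper's settings.

```python
# Minimal PSO minimizing f(p) = sum(p_i^2) in 4 dimensions.
import numpy as np

rng = np.random.default_rng(3)
dim, n_particles, iters = 4, 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                             # per-particle best positions
pbest_val = np.sum(pbest ** 2, axis=1)
gbest = pbest[np.argmin(pbest_val)].copy()     # swarm-wide best position

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.sum(pos ** 2, axis=1)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

best_cost = float(np.min(pbest_val))           # should approach 0
```

In the imaging application the cost function is the mismatch between measured and simulated fields, and each particle encodes a candidate dielectric parameter distribution.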
Determination of Optimal Product Mix for Profit Maximization using Linear Pro... (IJERA Editor)
This paper demonstrates the use of linear programming methods to determine the optimal product mix for profit maximization. Several papers have been written demonstrating the use of linear programming to find the optimal product mix in various organizations. This paper aims to show the generic approach to be taken to find the optimal product mix.
Linear algebra application in linear programming Lahiru Dilshan
Linear programming is used to maximize or minimize quantities subject to constraints. It can be applied to problems with any number of variables and constraints, as long as the relationships are linear. Key aspects include defining an objective function to optimize, determining the feasible region where all constraints are satisfied, and finding extreme points where the objective function may be maximized or minimized. An example problem involves determining how to allocate candy mixtures to maximize revenue given constraints on available ingredients. The optimal solution is found at an extreme point within the bounded feasible region.
- The document discusses the comparison between graphical and simplex methods for solving linear programming problems involving maximization.
- It explains that the graphical method is used for problems with two decision variables, while the simplex method can handle any number of decision variables.
- The simplex method converts inequalities into equations by introducing slack or surplus variables, while the graphical method assumes inequalities are equations.
- An example problem is presented and the first two iterations of the simplex method are shown to solve the problem and maximize profit.
Bayesian Estimation for Missing Values in Latin Square Designinventionjournals
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
This document provides an overview of linear programming and the graphical method for solving two-variable linear programming problems. It defines linear programming as involving maximizing or minimizing a linear objective function subject to linear constraints. The graphical method is described as using a graph in the first quadrant to find the feasible region defined by the constraints and then determine the optimal solution by evaluating the objective function at the boundary points. An example problem is presented to demonstrate finding the feasible region and optimal solution graphically. Special cases like alternative optima and infeasible/unbounded problems are also mentioned.
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER...gerogepatton
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping uniformly distributed landmarks. The underlying models utilize 2nd-order polynomials to model shapes within a training set. To estimate the regression models, we need to extract the coefficients which describe the variations for a set of shape classes. Hence, a least squares method is used to estimate such models. We then proceed by training these coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document presents a method for solving an assignment problem where the costs are triangular intuitionistic fuzzy numbers rather than certain values. It introduces the concepts of intuitionistic fuzzy sets and triangular intuitionistic fuzzy numbers, and defines operations and a ranking method for comparing them. The paper formulates the intuitionistic fuzzy assignment problem mathematically as an optimization problem that minimizes the total intuitionistic fuzzy cost while satisfying constraints that each job is assigned to exactly one machine. It describes using an intuitionistic fuzzy Hungarian method to solve this type of assignment problem.
The Vasicek model is one of the earliest stochastic models for modeling the term structure of interest rates. It represents the movement of interest rates as a function of market risk, time, and the equilibrium value the rate tends to revert to. This document discusses parameter estimation techniques for the Vasicek one-factor model using least squares regression and maximum likelihood estimation on historical interest rate data. It also covers simulating the term structure and pricing zero-coupon bonds under the Vasicek model. The two-factor Vasicek model is introduced as an extension of the one-factor model.
1) The document discusses linear programming and its graphical solution method. It provides examples of forming linear programming models and using graphs to find the feasible region and optimal solution.
2) A toy manufacturing example is presented and modeled using linear programming with the objective of maximizing weekly profit. The feasible region is graphed and the optimal solution is identified.
3) Another example involving a wood products company is modeled and solved graphically to determine the optimal production mix to maximize profits. Corner points of the feasible region are identified and evaluated to find the optimal solution.
This document appears to be an assignment submission for a financial engineering course. It includes a plagiarism declaration signed by the student, Andrew Hair. The assignment contains 11 questions addressing interest rate derivatives and modeling using the Vasicek model. Code is provided in MATLAB to generate simulations and analyze interest rate data based on the questions.
- The document discusses a correlation analysis between per capita cheese consumption and deaths from bedsheet entanglement using annual data from 2000-2009.
- Computing the correlation coefficient results in a highly statistically significant correlation. However, examining plots of the data reveals the means are trending over time, violating the assumption of constant means.
- This implies the estimates and statistical tests are unreliable and the results may be statistically spurious. To address this, the data can be detrended using auxiliary regressions to remove the trends before reanalyzing the correlation.
On The Numerical Solution of Picard Iteration Method for Fractional Integro -...DavidIlejimi
u₀(x) = f(x),
u₁(x) = f(x) + λ ∫₀ˣ K(x, t) u₀(t) dt    (14)
1. This document discusses the Picard iteration method for solving fractional integro-differential equations. The fractional derivative is considered in the Caputo sense.
2. The proposed Picard iteration method reduces fractional integro-differential equations to standard integral equations of the second kind.
3. Some test problems are considered to demonstrate the accuracy and convergence of the presented Picard iteration method for solving fractional integro-differential equations. Numerical results show the approach is easy and accurate.
IRJET- Optimization of 1-Bit ALU using Ternary LogicIRJET Journal
This document summarizes a research paper that proposes a novel approach to implementing a 1-bit arithmetic logic unit (ALU) using ternary logic. Ternary logic offers potential advantages over binary logic, including reduced transistor count and hardware. The authors designed a 1-bit ALU using ternary logic gates (T-gates) for ternary arithmetic and logic operations. Simulation results showed the ternary logic ALU design achieved a 25% reduction in transistor usage compared to an equivalent binary logic ALU design. The ternary logic ALU design approach could potentially be extended to multi-bit ALUs for applications where reduced transistor count is important.
Quantitative Methods for Lawyers - Class #22 - Regression Analysis - Part 1Daniel Katz
This document discusses regression analysis techniques for predicting lawyer hourly rates. It provides an example regression model that estimates rate based on city, firm size, years of experience, practice area, and other independent variables. Graphs and equations are shown to illustrate how regression can be used to model the relationship between a dependent variable (rate) and multiple independent predictors. The document also discusses key regression concepts like the regression coefficient, standard error, and interpreting statistical significance.
Heptagonal Fuzzy Numbers by Max Min MethodYogeshIJTSRD
In this paper, we propose another methodology for the arrangement of fuzzy transportation problem under a fuzzy environment in which transportation costs are taken as fuzzy Heptagonal numbers. The fuzzy numbers and fuzzy values are predominantly used in various fields. Here, we are converting fuzzy Heptagonal numbers into crisp value by using range technique and then solved by the MAX MIN method for the transportation problem. M. Revathi | K. Nithya "Heptagonal Fuzzy Numbers by Max-Min Method" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-3 , April 2021, URL: https://www.ijtsrd.com/papers/ijtsrd38280.pdf Paper URL: https://www.ijtsrd.com/mathemetics/applied-mathamatics/38280/heptagonal-fuzzy-numbers-by-maxmin-method/m-revathi
This document discusses duality in linear programming. It defines the dual problem as another linear program systematically constructed from the original or primal problem, such that the optimal solutions of one provide the optimal solutions of the other. The document provides rules for constructing the dual problem based on whether the primal problem is a maximization or minimization problem. It also gives examples of writing the dual of a primal problem and solving both problems to verify the optimal objective values are equal. Finally, it discusses economic interpretations of duality and the relationship between primal and dual problems and solutions.
A Novel Multiplication Algorithm Based On Euclidean Division For Multiplying ...Jim Webb
This document presents a novel multiplication algorithm based on the Euclidean division algorithm (EDA) for multiplying large integers. The algorithm directly applies EDA when the integers are different sizes, and with a modification when they are the same size. It is supported by the property that the product of consecutive Fibonacci numbers is equal to the sum of squares of preceding terms. A Python implementation shows the algorithm can multiply integers with thousands to millions of digits. The algorithm is well-suited for multiplying unequal integers and could have applications in number theory.
Some Engg. Applications of Matrices and Partial DerivativesSanjaySingh011996
This document contains a submission by three students to Dr. Sona Raj Mam regarding partial differentiation, matrices and determinants, and eigenvectors and eigenvalues. It provides examples of how these mathematical concepts are applied in fields like engineering. Partial differentiation is used in economics to analyze demand and in image processing for edge detection. Matrices and determinants allow representing linear transformations in graphics software. Eigenvalues and eigenvectors have applications in areas like computer science, smartphone apps, and modeling structures in civil engineering. The document also provides real-world examples and references textbooks and websites for further information.
This document summarizes research on using particle swarm optimization to reconstruct microwave images of two-dimensional dielectric scatterers. It formulates the inverse scattering problem as an optimization problem to find the dielectric parameter distribution that minimizes the difference between measured and simulated scattered field data. Numerical results show that a particle swarm optimization approach can accurately reconstruct the shape and dielectric properties of a test cylindrical scatterer, with lower background reconstruction error than a genetic algorithm approach. The research demonstrates that particle swarm optimization is a suitable technique for high-dimensional microwave imaging problems.
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...IJERA Editor
This paper demonstrates the use of linear programming methods to determine the optimal product mix for
profit maximization. Several papers have been written demonstrating the use of linear programming in
finding the optimal product mix in various organizations. This paper aims to show the generic approach to be
taken to find the optimal product mix.
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...IJERA Editor
This document demonstrates using linear programming to determine the optimal product mix for a manufacturing firm to maximize profit. The firm produces n products using m raw materials. The problem is formulated as a linear program to maximize total profit subject to raw material constraints. The optimal solution is found using the simplex method and provides the quantities of each product (v1, v2, etc.) that maximize total profit (z0). The solution may show some product quantities as zero, indicating those products should not be produced to maximize profit under the given constraints.
Economics
Curve Fitting
macroeconomics
Curve fitting helps in capturing the trend in the data by assigning a single function
across the entire range.
If the functional relationship between the two quantities being graphed is known up to
additive or multiplicative constants, it is common practice to transform the plotted data in
such a way that the resulting line is a straight line. A process of quantitatively
estimating the trend of the outcomes, also known as regression or curve fitting, therefore
becomes necessary.
For a series of data, curve fitting is used to find the best-fit curve. The resulting equation can be
used to find points anywhere along the curve. Curve fitting also encompasses interpolation (an exact fit
to the data) and smoothing.
Some people refer to it as regression analysis rather than curve fitting. The curve fitting
process fits equations of approximating curves to the raw field data. Nevertheless, for a
given set of data, the fitting curves of a given type are generally NOT unique.
Smoothing of the curve eliminates components such as seasonal, cyclical and random
variations. Thus, a curve with minimal deviation from all data points is desired. This
best-fitting curve can be obtained by the method of least squares.
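The method of least squares can be sketched with NumPy's polyfit, which chooses the polynomial coefficients minimizing the sum of squared deviations from all data points; the data here is invented for illustration:

```python
import numpy as np

# Invented data following roughly y = 2*x**2 + 1, with small deviations
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8, 51.0])

# np.polyfit minimizes the sum of squared residuals over all data points
a, b, c = np.polyfit(x, y, deg=2)   # best-fit y ~ a*x**2 + b*x + c
y_hat = a * x**2 + b * x + c        # fitted values along the curve
sse = np.sum((y - y_hat) ** 2)      # total squared deviation being minimized
```

Because the data was generated close to 2x² + 1, the recovered leading coefficient lands near 2 and the residual sum of squares stays small.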
What is curve fitting?
Curve fitting is the process of constructing a curve, or mathematical function, that most closely
fits a series of data points. Through curve fitting we can mathematically construct the functional
relationship between the observed facts and parameter values, etc. It is highly effective in
mathematically modelling some natural processes.
What is a fitting model?
A fit model (sometimes fitting model) is a person who is used by a fashion designer or
clothing manufacturer to check the fit, drape and visual appearance of a design on a
'real' human being, effectively acting as a live mannequin.
What are model fit statistics?
The goodness of fit of a statistical model describes how well it fits a set of
observations. Measures of goodness of fit typically summarize the discrepancy
between observed values and the values expected under the model in question.
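One common goodness-of-fit measure, the coefficient of determination (R²), summarizes exactly this discrepancy between observed and model-expected values; a minimal sketch (the function name is our own):

```python
import numpy as np

def r_squared(y, y_hat):
    """R^2 = 1 - SS_res / SS_tot: the closer to 1, the better the fit."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)       # discrepancy from the model
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total variation in the data
    return 1.0 - ss_res / ss_tot

# A perfect fit leaves no residual discrepancy
print(r_squared([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
```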
What is a commercial model?
Commercial modeling is a more generalized type of modeling. There are high
fashion models, and then there are commercial models. ... They can model for
television, commercials, websites, magazines, newspapers, billboards and any other
type of advertisement. Most people who tell you they are models are “commercial”
models.
What is the exponential growth curve?
Growth of a system in which the amount being added to the system is proportional to the
amount already present: the bigger the system is, the greater the increase. (See geometric
progression.) Note: in everyday speech, exponential growth means runaway expansion, such
as in population growth.
Why is population growth exponential?
Exponential population growth: when resources are unlimited, populations
exhibit exponential growth, resulting in a J-shaped curve.
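The definition above ("the amount being added is proportional to the amount already present") can be sketched numerically; the starting population and growth rate are arbitrary values chosen for illustration:

```python
import math

def exponential_growth(p0, rate, t):
    """P(t) = P0 * e**(rate * t): growth proportional to current size."""
    return p0 * math.exp(rate * t)

# The bigger the population, the greater the increase (J-shaped curve)
for t in (0, 10, 20, 30):
    print(t, round(exponential_growth(100, 0.1, t), 1))
```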
Regression is a statistical technique used to model relationships between variables. The key steps are to identify variables, select a dependent variable to predict, examine relationships visually, and find a way to predict the dependent variable using other variables. Correlation coefficients measure the strength of relationships, ranging from −1 to +1, with the magnitude indicating strength. Positive relationships have variables moving in the same direction, while negative relationships have them moving in opposite directions. Non-linear regression can model curvilinear relationships using quadratic terms. Logistic regression is used for categorical dependent variables.
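The correlation coefficient and the sign convention for positive versus negative relationships can be computed in a few lines of NumPy; this Pearson-style sketch uses invented data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient, ranging from -1 to +1."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0: variables move together
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0: they move oppositely
```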
Master of Computer Application (MCA) – Semester 4 MC0079Aravind NC
The document describes mathematical models and provides examples of different types of models. It discusses linear vs nonlinear models, deterministic vs probabilistic models, static vs dynamic models, discrete vs continuous models, and deductive vs inductive vs floating models. It also explains the Erlang family of distributions used in queuing systems and provides the probability density function and cumulative distribution function. Finally, it outlines the graphical method algorithm for solving a linear programming problem with two variables in 8 steps.
Mc0079 computer based optimization methods--phpapp02Rabby Bhatt
This document discusses mathematical models and provides examples of different types of mathematical models. It begins by defining a mathematical model as a description of a system using mathematical concepts and language. It then classifies mathematical models in several ways, such as linear vs nonlinear, deterministic vs probabilistic, static vs dynamic, discrete vs continuous, and deductive vs inductive vs floating. The document provides examples and explanations of each type of model. It also discusses using finite queuing tables to analyze queuing systems with a finite population size. In summary, the document outlines different ways to classify mathematical models and provides examples of applying various types of models.
11.fuzzy inventory model with shortages in man power planningAlexander Decker
This document presents a fuzzy inventory model to determine the optimal time for an employee to quit their current job based on factors like the declining real wage over time and costs associated with changing jobs. The model extends an existing economic order quantity (EOQ) model to account for uncertainty in costs using fuzzy set theory. Membership functions are defined to represent the fuzziness of parameters like real income, costs, and constraints. A fuzzy nonlinear programming problem is formulated and solved using Lagrange multipliers to obtain the optimal solution under fuzzy conditions. The results are compared to the classical crisp model and sensitivity analysis is performed.
This document discusses using simple linear regression to describe relationships between variables in data. It explains that regression finds the linear equation that best describes how a dependent variable (y) changes with an independent variable (x). The equation is the line that minimizes the sum of the squared residuals (deviations from the observed data points). Examples are given of regression analyses conducted to estimate the cost of computer networks based on number of computers, estimate real estate values based on house size, and forecast housing starts based on mortgage rates.
Bender’s Decomposition Method for a Large Two-stage Linear Programming Modeldrboon
The Linear Programming (LP) method can solve many problems in operations research and can obtain optimal solutions. But problems with uncertainties cannot be solved so easily. These uncertainties increase the complexity of the problems, turning them into a large-scale LP model. The discussion starts with the mathematical models. The objective is to minimize the number of system variables subject to the decision variable coefficients and their slacks and surpluses. The problems are then formulated as a Two-stage Stochastic Linear (TSL) model incorporating Bender's Decomposition method. In the final step, the matrix systems are set up to support the MATLAB programming development of the primal-dual simplex and Bender's decomposition methods, which are applied to solve the example problem with four assumed numerical sets of decision variable coefficients simultaneously. The simplex method (primal) failed to determine the results and was computationally time-consuming. The comparison of the ordinary primal, primal-random, and dual methods revealed the advantage of the primal-random method. The results yielded by the application of Bender's decomposition method were proven to be optimal solutions at a high level of confidence.
Propagation of Error Bounds due to Active Subspace ReductionMohammad
This document summarizes the propagation of error bounds due to active subspace reduction in computational models. It presents two algorithms for performing active subspace reduction: one that is gradient-free and reduces the response or state space, and one that is gradient-based and reduces the parameter space. It then develops a theorem for propagating error bounds across multiple reductions, both in the parameter and response spaces. Numerical experiments on an analytic function and a nuclear reactor pin cell model are used to validate the error bound approach.
1. The document presents an analysis of a coupled fluid flow and deformation model using active subspaces to perform dimension reduction and global sensitivity analysis.
2. Important parameters for the fluid flow model are permeability (k), viscosity (μ), and concentration (c), while all parameters influence the deformation model, except initial porosity (φ0).
3. The coupling between the models is shown to be one-way from the fluid flow to the deformation.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
This document is a lab manual for a Computer Graphics and Multimedia course. It includes:
- An introduction and contact information for the instructor.
- A table of contents listing various sections such as the course timetable, syllabus, list of programs to be completed, and important exam questions.
- Details of the course syllabus which covers topics like raster graphics, 2D and 3D transformations, color models, multimedia systems, compression techniques and more.
- A list of 11 programming experiments related to computer graphics algorithms and multimedia, including DDA line drawing, Bresenham's circle algorithm, transformations and compression.
- Descriptions and code for the first 3 experiments - DDA line
The document discusses a method for 3D object recognition from 2D images using centroidal representation. It involves several steps: filtering and binarizing the image, detecting edges, calculating the object center point, extracting features around the centroid, and creating mathematical models using wavelet transforms and autoregression. Centroidal samples represent distances from the center to the boundary every 45 degrees. Wavelet transforms and autoregression are used to create scale and position invariant representations of the object for recognition.
IRJET- An Efficient Reverse Converter for the Three Non-Coprime Moduli Set {4...IRJET Journal
This paper proposes a new and efficient reverse converter for converting residue numbers to decimal numbers for the three moduli set {6, 10, 15} which shares the common factor of 5. The proposed converter replaces larger multipliers used in previous converters with smaller multipliers and adders, reducing the hardware requirements. The hardware implementation of the proposed converter is presented and compared to other state-of-the-art converters, showing it performs better with fewer adders and multipliers. The proposed converter efficiently implements reverse conversion for the non-coprime three moduli set while requiring less hardware than previous approaches.
Geoid height determination is one of the major problems of geodesy because the use of satellite
techniques in geodesy is increasing. Geoid heights can be determined using different methods according
to the available data. Soft computing methods such as fuzzy logic and neural networks have become so popular that
they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty
assessment have enabled us to develop more precise models for our requirements. In this study, how to
construct the best fuzzy model is examined. For this purpose, three different data sets were taken and two
different kinds of fuzzy model (two inputs, one output and three inputs, one output) were formed for the calculation
of geoid heights in Istanbul (Turkey). The results of these fuzzy models were compared with geoid heights
obtained by GPS/levelling methods. The fuzzy approximation models were tested on the test points.
A Correlative Information-Theoretic Measure for Image SimilarityFarah M. Altufaili
A hybrid measure is proposed for assessing the similarity among gray-scale images. The well-known Structural Similarity Index Measure (SSIM) has been designed using a statistical approach that fails under significant noise (low PSNR). The proposed measure, denoted by SjhCorr2, uses a combination of two parts: the first part is information-theoretic, while the second part is based on 2D correlation. The concept of the symmetric joint histogram is used in the information-theoretic part. The new measure shows the advantages of both statistical and information-theoretic approaches. The proposed similarity approach is robust under noise. The new measure outperforms the classical SSIM in detecting image similarity at low PSNR.
Similar to Polynomial regression model of making cost prediction in mixed cost analysis (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
Polynomial regression model of making cost prediction in mixed cost analysis
Mathematical Theory and Modeling www.iiste.org
ISSN 2224-5804 (Paper) ISSN 2225-0522 (Online)
Vol.2, No.2, 2012
Polynomial Regression Model of Making Cost Prediction In
Mixed Cost Analysis
Isaac, O. Ajao (Corresponding author)
Department of Mathematics and Statistics, The Federal Polytechnic, Ado-Ekiti,
PMB 5351, Ado-Ekiti, Ekiti state, Nigeria.
Tel: +2348035252017 E-mail: isaac_seyi@yahoo.com
Adedeji, A. Abdullahi
Department of Mathematics and Statistics, The Federal Polytechnic, Ado-Ekiti,
PMB 5351, Ado-Ekiti, Ekiti state, Nigeria.
Tel: +2348062632084 E-mail: anzwers2003@yahoo.com
Ismail, I. Raji
Department of Mathematics and Statistics, The Federal Polytechnic, Ado-Ekiti,
PMB 5351, Ado-Ekiti, Ekiti state, Nigeria.
Tel: +2348029023836 E-mail: rajimaths@yahoo.com
Abstract
Regression analysis is used across business fields for tasks as diverse as systematic risk estimation,
production and operations management, and statistical inference. This paper presents cubic polynomial
least-squares regression as a robust alternative to the usual linear regression for making cost predictions
in business. The study reveals that polynomial regression is a better alternative, with a very high
coefficient of determination.
Keywords: Polynomial regression, linear regression, high-low method, cost prediction, mixed cost.
1. Introduction
Current practice in teaching regression analysis relies on investigating data sets with techniques
that allow description and inference. There are many alternatives, however, for the actual
computation of regression coefficients and summary statistics. Kmenta (1971) presents a computational
design that allows users to complete the calculations with only pencil and paper. Brigham (1986) suggests
that learners might simply construct a scatter plot and use a ruler to visually approximate the regression line.
Gujarati (2009) recommends the use of statistical packages, which are now easily accessible to users on
mainframe and micro computers (Mundrake & Brown, 1989).
Mixed costs have both a fixed portion and a variable portion. There are a handful of methods used by
managers to break mixed costs into their two manageable components, fixed and variable costs. Separating
mixed costs into fixed and variable portions allows us to use the costs to predict and plan for the
future, since it gives good insight into how these costs behave at various activity levels. The process
of separating a mixed cost into its fixed and variable components is often called cost estimation. The methods
commonly used are the Scatter graph, High-low method, and the Ordinary least square linear regression.
The goal of cost estimation is to determine the amount of fixed and variable costs so that a cost equation
can be used to predict future costs.
2. Data and method
The high-low method uses the highest and the lowest activity levels over a period of time to estimate the
portion of a mixed cost that is variable and the portion that is fixed. Because it uses only the high and low
activity levels to calculate the variable and fixed costs, it may be misleading if the high and low activity
levels are not representative of normal activity. The high-low method is most accurate when the high and
low levels of activity are representative of the majority of the points.
Variable cost per unit (b) = (y2 - y1) / (x2 - x1)
where y2 = the total cost at the highest level of activity,
y1 = the total cost at the lowest level of activity,
x2 = the number of units at the highest level of activity, and
x1 = the number of units at the lowest level of activity.
In other words, variable cost per unit is equal to the slope of the cost level line (i.e. change in total cost /
change in number of units produced).
Total fixed cost (a) = y2 - b x2 = y1 - b x1
The high-low method can be quite misleading. The reason is that cost data are rarely linear and inferences
are based on only two observations, either of which could be a statistical anomaly or outlier. The goal of least
squares is to define a line that fits through a set of points on a graph such that the cumulative sum of
squared distances between the points and the line is minimized, hence the name "least squares".
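The least-squares line can be computed directly from the closed-form formulas for the slope and intercept. A minimal Python sketch (the paper's own analysis uses MINITAB; the data here are those of Table 1 in the appendix):

```python
# Table 1 data: monthly units produced and the associated total cost (N)
units = [60000, 65000, 75000, 80000, 90000, 95000, 100000,
         115000, 120000, 130000, 140000, 175000]
costs = [500000, 940000, 840000, 910000, 1100000, 1500000, 1250000,
         1400000, 1400000, 1200000, 1500000, 2000000]

def least_squares_line(xs, ys):
    """Intercept a and slope b minimizing the sum of squared residuals."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx              # variable cost per unit (slope)
    a = mean_y - b * mean_x    # fixed cost (intercept)
    return a, b

a, b = least_squares_line(units, costs)
# agrees with the Minitab output in Table 2: a ~ 138533, b ~ 10.3435
```

Unlike the high-low method, every observation contributes to the estimates, so a single outlier cannot dominate the fit to the same extent.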
2.2 Polynomial Regression model
In statistics, polynomial regression is a form of linear regression in which the relationship between the
independent variable x and the dependent variable y is modeled as an nth-order polynomial. Polynomial
regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of
y, denoted E(y | x) (Fan, 1996; Magee, 1998). Although a polynomial fits a
nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression
function E(y | x) is linear in the unknown parameters that are estimated from the data.
2.3 The model
yi = β0 + β1 xi + β2 xi^2 + ei,   i = 1, 2, ..., n.   (i)
Mathematically, a parabola is represented by equation (i), also known as a quadratic function or, more
generally, a second-degree polynomial in the variable x; the highest power of x represents the degree of
the polynomial. If x^3 were added to the preceding function (Gujarati, 2009; Studenmund &
Cassidy, 1987), it would be a third-degree polynomial, and so on.
With the cubic term added, equation (i) becomes
yi = β0 + β1 xi + β2 xi^2 + β3 xi^3 + ei,   i = 1, 2, ..., n   (ii)
which is called a third-degree polynomial regression.
The general kth-degree polynomial regression is written as:
yi = β0 + β1 xi + β2 xi^2 + ... + βk xi^k + ei,   i = 1, 2, ..., n
where
β0, β1, ..., βk are the parameters of the model, and
ei is a random error term.
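Because the model is linear in the parameters, the coefficients of a kth-degree polynomial can be estimated by ordinary least squares on a design matrix whose columns are 1, x, x^2, ..., x^k. A minimal sketch in Python with numpy (illustrative data, not the paper's):

```python
import numpy as np

def poly_ols(x, y, degree):
    """Fit y = b0 + b1*x + ... + bk*x^k by ordinary least squares.

    The model is nonlinear in x but linear in the unknown
    coefficients, so the usual OLS machinery applies.
    """
    # Design matrix with columns 1, x, x^2, ..., x^degree
    X = np.vander(x, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Illustrative check: data generated from a known quadratic
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 + 3.0 * x + 0.5 * x**2   # exact, no noise
b = poly_ols(x, y, degree=2)      # recovers [2.0, 3.0, 0.5]
```

In practice a dedicated routine such as `numpy.polyfit` does the same thing with better numerical scaling of the columns.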
3. Data Presentation and Analysis
All analyses were done using MINITAB 11. The scattergram in Fig. (i) suggests the type of regression
model that will fit the data in Table 1. From this figure it is clear that the relationship between total
cost and output resembles an elongated S-curve: the total cost curve first increases
gradually and then rapidly, as predicted by the celebrated law of diminishing returns. This S-shape of the
total cost curve can be captured by the following cubic, or third-degree, polynomial:
yi = β0 + β1 xi + β2 xi^2 + β3 xi^3 + ei,   i = 1, 2, ..., n
where y = total cost and x = output.
3.1 Using the High-Low method
Variable cost per unit (slope) = (2 000 000 - 500 000) / (175 000 - 60 000) = 13.04, that is, N13.04 per unit
TC = FC + VC (X)
Where X = number of units
Using: Total cost (TC) = N 2 000 000
Variable cost per unit (VC) = N 13.04 and
X = 175 000
To obtain total fixed cost (FC)
N2 000 000 = FC + N13.04 (175 000)
FC = N2 000 000 - N2 282 000 = -N282 000.
The line of best fit from the above equations becomes:
TC = - N 282 000 + N 13.04 (X) (vi)
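The high-low arithmetic above can be checked in a few lines (a Python sketch; the slope is rounded to 13.04 as in the text):

```python
# High and low activity points (Table 1: Oct and Jul)
x_low, y_low = 60_000, 500_000
x_high, y_high = 175_000, 2_000_000

# Variable cost per unit: slope through the two extreme points
b = round((y_high - y_low) / (x_high - x_low), 2)   # 1 500 000 / 115 000 -> 13.04

# Fixed cost: substitute back at the high point
fc = y_high - b * x_high   # 2 000 000 - 2 282 000 ~ -282 000
```

The negative intercept falls straight out of the arithmetic, which is the symptom discussed next.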
The negative amount of fixed cost is not realistic and suggests that the total costs at either
the high point or the low point are not representative. The high-low method of determining the fixed and
variable portions of a mixed cost relies on only two sets of data: the costs at the highest level of activity
and the costs at the lowest level of activity. If either set of data is flawed, the calculation can produce an
unreasonable, negative amount of fixed cost. It is possible that at the highest point of activity the costs were
out of line with the normal relationship, referred to as an outlier.
4. Discussion of Results
The R-Square value is a statistical calculation that characterizes how well a particular line fits a set of data.
As a general rule, the closer R2 is to 1.00 the better; as this would represent a perfect fit where every point
falls exactly on the resulting line. The models with the lowest P-value and highest R2 which are 0.0000895
and 0.874 are the linear and polynomial cubic regression models respectively (table 4).
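The R2 comparison can be reproduced from the Table 1 data (a Python sketch with numpy rather than MINITAB; note that because the three models are nested, R2 necessarily rises with the degree, which is why the p-values and adjusted R2 in Tables 2-4 also matter):

```python
import numpy as np

# Table 1 data
units = np.array([60, 65, 75, 80, 90, 95, 100, 115, 120, 130, 140, 175],
                 dtype=float) * 1000
cost = np.array([500, 940, 840, 910, 1100, 1500, 1250, 1400, 1400, 1200,
                 1500, 2000], dtype=float) * 1000

def r_squared(x, y, degree):
    """R^2 of a degree-k polynomial least-squares fit."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    ss_res = residuals @ residuals
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return 1 - ss_res / ss_tot

for degree, name in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
    # expected to agree with Tables 2-4: ~0.799, 0.804, 0.874
    print(f"{name}: R-Sq = {r_squared(units, cost, degree):.3f}")
```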
As noted in Section 3.1, the high-low method produced an unreasonable, negative fixed cost because it relies
on only two sets of data, the costs at the highest and lowest levels of activity, either of which may be an
outlier. All these are indications of its crude and unscientific nature.
5. Conclusion and Recommendation
Based on the results of the analyses, it can be concluded that the polynomial regression model is better than
the conventional linear regression and high-low methods, especially when analysing data relating to cost
and production functions.
The linear and quadratic models are not too bad for prediction with respect to the data used in
this research paper, but the cubic polynomial regression is better. It is therefore recommended that data
analysts should endeavour to always plot a simple scatter diagram before using any regression model, in
order to know the type of relationship that exists between the variables of interest.
References
Brigham, E.F. (1986). Fundamentals of financial management (4th ed.). Chicago: Dryden Press.
Fan, J. (1996). "From linear regression to nonlinear regression". In Local Polynomial Modelling
and Its Applications. Monographs on Statistics and Applied Probability. Chapman & Hall/CRC.
Gujarati, D.N., & Porter, D.C. (2009). Basic Econometrics. New York: McGraw-Hill.
http://www.studyzone.org/testprep/math4/d/linegraph4l.cfm: Data on Monthly unit production and the
associated costs
Kmenta, J. (1971). Elements of econometrics. New York: Macmillan
Magee, Lonnie (1998). "Non-local Behavior in Polynomial Regressions". The American Statistician
(American Statistical Association) 52 (1): 20–22.
Mundrake, G.A., & Brown, B.J. (1989). Application of microcomputer software to university-level course
instruction. Journal of Education for Business, 64(3), 124-128.
Stein, S.H. (1990). Understanding Regression Analysis. Journal of Education for Business, 65(6) 264-269.
Studenmund, A.H., & Cassidy, H.J. (1987). Using Econometrics: A Practical Guide. Boston: Little, Brown.
Appendix
Table 1: Monthly unit production and the associated costs
(sorted from low to high)

Month   Units (x)   Cost (y)
Oct      60 000     N   500 000
Nov      65 000     N   940 000
Mar      75 000     N   840 000
Sept     80 000     N   910 000
Feb      90 000     N 1 100 000
Dec      95 000     N 1 500 000
Jan     100 000     N 1 250 000
Aug     115 000     N 1 400 000
Apr     120 000     N 1 400 000
Jun     130 000     N 1 200 000
May     140 000     N 1 500 000
Jul     175 000     N 2 000 000
Fig. (i): The total cost curve (scatter of total cost against units produced, from Table 1).
Table (2): Regression (Linear)
The regression equation is
y = 138533 + 10.3 x (iii)
Predictor Coef StDev T P
Constant 138533 178518 0.78 0.456
x 10.343 1.643 6.30 0.000
S = 184068 R-Sq = 79.9% R-Sq(adj) = 77.8%
Analysis of Variance
Source DF SS MS F P
Regression 1 1.34336E+12 1.34336E+12 39.65 0.000
Error 10 3.38811E+11 33881051933
Total 11 1.68217E+12
Fig. (ii): Plot of the linear regression model: Y = 138533 + 10.3435X, R-Sq = 0.799
(fitted line with 95% CI and 95% PI bands; cost against units).
Table (3): Polynomial Regression (Quadratic)
Y = -136015 + 15.6406X - 2.33E-05X**2 (iv)
R-Sq = 0.804
Analysis of Variance
SOURCE DF SS MS F P
Regression 2 1.35E+12 6.76E+11 18.4624 6.53E-04
Error 9 3.30E+11 3.66E+10
Total 11 1.68E+12
SOURCE DF Seq SS F P
Linear 1 1.34E+12 39.6492 8.95E-05
Quadratic 1 9.15E+09 0.249846 0.629176
Fig. (iii): Plot of the quadratic regression model: Y = -136015 + 15.6406X - 2.33E-05X**2, R-Sq = 0.804
(fitted curve with 95% CI and 95% PI bands; cost against units).
Table (4): Polynomial Regression (Cubic)
Y = -3888396 + 125.375X - 1.02E-03X**2 + 2.84E-09X**3 (v)
R-Sq = 0.874
Analysis of Variance
SOURCE DF SS MS F P
Regression 3 1.47E+12 4.90E+11 18.5547 5.82E-04
Error 8 2.11E+11 2.64E+10
Total 11 1.68E+12
SOURCE DF Seq SS F P
Linear 1 1.34E+12 39.6492 8.95E-05
Quadratic 1 9.15E+09 0.249846 0.629176
Cubic 1 1.18E+11 4.47643 6.73E-02
Fig. (iv): Plot of the cubic regression model: Y = -3888396 + 125.375X - 1.02E-03X**2 + 2.84E-09X**3, R-Sq = 0.874
(fitted curve with 95% CI and 95% PI bands; cost against units).