Based on the BDM (Bit Decision Making) method, the present work makes two contributions: first, it illustrates the use of the Sum of Products (SOP) technique to systematize the derivation of the correlation function used in the mathematical modelling of each sub-system; second, it extends the method so that each criterion can handle a finite, discrete set of possible subjective supplier qualifications rather than a merely binary one.
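As a toy illustration of the SOP idea (a minimal sketch with a hypothetical two-criterion acceptance rule, not the paper's actual model), an SOP expression can be built mechanically from the rows of a truth table where the output is 1:

```python
# Hypothetical example: build a Sum-of-Products (SOP) expression from a truth table.
from itertools import product

def sop_terms(variables, truth_fn):
    """Return the SOP expression of truth_fn as a string of product terms."""
    terms = []
    for values in product([0, 1], repeat=len(variables)):
        if truth_fn(*values):
            # Each 1-row contributes one product (AND) term:
            # plain variable for 1, primed (negated) variable for 0.
            term = "*".join(v if bit else f"{v}'" for v, bit in zip(variables, values))
            terms.append(term)
    return " + ".join(terms)

# Hypothetical supplier-acceptance rule over two binary criteria A and B: f(A, B) = A OR B
print(sop_terms(["A", "B"], lambda a, b: a or b))   # -> A'*B + A*B' + A*B
```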
- The document summarizes a lecture on using micro data with characteristics-based choice models. It discusses two key advantages of micro data: 1) It provides information on how observed individual characteristics interact with product characteristics. 2) It includes data on individuals who did not purchase products as well as second choices, giving insight into unobserved product characteristics.
- The model specifies utility as depending on observed and unobserved individual characteristics as well as product characteristics. Micro data on first choices matches individual characteristics to chosen products, while second choice data helps account for unobserved characteristics by holding individual conditions constant.
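A standard characteristics-based specification consistent with this summary (notation assumed here, not taken from the slides) writes consumer i's utility from product j as
\[ u_{ij} = x_j'\beta_i + \xi_j + \varepsilon_{ij}, \qquad \beta_i = \bar{\beta} + \Pi z_i + \Sigma \nu_i, \]
where \(x_j\) and \(\xi_j\) are observed and unobserved product characteristics, and \(z_i\) and \(\nu_i\) are observed and unobserved individual characteristics. Micro data identifies \(\Pi\) from the matching of \(z_i\) to chosen \(x_j\); second-choice data can help pin down \(\Sigma\) and the role of \(\xi_j\).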
Sabatelli: Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand - GLOBMOD Health
Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis.
Citation: Sabatelli L (2016) Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand under Conditions of Additive Preferences. PLoS ONE 11(3): e0151390. doi:10.1371/journal.pone.0151390
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0151390
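For reference, the quantities the paper relates are defined as (standard notation; the paper's own derived relationships are not reproduced here)
\[ \eta_i = \frac{\partial \ln q_i}{\partial \ln Y}, \qquad \varepsilon_{ij} = \frac{\partial \ln q_i}{\partial \ln p_j}, \]
with \(Y\) total expenditure. Under additive preferences, a commonly cited approximation of this kind (Pigou's law, valid when budget shares are small) is \(\varepsilon_{ii} \approx \eta_i/\phi\), where \(\phi\) is the Frisch income flexibility.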
A product family with a common platform paradigm can increase the flexibility and responsiveness of the product-manufacturing process and help take away market share from competitors that develop one product at a time. The recently developed Comprehensive Product Platform Planning (CP3) method allows (i) the formation of sub-families of products, and (ii) the simultaneous identification and quantification of platform/scaling design variables. The CP3 model is founded on a generalized commonality matrix representation of the product-platform-plan. In this paper, a new commonality index is developed and introduced in CP3 to simultaneously account for the degree of inter-product commonalities and for the overlap between groups of products sharing different platform variables. To maximize both the performance of the product family and the new commonality measure, we develop and apply an advanced mixed-discrete Particle Swarm Optimization (MDPSO) algorithm. In the MDPSO algorithm, the discrete variables are updated using a deterministic nearest-feasible-vertex criterion after each iteration of the conventional PSO. Such an approach is expected to avoid the undesirable discrepancy in the rate of evolution of discrete and continuous variables. To prevent a premature stagnation of solutions (likely in conventional PSO) while solving the high-dimensional MINLP problem presented by CP3, we introduce a new adaptive diversity-preservation technique. This technique first characterizes the population diversity and then applies a stochastic update of the discrete variables based on the estimated diversity measure. The potential of the new CP3 optimization methodology is illustrated through its application to design a family of universal electric motors. The optimized platform plans provide helpful insights into the importance of accounting for the overlap between different product platforms when quantifying the effective commonality in the product family.
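In the spirit of the MDPSO idea above, a minimal mixed-discrete PSO sketch in which the nearest-feasible-vertex step is approximated by rounding the discrete coordinates after each standard PSO update (hypothetical objective and bounds; not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Hypothetical test function (sphere); x[:2] continuous, x[2:] integer-valued.
    return np.sum(x ** 2)

n_particles, dim, discrete = 20, 4, [2, 3]     # indices of discrete variables
lo, hi = -5.0, 5.0
pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pos[:, discrete] = np.round(pos[:, discrete])
pbest = pos.copy()
pbest_f = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for it in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    pos[:, discrete] = np.round(pos[:, discrete])   # project onto nearest feasible lattice point
    f = np.array([objective(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best point:", gbest, "value:", pbest_f.min())
```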
This document discusses sources of identification from market level data and solutions to precision problems when estimating demand models. It focuses on adding assumptions like a pricing equation to bring more information from existing data. The pricing equation assumes Nash equilibrium in prices and is estimated jointly with the demand equation. This adds degrees of freedom compared to estimating demand alone. The pricing equation provides information on price elasticities and markups that help identify demand parameters. The document also discusses using micro data and adding cost function assumptions to the model.
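The pricing side referred to here is typically the multiproduct Bertrand-Nash first-order condition (standard notation, assumed rather than quoted from the document): for each product j sold by firm f,
\[ s_j(p) + \sum_{k \in \mathcal{J}_f} (p_k - mc_k)\,\frac{\partial s_k(p)}{\partial p_j} = 0, \]
so observed prices and shares carry information about marginal costs \(mc_k\) and, through the share derivatives, about the demand parameters.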
This document summarizes a lecture on analyzing demand systems for differentiated products. It discusses:
1) Demand systems provide information to analyze firm incentives and responses to policy changes. They are important for welfare analysis and constructing price indices.
2) Demand models can consider representative or heterogeneous agents, and model demand in product or characteristic space. Heterogeneous agent models in characteristic space are preferred as they allow combining different data sources.
3) Demand estimation requires simulating aggregate demand from individual demands, which provides unbiased estimates that can be made precise with large simulations.
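A minimal sketch of point 3: simulate heterogeneous individual logit demands and average them into aggregate shares (hypothetical parameters; precision improves with the number of simulated consumers):

```python
import numpy as np

rng = np.random.default_rng(1)
J, R = 3, 10_000                        # products, simulated consumers
x = np.array([1.0, 2.0, 3.0])           # one observed characteristic per product
p = np.array([1.0, 1.5, 2.5])           # prices
beta = 1.0 + 0.5 * rng.standard_normal(R)   # random taste for x across consumers
alpha = 2.0                                  # price sensitivity

v = beta[:, None] * x[None, :] - alpha * p[None, :]    # R x J utilities (outside good = 0)
ev = np.exp(v)
shares_i = ev / (1.0 + ev.sum(axis=1, keepdims=True))  # individual choice probabilities
shares = shares_i.mean(axis=0)                         # simulated aggregate shares
print(shares)
```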
On Prognosis of Forecast Technique of Replacement of Old Products (Innovative Technologies) - BRNSS Publication Hub
The prognosis and analysis of the replacement of old product lines by new lines of innovative products must take into account the capacity of the market, the possible demand for different versions of the product, the presence of competing analog products in the market, the proximity of the market to saturation, etc. This paper introduces an approach to analyzing the replacement of old product lines by new lines of innovative products (innovative technologies), along with recommendations for carrying out such a replacement.
This document provides an overview of multidimensional scaling (MDS) and overall similarity perceptual maps. It discusses the challenges with attribute rating perceptual maps and how MDS can help address these challenges by mapping similarities and dissimilarities among items based on consumer perceptions. The document outlines the history, methodology, implementation steps, and example application of MDS for creating a perceptual map of search engines. Key considerations for using MDS include having a large sample size of subjects and brands/products.
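A small sketch of building an overall-similarity perceptual map with metric MDS in scikit-learn (toy dissimilarity matrix for four hypothetical brands, not the document's search-engine data):

```python
import numpy as np
from sklearn.manifold import MDS

brands = ["A", "B", "C", "D"]
# Symmetric dissimilarities (0 = identical), e.g. averaged consumer judgments.
D = np.array([[0, 2, 6, 5],
              [2, 0, 5, 6],
              [6, 5, 0, 2],
              [5, 6, 2, 0]], dtype=float)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
for b, (x, y) in zip(brands, coords):
    print(f"{b}: ({x:.2f}, {y:.2f})")
```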
This document provides an introduction to dynamic demand modeling for storable and durable goods. It discusses how consumer stockpiling behavior and sales can lead to biases in static demand estimation. A simple demand model is then presented that accounts for demand anticipation effects using two consumer types - storers and non-storers. The model assumes perfect foresight, a fixed storage period, and derives different purchasing patterns between the four states of current and past price periods.
This document discusses methods for estimating the output gap and decomposing it into observable components. It provides a unified framework by formulating most output gap estimation methods as linear filters. This allows the output gap estimate to be expressed as a weighted average of observed macroeconomic data over time. The document demonstrates how to decompose an output gap estimate into the contributions made by different data series, like output, inflation, and unemployment. It also shows how to analyze how output gap estimates are revised as new data is incorporated using this linear filter framework. The framework provides insight into which data each method uses and how it weights them to estimate an unobserved output gap.
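As one concrete instance of the linear-filter view, the Hodrick-Prescott filter's cycle component is an output-gap estimate that is a linear function of the observed series (synthetic quarterly log-output below; lamb=1600 is the conventional quarterly smoothing parameter):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(2)
t = np.arange(120)
log_y = 0.005 * t + 0.02 * np.sin(t / 8) + 0.005 * rng.standard_normal(120)

cycle, trend = hpfilter(log_y, lamb=1600)   # cycle = output-gap estimate
print(cycle[-4:])
```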
This document summarizes a research paper that develops a measure of dynamic efficiency within a dynamic discrete choice structural inventory model. The model allows a firm to decide each period whether to order (a_it = 1) or not order (a_it = 0) individual products. Using data from a Portuguese firm, the model is estimated using a two-stage approach. In the first stage, transition probabilities of state variables are estimated nonparametrically. In the second stage, parameters are estimated using Bayesian methods. A counterfactual experiment indicates the firm's actual decisions diverged from optimal decisions in at least 12.71% of cases.
This document discusses modeling the skewness and kurtosis of box office revenue data using the Box-Cox power exponential (BCPE) distribution within the generalized additive models for location, scale and shape (GAMLSS) framework. It finds that the BCPE distribution provides a better fit than the traditionally used Pareto–Levy–Mandelbrot distribution. The flexible four-parameter BCPE distribution allows modeling the location, scale, skewness, and kurtosis parameters of box office revenues as smooth functions of explanatory variables like opening revenues and number of screens. This overcomes limitations of previous models and provides a better understanding of box office revenues across different time periods.
Maintenance dynamics, tools for machines functionality in a competitive environment - Alexander Decker
This document discusses maintenance dynamics tools for improving machine functionality in a competitive environment. The author developed models to assist in comprehensive maintenance planning even when machines are stressed to meet customer demand. These models were tested on polyethene bag production machines over 3 months. Data collected before and after introducing the models was analyzed, showing increased machine functionality with the models despite aging, compared to past functionality. The models provide strategies for maintenance based on output levels, demand, and maintenance severity levels to optimize production capacity and functionality in different competitive situations.
Applied Business Statistics, Ken Black, Ch. 3 Part 2 - AbdelmonsifFadl
This document contains excerpts from Chapter 3 and Chapter 12 of the 6th edition of the textbook "Business Statistics" by Ken Black. Chapter 3 discusses measures of shape such as skewness and the coefficient of skewness. Chapter 12 introduces regression analysis and correlation, covering topics like the Pearson correlation coefficient, least squares regression, and residual analysis. Examples are provided to demonstrate calculating the correlation coefficient and estimating the regression equation to predict costs from number of passengers for an airline.
Applied Business Statistics, Ken Black, Ch. 3 Part 1 - AbdelmonsifFadl
This document provides an overview of descriptive statistics concepts including measures of central tendency (mean, median, mode), measures of variability (range, standard deviation, variance), and how to calculate these measures from both ungrouped and grouped data. It defines key terms, explains how to compute various statistics, and includes example problems and solutions. The learning objectives are to understand and be able to compute different descriptive statistics and apply concepts like the empirical rule and Chebyshev's theorem.
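A quick numeric illustration of the chapter's ungrouped-data measures (arbitrary sample values):

```python
import statistics as st

data = [5, 7, 7, 8, 10, 12, 15]
print("mean  :", st.mean(data))
print("median:", st.median(data))
print("mode  :", st.mode(data))
print("range :", max(data) - min(data))
print("stdev :", st.stdev(data))       # sample standard deviation
print("var   :", st.variance(data))    # sample variance
```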
Applied Business Statistics, Ken Black, Ch. 6 - AbdelmonsifFadl
This chapter summary covers key concepts about continuous probability distributions discussed in Chapter 6 of the textbook "Business Statistics, 6th ed." by Ken Black. The chapter objectives are to understand the uniform distribution, appreciate the importance of the normal distribution, and know how to solve normal distribution problems. It discusses the uniform, normal, and exponential distributions. It explains how to calculate probabilities using the normal distribution and z-scores. It also discusses when the normal distribution can be used to approximate the binomial distribution.
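A worked normal-probability calculation via the z-score, in the spirit of the chapter's problems (the mean and standard deviation are assumed, not the textbook's):

```python
from scipy.stats import norm

mu, sigma = 100.0, 15.0
x = 120.0
z = (x - mu) / sigma                 # z-score
print("P(X <= 120) =", norm.cdf(z))  # about 0.9088
```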
Presented by: Thomas F. Coleman, Director, Waterloo Research Institute in Insurance, Securities and Quantitative Finance at University of Waterloo
MLX FinTech Conference II, Toronto, May 2018.
More info at: https://www.machinelearningx.net
Applied Business Statistics, Ken Black, Ch. 5 - AbdelmonsifFadl
This document discusses discrete probability distributions from Chapter 5 of the textbook "Business Statistics, 6th ed." by Ken Black. It defines discrete and continuous random variables and distributions. It describes how to calculate the mean and variance of a discrete distribution. It also introduces the binomial and Poisson distributions and provides examples of how to calculate probabilities using them. For the binomial distribution example, it calculates the probability of getting two or fewer unemployed workers in a random sample of 20 Jackson, Mississippi residents. For the Poisson distribution example, it calculates the probability of more than 7 customers arriving at a bank in a 4-minute interval, given an average arrival rate of 3.2 customers every 4 minutes.
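The two worked examples in this summary can be reproduced directly with scipy; the binomial success probability p is assumed below, since the summary does not state it:

```python
from scipy.stats import binom, poisson

p = 0.06                                   # assumed unemployment proportion
print("P(X <= 2), n=20:", binom.cdf(2, 20, p))

lam = 3.2                                  # customers per 4-minute interval (given)
print("P(X > 7):", 1 - poisson.cdf(7, lam))
```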
International Journal of Applied Sciences and Innovation, Vol. 2015, No. 2 - ... - sophiabelthome
This document describes using a simulation model to determine the optimal order quantity for a wholesale supplier. Regression analysis was used to forecast quarterly sales for 2007. A simulation model was built in Excel to express the company's sales and inventory schedule. By varying order quantities and simulating demand, profit distributions were found. The order quantities that minimized risk and showed relatively high profit for each quarter were determined to be the optimal order quantities. These were 310,000m for Q1, 270,000m for Q2, 250,000m for Q3, and 440,000m for Q4.
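A minimal Monte Carlo sketch of the approach described (prices, costs, and the demand distribution below are invented; the article's Excel model is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)
price, cost, salvage = 1.2, 1.0, 0.6        # per metre
demand = rng.normal(300_000, 40_000, 10_000)

for q in (250_000, 310_000, 370_000):
    sold = np.minimum(q, demand)
    profit = price * sold + salvage * (q - sold) - cost * q
    print(f"Q={q:>7,}: mean={profit.mean():>10,.0f}  5th pct={np.percentile(profit, 5):>10,.0f}")
```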
The document provides information about solving a linear programming problem to maximize profits for a swimwear manufacturing company called Super Swim. The company makes two products - shorts and briefs - and wants to determine the optimal production quantities of each given material, labor, and other constraints. The summary is:
1. Super Swim manufactures shorts and briefs and wants to determine the optimal production quantities of each to maximize profits within material, labor, and other constraints.
2. The decision variables are the number of shorts (S) and briefs (B) to produce. The objective is to maximize total profit, which is a linear function of S and B.
3. The constraints include limits on available material and labor (see the numeric sketch below).
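A numeric sketch of such a formulation with scipy.optimize.linprog; the profit coefficients and constraint limits are invented, since the summary does not give Super Swim's actual numbers:

```python
from scipy.optimize import linprog

# Maximize 12*S + 8*B  ->  minimize -(12*S + 8*B)
c = [-12, -8]
A_ub = [[1.0, 0.6],    # metres of material per unit of shorts / briefs
        [0.5, 0.4]]    # labour hours per unit
b_ub = [400, 250]      # material and labour available
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
S, B = res.x
print(f"shorts={S:.1f}, briefs={B:.1f}, profit={-res.fun:.1f}")
```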
Polynomial regression model of making cost prediction in mixed cost analysis - Alexander Decker
This document presents a study comparing different regression models for predicting costs based on production levels. It finds that a cubic polynomial regression model provides a better fit than linear regression or the high-low method. The study uses cost and production data from a company to build linear, quadratic, and cubic regression models. It finds the cubic polynomial regression has the highest R-squared value and lowest p-value, indicating it is the best-fitting model. The study concludes that polynomial regression generally provides a better approach for cost prediction than conventional linear regression or the high-low method.
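A small sketch of the comparison described, fitting linear and cubic models to synthetic cost-versus-production data and comparing R-squared:

```python
import numpy as np

rng = np.random.default_rng(4)
units = np.linspace(10, 100, 30)
cost = 500 + 12 * units - 0.15 * units**2 + 0.002 * units**3 + rng.normal(0, 40, 30)

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

for degree in (1, 3):
    coeffs = np.polyfit(units, cost, degree)
    yhat = np.polyval(coeffs, units)
    print(f"degree {degree}: R^2 = {r_squared(cost, yhat):.4f}")
```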
11. Polynomial regression model of making cost prediction in mixed cost analysis - Alexander Decker
This document presents a study comparing different regression models for predicting costs based on production levels. It finds that a cubic polynomial regression model provides a better fit than linear regression or the high-low method. The study uses cost and production data from a company to build linear, quadratic, and cubic regression models. It finds the cubic polynomial regression has the highest R-squared value and lowest p-value, indicating it is best able to model the cost patterns in the data. The document concludes the cubic polynomial regression provides a better approach for cost prediction than traditional linear regression or high-low methods.
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Econometrics of High-Dimensional Sparse Models - NBER
The document discusses high-dimensional sparse econometric models where the number of predictors (p) is much larger than the sample size (n). It outlines an approach for estimating regression functions using penalization methods like the LASSO. Specifically, it discusses:
1. Using the LASSO estimator to minimize squared errors while penalizing the l1-norm of coefficients, inducing sparsity.
2. Choosing the optimal penalty level as a function of the error variance and sample size. Variants like the square-root LASSO provide a tuning-free approach.
3. Examples showing how sparse approximations can better capture patterns in population data than traditional low-dimensional approximations.
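A compact LASSO sketch in a p >> n setting (the penalty here is chosen by cross-validation rather than the plug-in rule of point 2, and the data are synthetic):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n, p, s = 100, 500, 5                    # n observations, p regressors, s true nonzeros
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = [3, -2, 1.5, -1, 0.5]
y = X @ beta + rng.standard_normal(n)

model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("penalty:", model.alpha_, "selected:", selected[:10])
```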
The document compares oversampling with small area estimation methods for estimating poverty indicators at a small-area level, using data from the EU-SILC survey and the census for Tuscany, Italy. It estimates headcount ratios and poverty gaps for the province of Pisa by gender of the head of household, using direct estimates from standard and oversampled EU-SILC data as well as M-quantile small area estimates, and finds that the small area estimates have higher values and lower standard errors.
This document analyzes the dual beta model for stocks listed on the Karachi Stock Exchange between 1997-2007. It finds that beta, a measure of risk, varies between bull and bear markets for some stocks. Specifically, the beta was higher in bear markets than bull markets for 9 of the 15 stocks analyzed, while the reverse was true for the other 6 stocks. The analysis uses a dual beta model that estimates separate betas for bull and bear periods to test if the betas are statistically different between the two market conditions. The results provide some evidence that beta varies depending on whether the overall market is rising or falling.
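A sketch of a dual-beta regression: separate bull and bear betas estimated via interaction dummies on synthetic returns (the bull/bear indicator below is a crude sign rule, not the study's market-phase definition):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
rm = rng.normal(0.001, 0.02, 500)             # market returns
D = (rm > 0).astype(float)                    # crude bull indicator for illustration
ri = 0.9 * D * rm + 1.3 * (1 - D) * rm + rng.normal(0, 0.01, 500)

X = sm.add_constant(np.column_stack([D * rm, (1 - D) * rm]))
res = sm.OLS(ri, X).fit()
print(res.params)                 # intercept, bull beta, bear beta
# A Wald test of equal betas: res.t_test('x1 = x2')
```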
This document appears to be an assignment submission for a financial engineering course. It includes a plagiarism declaration signed by the student, Andrew Hair. The assignment contains 11 questions addressing interest rate derivatives and modeling using the Vasicek model. Code is provided in MATLAB to generate simulations and analyze interest rate data based on the questions.
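A Python sketch of Vasicek short-rate simulation via Euler discretization of dr = a(b - r)dt + sigma dW; the assignment's code was in MATLAB, and the parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
a, b, sigma = 0.5, 0.04, 0.01      # mean-reversion speed, long-run mean, volatility
r0, T, n = 0.02, 5.0, 1000
dt = T / n

r = np.empty(n + 1)
r[0] = r0
for i in range(n):
    r[i + 1] = r[i] + a * (b - r[i]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

print("final rate:", r[-1], "mean of path:", r.mean())
```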
The narrator became the protagonist of a trip to Greece but forgot where they were after arriving. Using clues such as a Greek song and a picture of a house on the island of Mykonos, they were able to recall that they were in Mykonos, Greece. They found a note mentioning "Petros e Irene Puerto Hamburgo 200", and with Google they discovered that they had gone to Mykonos to learn the story of the pelican Petros, who lived there for 30 years.
The document announces an exhibition on "Wisdom and Visions of a Jewish Prince in Exile", to be held from May 4 to 16, 2008, from 19:00 to 20:30, at 18 Av. 0-85 Zona 15 Vista Hermosa II, presented by Pr. Rudy Méndez.
This document presents a list of chapter titles that appear to describe biblical events and visions related to Daniel, including the victory of Babylon, Daniel in the fiery furnace, prophetic visions of beasts and a son of man, and world wars leading to the victory of Jerusalem.
A teacher created a chapter-by-chapter study in PowerPoint and Word to help students prepare. The study includes summaries and notes for each chapter to offer a concise guide.
The document presents an introduction to RFID technology. It explains that RFID uses radio-frequency-based identifiers, describes the basic structure of an RFID system including tags and readers, and discusses topics such as tag types, how the technology works, the EPC code, and common and unusual uses of RFID.
This behaviorist pedagogical model is based on the development of observable, measurable objectives that students must achieve through programmed activities, stimuli, and reinforcement. The teacher verifies the program and reinforces the expected behavior by authorizing progression to new learning. Objectives guide instruction, and reinforcement ensures that students have achieved the expected competency.
Development of a family of products that satisfies different sectors of the market introduces significant challenges to today's manufacturing industries, from development time to aftermarket services. A product family with a common platform paradigm offers a powerful solution to these daunting challenges. The Comprehensive Product Platform Planning (CP3) framework formulates a flexible product family model that (i) seeks to eliminate traditional boundaries between modular and scalable families, (ii) allows the formation of sub-families of products, and (iii) yields the optimal depth and number of platforms. In this paper, the CP3 framework introduces a solution strategy that avoids two common assumptions: namely, that (i) the identification of platform/non-platform design variables and the determination of variable values are separate processes, and (ii) the cost reduction of creating product platforms is independent of the total number of each product manufactured. A new Cost Decay Function (CDF) is developed to approximate the reduction in cost with increasing commonalities among products, for a specified capacity of production. The Mixed Integer Non-Linear Programming (MINLP) problem, presented by the CP3 model, is solved using a novel Platform Segregating Mapping Function (PSMF). The proposed CP3 framework is implemented on a family of universal electric motors.
This document summarizes the analysis of data from a pharmaceutical company to model and predict the output variable (titer) from input variables in a biochemical drug production process. Several statistical models were evaluated including linear regression, random forest, and MARS. The analysis involved developing blackbox models using only controlled input variables, snapshot models using all input variables at each time point, and history models incorporating changes in input variables over time to predict titer values. Model performance was compared using cross-validation.
This document provides an overview of an Operations Research/Management Science course. It introduces some key OR/MS techniques like linear programming, simplex method, duality theory, PERT-CPM, game theory, decision analysis, Markov chains, queueing theory, inventory theory, forecasting, and simulation. It lists the textbook and references for the course. The grading breakdown and topic outline are also provided. Several applications of each technique are mentioned.
Design of optimized Interval Arithmetic Multiplier - VLSICS Design
Many DSP and control applications require the user to know how various numerical errors (uncertainty) affect the result. This uncertainty is eliminated by replacing non-interval values with intervals. Since most DSPs operate in real-time environments, fast processors are required to implement interval arithmetic. The goal is to develop a platform in which interval arithmetic operations are performed at the same computational speed as in present-day signal processors. We therefore propose the design and implementation of an interval arithmetic multiplier that operates on IEEE 754 numbers. The proposed unit consists of a floating-point CSD multiplier and an interval operation selector. This architecture implements an algorithm that is faster than the conventional interval-multiplier algorithm. The cost overhead of the proposed unit is 30% with respect to a conventional floating-point multiplier. The performance of the proposed architecture is better than that of a conventional CSD floating-point multiplier, as it can perform both interval multiplication and floating-point multiplication as well as interval comparisons.
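As a reference point for what the hardware computes, interval multiplication itself is simple (a tiny Python sketch):

```python
# Interval multiplication: [a, b] * [c, d] = [min(products), max(products)].
def interval_mul(x, y):
    a, b = x
    c, d = y
    products = (a * c, a * d, b * c, b * d)
    return (min(products), max(products))

print(interval_mul((-1.0, 2.0), (3.0, 4.0)))   # -> (-4.0, 8.0)
```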
This work was presented at 51st AIAA/SDM conference, Apr 14, 2010 in Orlando. The work presented in this paper was performed in collaboration with Prof. Achille Messac and Dr. Ritesh Khire.
Here are the key advantages and disadvantages of arrays:
Advantages of arrays:
- Increased directivity - Arrays allow signals to be reinforced in desired directions while cancelled in others, providing improved directivity over a single antenna.
- Increased gain - The increased directivity of an array leads to higher gain compared to a single antenna. This allows for longer communication ranges.
- Beam steering - By adjusting the phase or time delay of signals to each antenna, the main transmission direction of an array can be steered electronically without moving physical elements (see the sketch after this list).
Disadvantages of arrays:
- Increased complexity - Arrays require additional hardware for power distribution to each antenna element, as well as phase/time-delay control circuitry.
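To make the beam-steering point concrete, a small sketch of the array factor of a uniform linear array with a progressive phase shift (an 8-element array with half-wavelength spacing is assumed):

```python
import numpy as np

N, d = 8, 0.5                      # elements, spacing in wavelengths
steer = np.deg2rad(30)             # desired main-beam direction
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)

k = 2 * np.pi                      # wavenumber in units of 1/wavelength
n = np.arange(N)[:, None]
af = np.sum(np.exp(1j * k * d * n * (np.sin(theta) - np.sin(steer))), axis=0)
af_db = 20 * np.log10(np.abs(af) / N + 1e-12)
print("peak at", np.rad2deg(theta[np.argmax(af_db)]), "degrees")
```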
Chapter 2 Linear Programming for Business (1).pptx - animutsileshe1
This document discusses linear programming problems, including their formulation and solution. It begins by defining linear programming and its components, such as decision variables, objective functions, and constraints. It then outlines the assumptions of linear programming problems and provides an example of how to formulate a problem. Finally, it discusses two methods for solving linear programming problems - the graphical method and the simplex method.
IRJET - Optimization of 1-Bit ALU using Ternary Logic - IRJET Journal
This document summarizes a research paper that proposes a novel approach to implementing a 1-bit arithmetic logic unit (ALU) using ternary logic. Ternary logic offers potential advantages over binary logic, including reduced transistor count and hardware. The authors designed a 1-bit ALU using ternary logic gates (T-gates) for ternary arithmetic and logic operations. Simulation results showed the ternary logic ALU design achieved a 25% reduction in transistor usage compared to an equivalent binary logic ALU design. The ternary logic ALU design approach could potentially be extended to multi-bit ALUs for applications where reduced transistor count is important.
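For orientation, a sketch of the basic ternary logic operations commonly used in such designs, assuming the usual min/max convention over the levels {0, 1, 2} (the paper's actual T-gate circuits are transistor-level and are not shown here):

```python
def t_and(a, b): return min(a, b)   # ternary AND
def t_or(a, b):  return max(a, b)   # ternary OR
def t_not(a):    return 2 - a       # standard ternary inverter

for a in (0, 1, 2):
    for b in (0, 1, 2):
        print(a, b, t_and(a, b), t_or(a, b), t_not(a))
```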
This document discusses the process of fuzzy reasoning, which involves 3 steps: fuzzification, rule evaluation, and defuzzification. It describes each step in detail. Fuzzification involves transforming crisp inputs into fuzzy inputs using membership functions. Rule evaluation uses IF-THEN rules with fuzzy logic operators. Defuzzification transforms fuzzy outputs into crisp outputs using methods like center of gravity. The document provides examples and illustrations of each step in fuzzy reasoning. It also discusses applications of fuzzy logic in areas like cement kiln control.
The document discusses fuzzy reasoning and fuzzy inferencing. It describes the three main steps in fuzzy inferencing: 1) fuzzification, which transforms crisp inputs into fuzzy inputs using membership functions, 2) rule evaluation using IF-THEN rules with fuzzy logic operators, and 3) defuzzification to produce crisp outputs from fuzzy outputs using methods like center of gravity. It provides examples of applications that have benefited from fuzzy systems like cement kiln control and expert systems. Finally, it presents some questions related to fuzzy logic concepts.
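A compact end-to-end sketch of the three steps both summaries describe: fuzzification with triangular membership functions, Mamdani-style min/max rule evaluation, and centroid defuzzification (toy temperature-to-fan-speed rules; the membership functions and rule base are invented):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

temp = 67.0
cold, hot = tri(temp, 40, 50, 70), tri(temp, 60, 80, 90)        # fuzzification

speed = np.linspace(0, 10, 501)
slow, fast = tri(speed, 0, 2, 5), tri(speed, 5, 8, 10)

# Rules: IF cold THEN slow; IF hot THEN fast (min for implication, max to aggregate)
aggregated = np.maximum(np.minimum(cold, slow), np.minimum(hot, fast))

# Centroid (centre-of-gravity) defuzzification
crisp = np.sum(speed * aggregated) / np.sum(aggregated)
print("fan speed:", round(crisp, 2))
```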
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BEST COMPROMISE SOLUTION - gerogepatton
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method for solving the bi-objective quadratic programming problem. An important contribution of the proposed model is that different efficient support vectors are obtained by changing the weighting values. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions.
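A sketch of the weighting-method scalarization for a bi-objective soft-margin SVM, tracing different efficient solutions as the weights change (cvxpy formulation; the data and weight pairs are synthetic, and this is an illustration of the scalarization idea rather than the paper's exact model):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])

for w1, w2 in [(1.0, 0.1), (1.0, 1.0), (0.1, 1.0)]:
    w, b = cp.Variable(2), cp.Variable()
    slack = cp.pos(1 - cp.multiply(y, X @ w + b))          # hinge slack
    obj = cp.Minimize(w1 * 0.5 * cp.sum_squares(w) + w2 * cp.sum(slack))
    cp.Problem(obj).solve()
    print(w1, w2, "margin term:", round(cp.sum_squares(w).value, 3),
          "slack term:", round(cp.sum(slack).value, 3))
```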
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BEST COMPROMISE SOLUTION - ijaia
Determination of Optimal Product Mix for Profit Maximization using Linear Programming - IJERA Editor
This paper demonstrates the use of linear programming methods to determine the optimal product mix for profit maximization. Several papers have been written demonstrating the use of linear programming to find the optimal product mix in various organizations. This paper aims to show the generic approach to be taken to find the optimal product mix.
Determination of Optimal Product Mix for Profit Maximization using Linear Programming - IJERA Editor
This document demonstrates using linear programming to determine the optimal product mix for a manufacturing firm to maximize profit. The firm produces n products using m raw materials. The problem is formulated as a linear program to maximize total profit subject to raw material constraints. The optimal solution is found using the simplex method and provides the quantities of each product (v1, v2, etc.) that maximize total profit (z0). The solution may show some product quantities as zero, indicating those products should not be produced to maximize profit under the given constraints.
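In the notation of the summary (quantities \(v_j\), total profit \(z\)), the underlying LP has the generic form
\[ \max\; z = \sum_{j=1}^{n} c_j v_j \quad \text{s.t.} \quad \sum_{j=1}^{n} a_{ij} v_j \le b_i \;\; (i = 1,\dots,m), \qquad v_j \ge 0, \]
where \(c_j\) is the unit profit of product j, \(a_{ij}\) the amount of raw material i consumed per unit of product j, and \(b_i\) the available stock of material i.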
This document discusses methods for estimating the output gap and decomposing it into observable components. It provides a unified framework by representing most estimation methods as linear filters. This allows the output gap estimate to be expressed as a weighted average of observed macroeconomic data over time. The document demonstrates how to decompose an output gap estimate into the contributions made by different data series, like output, inflation, and unemployment. It also shows how to analyze how estimates are revised as new data is incorporated. Understanding estimates as linear filters provides insight into which data drives the estimate and how sensitive it is to data revisions. The document applies these concepts to specific estimation techniques, including univariate filters, multivariate filters, VAR models, and DSGE models.
IRJET - Handwritten Digit Classification using Machine Learning Models - IRJET Journal
This document summarizes a research paper that compares different machine learning models for classifying handwritten digits using the MNIST dataset. It finds that a Support Vector Machine Classifier achieves the highest accuracy of 98.3% compared to 96.3% for a Random Forest Classifier and 88.97% for a Logistic Regression model. The paper preprocesses the images, implements the three classifiers, and evaluates their performance based on accuracy and other metrics like precision and recall. It concludes that the SVM Classifier is the most accurate for classifying handwritten digits from the MNIST dataset.
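A condensed sketch of the comparison on scikit-learn's small digits dataset, standing in for MNIST (accuracies will differ from the paper's):

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("LogReg", LogisticRegression(max_iter=5000))]:
    acc = clf.fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: {acc:.3f}")
```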
IRJET - Movie Genre Prediction from Plot Summaries by Comparing Various Classification Algorithms - IRJET Journal
This document discusses predicting movie genres from plot summaries using various classification algorithms. It analyzes Multinomial Naive Bayes, Logistic Regression, Random Forest, and Stochastic Gradient Descent algorithms on a dataset of over 40,000 movie plots labeled with 12 genres. It finds that Multinomial Naive Bayes performs best, achieving the highest AUC scores for predicting each genre individually. It then builds separate classifiers for each genre using the best performing algorithm to predict the genre of new movie plots.
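A minimal sketch of the per-genre approach described: TF-IDF features plus one binary Multinomial Naive Bayes classifier per genre (toy plots and hypothetical labels):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

plots = ["A detective hunts a serial killer in the city",
         "Two friends fall in love on a summer road trip",
         "A rogue cop chases smugglers through the harbor",
         "An awkward romance blossoms at a wedding"]
is_thriller = [1, 0, 1, 0]     # one binary target per genre

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(plots, is_thriller)
print(clf.predict(["A spy races to stop an assassination"]))
```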
The Comprehensive Product Platform Planning (CP3) framework presents a flexible mathematical model of the platform planning process, which allows (i) the formation of sub-families of products, and (ii) the simultaneous identification and quantification of platform/scaling design variables. The CP3 model is founded on a generalized commonality matrix that represents the product platform plan, and yields a mixed binary-integer non-linear programming problem. In this paper, we develop a methodology to reduce the high-dimensional binary integer problem to a more tractable integer problem, where the commonality matrix is represented by a set of integer variables. Subsequently, we determine the feasible set of values for the integer variables in the case of families with 3-7 kinds of products. The cardinality of the feasible set is found to be orders of magnitude smaller than the total number of unique combinations of the commonality variables. In addition, we also present the development of a generalized approach to Mixed-Discrete Non-Linear Optimization (MDNLO) that can be implemented through standard non-gradient-based optimization algorithms. This MDNLO technique is expected to provide a robust and computationally inexpensive optimization framework for the reduced CP3 model. The generalized approach to MDNLO uses continuous optimization as the primary search strategy; however, it evaluates the system model only at the feasible locations in the discrete variable space.
Soft computing stock market price prediction for the Nigerian Stock Exchange - INFOGAIN PUBLICATION
Forecasting price movements in the stock market has been a major challenge for common investors, businesses, brokers, and speculators because stock prices are considered very dynamic and susceptible to quick changes. As more and more money is invested, investors grow anxious about future price trends in the market, creating a highly desirable need for a more 'intelligent' prediction model. Two soft computing models, an Artificial Neural Network (ANN) and a fuzzy-neuro hybrid model, were used to forecast the next day's closing price. The historical trading data was obtained from the Nigerian Stock Exchange for Dangote Sugar Refinery Plc. The results showed the power of soft computing (SC) techniques in stock price prediction.
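A sketch of a next-day closing-price predictor using a small neural network (synthetic prices standing in for the Dangote Sugar Refinery data; this illustrates the ANN idea, not the paper's hybrid model):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
prices = 100 + np.cumsum(rng.normal(0, 1, 600))       # synthetic closing prices

window = 5                                             # use last 5 closes as features
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                            # hold out the last 50 days
print("test R^2:", model.score(X[-50:], y[-50:]))
```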
Similar to AN IMPROVED DECISION SUPPORT SYSTEM BASED ON THE BDM (BIT DECISION MAKING) METHOD FOR SUPPLIER SELECTION USING BOOLEAN ALGEBRA
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter - University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared with natural aggregate (NA) pavement, RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications - gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Understanding Inductive Bias in Machine Learning - SUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Advanced control scheme of doubly fed induction generator for wind turbine us...
AN IMPROVED DECISION SUPPORT SYSTEM BASED ON THE BDM (BIT DECISION MAKING) METHOD FOR SUPPLIER SELECTION USING BOOLEAN ALGEBRA
International Journal of Managing Public Sector Information and Communication Technologies (IJMPICT), Vol. 6, No. 2, June 2015
DOI: 10.5121/ijmpict.2015.6203
Gabriel Almazán¹
¹Industrial Engineering Educational Program, Superior School of Tepeji, Basic Sciences Institute, Autonomous University of Hidalgo, México
ABSTRACT
Based on the BDM (Bit Decision Making) method, the present work makes two contributions: first, it illustrates the use of the technique known as SOP (Sum Of Products) to systematize the process of obtaining the correlation function for each sub-system's mathematical model; second, it provides the capacity to manage a finite, discrete set of possible subjective supplier qualifications at any criterion, rather than only a binary one.
KEYWORDS
Supplier Selection, Decision Support Systems, BDM (Bit Decision Making), Multi Criteria Decision
Analysis, Boolean Algebra.
1. INTRODUCTION
The present work focuses on providing a contribution to the BDM (Bit Decision Making) method reported in [1]. To accomplish this, we give some basic conceptual and theoretical references that provide a fully operative mechanism to obtain the correlation function relating the logic binary values assigned to suppliers by the decision maker(s) (the inputs) to each sub-system's output decision. As a consequence, an additional benefit is the capacity to qualify suppliers in more than just two ways ("yes" or "no"): in a finite, discrete set of values.

In Section 2 we give a brief exposition of the BDM method and identify the opportunity areas in the initial proposal where we visualize our contributions. In Section 3 we present the conceptual and theoretical framework that enables the BDM method to manage a discrete set of values for supplier qualifications. In Section 4 we illustrate the application of this conceptual and theoretical background to the case presented in [1]. Finally, in Section 5 we propose a variation on a specific sub-system to visualize the implications and benefits of our proposal.
2. THE BDM METHOD
2.1. A brief description
In the BDM method, the decision is reached by considering every supplier's performance on sub-systems of the form shown in Table 1.
Table 1. Truth table structure for any sub-system.

Pattern Number   First Criterion   Second Criterion   ...   Nth Criterion   Output
1                yes/no            yes/no             ...   yes/no          Yes/No
2                yes/no            yes/no             ...   yes/no          Yes/No
...              yes/no            yes/no             ...   yes/no          Yes/No
Mth pattern      yes/no            yes/no             ...   yes/no          Yes/No
The decision maker configures the truth table by choosing the criteria combinations that output a "Yes" or "No" result. Therefore, each sub-system acts as a filter for suppliers: if the output is "Yes", the supplier can continue to the next sub-system; otherwise it is disqualified, and so on until reaching the last sub-system.
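For illustration only, this filtering flow can be sketched in a few lines of Python (our sketch, not part of the original method; the criterion names and the single sub-system rule are hypothetical):

from typing import Callable, Dict, List

def evaluate_supplier(qualifications: Dict[str, bool],
                      subsystems: List[Callable[[Dict[str, bool]], bool]]) -> bool:
    """Run a supplier through the chain of sub-system filters."""
    for subsystem in subsystems:
        if not subsystem(qualifications):
            return False  # "No": disqualified at this sub-system
    return True  # "Yes" at every sub-system: supplier remains qualified

# Hypothetical single sub-system requiring both criteria to be "yes".
subsystems = [lambda q: q["quality"] and q["delivery"]]
print(evaluate_supplier({"quality": True, "delivery": False}, subsystems))  # False

Each function in the list plays the role of one truth table of the form shown in Table 1.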
2.2. Opportunity Areas Identification
As can be seen in the reference paper [1], the way in which the logical function that correlates inputs and output in each sub-system is generated is not specified. Furthermore, the original method is not able to manage a ranking scheme in suppliers' performance qualifications at any criterion; for instance, the degrees: excellent, good, average, poor.
3. CONCEPTUAL AND THEORETICAL REFERENCES
3.1. Number of possible patterns
According to combinatorics, the total number of patterns in a truth table of N binary variables is 2^N.
3.2. The SOP (Sum Of Products) Technique
To obtain a logical function that correlates binary inputs with a binary output, there are two options: the Sum Of Products (SOP) form or the Product Of Sums (POS) form. The SOP form contains a list of terms in which all variables are ANDed (products), and these terms are then ORed (summed) together. To convert a truth table to a SOP expression, only the rows whose output is "Yes" produce a term. Each such AND term must produce a "Yes" for the variable values in its row; hence, all the variables with value "no" must be logically complemented (inverted).
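As an illustration, this conversion rule can be coded as a small helper (our sketch, not part of the paper; the "|" suffix denotes the complement, following the notation used later in this paper):

def sop_from_truth_table(variables, rows):
    """Build a SOP expression from (input_bits, output_bit) pairs."""
    terms = []
    for bits, output in rows:
        if output == 1:  # only "Yes" rows produce a product (AND) term
            term = ".".join(v if b == 1 else v + "|"  # complement the "no" inputs
                            for v, b in zip(variables, bits))
            terms.append(term)
    return " + ".join(terms)  # OR (sum) the product terms together

# Two-variable example: output is "Yes" only when both inputs are "yes".
print(sop_from_truth_table(["X1", "X2"],
                           [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]))
# -> X1.X2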
3.3. The minimal form
Boolean algebra axioms and theorems, or Karnaugh maps (a graphical tool), can be used to find the minimal expression of the logical function that correlates the inputs and the output. Basically, these tools help to identify which variables within a term, considered jointly with other terms, become irrelevant.
3.4. Binary Coding
A set of things (characters, symbols, values, etc.) can be represented with a group of bits. Being x the number of objects, the number of bits required, n, can be calculated as:
n = ⌈log2(x)⌉
that is, the base-2 logarithm of x rounded up to the nearest integer.
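For example (our trivial sketch), the four qualification degrees used later in Section 5 require ⌈log2(4)⌉ = 2 bits:

import math

def bits_needed(x: int) -> int:
    """Number of bits needed to encode x distinct values."""
    return math.ceil(math.log2(x))

print(bits_needed(2))  # 1 bit for a yes/no qualification
print(bits_needed(4))  # 2 bits for excellent/good/average/poor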
4. ILLUSTRATIVE APPLICATION OF THE THEORETICAL BACKGROUND
4.1. Methodology to obtain sub-system’s correlation function
4.1.1. From truth table to correlation function
Let us take the "Business Strengths evaluation phase" presented in [1] and obtain its mathematical model. First, we need one bit to encode the "yes" and "no" supplier qualifications; let us assign "1" and "0", respectively. The resulting truth table is shown in Table 2. Additionally, let us apply the SOP technique; in doing so, we obtain the product terms shown in the last column. (Note: the symbol "|" indicates the binary complement, i.e. the "NOT" unary logic operator.)
Table 2. Truth table for the Business Strengths phase.

Pattern     X3  X4  X5  X6  X7   Output Y2       Product Term
1            0   0   0   0   0   0
2-15         0   x   x   x   x   0
16           0   1   1   1   1   0
17           1   0   0   0   0   0
18           1   0   0   0   1   0
19           1   0   0   1   0   0
20           1   0   0   1   1   1               X3.X4|.X5|.X6.X7
21           1   0   1   0   0   X
22           1   0   1   0   1   X
23           1   0   1   1   0   X
24           1   0   1   1   1   X (assigned 1)  X3.X4|.X5.X6.X7
25           1   1   0   0   0   0
26           1   1   0   0   1   1               X3.X4.X5|.X6|.X7
27           1   1   0   1   0   1               X3.X4.X5|.X6.X7|
28           1   1   0   1   1   1               X3.X4.X5|.X6.X7
29           1   1   1   0   0   0
30           1   1   1   0   1   1               X3.X4.X5.X6|.X7
31           1   1   1   1   0   1               X3.X4.X5.X6.X7|
32 = 2^5     1   1   1   1   1   1               X3.X4.X5.X6.X7
Thus, considering all rows with "1" as output, the initial sub-system mathematical model would be:
Y2 = X3.X4|.X5|.X6.X7 + X3.X4.X5|.X6|.X7 + X3.X4.X5|.X6.X7| + X3.X4.X5|.X6.X7 + X3.X4.X5.X6|.X7 + X3.X4.X5.X6.X7| + X3.X4.X5.X6.X7
The symbol "X" in the output column means that the value could be either "0" or "1", since it is impossible for a supplier to show positive turnover (profit) over the last 3 financial years while not showing it over the last 1 or 2 years. We will decide these values during the minimization process, as shown below.
4.1.2. Finding the minimum expression with Boolean Algebra Axioms
In order to find the minimal expression of Y2, the following axioms of Boolean algebra can be applied, where X, Y and Z are Boolean variables.
i. X.Y = Y.X (commutativity)
ii. 1.X = X (identity, the neutral element of the product)
iii. X + X = X (idempotency)
iv. X + X| = 1 (complementarity)
v. X + (Y + Z) = (X + Y) + Z (associativity)
vi. X.(Y + Z) = (X.Y) + (X.Z) (distributivity)
Thus, Y2 can be rewritten as:
Y2 = (X3.X4.X5|.X6|.X7 + X3.X4.X5|.X6.X7 + X3.X4.X5.X6|.X7 + X3.X4.X5.X6.X7) +
(X3.X4.X5|.X6.X7| + X3.X4.X5.X6.X7| + X3.X4.X5|.X6.X7 + X3.X4.X5.X6.X7) +
(X3.X4|.X5|.X6.X7 + X3.X4.X5|.X6.X7 + X3.X4|.X5.X6.X7 + X3.X4.X5.X6.X7)
The term X3.X4|.X5.X6.X7 (shown in red in the original) corresponds to the don't-care ("X") row 24, assigned the value 1. Looking at Table 2, the product terms considered were: (26, 28, 30, 32) + (27, 31, 28, 32) + (20, 28, 24, 32). Terms 28 and 32 can be reused because of axiom iii (idempotency).
Applying axioms i, vi, iv and ii:
Y2 = (X3.X4.X5|.X7.(X6| + X6) + X3.X4.X5.X7.(X6| + X6)) +
(X3.X4.X6.X7|.(X5| + X5) + X3.X4.X6.X7.(X5| + X5)) +
(X3.X5|.X6.X7.(X4| + X4) + X3.X5.X6.X7.(X4| + X4))
= (X3.X4.X5|.X7 + X3.X4.X5.X7) +
(X3.X4.X6.X7| + X3.X4.X6.X7) +
(X3.X5|.X6.X7 + X3.X5.X6.X7)
Then, Y2 can finally be minimized:
Y2 = X3.X4.X7 + X3.X4.X6 + X3.X6.X7
If, as in [1], X3 must be "1", Y2 can be reduced even further:
Y2 = X4.X7 + X4.X6 + X6.X7
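As a cross-check of the hand derivation (ours, not part of the paper), SymPy's SOPform, which implements a Quine-McCluskey style minimization (the numerical method mentioned in the conclusions), reproduces the same expression from the minterms and don't-cares of Table 2:

from sympy import symbols
from sympy.logic import SOPform

X3, X4, X5, X6, X7 = symbols("X3 X4 X5 X6 X7")
# Rows of Table 2 with output 1 (patterns 20, 26, 27, 28, 30, 31, 32)...
minterms = [[1, 0, 0, 1, 1], [1, 1, 0, 0, 1], [1, 1, 0, 1, 0], [1, 1, 0, 1, 1],
            [1, 1, 1, 0, 1], [1, 1, 1, 1, 0], [1, 1, 1, 1, 1]]
# ...and the don't-care rows (patterns 21-24).
dontcares = [[1, 0, 1, 0, 0], [1, 0, 1, 0, 1], [1, 0, 1, 1, 0], [1, 0, 1, 1, 1]]
print(SOPform([X3, X4, X5, X6, X7], minterms, dontcares))
# Expected (up to term order): (X3 & X4 & X6) | (X3 & X4 & X7) | (X3 & X6 & X7)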
4.1.3. Finding the minimum expression with Karnaugh maps
If, as another alternative, we apply a Karnaugh map, the resulting map is shown in Figure 1 (a "." denotes 0; "X" denotes a don't-care):

X6X7 \ X3X4X5    000   001   011   010   110   111   101   100
  00               .     .     .     .     .     .     X     .
  01               .     .     .     .     1     1     X     .
  11               .     .     .     .     1     1     1     1
  10               .     .     .     .     1     1     X     .

Figure 1. Karnaugh map for the Business Strengths phase.
Groups of 2^n ones should be identified, each generating one term in the output function; the larger the group, the fewer variables in the term. In this case, the three possible groups of four ones
are: 1) the four cells of row X6X7 = 11 (shown in orange in the original figure), generating the term X3.X6.X7, which keeps the variables that do not change across the group, while X4 and X5, which do change, are eliminated; 2) the cells marked with horizontal lines, generating the term X3.X4.X7; and 3) the cells marked with vertical lines, generating the term X3.X4.X6. Hence, Y2 = X3.X4.X7 + X3.X4.X6 + X3.X6.X7, the same expression obtained with the Boolean algebra axioms.
In a Karnaugh map, the input-variable combinations of any two physically adjacent cells differ in only one bit. For example, the first row-first column cell (00000) is adjacent to the first row-second column cell (00100). Moreover, the first row-first column cell (00000), even though not physically adjacent, is logically adjacent to the first row-eighth column cell (10000).
As can be seen in the example, "1"s can be reused, because with Karnaugh maps we are implicitly and graphically applying the Boolean algebra axioms. Therefore, the three groups in Figure 1 consist of adjacent ones, a group of size s eliminating
n = log2(s)
variables from its term. In this case, every group contained four cells, so two variables were eliminated from each term.
To demonstrate that this function performs identically to the version in [1], we created the Excel sheet shown in Figure 2.
Figure 2. Excel worksheet checking the Business Strengths phase correlation function.
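A Python analogue of that spreadsheet check (our sketch; it assumes the don't-care rows 21-24 take the values implied by the minimized form, as chosen during minimization) confirms the equivalence over all 32 input combinations:

from itertools import product

def y2_minimized(x3, x4, x5, x6, x7):
    return bool((x3 and x4 and x7) or (x3 and x4 and x6) or (x3 and x6 and x7))

def y2_table(x3, x4, x5, x6, x7):
    ones = {(1, 0, 0, 1, 1), (1, 1, 0, 0, 1), (1, 1, 0, 1, 0), (1, 1, 0, 1, 1),
            (1, 1, 1, 0, 1), (1, 1, 1, 1, 0), (1, 1, 1, 1, 1)}  # "1" rows of Table 2
    dontcare = {(1, 0, 1, 0, 0), (1, 0, 1, 0, 1), (1, 0, 1, 1, 0), (1, 0, 1, 1, 1)}
    row = (x3, x4, x5, x6, x7)
    if row in dontcare:  # "X" rows: either value is acceptable
        return y2_minimized(*row)
    return row in ones

assert all(y2_minimized(*bits) == y2_table(*bits) for bits in product((0, 1), repeat=5))
print("Minimized Y2 matches Table 2 on all 32 rows")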
5. RESULTS
5.1. More than “yes” or “no” criteria qualification
Let us now take the case of the "Financial Quotation evaluation phase" in [1] to illustrate how this capacity is provided. Let us allow the Development (X9) criterion to be evaluated with four different degrees: excellent, good, average and poor.
We will code these with the 2-bit words "11", "10", "01" and "00", respectively. In addition, let us consider the policy of approving suppliers qualified as "excellent" at the Development (X9) criterion, or those with at least an "average" grade in combination with any other of the remaining criteria, X8 and X10. The resulting truth table is shown in Table 3.
Table 3. Truth table for the Technical phase.

Pattern     X8  X9a  X9b  X10   Output Y4'
1            0   0    0    0    0
2            0   0    0    1    0
3            0   0    1    0    0
4            0   0    1    1    1
5            0   1    0    0    0
6            0   1    0    1    1
7            0   1    1    0    1
8            0   1    1    1    1
9            1   0    0    0    0
10           1   0    0    1    0
11           1   0    1    0    1
12           1   0    1    1    1
13           1   1    0    0    1
14           1   1    0    1    1
15           1   1    1    0    1
16 = 2^4     1   1    1    1    1
The corresponding Karnaugh map is shown in Figure 3 (a "." denotes 0):

X9bX10 \ X8X9a    00   01   11   10
   00              .    .    1    .
   01              .    1    1    .
   11              1    1    1    1
   10              .    1    1    1

Figure 3. Karnaugh map for the Technical phase.
There are five groups of four ones, resulting in the following mathematical model:
Y4' = X8.X9a + X9a.X10 + X9b.X10 + X9a.X9b + X8.X9b
We can rewrite this in the following form:
Y4' = X8.(X9a + X9b) + X10.(X9a + X9b) + X9a.X9b
which is easier to code as an Excel formula.
This correlation function can be introduced into the Excel spreadsheet and incorporated into the whole automated decision support system to obtain the decision result for this entity at this phase.
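As a sketch (ours; the function and dictionary names are illustrative, not from the paper), the same correlation function can be evaluated in Python with the 2-bit X9 encoding; an Excel implementation would combine OR and AND over the cells holding X8, X9a, X9b and X10 in the same structure:

X9_CODES = {"excellent": (1, 1), "good": (1, 0), "average": (0, 1), "poor": (0, 0)}

def y4_prime(x8, x9_grade, x10):
    x9a, x9b = X9_CODES[x9_grade]
    # Y4' = X8.(X9a + X9b) + X10.(X9a + X9b) + X9a.X9b
    return bool((x8 and (x9a or x9b)) or (x10 and (x9a or x9b)) or (x9a and x9b))

print(y4_prime(0, "excellent", 0))  # True: "excellent" passes on its own
print(y4_prime(1, "average", 0))    # True: "average" plus another criterion
print(y4_prime(0, "average", 0))    # False: "average" alone is not enough
print(y4_prime(1, "poor", 1))       # False: "poor" never passes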
6. CONCLUSIONS
With our proposal, the BDM method has been given the capacity to qualify suppliers on a richer scale, together with an operative mechanism to do so.
Future work can explore the Quine-McCluskey numerical method to obtain sub-system mathematical models, as well as other methods found in the literature or recently developed. Good opportunities also lie in applying this framework to specific scenarios.
REFERENCES
[1] N. Jigeesh, (2012) "A New Decision Support System for Supplier Selection using Boolean Algebra", International Journal of Managing Public Sector Information and Communication Technologies, Vol. 5, No. 3, pp. 11-24.
[2] P. Scherz, (2000) "Practical electronics for inventors", New York: McGraw-Hill.
[3] R. Arnau, (2015) "The application of Boolean algebra to industrial engineering decision problems", Master of Science in Industrial Engineering, Texas Technological College.
Authors
Gabriel Almazán, M. Sc. (Telecommunications Engineering), Master (Educational Technology), works as a full-time lecturer and researcher at the Autonomous University of Hidalgo, México. Some of the topics he teaches are Industrial and Digital Electronics. His research interests include Business Intelligence, Sustainability, Industrial Symbiosis, Educational Technology, the revitalization of native Mexican languages, and self, cooperative and collaborative learning.