This document summarizes a research paper that develops a measure of dynamic efficiency within a dynamic discrete choice structural inventory model. The model allows a firm to decide each period whether to order (a_it = 1) or not order (a_it = 0) individual products. Using data from a Portuguese firm, the model is estimated using a two-stage approach. In the first stage, transition probabilities of state variables are estimated nonparametrically. In the second stage, parameters are estimated using Bayesian methods. A counterfactual experiment indicates the firm's actual decisions diverged from optimal decisions in at least 12.71% of cases.
This document discusses two methods for measuring consumer welfare using demand models: Hausman (1996) and the discrete choice model. Hausman estimates demand for cereal and values the introduction of Apple Cinnamon Cheerios at $78.1 million annually under perfect competition and $66.8 million under imperfect competition. The discrete choice model measures welfare as the inclusive value from a choice set and can value new products by simulating choices with and without them. It is more flexible but still relies on accurate demand estimation.
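For the logit family of discrete choice models, the inclusive-value welfare measure mentioned above has a standard closed form (the log-sum formula); as a sketch, the value of adding a new product to a choice set J is

\[
\Delta E[CS] = \frac{1}{\alpha}\left[\ln\!\sum_{j \in J \cup \{\text{new}\}} e^{V_j} - \ln\!\sum_{j \in J} e^{V_j}\right],
\]

where \(V_j\) is the deterministic utility of product \(j\) and \(\alpha\) the marginal utility of income. Simulating choices with and without the new product amounts to evaluating these two log-sums.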
AN IMPROVED DECISION SUPPORT SYSTEM BASED ON THE BDM (BIT DECISION MAKING) METHOD (ijmpict)
Based on the BDM (Bit Decision Making) method, the present work makes two contributions: first, it illustrates the use of the SOP (Sum of Products) technique to systematize the process of obtaining the correlation function for the mathematical modelling of sub-systems; and second, it adds the capacity to manage a finite, discrete set of possible subjective supplier qualifications at any criterion, rather than only binary ones.
TRANSACTION PROFITABILITY USING HURI ALGORITHM [TPHURI] (ijbiss)
Business intelligence (BI) is the formulation of business strategies that helps organizations achieve their objectives and predict their future. Data mining is often referred to as BI in the business domain. One of the major tasks in data mining is Association Rule Mining (ARM). ARM techniques incorporated in BI systems can be used in business decision-making, such as retail shelf management, catalog design, customer segmentation, cross-selling, quality improvement and product bundling in marketing.
ARM techniques identify frequent itemsets in huge databases and then generate strong association rules, treating every item as having the same value. But in a large number of real-world applications, items have different values according to their impact on the respective decision-making processes, and traditional ARM techniques cannot meet the demands arising from these applications. Data mining researchers are therefore improving ARM techniques by incorporating the utility of items, where the utility of an item is determined by its contribution to business profit, the quantity of the item sold, and so on. Utility mining thus focuses on identifying itemsets with high utilities.
Jyothi et al. [2] proposed the HURI algorithm for producing high-utility rare itemsets according to users' interest. This paper proposes Transaction Profitability using HURI [TPHURI], a modified version of HURI. TPHURI finds profitable transactions consisting of high-utility rare items and also finds the share of such items in the overall profit of those transactions.
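To make the utility notion above concrete, here is a minimal sketch of transaction-utility computation (the item names and profit figures are hypothetical; this illustrates utility mining in general, not the HURI/TPHURI algorithm itself):

```python
# Utility of a transaction = sum over its items of quantity * per-unit profit.
# Item names and profit figures below are hypothetical.
profit_per_unit = {"bread": 0.5, "caviar": 12.0, "milk": 0.8}

# Each transaction maps item -> quantity purchased.
transactions = [
    {"bread": 2, "milk": 1},
    {"caviar": 1, "bread": 1},
]

def transaction_utility(txn):
    """Total utility contributed by one transaction."""
    return sum(qty * profit_per_unit[item] for item, qty in txn.items())

for i, txn in enumerate(transactions):
    print(f"transaction {i}: utility = {transaction_utility(txn):.2f}")
```

A high-utility rare itemset is then one that appears in few transactions yet contributes a large share of such utility.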
IRJET - Handwritten Digit Classification using Machine Learning Models (IRJET Journal)
This document summarizes a research paper that compares different machine learning models for classifying handwritten digits using the MNIST dataset. It finds that a Support Vector Machine Classifier achieves the highest accuracy of 98.3% compared to 96.3% for a Random Forest Classifier and 88.97% for a Logistic Regression model. The paper preprocesses the images, implements the three classifiers, and evaluates their performance based on accuracy and other metrics like precision and recall. It concludes that the SVM Classifier is the most accurate for classifying handwritten digits from the MNIST dataset.
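The comparison described can be reproduced in outline with scikit-learn; the sketch below uses the library's small built-in digits dataset as a stand-in for MNIST, since the paper's exact preprocessing and hyperparameters are not given in the summary:

```python
# Compare SVM, random forest and logistic regression on a small digits
# dataset (a stand-in for MNIST; defaults, not the paper's tuned settings).
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=5000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.3f}")
```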
Understanding the Applicability of Linear & Non-Linear Models Using a Case-Based Study (ijaia)
This paper uses a case-based study, product sales estimation on real-time data, to examine the applicability of linear and non-linear models in machine learning and data mining. A systematic approach is used to address the problem of estimating sales for a particular set of products in multiple categories, applying both linear and non-linear machine learning techniques to a data set of features selected from the original data. Feature selection reduces the dimensionality of a data set by excluding features that contribute minimally to the prediction of the dependent variable. The next step is training models with several techniques from the linear and non-linear domains, among the best in their respective areas. Data remodeling is then performed to extract new features by changing the structure of the dataset, and the performance of the models is checked again; data remodeling often plays a crucial role in boosting classifier accuracy by changing the properties of a given dataset. We then explore and analyze the reasons why one model performs better than another, and thereby develop an understanding of the applicability of linear and non-linear machine learning models. Alongside this primary goal, we also aim to find the classifier with the best possible accuracy for product sales estimation in the given scenario.
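As a toy illustration of why a non-linear learner can beat a linear one on such data (synthetic stand-in data; the paper's real dataset and features are not available here):

```python
# Compare a linear and a non-linear regressor on synthetic sales-like data
# whose true relationship is non-linear. All features are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 3))
y = 5 * np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] + rng.normal(0, 1, 500)

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{type(model).__name__}: mean CV R^2 = {r2:.3f}")
```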
A Production-Inventory Model with JIT Setup Cost Incorporating Inflation and Time Value of Money (IJMER)
A production inventory model with Just-In-Time (JIT) set-up cost has been developed in which inflation and the time value of money are considered under an imperfect production process. The demand rate is taken to be a function of advertisement cost and selling price. The unit production cost incorporates several components, such as energy and labour costs, raw material cost and the development cost of the manufacturing system; the development cost is assumed to be a function of a reliability parameter.
Considering these phenomena, an analytic expression is obtained for the total profit of the model, and the model provides an analytical solution maximizing the total profit function. A numerical example illustrates the model along with graphical analysis, and a sensitivity analysis identifies the most sensitive parameters of the model.
The portfolio contains 2D and 3D architectural designs including a cross section and 3D model created in AutoCAD, floor plans, wall sections, and elevations created in Revit, and a hand sketched site plan.
What Philosophy and Neuroscience can teach us about UX (Manuel Ebert)
Today, I want to talk about three things. The first is the philosophical discipline of phenomenology. The second is what that has to do with how we use tools, and hence, with how we should design tools. The third is that neuroscientists say the philosophers already got it right like a hundred years ago.
But first, let me take you to France. Paris, the city of love. In 1942. The city is under Nazi occupation. It’s night, long past the curfew, and outside SS patrols march down dark alleys with cobblestone streets. Inside, a man who would later become THE intellectual celebrity of post-war France together with his lifelong partner - think the Brangelina of the 50s - sat in front of a typewriter to work on the manuscript for his philosophical opus magnum “L’Être et le néant” - “Being and Nothingness”. Jean-Paul Sartre was a funny man. And not just because he had a disarmingly charming humour, but because he was actually funny looking. Most would actually describe him as plain ugly. He was short, ears sticking out, one eye pointing this way, one eye that way, he wore thick glasses and was notoriously unkempt.
Some people already knew him as a playwright - he brought us such memorable quotes as “Hell is other people”. Allegedly he wrote most of his plays to give the many young actresses he dated something to do - apparently his wit and intellect were rather seductive. But that night, he was tired. As he was typing on the manuscript, he tried to focus on the ideas, the concepts, the world the words he was typing were creating. But then his eyes became sore and weary, and the letters started to blur. He noticed how his attention shifted from the concepts to the letters that carry them. And then soon to his tired fingers that created those letters. Finally he couldn’t think about anything but his sore eyes and had trouble keeping them open.
Most ordinary people would just have decided to call it a night. But Sartre was anything but ordinary. He was a philosopher. Particularly, he was a phenomenologist. Phenomenology is the philosophical study of experiences, or more correctly, the study of structures of experiences. In a way, it’s a scientific discipline, a way of describing the world and theorising about it. But as opposed to physics or psychology, phenomenology is a first-person approach: you're analysing, deconstructing and describing your own experiences and consciousness. Phenomenologists don't study things for what they are, but for how they appear to us.
Sartre, being a phenomenologist, was quick to notice a certain pattern in his experiences. He noticed how his attention shifts from one thing to another - the ideas, the words, his fingers, his eyes. What's interesting here is that his attention shifts from the subject of perception to the medium of perception. First he perceives the ideas through the words, then the typed words themselves become the centre of his attention. Finally, his own eyes become the centre of his attention.
Ensayo introducción a la historia completo [Complete Essay: Introduction to History] (Gaby Galeana)
This document deals with the introduction to history and the relationship between history, people and time. It explores concepts such as the historian's choice, history and people, historical time, the idol of origins, the limits of the current and the non-current, understanding the present through the past, and understanding the past through the present. Although it is a complex text containing many details, it broadly discusses the nature of history and how it can help us understand
The document discusses the virtues of San Martín de Porres and how we can follow his example. It describes how San Martín served the poor and the sick with humility and promptness, always putting the needs of others before his own. He also practised prayer, penance and fasting, and showed great charity towards everyone.
The document describes the Cerro Santa Ana Natural Monument, located on the Paraguaná Peninsula in Falcón state, Venezuela. Cerro Santa Ana was declared a Natural Monument in 1972 and covers an area of 1,900 hectares. Its highest point reaches 830 metres above sea level. The monument contains igneous, metamorphic and sedimentary rocks and has three peaks, the highest being Santa Ana.
Projektværktøjsdagen 2013 [Project Tools Day 2013]: Kim Ullum Hansen, Project Director, Itera.
Itera explains the background to why the energy company SEAS-NVE chose to invest in a project and project-portfolio portal from Itera.
Infinio is a small startup that provides a server-side I/O optimization layer that brings storage performance closer to applications without requiring new hardware. It uses a globally deduplicated read cache across an ESXi cluster's aggregated RAM to improve latency by 10x and reduce reads from storage by 65-85%. Infinio was founded in 2011 and is headquartered in Cambridge, MA. It has raised $24 million in funding and competes with companies like Quantum, StorMagic, and PernixData in the I/O optimization market.
This document provides information about using a spectrophotometer for quantitative analysis. It discusses how spectrophotometers work based on the Beer-Lambert law relating absorbance of light to analyte concentration. The key components of a spectrophotometer are described including the light source, wavelength selector, sample cuvette, detector, and readout device. General procedures are outlined for preparing standard solutions to generate a calibration curve and determining concentrations of unknown samples.
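The calibration-curve step can be sketched in a few lines: per the Beer-Lambert law (A = εlc), absorbance is linear in concentration, so a least-squares line through the standards inverts to give an unknown's concentration (the numbers below are made up for illustration):

```python
# Fit absorbance vs concentration for standard solutions, then invert the
# line to estimate an unknown's concentration. Values are illustrative.
import numpy as np

conc_std = np.array([0.0, 2.0, 4.0, 6.0, 8.0])      # standards, mg/L
abs_std = np.array([0.01, 0.16, 0.33, 0.47, 0.65])  # measured absorbance

slope, intercept = np.polyfit(conc_std, abs_std, 1)  # calibration line

abs_unknown = 0.40
conc_unknown = (abs_unknown - intercept) / slope
print(f"estimated concentration: {conc_unknown:.2f} mg/L")
```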
1) Ad Click Xpress offers advertising packs ($10 each) that earn $0.20 daily for 88 days and expire afterward. With daily reinvestment of earnings into new packs, $1000 can reportedly grow to $4400 after 88 days (the stated mechanics are simulated in the sketch after this list).
2) Advertising panels (costing $20 each) are unlocked by converting 4 advertising packs each. Panels earn $60 when their 2x2 matrix is filled.
3) The program aims to triple an initial $1000 investment in 6-8 months through reinvesting daily earnings from packs into new packs and panels. The "XpressShift" feature ensures the program's sustainability by periodically converting packs into panels.
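A day-by-day simulation of the mechanics as stated above ($10 packs, $0.20 per active pack per day, an 88-day life, cash reinvested into new packs whenever it reaches $10); the figures are taken from the description, and the sketch makes no claim beyond reproducing that arithmetic:

```python
# Simulate the stated pack mechanics for 88 days, starting from $1000
# (100 packs). Each list entry is a pack's remaining earning days.
PACK_PRICE, DAILY_EARN, LIFETIME = 10.0, 0.20, 88

packs = [LIFETIME] * 100
cash = 0.0
for _ in range(88):
    cash += DAILY_EARN * sum(1 for d in packs if d > 0)
    packs = [d - 1 for d in packs]
    while cash >= PACK_PRICE:          # daily reinvestment into new packs
        cash -= PACK_PRICE
        packs.append(LIFETIME)

active = sum(1 for d in packs if d > 0)
print(f"packs bought: {len(packs)}, still active: {active}, "
      f"cash on hand: ${cash:.2f}")
```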
This document is the project report submitted by Mugesh.MK in partial fulfillment of the requirements for the degree of Master of Business Administration from the G.R.D Institute of Management in Coimbatore, India. The project analyzes the financial performance of Ashok Leyland Ltd. over a five-year period from 2008-2009 to 2012-2013. The report includes an introduction, industry profile, objectives, scope, research methodology, literature review and company profile. Financial concepts and ratios are used to analyze the data and interpret the financial position and performance of Ashok Leyland, and findings and suggestions are provided.
This document is an edict from the Secretariat of Planning and Infrastructure of the municipality of Santa Fe de Antioquia, Colombia. The edict informs three specific persons that an application has been received for a construction licence for a property in the El Espinal district. They are notified so that they may exercise their rights with respect to the application, in accordance with Colombian law.
A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order Quantity Model (Waqas Tariq)
This paper applies a fuzzy arithmetic approach to the system cost for perishable items with instant deterioration in the discounted entropic order quantity (EnOQ) model. In the traditional crisp system cost, some costs are subject to uncertainty, so it is necessary to extend the system cost to treat vague costs as well. We introduce the concept of entropy and show that the total payoff satisfies the optimization property. We show how special cases of this problem reduce to the standard results, and how the post-deterioration discounted entropic order quantity model generalizes the optimization. The model is analysed to reveal important characteristics of the discounted structure, and numerical experiments compare the relative performance of the fuzzy and crisp cases in the EnOQ and EOQ models separately.
A Framework for Analyzing the Impact of Business Cycles on Endogenous Growth (GRAPE)
This document summarizes a framework for analyzing the impact of business cycles on endogenous growth. It presents a model with intermediate goods producers that invest in R&D to improve quality and productivity. General equilibrium is achieved through free entry. Higher entry or fixed costs lower growth and the number of active firms. The model can analyze how exogenous productivity shocks influence growth and business cycles, and how entry costs impact dynamics along the cycle. Future work will examine impulse responses and the role of costs in optimizing growth given shock characteristics.
This document summarizes a lecture on analyzing demand systems for differentiated products. It discusses:
1) Demand systems provide information to analyze firm incentives and responses to policy changes. They are important for welfare analysis and constructing price indices.
2) Demand models can consider representative or heterogeneous agents, and model demand in product or characteristic space. Heterogeneous agent models in characteristic space are preferred as they allow combining different data sources.
3) Demand estimation requires simulating aggregate demand from individual demands, which provides unbiased estimates that can be made precise with large simulations.
Feature Importance Analysis with XGBoost in Tax Audit (Michael BENESTY)
Presentation of a real use case at the TAJ law firm (Deloitte Paris): applying machine learning to accounting data to help clients prepare for their tax audit.
Use of quantitative techniques in economics (Balaji P)
1. The document discusses three quantitative techniques used in economics: comparative static analysis, linear programming, and game theory.
2. Comparative static analysis compares economic equilibrium before and after changes in exogenous parameters like demand or supply. It examines how endogenous variables like price and quantity adjust.
3. Linear programming identifies the optimal allocation of limited resources to maximize profits or minimize costs. It formulates the problem as mathematical equations and graphs to find the best solution (a minimal example follows this list).
4. Game theory analyzes strategic decision-making in competitive situations. It models interactions between players and outcomes using payoff matrices and extensive forms to determine optimal strategies.
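Referring to item 3, a minimal linear-programming example (numbers invented): maximize profit 3x + 5y subject to two resource limits. SciPy's linprog minimizes, so the objective is negated:

```python
# Maximize 3x + 5y subject to 2x + y <= 100 and x + 3y <= 90, x, y >= 0.
from scipy.optimize import linprog

c = [-3, -5]                    # linprog minimizes, so negate the profits
A_ub = [[2, 1],                 # 2x +  y <= 100 (e.g., labour hours)
        [1, 3]]                 #  x + 3y <=  90 (e.g., raw material)
b_ub = [100, 90]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal (x, y):", res.x, "-> max profit:", -res.fun)
```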
1) The document presents an entropic order quantity (EnOQ) model that incorporates cash discounts offered after products start to deteriorate. The model aims to optimize the payoff by handling multiple objectives like deterioration and demand that depends on inventory levels and selling price.
2) The model is formulated as a system of differential equations to capture inventory levels over time with deterioration and demand functions (sketched after this list). Total profit per unit time is derived considering relevant costs like ordering, holding, and entropy costs as well as revenue.
3) A numerical example is provided to illustrate the model and the procedure for maximizing total profit per unit time subject to the constraint that discounted selling price must be greater than the unit cost.
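As a sketch of the formulation in item 2, in standard inventory notation (not necessarily the paper's exact specification), the inventory level \(I(t)\) in such models typically evolves as

\[
\frac{dI(t)}{dt} = -D\big(I(t), p\big) - \theta\, I(t), \qquad I(0) = Q,
\]

with deterioration rate \(\theta\) and a demand function such as \(D(I, p) = a - b\,p + \beta I\) that rises with the inventory level and falls with the selling price \(p\); total profit per unit time is then revenue net of ordering, holding and entropy costs, divided by the cycle length.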
Estimation of Static Discrete Choice Models Using Market Level Data (NBER)
This document discusses methods for estimating static discrete choice models using market-level data rather than individual consumer data. It covers several key topics:
1) The types of market-level and consumer-level data that can be used. Market-level data is easier to obtain but poses challenges for identification and estimation.
2) A common linear random coefficients logit model framework. It includes observed and unobserved product characteristics as well as observed and unobserved consumer heterogeneity (the share-simulation step is sketched after this list).
3) The key challenges of estimating heterogeneity parameters without consumer-level data. It also discusses how to deal with potential endogeneity of unobserved product characteristics.
4) The two-step estimation approach when consumer-level data is available, and
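The share-simulation step from item 2 can be sketched directly: draw taste coefficients for simulated consumers, compute each consumer's logit choice probabilities, and average (all numbers hypothetical; this is the simulation step only, not a full estimator):

```python
# Simulate aggregate market shares in a random-coefficients logit with an
# outside good whose utility is normalized to zero. Numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
J, N = 3, 10_000                          # products, simulated consumers
x = np.array([1.0, 2.0, 3.0])             # observed characteristic
p = np.array([5.0, 6.0, 8.0])             # prices
beta_i = 1.0 + 0.5 * rng.normal(size=N)   # random taste coefficients
alpha = 1.0                               # price sensitivity

v = beta_i[:, None] * x[None, :] - alpha * p[None, :]   # N x J utilities
ev = np.exp(v)
probs = ev / (1.0 + ev.sum(axis=1, keepdims=True))
print("simulated shares:", probs.mean(axis=0).round(3))
```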
This document summarizes a master's thesis that implemented a continuous sequential importance resampling (CSIR) algorithm to estimate predictive densities in stochastic volatility (SV) models. The thesis began with an introduction to relevant econometrics concepts. It then explained SV models and particle filtering approaches. The thesis described implementing and testing functions to develop an R package for CSIR estimation in SV models. Diagnostics and parameter estimates from simulated and real stock return data were reported. The thesis concluded by discussing the package's applications and potential for future development.
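For orientation, here is a minimal bootstrap particle filter for a basic SV model, h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t with y_t = exp(h_t/2)*eps_t; it uses plain multinomial resampling rather than the thesis's continuous (CSIR) variant, and all parameter values are illustrative:

```python
# Bootstrap particle filter for a basic stochastic volatility model.
# Plain multinomial resampling; not the continuous-resampling variant.
import numpy as np

def sv_particle_filter(y, mu=-1.0, phi=0.95, sigma=0.2,
                       n_particles=2000, seed=0):
    rng = np.random.default_rng(seed)
    # initialize from the stationary distribution of the AR(1) log-volatility
    h = mu + sigma / np.sqrt(1 - phi**2) * rng.normal(size=n_particles)
    loglik = 0.0
    for y_t in y:
        h = mu + phi * (h - mu) + sigma * rng.normal(size=n_particles)
        # weight by the measurement density y_t ~ N(0, exp(h))
        w = np.exp(-0.5 * (np.log(2 * np.pi) + h + y_t**2 * np.exp(-h)))
        loglik += np.log(w.mean())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        h = h[idx]                         # multinomial resampling
    return loglik

returns = np.random.default_rng(1).normal(scale=0.3, size=200)  # stand-in
print("log-likelihood:", sv_particle_filter(returns))
```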
As optimization (or prescriptive analytics) has grown as a tool for business decision-making, a key factor in its success has been the adoption of model-based optimization. Using this approach, an analyst’s major work is to describe a problem of interest by means of an algebraic model, while the computation of a solution is left to general-purpose, off-the-shelf software. Powerful modeling systems manage the difficulties of translating between the human modeler’s ideas and the computer software’s needs. This tutorial introduces model-based optimization and offers a guide to its effective use.
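As a sketch of the model-based style, an algebraic model stated in the open-source Pyomo library and handed to an off-the-shelf solver (product-mix numbers invented; assumes a solver such as GLPK is installed):

```python
# The analyst declares variables, objective and constraints; a
# general-purpose solver computes the solution. Numbers are illustrative.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, maximize, SolverFactory)

m = ConcreteModel()
m.x = Var(within=NonNegativeReals)        # units of product X
m.y = Var(within=NonNegativeReals)        # units of product Y
m.profit = Objective(expr=4 * m.x + 3 * m.y, sense=maximize)
m.machine = Constraint(expr=m.x + 2 * m.y <= 40)
m.labour = Constraint(expr=3 * m.x + m.y <= 45)

SolverFactory("glpk").solve(m)
print("x =", m.x.value, ", y =", m.y.value)
```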
This document presents a study of an economic order quantity (EOQ) inventory model for deteriorating items that considers the effects of inflation and time discounting. The model assumes stock-dependent and time-dependent demand as well as partial backlogging. The objective is to maximize total profit, which includes various costs and revenues. Analytical results are derived to find the optimal replenishment time and cycle length by solving equations simultaneously. Numerical examples are also presented to demonstrate the solution procedure and conduct sensitivity analysis.
IRJET - Effecient Support Itemset Mining using Parallel Map Reducing (IRJET Journal)
This document presents a study on using parallel MapReduce algorithms for efficient frequent itemset mining on high-dimensional datasets. It first summarizes existing frequent itemset mining algorithms like Apriori, Predictive Apriori, and Filtered Associator and their limitations in handling high-dimensional data due to the "curse of dimensionality." It then proposes using a parallel MapReduce approach and evaluates its performance on a high-dimensional dataset, showing improvements in execution time, load balancing, and robustness over the other algorithms. Experimental results demonstrate the efficiency of the proposed MapReduce algorithm for mining high-dimensional data.
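A single-process stand-in for the map and reduce phases of support counting (toy data; the paper's actual parallel implementation runs these phases across a cluster):

```python
# Map phase: each transaction emits (candidate pair, 1).
# Reduce phase: counts per pair are summed; pairs above min support survive.
from collections import Counter
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]

def map_phase(txn):
    for pair in combinations(sorted(txn), 2):
        yield pair, 1

def reduce_phase(mapped):
    counts = Counter()
    for key, value in mapped:
        counts[key] += value
    return counts

mapped = (kv for txn in transactions for kv in map_phase(txn))
support = reduce_phase(mapped)
print({k: v for k, v in support.items() if v >= 2})  # min support = 2
```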
Econometrics uses economic theory and statistical tools to quantify economic relationships and answer questions about economic data. An econometric model includes both a systematic component based on economic theory and an error term that represents unpredictable factors. Most economic data comes from non-experimental sources and is in time-series, cross-sectional, or panel form. The goal of econometrics is statistical inference like estimating parameters, predicting outcomes, and testing hypotheses using sample data. Econometric models incorporate probability distributions, random variables, and concepts like the mean, variance, and normal distribution to analyze economic data statistically.
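In symbols, the decomposition described above is the canonical regression setup:

\[
y_i = \underbrace{\beta_0 + \beta_1 x_i}_{\text{systematic component}} + \underbrace{\varepsilon_i}_{\text{error term}}, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),
\]

where the systematic part comes from economic theory, the error term collects the unpredictable factors, and statistical inference targets the parameters \(\beta_0\) and \(\beta_1\).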
Sabatelli: Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand under Conditions of Additive Preferences (GLOBMOD Health)
Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis.
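For reference, the two elasticities being related are defined (standard definitions, not the paper's derived result) as

\[
\eta_i = \frac{\partial q_i}{\partial I}\,\frac{I}{q_i}, \qquad e_{ij} = \frac{\partial q_i}{\partial p_j}\,\frac{p_j}{q_i},
\]

the income elasticity of demand for good \(i\) and the uncompensated price elasticity of demand for good \(i\) with respect to the price of good \(j\), where \(q\) denotes quantity demanded, \(I\) income and \(p\) prices.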
Citation: Sabatelli L (2016) Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand under Conditions of Additive Preferences. PLoS ONE 11(3): e0151390. doi:10.1371/journal.pone.0151390
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0151390
IRJET - Smart Dealing in Stock Market with the Tools of Mathematical Econometrics (IRJET Journal)
1. The document discusses using algorithms and mathematical econometrics to make smart decisions in the stock market. It describes how algorithms are used to understand complex markets and social networks by combining economics, engineering, mathematics, and computer science.
2. Economics and computer science are linked through the complex applications used by digital markets on sites like eBay and Amazon, and researchers want to understand topics like game theory and Nash equilibrium.
3. Most economists believe that understanding computer algorithms is necessary to understand digital markets and online economic transactions. While the computational economy is young, it has already made progress on research problems and mechanisms related to digital auctions.
Anita Suurlaht (UCD School of Business): Correlation Dynamics in the G7 Stock Markets (Eesti Pank)
This paper analyzes changing stock market return correlations within the G7 markets from 1980 to 2017. It finds that there was a significant positive trend toward higher inter-market correlations over this period. Correlations are also higher during periods of overall higher market volatility and lower economic growth. The analysis shows that capital markets become more correlated during recessions and times of financial stress, as measured by an increasing TED spread. Synchronization methods are used to account for differences in market closing times.
"Optimisation of closed loop supply chain decisions using integrated game theoretic particle swarm algorithm"
Kalpit Patne, Visiting Fellow, SMART Infrastructure Facility presented a summary of his research as part of the SMART Seminar Series on 8 July 2016.
For more information, visit the event page at: http://smart.uow.edu.au/events/UOW217694.
Should Commodity Investors Follow Commodities' Prices? (guasoni)
Most institutional investors gain access to commodities through diversified index funds, even though mean-reverting prices and low correlation among commodities returns indicate that two-fund separation does not hold for commodities. In contrast to demand for stocks and bonds, we find that, on average, demand for commodities is largely insensitive to risk aversion, with intertemporal hedging demand playing a major role for more risk averse investors. Comparing the optimal strategies of investors who observe only the index to those of investors who observe all commodities, we find that information on commodity prices leads to significant welfare gains, even if trading is confined to the index only.
Profitable Itemset Mining using Weights (IRJET Journal)
This document presents two new algorithms - Profitable Apriori and Profitable FP-Growth - that extend traditional association rule mining algorithms to consider profit and quantity of items. Traditional algorithms like Apriori and FP-Growth are binary and do not account for these factors. The new algorithms incorporate profit per unit and quantity to generate the most profitable itemsets. They are compared based on memory usage, runtime, and number of patterns produced. Both algorithms generate the same profitable patterns, but Profitable FP-Growth uses more memory while Profitable Apriori has a longer runtime due to candidate generation. The algorithms aim to identify truly profitable patterns for effective marketing unlike traditional methods.
Procurement Strategy for Manufacturing and JIT (IJERD Editor)
Procurement involves the purchase of goods and services from suppliers for manufacturing units. Procuring items in an optimized way is essential for implementing JIT effectively, and the manufacturer-supplier relationship is crucial for an effective procurement policy. The present paper deals with various procurement strategies for manufacturing, including optimization of the manufacturing quantity along with the procurement of multiple input items. The procurement cycles for input items are considered in small lot sizes, which is essential for the effective implementation of JIT. Integer programming is used to find the optimum integer number of orders for procuring the input items, which is usually fractional in a real manufacturing environment.
The document describes using gradient boosted trees to predict store sales for a competition on Kaggle. Key points:
- Gradient boosted trees were used to build an ensemble of decision trees that could accurately predict store sales based on historical data.
- Hyperparameters like learning rate, tree depth, and regularization were tuned using Bayesian optimization to improve prediction accuracy.
- External data on weather and search trends was incorporated to provide additional context.
- Feature engineering techniques like mean encoding and lagged variables were applied to better handle time series and categorical data (both are sketched after this list).
- The model was evaluated using a 2014 validation set to mimic the 2015 test set and minimize overfitting. Cross validation was used to select the best model.
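Both feature-engineering ideas named above can be sketched on a toy frame (column names hypothetical, not the competition's actual schema):

```python
# Mean encoding and a one-step lag feature with pandas, per store.
import pandas as pd

df = pd.DataFrame({
    "store": ["A", "A", "A", "B", "B", "B"],
    "sales": [10.0, 12.0, 11.0, 20.0, 22.0, 21.0],
})

# Mean encoding: replace the categorical store id by its average sales.
df["store_mean_enc"] = df.groupby("store")["sales"].transform("mean")

# Lagged variable: previous period's sales within each store.
df["sales_lag1"] = df.groupby("store")["sales"].shift(1)

print(df)
```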
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...Fwdays
Direct losses from downtime in 1 minute = $5-$10 thousand dollars. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
inQuba Webinar Mastering Customer Journey Management with Dr Graham HillLizaNolte
HERE IS YOUR WEBINAR CONTENT! 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find the webinar recording both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
Must Know Postgres Extension for DBA and Developer during MigrationMydbops
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities.
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow the links below.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
The Microsoft 365 Migration Tutorial For Beginner.pptx - operationspcvita
This presentation will help you understand the power of Microsoft 365. It covers every productivity app included in Office 365, outlines common Office 365 migration scenarios, and explains how we can help.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Choosing The Best AWS Service For Your Website + API.pptx
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Cerdeira and Silva (2010)
Dynamic Efficiency Measurement in a Discrete Choice Model: An Application to Inventory Behaviour∗
Jorge Cerdeira (Faculty of Economics, University of Porto)
Elvira Silva† (Faculty of Economics & CEF.UP, University of Porto)
March 30, 2010
Abstract
Dynamic efficiency measurement at the firm level has been developed in the context of models in which
the firm's decisions are continuous variables [e.g., Silva and Stefanou (2007), Nemoto and Goto (1999,
2003)]. In this paper, dynamic efficiency is investigated within a dynamic discrete choice structural model,
where firms decide over discrete rather than continuous variables. We analyze a dynamic programming
inventory model - in which a firm decides whether to order or not some products in each period - and
develop a measure of dynamic efficiency at the product level. In our model, we allow for the existence of
product heterogeneity as well as efficiency heterogeneity across products by including random coefficients
in the analysis.
Using a dataset with weekly information on prices, sales, orders and stocks for a Portuguese firm from
January 2008 to June 2009, we estimate the model with a two-stage approach. In the first stage, we provide
nonparametric estimates of the transition probabilities of the state variables. In the second stage, we use
the Bayesian estimation method proposed by Imai, Jain and Ching (2009), that allows simultaneously for
the solution of the dynamic programming problem and the estimation of the parameters. This method
includes in each iteration two steps: one solves the dynamic programming model and the other employs
the Markov Chain Monte Carlo (MCMC) algorithm to draw values from the posterior distributions of
the parameters.
We perform a counterfactual experiment to investigate what would be the firm’s optimal choices and
compare them with the actual choices. The counterfactual results indicate that the actual decisions
diverge from the optimal decisions in at least 12.71% of the cases.
Keywords: Dynamic Efficiency, Discrete Choice Structural Models, Dynamic Programming, Inventory
Model
JEL Classification: C15, C25, C61, D21
∗ The authors are grateful to Kenneth Train for helpful comments on Discrete Choice Methods with Simulation and to Victor
Aguirregabiria, Andrew Ching, Susumu Imai, Pedro Mira and Ariel Pakes for hints on Empirical Industrial Organization. They
also acknowledge Andrew Ching and Masakazu Ishihara for making available a C code that implements the Bayesian method
used in this paper. Jorge Cerdeira thanks the Portuguese Science Foundation (FCT) for financial support.
† Corresponding author. Address: Rua Dr. Roberto Frias, 4200-464 Porto, Portugal. Phone: 351-22-5571269. Fax: 351-225505050. Email: esilva@fep.up.pt
1 Introduction
Dynamic efficiency measurement at the firm level has been developed in the context of
models in which firms decide over continuous variables [e.g., Silva and Stefanou (2007),
Nemoto and Goto (1999, 2003)]. There are, however, many situations in which the firm
makes decisions over discrete rather than continuous variables (e.g., a firm decides whether
to order or not some products in each period).
In this paper, dynamic (profit) efficiency is investigated within a dynamic discrete choice
structural model, where firms decide over discrete rather than continuous variables. We
analyze a dynamic programming inventory model - in which a firm decides whether to order
or not some products in each period - and develop a measure of dynamic efficiency at the
product level. For each product, we consider that, in the event of inefficiency, the firm
only gets a fraction of the maximum profit for that product. The model also allows for
the existence of product heterogeneity as well as efficiency heterogeneity across products by
including random coefficients in the analysis.
Using a dataset with weekly information on prices, sales, orders and stocks for a Portuguese
firm from January 2008 to June 2009, we estimate the model with a two-stage approach. In
the first stage, we provide nonparametric estimates of the transition probabilities of the state
variables. In the second stage, we use the Bayesian estimation method proposed by Imai,
Jain and Ching (2009), that allows simultaneously for the solution of the dynamic programming problem and the estimation of the parameters. This method includes in each iteration
two steps: one solves the dynamic programming model and the other employs the Markov
Chain Monte Carlo (MCMC) algorithm to draw values from the posterior distributions of
the parameters.
We perform a counterfactual experiment to investigate what would be the firm’s optimal
choices and compare them with the actual choices. The counterfactual results indicate that
the actual decisions diverge from the optimal decisions in at least 12.71% of the cases.
The paper is organized as follows. In section 2, we present the dynamic programming
inventory model. Section 3 goes into detail about the efficiency measurement in our model.
We present the estimation method in section 4 and the data and estimation results in sections
5 and 6. Our counterfactual experiment is outlined in section 7. Section 8 concludes the
paper.
2 The Inventory Model
In this section, we present a discrete choice dynamic programming model where a multiproduct firm decides, in every period, whether or not to order each product. Time is discrete
and indexed by t, t = 0,...,∞, while products are indexed by i , i = 1,..., N . The decision
variables, ait, belong to a discrete, finite set A = {0, 1}, where ait = 1 if the firm orders
product i in time period t and ait = 0 otherwise. The current profit function for product i
is given by

$$\pi(a_{it}, x_{it}, \varepsilon_{it}) = p_{it}\, E(y_{it})\,\big(1 + \lambda\, I\{t > t_{\text{tax change}}\}\big) - (c_{it} + \gamma_i)\, q_{it} - \mu\, s_{it} - \omega_i\, I\{y_{it} = s_{it} + q_{it}\} - \zeta_i\, a_{it} + \varepsilon_{it}(a_{it}), \quad (1)$$
where pit is the market price of product i in period t, E (yit ) represents expected sales, in
physical units, cit is the wholesale price, qit represents quantity of product i ordered during period t, sit is the stock at the beginning of period t, I{O} is the indicator function, being equal
to 1 if O is true and 0 otherwise, xit represents observed state variables, xit = (pit , cit , sit ) ,
and εit (ait ) is the random component, representing variables which are unobservable to the
econometrician. We assume that εit is independent over time with type 1 extreme value
distribution G(εit ).
The term I{t > ttax change } is included in the model to capture the effect of a VAT (value
added tax) change in expected sales. In addition, the term I{yit = sit + qit } intends to
capture the (negative) effect of a stock-out in profits.
Parameters in the current profit function include the measure of the effect of a tax change
in expected sales λ, the variable ordering cost γi , the unit storage cost µ, the (negative)
effect in profits resulting from a stock-out ωi and the fixed ordering cost ζi . While λ and
µ are fixed parameters, γi , ωi and ζi are random parameters to take into account product
heterogeneity. Specifically, we assume a lognormal distribution for these random parameters:

$$\ln\gamma_i \sim N(\ln\gamma, \sigma^2_{\ln\gamma}), \qquad \ln\omega_i \sim N(\ln\omega, \sigma^2_{\ln\omega}), \qquad \ln\zeta_i \sim N(\ln\zeta, \sigma^2_{\ln\zeta}).$$

Hereafter, we denote all parameters in the current profit function by θ.
Sales are assumed to be the minimum of inventories and demand. The firm cannot sell
more than the demand for a product, but it is also possible that, in the case of a positive
shock in the demand, the firm cannot satisfy it with the available stock of the product. In
the spirit of Aguirregabiria (1999), we assume that expected sales are equal to
sit + qit
, exp{φt }
dit
E (yit ) = dit E min
,
where dit is the expected demand for product i in period t and φt is an iid demand shock that
is known by the firm only after it has made the order decision for that period. We consider
the isoelastic expected demand
dit = exp{η log (pit )},
3
4. where η is the parameter associated with the price of the product.
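Since the expectation above has no convenient closed form in general, one natural way to evaluate it is by Monte Carlo over draws of the demand shock. The sketch below is our illustration, not the authors' code; the distribution of φt used here is purely for demonstration, and the value of η is borrowed from the estimate reported in appendix D.

```python
import numpy as np

def expected_sales(s, q, p, eta, phi_draws):
    """Monte Carlo approximation of E(y_it) = d_it * E[min{(s_it + q_it)/d_it, exp(phi_t)}]."""
    d = np.exp(eta * np.log(p))   # isoelastic expected demand d_it = p^eta
    ratio = (s + q) / d           # available quantity relative to expected demand
    return d * np.mean(np.minimum(ratio, np.exp(phi_draws)))

rng = np.random.default_rng(0)
phi = rng.normal(0.0, 0.5, size=10_000)  # illustrative shock distribution, not estimated
print(expected_sales(s=74, q=12, p=4.12, eta=-0.0136, phi_draws=phi))
```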
As far as the state variables are concerned, we assume that the sales price and the wholesale price follow exogenous first-order Markov processes fp(pit+1 | pit) and fc(cit+1 | cit), respectively. Also, the stock variable has the following transition law: sit+1 = max{0, sit + qit − yit}.
Given the state variables xit and εit , the problem of the firm is to make decisions ait in
order to maximize the expected discounted flow of profits over time for product i
$$E\left[\sum_{t=0}^{\infty} \delta^t\, \pi(a_{it}, x_{it}, \varepsilon_{it})\right],$$
where δ ∈ (0, 1) is the discount factor.
Following Slade (1998), Aguirregabiria (1999) and Sanchez-Mangas (2002), we explore
the multiplicative separability structure of the current profit function. Thus, the current
profit function is defined as π(ait , xit , εit ) = E(π(ait , xit , εit )) + εit = π e (ait , xit )ψ(θ) + εit ,
where E(π(ait , xit , εit )) is the expected value of the current profit function conditional on the
decision variable ait and the state variables xit . Also,
$$\pi^e(a_{it}, x_{it}) = \Big(\, p_{it} E(y_{it} \mid x_{it}, a_{it}),\;\; p_{it} E(y_{it} \mid x_{it}, a_{it})\, E(I\{t > t_{\text{tax change}}\} \mid x_{it}, a_{it}),\;\; c_{it} E(q_{it} \mid x_{it}, a_{it}),\;\; E(q_{it} \mid x_{it}, a_{it}),\;\; s_{it},\;\; E(I\{y_{it} = s_{it} + q_{it}\} \mid x_{it}, a_{it}),\;\; a_{it} \,\Big)$$

and

$$\psi(\theta) = \big(\, 1,\; \lambda,\; -1,\; -\gamma_i,\; -\mu,\; -\omega_i,\; -\zeta_i \,\big)'.$$
Let α(xit , εit ; θ) and V (xit , εit ; θ) denote the optimal decision rule and the value function of
the dynamic programming problem. We can obtain the value function in a recursive fashion
as
$$V(x_{it}, \varepsilon_{it}; \theta) = \max_{a \in A} \left\{ \pi^e(a, x_{it})\,\psi(\theta) + \varepsilon_{it}(a) + \delta\, E_{x,\varepsilon}\left[ V(x_{it+1}, \varepsilon_{it+1}; \theta) \mid a, x_{it}, \varepsilon_{it} \right] \right\}. \quad (2)$$

The optimal decision rule is $\alpha(x_{it}, \varepsilon_{it}; \theta) = \arg\max_{a \in A} \{v(a, x_{it}, \varepsilon_{it}; \theta)\}$, where for each a ∈ A,

$$v(a, x_{it}, \varepsilon_{it}; \theta) \equiv \pi^e(a, x_{it})\,\psi(\theta) + \varepsilon_{it}(a) + \delta\, E_{x,\varepsilon}\left[ V(x_{it+1}, \varepsilon_{it+1}; \theta) \mid a, x_{it}, \varepsilon_{it} \right].$$
Given the structure of the current profit function and the assumptions made about the
transition probabilities of the state variables, we can rewrite problem (2) using the concept
of the integrated Bellman equation,

$$\bar{V}(x_{it}; \theta) \equiv \int V(x_{it}, \varepsilon_{it}; \theta)\, dG(\varepsilon_{it}) = \int \max_{a \in A} \left\{ \pi^e(a, x_{it})\,\psi(\theta) + \varepsilon_{it}(a) + \delta\, E_x\left[ \bar{V}(x_{i,t+1}; \theta) \mid a, x_{it} \right] \right\} dG(\varepsilon_{it}). \quad (3)$$

In addition, define

$$\bar{v}(a, x_{it}; \theta) \equiv \pi^e(a, x_{it})\,\psi(\theta) + \delta\, E_x\left[ \bar{V}(x_{i,t+1}; \theta) \mid a, x_{it} \right], \quad (4)$$
and denote the optimal decision rule by $\bar{\alpha}(x_{it}, \varepsilon_{it}; \theta) = \arg\max_{a \in A} \{\bar{v}(a, x_{it}; \theta) + \varepsilon_{it}(a)\}$.
We define the Conditional Choice Probability (CCP), which is a component of the likelihood function, as
$$P(a \mid x; \theta) = \int I\{\bar{\alpha}(x, \varepsilon; \theta) = a\}\, dG(\varepsilon) = \int I\{\bar{v}(a, x_{it}; \theta) + \varepsilon_{it}(a) > \bar{v}(a', x_{it}; \theta) + \varepsilon_{it}(a') \ \text{for all} \ a' \neq a\}\, dG(\varepsilon_{it}). \quad (5)$$
Given that εit is independent over time with type 1 extreme value distribution G(εit ),
equations (3) and (5) can be expressed as (see Aguirregabiria and Mira (2009))
$$\bar{V}(x_{it}; \theta) = \log \sum_{a \in A} \exp\{\bar{v}(a, x_{it}; \theta)\} \quad (6)$$

and

$$P(a \mid x; \theta) = \frac{\exp\{\bar{v}(a, x_{it}; \theta)\}}{\sum_{j=0}^{1} \exp\{\bar{v}(a = j, x_{it}; \theta)\}}. \quad (7)$$
Given (7), we have
$$P(a \mid x) = \int \frac{\exp\{\bar{v}(a, x_{it}; \theta)\}}{\sum_{j=0}^{1} \exp\{\bar{v}(a = j, x_{it}; \theta)\}}\, df_\theta,$$
where fθ represents the distribution of θ.
Having data on i = 1, ..., N products during t = 1, ..., T periods, we can define the
(conditional) likelihood function as
$$L(\theta) = \prod_{i=1}^{N} \prod_{t=1}^{T} \prod_{j=0}^{1} P(a \mid x; \theta)^{I\{a=j\}}, \quad (8)$$
and so the (unconditional) likelihood function is defined by

$$L = \int \prod_{i=1}^{N} \prod_{t=1}^{T} \prod_{j=0}^{1} P(a \mid x; \theta)^{I\{a=j\}}\, df_\theta.$$
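Because the shocks are type 1 extreme value, (6)-(8) reduce to log-sum-exp and binary logit formulas, which are straightforward to evaluate once v̄ is known. The sketch below is a minimal illustration under our own naming; it assumes v̄ has already been computed for both actions at every observation.

```python
import numpy as np

def integrated_value(vbar):
    """Integrated value function, equation (6): log-sum-exp of vbar over the two actions."""
    v = np.asarray(vbar, dtype=float)
    m = v.max()
    return m + np.log(np.exp(v - m).sum())   # subtract the max for numerical stability

def ccp(vbar):
    """Conditional choice probabilities, equation (7)."""
    v = np.asarray(vbar, dtype=float)
    e = np.exp(v - v.max())
    return e / e.sum()

def log_likelihood(vbar_panel, choices):
    """Log of the conditional likelihood (8).

    vbar_panel: (n_obs, 2) array with vbar(a=0, .) and vbar(a=1, .) per observation;
    choices: observed a_it in {0, 1}.
    """
    probs = np.vstack([ccp(v) for v in vbar_panel])
    return np.sum(np.log(probs[np.arange(len(choices)), choices]))
```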
3 The (In)Efficiency Measure

We develop a product-specific efficiency measure in the context of this dynamic
discrete choice structural model. Generally, efficiency is evaluated in models in which the
decision variables are continuous variables (e.g., see Silva and Stefanou (2007), Nemoto and
Goto (1999, 2003)). In our model, we do not have a continuous choice, but a discrete one.
However, a general concept of (in)efficiency can also be applied here: inefficiency arises because the firm actually deviates from its optimal, forward-looking, rational choices, therefore
getting a profit which is smaller than the maximum profit. The difference here is that the
firm only has two possible choices, in each period, for each product: choosing ait = 0 or
ait = 1. In this paper, we focus on the decision of ordering new deliveries and the inefficiency
associated with that decision.
The current profit function for product i, in (1), represents the maximum profit that the
firm obtains with product i. However, if there is inefficiency, the firm gets a smaller profit
than the one described in (1). Specifically, we assume that the firm only gets a fraction βi
of the maximum profit for product i and we redefine the current profit function in (1) as
follows:
$$\pi(a_{it}, x_{it}, \varepsilon_{it}) = \Gamma(\beta_i) \times E(\pi(a_{it}, x_{it}, \varepsilon_{it})) + \varepsilon_{it}(a_{it}), \quad (9)$$

where

$$\Gamma(\beta_i) = \begin{cases} \beta_i & \text{if } E(\pi(a_{it}, x_{it}, \varepsilon_{it})) \geq 0 \\ 1/\beta_i & \text{if } E(\pi(a_{it}, x_{it}, \varepsilon_{it})) < 0 \end{cases}$$

and 0 ≤ βi ≤ 1.
The term βi is the product-specific (in)efficiency measure. If the firm’s choice regarding
product i is optimal, then βi = 1; on the contrary, if the firm’s choice regarding product i is
not optimal, βi < 1 and the firm has a profit which is smaller than the maximum profit.
If E(π(ait , xit , εit )) ≥ 0 , we have 0 ≤ Γ(βi ) = βi ≤ 1 and actual profit is equal to maximum
profit only if βi = 1; conversely, if E(π(ait , xit , εit )) < 0, then Γ(βi ) = 1/βi ≥ 1 so that actual
profit is less than or equal to maximum profit. This is how we solve the problem of the
possible existence of negative profits, something which is often difficult to deal with in profit
efficiency analysis (see Fare et al (2004)). Note that the specification of the (in)efficiency
measure does not rule out the possibility of having E(π(ait, xit, εit)) ≥ 0 and actual profit π(ait, xit, εit) < 0, as the difference in signs may be explained by a sufficiently low εit.
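A two-line sketch (ours, for illustration) makes the role of Γ explicit: an inefficient product keeps only a fraction βi of a positive expected profit but bears 1/βi of a negative one, so inefficiency always weakly reduces profit regardless of its sign.

```python
def gamma_factor(beta_i, expected_profit):
    """Gamma(beta_i) from the definition above."""
    return beta_i if expected_profit >= 0 else 1.0 / beta_i

# With beta_i = 0.8, a +10 expected profit becomes 8.0 and a -10 expected profit becomes -12.5:
print(gamma_factor(0.8, 10.0) * 10.0, gamma_factor(0.8, -10.0) * (-10.0))
```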
The profit function specification in (9) assumes no interaction between βi and the random component εit (ait ). This is a simplifying assumption which is crucial from a technical
perspective. In fact, we do not consider the effect of βi over the random component εit (ait )
because if βi had such an effect on εit (ait ), βi would not be identifiable (see Rust (1994) and
Magnac and Thesmar (2002)).
We allow for efficiency heterogeneity across products by treating βi as a random parameter.
Specifically, we assume a truncated normal distribution for this random parameter: $\beta_i \sim TN_{[0,1]}(\beta^T, \sigma^2_{\beta^T})$, where TN stands for the truncated normal distribution and [0, 1] represents the truncation interval, with lower bound equal to zero and upper bound equal to one.
We estimate the model presented in section 2 with the current profit function specified
in (9) and the parameters of the density function of βi (i.e., the average efficiency across
products and the associated standard deviation).
4 Estimation Method
We use a two-stage approach to estimate the structural model. In the first stage, we
estimate the transition probabilities of the state variables and the terms E(qit | xit , ait ),
E (yit | xit , ait ), E (I{t > ttax change } | xit , ait ) and E (I{yit = sit + qit } | xit , ait ) using nonparametric methods (see appendix A and appendix B for details). In the second stage, we exploit
the discrete choice decision to estimate the remaining parameters - conditional on the estimates obtained in the first stage - using the Bayesian estimation procedure suggested in Imai
et al (2009) and also analyzed in Ching et al (2009). This estimation procedure involves the
usage of Markov Chain Monte Carlo (MCMC) algorithms and Bayesian methods, which do
not require the maximization of the likelihood function. Maximization of this function could
be numerically difficult given the heterogeneity allowed in our model. Instead, the Bayesian
procedure consists of specifying a prior for every parameter to be estimated and then drawing
many values from the posterior distribution of the parameters conditional on the observed
data.
Let us specify the priors and proposal distributions for the parameters. Recall that we
have denoted all the parameters in the current profit function by θ. Let us define θ = (θi , θ1 ),
where θi = (γi , ωi , ζi , βi ) and θ1 = (λ, µ). We do not include the discount factor, δ, because
in this type of model, δ is nonparametrically non-identified (see Rust (1994) and Magnac
and Thesmar (2002), for details). Therefore, we set δ = 0.985 in the estimation process.
In addition, we set $\bar{\theta} = (\ln\gamma, \ln\omega, \ln\zeta, \beta^T)$ and $\theta_\sigma = (\sigma_{\ln\gamma}, \sigma_{\ln\omega}, \sigma_{\ln\zeta}, \sigma_{\beta^T})$. We specify a Normal prior for each term of $\bar{\theta}$ and an Inverted Gamma prior for each term of $\theta_\sigma$. For the parameters in θ1, we use a flat prior (i.e., we set the prior to be equal to 1). Also, we define a normal random-walk proposal distribution for each term of θ1. The proposal distributions for θi have already been specified in sections 2 and 3. Our goal is, therefore, to estimate the parameters in θ1, $\bar{\theta}$ and $\theta_\sigma$.
The posterior distribution of the parameters, Λ(.), is defined as
$$\Lambda(\theta_i, \theta_1, \bar{\theta}, \theta_\sigma) \propto L(\theta_i, \theta_1)\, pd(\theta)\, k(\theta_1, \bar{\theta}, \theta_\sigma). \quad (10)$$

Λ(.) is proportional to the expression on the right-hand side of (10), which depends on the likelihood function L(θi, θ1) defined in (8), the priors $k(\theta_1, \bar{\theta}, \theta_\sigma)$ and the proposal densities pd(θ).
Our goal is, therefore, to repeatedly draw values from the posterior distribution (10),
obtaining, in each iteration, one value for each parameter. In order to draw from the posterior
distribution, we use an MCMC method called Gibbs sampling, which consists of breaking the
parameter vector into several blocks so that each block’s posterior distribution conditional on
the observed data as well as on the other blocks has a convenient form to draw values from.
If we repeatedly draw values from these blocks, these values eventually converge to draws
from the joint distribution of the entire parameter vector defined in (10) (see chapter 9 of
Train (2009) for details on Gibbs sampling).
In each iteration, we use Gibbs sampling to break the posterior distribution (10) into three blocks. In the first block, we draw $\bar{\theta}$ and $\theta_\sigma$ from their conditional posterior distributions - the Normal distribution for each term of $\bar{\theta}$ and the Inverted Gamma distribution for each term of $\theta_\sigma$. In this block, standard procedures are used to obtain the draws (see chapter 12 of Train (2009) for details). In the second block, we draw individual parameters θi, whose conditional posterior distribution is proportional to
$$\prod_{t=1}^{T} \prod_{j=0}^{1} \left[ \frac{\exp(\bar{v}(a = j, x_{it}; \theta_i, \theta_1))}{\sum_{k=0}^{1} \exp(\bar{v}(a = k, x_{it}; \theta_i, \theta_1))} \right]^{I\{a=j\}} pd(\theta_i)\, k(\bar{\theta}, \theta_\sigma). \quad (11)$$
To draw from (11), we use the Metropolis-Hastings algorithm, which is also an MCMC method
(see chapter 9 of Train (2009) for details). Finally, in the third block, we draw fixed parameters θ1 , whose conditional posterior distribution is proportional to
$$\prod_{i=1}^{N} \prod_{t=1}^{T} \prod_{j=0}^{1} \left[ \frac{\exp(\bar{v}(a = j, x_{it}; \theta_i, \theta_1))}{\sum_{k=0}^{1} \exp(\bar{v}(a = k, x_{it}; \theta_i, \theta_1))} \right]^{I\{a=j\}}. \quad (12)$$
Again, we use the Metropolis-Hastings algorithm to draw values from (12).
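As a rough illustration of the third block, the sketch below performs one normal random-walk Metropolis-Hastings update for θ1 = (λ, µ). It is not the authors' implementation (which is in C); log_target stands for the log of (12), and the step size is an arbitrary tuning choice of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def mh_step_theta1(theta1, log_target, step=0.05):
    """One Metropolis-Hastings update with a symmetric random-walk proposal.

    theta1: current (lambda, mu) as an ndarray; log_target: function returning the
    log of (12); with a flat prior, (12) is the conditional posterior up to a constant.
    """
    proposal = theta1 + rng.normal(0.0, step, size=theta1.shape)
    log_accept = log_target(proposal) - log_target(theta1)  # symmetric proposal cancels
    if np.log(rng.uniform()) < log_accept:
        return proposal, True    # accept the candidate
    return theta1, False         # keep the current draw
```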
The usage of Gibbs sampling allows us to obtain, for each parameter, a sequence of values,
one per iteration, which is used to estimate the parameters and their standard deviations.
Firstly, we discard the initial values of that sequence that constitute burn-in. Then, we use
the remaining values and, for each parameter, we compute the mean of the sequence of values
associated with it as well as the standard deviation of those values. The computed mean is
the parameter estimate and the computed standard deviation is the estimate of the standard
deviation of the parameter.
Note that in order to evaluate (11) and (12), we need to compute $\bar{v}(.)$, which is defined in equation (4). However, $\bar{v}(.)$ is not known since it depends on the (unknown) value function. Therefore, in addition to the MCMC step to draw from the posterior distribution, we need in each iteration a step for the computation of the value function, allowing for $\bar{v}(.)$ to be known.
In order to calculate the value function, we use the procedure suggested in Imai et al
(2009): instead of solving the Bellman equation in each iteration, we iterate it only once. By
using this procedure, the estimation method simultaneously solves the dynamic programming
problem and estimates the parameters.
Let us define $\theta_i^{*r}$ and $\theta_1^{*r}$ as the candidate parameters of θi and θ1 used by the Metropolis-Hastings algorithm in the MCMC step in a given iteration r. We calculate the expected future value $E_x[\bar{V}(x_{i,t+1}; \theta) \mid a, x_{it}]$ as the weighted average of the value functions from the $n^*$ previous iterations, where the weights are defined by kernel densities of the difference between the candidate parameter in the current iteration and the candidate parameters in previous iterations. The intuition here is that the value function is continuous in the parameter space, so parameters which are closer to the current parameter have closer value functions. Therefore, candidate parameters closer to the current candidate parameter receive more weight, since the associated value functions are closer to the current value function.
Thus, for a given value of the state variables x = (p, c, s) , we compute the expected future
value in iteration r as
$$E_x^r \bar{V}(x, \theta_i^{*r}, \theta_1^{*r}) = \sum_{l=r-n^*}^{r-1} \bar{V}^l(x, \theta_i^{*l}, \theta_1^{*l})\, \frac{K_h(\theta_1^{*r} - \theta_1^{*l})\, K_h(\theta_i^{*r} - \theta_i^{*l})}{\sum_{k=r-n^*}^{r-1} K_h(\theta_1^{*r} - \theta_1^{*k})\, K_h(\theta_i^{*r} - \theta_i^{*k})}, \quad (13)$$

where $E_x^r \bar{V}(x, \theta_i^{*r}, \theta_1^{*r})$ is the approximated expected future value in iteration r, $K_h$ is the Gaussian kernel with bandwidth h and $n^*$ is the number of past iterations used to approximate the expected future value.
Note that we have to compute (13) for each value of the state variables. In order to reduce
the computational burden, we make use of Rust (1997)’s random grid: instead of computing
the expected future value for each value of the state variables, we randomly select in each
iteration one value for p and c and we weight (13) with their transition probabilities (see
Imai et al (2009) and Ching et al (2009) for an analysis of the conjunction of the Bayesian
method proposed by Imai et al (2009) with Rust (1997)’s random grid).
Let us define x1 = (p, c) as the values of p and c in iteration r and let $f_{x_1}$ represent the transition probabilities of x1. Then, equation (13) is replaced by

$$E_x^r \bar{V}(s, x_1, \theta_i^{*r}, \theta_1^{*r}) = \sum_{l=r-n^*}^{r-1} \bar{V}^l(s, x_1^l, \theta_i^{*l}, \theta_1^{*l})\, \frac{K_h(\theta_1^{*r} - \theta_1^{*l})\, K_h(\theta_i^{*r} - \theta_i^{*l})\, f_{x_1}(x_1^l \mid x_1)}{\sum_{k=r-n^*}^{r-1} K_h(\theta_1^{*r} - \theta_1^{*k})\, K_h(\theta_i^{*r} - \theta_i^{*k})\, f_{x_1}(x_1^k \mid x_1)}. \quad (14)$$
By using (14) instead of (13), we only have to compute the expected future values for
given values of s. We do not treat s as the other state variables since Rust (1997)’s random
grid cannot be used when the transition law, given the parameters and the decision variables,
is deterministic.
We use the approximated expected future values obtained in (14) to compute $\bar{v}(.)$ defined in equation (4), which are then used to update the value function $\bar{V}(.)$ defined in equation (6).
To sum up, the Bayesian method used in this paper includes in each iteration two steps:
one step employs the MCMC algorithm to draw values from the posterior distributions of the
parameters and the other step allows for the solution of the dynamic programming model,
using equation (14) to update the expected future value.
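The core of the value-function step is the kernel-weighted average in (13)-(14). The sketch below (our illustration, for a single fixed state) shows the weighting in (13); extending it to (14) amounts to multiplying each weight by the transition probability $f_{x_1}(x_1^l \mid x_1)$.

```python
import numpy as np

def kernel_weighted_value(theta_now, past_thetas, past_values, h=0.1):
    """Approximate the expected future value as in (13) for one fixed state.

    theta_now: current candidate parameter vector; past_thetas: (n_star, dim) array of
    candidates from the last n* iterations; past_values: (n_star,) stored values of
    V-bar at the same state; h: kernel bandwidth (an illustrative default here).
    """
    diffs = (past_thetas - np.asarray(theta_now)) / h
    weights = np.exp(-0.5 * np.sum(diffs**2, axis=1))  # product of Gaussian kernels
    return np.sum(weights * past_values) / np.sum(weights)
```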
5 The Data

5.1 The Firm and the Ordering Process
Our data include information on a firm that sells alcoholic drinks and operates in the
North of Portugal. The firm’s products are stocked in a single store and sold to a variety of
customers located in different regions, many of which are other firms (e.g., restaurants).
The firm does not produce any of its products; instead, whenever it considers it adequate,
the firm orders new deliveries from suppliers and sells those products to its clients. The
ordering process is as follows: the firm has some regular suppliers who frequently meet the
manager of the firm in order to define prices for all products. Whenever the firm needs more
units of a given product or products, it sends an email to the respective supplier ordering a
new delivery, and the supplier brings those products to the store.
In the ordering process, the firm bears a cost that involves the price of each product and
the ordering cost. The ordering cost is composed of a fixed ordering cost (ζi ) and a variable
ordering cost (γi ). The inclusion of both fixed and variable components in the ordering cost is
due to the fact that the main component of the ordering cost is the transportation cost. While
we have no information about the transportation cost, the nature of the ordering process
leads us to include both fixed and variable components. Specifically, the transportation
cost is higher, the higher the number of vans necessary to transport the requested products.
However, when the firm orders a large amount of a given product, the supplier of the product gives a discount on the transportation cost or even waives it entirely. Therefore, we include in the model a fixed ordering cost associated with the ordering process and a variable ordering cost
to take into account the fact that the transportation cost is not independent of the quantity
ordered. We do not characterize these costs in a more sophisticated way because we lack information on transportation costs.
5.2 The Database

The database contains weekly information on sales, prices, orders to suppliers and inventories for every product sold by the firm between January 2008 and June 2009. It is a balanced panel with 66,534 observations, covering 853 different products during 78 weeks.
The dataset includes the following information for every product and week: name of the
product, wholesale and selling prices, sales, orders to suppliers and stock at the beginning of
the week. Quantities are measured in number of bottles, while prices are measured in Euros.
Given these data, we define an ordering indicator: a binary variable ait which is equal
to one if the firm orders product i in time period t and equal to zero otherwise. Hence, we
associate each positive value of orders to suppliers with ait = 1 and no orders with ait = 0.
We also compute the two indicator functions used in the model, namely I{yit = sit + qit }
and I{t > ttax change }. The term I{yit = sit + qit } is a stock-out indicator function which
intends to capture the (negative) effect of a stock-out in profits. Since the firm cannot sell
more than the quantity available of a given product (which is given by the stock of that
product plus the quantity of the product ordered), I{yit = sit + qit} indicates situations
where the firm has no quantity of a given product left available for sale. In addition, we
compute the tax change indicator function I{t > ttax change } due to a change in the Portuguese
tax policy during this period. Most of the alcoholic drinks are charged a value added tax
(VAT) of 12%, but some of them are charged a higher VAT. In July 2008 (here, denoted by
ttax change ), the highest VAT changed from 21% to 20% and so I{t > ttax change } intends to
capture the effects of this tax change in expected sales. The change in tax policy in July
2008 affects 1872 observations, including 36 products and 52 weeks.
Table 1 presents some descriptive statistics for the variables of interest. As before, q
represents quantity of product ordered, y represents sales in number of bottles, s is the stock
in number of bottles, c is the wholesale price and p denotes the market price. The firm
orders some products in 71.75% of the observations and the stock-out effect occurs in 0.32%
of the observations. Orders, sales and stocks have a volatile behavior in the sense that these
variables have relatively high standard deviations. In fact, these three variables seem to have
many low values (the median is 12 for orders, 14 for sales and 74 for stocks) and some
“peaks” explaining the difference between the mean and the median (indeed, orders, sales
and stocks reach maximum values of 5977, 5726 and 4585 respectively).
Table 1: Descriptive statistics

Variable              Mean      Min   Max    St. Dev.  Skewness  Kurtosis  Pctil 25  Median  Pctil 75
q                     122.4187  0     5977   319.7054  6.6018    81.9889   0         12      55
y                     123.5191  0     5726   307.7122  6.1818    72.2997   6         14      56
s                     274.5006  0     4585   457.8707  3.1427    14.8360   27        74      334
c                     3.1864    0.64  18.31  1.8318    1.7141    9.3755    1.87      2.88    3.92
p                     4.5628    0.92  22.75  2.6276    1.5013    6.7938    2.66      4.12    5.63
I{t > t_tax change}   0.0281    0     1      0.1654    5.7071    33.5706   0         0       0
I{y = s + q}          0.0032    0     1      0.0564    17.6307   311.8428  0         0       0
a                     0.7175    0     1      0.4502    -0.9663   1.9337    0         1       1
Table 2: Fixed-Effects Logit Model for the Discrete Ordering Choice (a = 1 or a = 0)

Variable   Coefficient   St. Error   z-test   P > |z-test|
y          0.0009219     0.0001054   8.747    0.000
s          -0.0008462    0.0001057   -8.006   0.000
c          -0.8726403    0.4208984   -2.073   0.038
p          -0.5068481    0.3103980   -1.633   0.102

Log Likelihood = -18996.405
LR χ²(4) = 174.97
Prob > χ² = 0.0000
Interestingly, 50% of the orders are associated with a value of 12 or an even lower number
of bottles; since this is a small number, it may be an indication that, in fact, the fixed cost
associated with the ordering process is not significant. As far as transportation costs are
concerned, the low median of q may also mean that a given supplier delivers in the same
ordering process a small number of different products, since it is unlikely the firm is willing
to bear a transportation cost just for the transportation of 12 bottles of a given product.
Unfortunately, we do not have additional information to confirm that.
Table 2 presents a reduced form estimation of the discrete ordering decision. The explanatory variables in this model include sales, stocks, the wholesale price and the market
price. The model was estimated using the Fixed-Effects Logit estimator controlling for the
existence of unobserved product heterogeneity. We note that the Likelihood Ratio (LR) test
clearly indicates the global significance of the model. Also, the signs of the coefficients are as
expected: positive for sales and negative for stocks, the wholesale price and the market price.
Interestingly, while sales, stocks and the wholesale price are significant at a significance level
of 5%, the market price coefficient is not significant. This may be related to the existence of
stocks: the firm may order the products in some periods to take advantage of discounts at
the wholesale market and then keep them in stock for future sales.
6 Estimation Results
We estimate the structural model drawing from the posterior distribution of the parameters in (10) 30000 times. We drop the first 10000 iterations and we compute the means and
standard deviations using the values from iteration 10001 to 30000. We do not estimate the
discount factor δ, which is set equal to 0.985. The estimation results for the structural model
are shown in table 3.
The results show that the (in)efficiency terms are statistically different from zero. Specifically, σβ T is statistically relevant, suggesting there is a difference in inefficiency across products. In the event of a stock-out, the losses in profit are significant, although not very different
across products. Interestingly, while the fixed ordering cost is relevant and different across
products, the variable cost of ordering is neither significant nor different across products.
The heterogeneity in fixed ordering costs may be explained by the fact that the firm has
different suppliers for different products. In fact, although the general characteristics of the
ordering process are the same for all suppliers (see section 5.1), it is possible there are some
differences in some details of the ordering process and in the relationship between the firm
and the suppliers, implying differences in the ordering cost. As mentioned before, we do not
have detailed information on the ordering process. The decrease in taxes in July 2008 seems
to have a significant impact on sales, as λ has the expected sign and is statistically significant
at the 5% significance level. In fact, the decrease in taxes implied a 32% increase in sales,
suggesting this type of product is very price-sensitive. As expected, the unit storage cost µ
is also statistically significant.
Note that, apart from the estimates of the fixed parameters λ and µ, the values of the
coefficients in column 1 of table 3 are not the final estimates of the parameters. As far as γi ,
ωi and ζi are concerned, we consider a lognormal distribution and we parametrize it using the
associated normal distribution, that is, γi , ωi and ζi follow a lognormal distribution if and only
if $\ln\gamma_i \sim N(\ln\gamma, \sigma^2_{\ln\gamma})$, $\ln\omega_i \sim N(\ln\omega, \sigma^2_{\ln\omega})$ and $\ln\zeta_i \sim N(\ln\zeta, \sigma^2_{\ln\zeta})$. Therefore, we have
estimated the means and standard deviations of the natural logarithm of the coefficients. By
using the corresponding estimates in table 3, we are able to obtain the means and standard
deviations of γi , ωi and ζi (see details in appendix C).
Similarly, we have to adjust the mean values of the coefficients associated with βi. Recall that we have defined that $\beta_i \sim TN_{[0,1]}(\beta^T, \sigma^2_{\beta^T})$. Although we consider βi to follow a truncated
normal distribution, the estimated mean and standard deviation do not take into account
the fact that βi lies between 0 and 1, that is, they do not give us the mean and standard
deviation of βi given that 0 ≤ βi ≤ 1. Therefore, we use the estimated values of β T and
σβT to compute the adjusted values for the mean and standard deviation of βi (see details in appendix C).

Table 3: Estimation Results for the Structural Model (δ = 0.985)

Parameter   Coefficient    St. Error    z-test   P > |z-test|
β^T         0.9346019      0.0677182    13.801   0.000
σ_β^T       0.1981808      0.0262467    7.551    0.000
ln ω        -4.2880810     1.6767339    -2.557   0.011
σ_ln ω      2.0260520      1.6416476    1.234    0.217
ln γ        -12.0491569    52.2364546   -0.230   0.818
σ_ln γ      3.5025554      4.7131788    0.743    0.457
ln ζ        -0.6439362     0.1558840    -4.131   0.000
σ_ln ζ      1.2229765      0.5091166    2.402    0.016
λ           0.3205798      0.1371140    2.338    0.019
µ           0.1398554      0.0477591    2.928    0.003

Table 4: Final Values of the Means of the Parameters

Parameter   Coefficient     Parameter   Coefficient
β           0.8156247       γ           0.0026981
σβ          0.1316712       σγ          1.2445017
ω           0.1069239       ζ           1.1094890
σω          0.8257122       σζ          2.0644642
λ           0.3205798       µ           0.1398554
Let us define β, σβ , γ, σγ , ω, σω , ζ and σζ as the adjusted values of β T , σβ T , ln γ, σln γ ,
ln ω, σln ω , ln ζ and σln ζ . Table 4 shows the final values of the estimated parameters. The
results in table 4 show that the average product efficiency is around 81.6%, meaning that,
on average, the firm obtains 81.6% of the maximum profit associated with a given product.
Although there is some heterogeneity in efficiency across products, the estimated σβ is not
very high.
7 A Counterfactual Experiment

One of the advantages of dynamic discrete choice structural models is that this type of model allows us to perform counterfactual experiments.
Here, we perform a counterfactual experiment to investigate what would be the firm’s
choices if its ordering decisions were fully efficient, that is, if βi were equal to 1 for all products
in all time periods. In order to do so, we simulate the dynamic programming model, using all
the estimates in table 3, except the estimates of the mean and standard deviation associated
with βi. Given that $\beta_i \sim TN_{[0,1]}(\beta^T, \sigma^2_{\beta^T})$, we consider in this experiment $\beta^T$ to be equal to one and $\sigma^2_{\beta^T}$ to be equal to zero, which guarantees that βi = 1 and σβ = 0 for all i, as required in a full efficiency scenario.
Note that in order to simulate the model, we need to compute E(yit). To avoid the Lucas critique, we cannot use the nonparametric estimates of E(yit) computed in our estimation process. For the purpose of this counterfactual experiment, we use the demand estimates in appendix D to compute E(yit). Also, in order to define the iid demand shock φt, we use the estimates of the residuals of the demand equation in appendix D.
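The decision step of the simulation can be sketched as follows (our illustration, with hypothetical inputs): under full efficiency, compute v̄ for both actions with βi = 1, draw type 1 extreme value shocks, and record which action wins.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_order_share(vbar0, vbar1, n_reps=100):
    """Average share of orders across replications of the full-efficiency model.

    vbar0, vbar1: arrays with vbar(a=0, .) and vbar(a=1, .) per observation,
    computed with beta_i = 1; shocks are drawn as type 1 extreme value (Gumbel).
    """
    n_obs = len(vbar0)
    shares = np.empty(n_reps)
    for r in range(n_reps):
        eps0 = rng.gumbel(size=n_obs)  # T1EV shock for a = 0
        eps1 = rng.gumbel(size=n_obs)  # T1EV shock for a = 1
        shares[r] = np.mean(vbar1 + eps1 > vbar0 + eps0)
    return shares.mean()
```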
We simulate the model using 100 replications. The results are displayed in table 5.
Table 5: Results for the Counterfactual Experiment

               Actual Decisions   Decisions in a Full Efficiency Scenario
% Orders       71.75%             59.04%
% No Orders    28.25%             40.96%
The first column in table 5 refers to the percentage of orders in the dataset used to
estimate the model, which are the actual decisions of the firm. The second column in table 5
refers to the choices the firm would make in a full efficiency scenario. The results show that
inefficiency affects the decisions of the firm in the sense that the firm would place fewer
orders if there were full efficiency: in 12.71% of the cases, the firm decides to order products
whereas it should decide not to order them. This means that if the firm decided with full
efficiency, it would choose differently in at least 12.71% of the decisions.¹

¹ This percentage is exactly 12.71% if all situations in which there is an order in the full efficiency scenario correspond to orders in the actual decisions.
8 Conclusion
In this paper, we develop a measure for dynamic (profit) efficiency in a dynamic discrete
choice framework, in which decisions are over discrete rather than continuous variables. We
analyze a dynamic programming inventory model and develop a measure of dynamic efficiency
at the product level. For each product, we consider that, in the event of inefficiency, the firm
only gets a fraction of the maximum profit for that product. Using a dataset with weekly
information on prices, sales, orders and stocks for a Portuguese firm from January 2008 to
June 2009, we estimate the model with a two-stage approach. We find that the average
product efficiency in the firm is around 81.6%, implying that, on average, the firm obtains
81.6% of the maximum profit associated with a given product.
We also investigate how different the decisions of the firm would be if there were full
efficiency in the decision process. The results show that if the firm decided with full efficiency,
it would choose differently in at least 12.71% of the decisions.
To the best of our knowledge, this is the first paper to use a dynamic discrete choice
framework to measure efficiency at a micro level. We believe that this approach is promising
as it allows us to have information regarding firm efficiency that until now was only available
for models with continuous decision variables. In addition, it allows us to quantify the impact
of inefficiency on the firm's decisions, since it is possible to perform counterfactual experiments
and compare actual decisions with optimal decisions.
In our model, we consider that the decisions over different products are separable, that is,
there is no synchronization among decisions over different products: the firm decides
whether or not to order a given product on an individual basis, without taking into consideration the order decisions over other products. Although the existence of interaction among
decisions over different products is an interesting topic, we do not address this issue in this
paper and consider this to be a topic for future research.
Appendix
A. Nonparametric Estimation of the Transition Probabilities of the State Variables
Following Sanchez-Mangas (2002), we estimate fp (pi,t+1 | pit ) and fc (ci,t+1 | cit ) nonparametrically.
Let us discretize the state variables pit , cit and sit : we consider M1 = 13 cells for pit ,
M2 = 12 cells for cit and M3 = 13 cells for sit , so in fact we have M = M1 × M2 × M3 = 2028
cells.
Let $p^c_{it}$, $c^c_{it}$ and $s^c_{it}$ denote the discretized values of the state variables and let $p^m$, $c^m$ and $s^m$ denote the values of the state variables in the mth cell, so that $x^m = (p^m, c^m, s^m)$.
We estimate the transition probabilities for p as

$$\widehat{Prob}(p^c_{t+1} = p^m \mid p^c_t = p^l) = \frac{\sum_{i=1}^{N}\sum_{t=1}^{T} I\{p^c_{i,t+1} = p^m\}\, K_1(p_{it}, p^l)}{\sum_{i=1}^{N}\sum_{t=1}^{T} K_1(p_{it}, p^l)},$$

for m, l = 1, ..., M1, where $K_1$ is the univariate Gaussian kernel

$$K_1(p_{it}, p^l) = \frac{1}{(2\pi)^{1/2}} \exp\left\{-\frac{1}{2}\left(\frac{p_{it} - p^l}{h_1}\right)^2\right\},$$

and h1 is the bandwidth parameter, here defined according to Silverman's rule.
Similarly, the transition probability for c is defined by

$$\widehat{Prob}(c^c_{t+1} = c^m \mid c^c_t = c^l) = \frac{\sum_{i=1}^{N}\sum_{t=1}^{T} I\{c^c_{i,t+1} = c^m\}\, K_1(c_{it}, c^l)}{\sum_{i=1}^{N}\sum_{t=1}^{T} K_1(c_{it}, c^l)},$$

for m, l = 1, ..., M2, where $K_1$ is the univariate Gaussian kernel

$$K_1(c_{it}, c^l) = \frac{1}{(2\pi)^{1/2}} \exp\left\{-\frac{1}{2}\left(\frac{c_{it} - c^l}{h_2}\right)^2\right\},$$

and h2 is the bandwidth parameter, here defined according to Silverman's rule.
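A compact sketch of the estimator (ours, for illustration; the nearest-cell discretization and the bandwidth handling are assumptions about implementation details the appendix does not spell out):

```python
import numpy as np

def kernel_transition_matrix(x_t, x_t1, grid, h):
    """Kernel-smoothed transition probabilities as in appendix A.

    x_t, x_t1: observed values at t and t+1, stacked over products and weeks;
    grid: the cell values (e.g., the M1 = 13 cells for p); h: Silverman bandwidth.
    Returns an (M, M) matrix whose rows sum to one.
    """
    def k1(u):
        return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

    # assign each t+1 observation to its nearest cell (one possible discretization)
    next_cell = np.argmin(np.abs(x_t1[:, None] - grid[None, :]), axis=1)

    M = len(grid)
    P = np.zeros((M, M))
    for l in range(M):
        w = k1((x_t - grid[l]) / h)                 # kernel weight of each obs for cell l
        for m in range(M):
            P[l, m] = np.sum(w * (next_cell == m)) / np.sum(w)
    return P
```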
B. Other Nonparametric Estimates
We follow Sanchez-Mangas (2002). Here we provide a nonparametric estimate of E (qit | xit , ait = 1),
since for ait = 0 this term is defined by E (qit | xit , ait = 0) = 0. The nonparametric estimates
of E (qit | xit , ait ), E (yit | xit , ait ), E (I{t > ttax change } | xit , ait ) and E (I{yit = sit + qit } | xit , ait )
are defined similarly to E (qit | xit , ait = 1).
We start by discretizing the variable {qit ; ait = 1}, that is, we consider the variable qit
only for those observations such that qit > 0. We use a uniform grid with H cells.
Let q c denote the value of this discretized variable and let q h be the value of the variable
in cell h, h = 1, ..., H. We estimate E (q | xm , a = 1), for m = 1, ..., M as
$$\sum_{h=1}^{H} q^h\, \widehat{Prob}(q^h \mid x^m, a = 1),$$

where

$$\widehat{Prob}(q^h \mid x^m, a = 1) = \frac{\sum_{i=1}^{N}\sum_{t=1}^{T} I\{q^c_{it} = q^h\}\, I\{a_{it} = 1\}\, K_3(x_{it}, x^m)}{\sum_{i=1}^{N}\sum_{t=1}^{T} I\{a_{it} = 1\}\, K_3(x_{it}, x^m)},$$
for h = 1, ..., H and m = 1, ..., M , where
$$K_3(x_{it}, x^m) = \frac{1}{(2\pi)^{3/2}} \exp\left\{-\frac{1}{2}\left[\left(\frac{p_{it} - p^m}{h_1}\right)^2 + \left(\frac{c_{it} - c^m}{h_2}\right)^2 + \left(\frac{s_{it} - s^m}{h_3}\right)^2\right]\right\}$$

is the trivariate Gaussian kernel and h1, h2 and h3 are defined according to Silverman's rule.
Following this procedure will allow us to obtain the (M × 1) vector of nonparametrically
estimated values of E (q | x, a = 1).
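The same kernel machinery gives the conditional expectation. A sketch under the same caveats as above:

```python
import numpy as np

def expected_q_given_order(x_obs, q_obs, a_obs, x_cell, q_grid, h):
    """Kernel estimate of E(q | x^m, a = 1) as in appendix B.

    x_obs: (n, 3) observed states (p, c, s); q_obs: observed order quantities;
    a_obs: the ordering indicator a_it; x_cell: the cell value x^m = (p^m, c^m, s^m);
    q_grid: the H cell values of the discretized positive orders; h: the three bandwidths.
    """
    u = (x_obs - np.asarray(x_cell)) / np.asarray(h)
    k3 = np.exp(-0.5 * np.sum(u**2, axis=1)) / (2.0 * np.pi) ** 1.5  # trivariate kernel
    k3 = k3 * (a_obs == 1)                                           # ordering observations only
    # map each observed q onto its nearest grid cell q^h
    q_cell = q_grid[np.argmin(np.abs(q_obs[:, None] - q_grid[None, :]), axis=1)]
    probs = np.array([np.sum(k3 * (q_cell == qh)) for qh in q_grid]) / np.sum(k3)
    return np.sum(q_grid * probs)
```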
C. Computation of the Final Estimates of the Parameters
Here we show how to compute the final estimates of the means and standard deviations of
the random coefficients from the estimated values in table 3.
For all the elements of $\bar{\theta}$ and $\theta_\sigma$, with the exception of $\beta^T$ and $\sigma_{\beta^T}$, let us define a given value of $\bar{\theta}$ and its corresponding standard deviation in $\theta_\sigma$ by $\bar{\Upsilon}$ and $\sigma_\Upsilon$, and their adjusted values by $\Upsilon^c$ and $\sigma^c_\Upsilon$. Then, the final mean and the final standard deviation of the corresponding parameter are given by (see chapter 6 of Train (2009)):

$$\Upsilon^c = \exp(\bar{\Upsilon} + \sigma^2_\Upsilon / 2)$$

and

$$\sigma^c_\Upsilon = \sqrt{\exp(2\bar{\Upsilon} + \sigma^2_\Upsilon) \times (\exp(\sigma^2_\Upsilon) - 1)}.$$

For $\beta^T$ and $\sigma_{\beta^T}$, let us define the adjusted values by β and $\sigma_\beta$ respectively. We can compute such values as follows (see Johnson et al (1994)):

$$\beta = \beta^T + \frac{\phi\left(\frac{-\beta^T}{\sigma_{\beta^T}}\right) - \phi\left(\frac{1-\beta^T}{\sigma_{\beta^T}}\right)}{\Phi\left(\frac{1-\beta^T}{\sigma_{\beta^T}}\right) - \Phi\left(\frac{-\beta^T}{\sigma_{\beta^T}}\right)} \times \sigma_{\beta^T}$$

and

$$\sigma_\beta = \sqrt{\sigma^2_{\beta^T} \times \left[1 + \frac{\frac{-\beta^T}{\sigma_{\beta^T}}\,\phi\left(\frac{-\beta^T}{\sigma_{\beta^T}}\right) - \frac{1-\beta^T}{\sigma_{\beta^T}}\,\phi\left(\frac{1-\beta^T}{\sigma_{\beta^T}}\right)}{\Phi\left(\frac{1-\beta^T}{\sigma_{\beta^T}}\right) - \Phi\left(\frac{-\beta^T}{\sigma_{\beta^T}}\right)} - \left(\frac{\phi\left(\frac{-\beta^T}{\sigma_{\beta^T}}\right) - \phi\left(\frac{1-\beta^T}{\sigma_{\beta^T}}\right)}{\Phi\left(\frac{1-\beta^T}{\sigma_{\beta^T}}\right) - \Phi\left(\frac{-\beta^T}{\sigma_{\beta^T}}\right)}\right)^2\right]},$$

where φ and Φ denote the standard normal probability density function and the standard normal cumulative distribution function respectively.
D. Estimation of the Demand Parameters
For those observations where yit < sit + qit, that is, when there are no stock-outs, we estimate
the demand following a specification similar to that of Aguirregabiria (1999):
$$\log y_{it} = \eta^0_i + \eta \log p_{it} + \phi_{it}. \quad (15)$$
The estimation of equation (15) using standard methods poses two problems: firstly, there
are brand fixed-effects; in addition, prices may be correlated with the random component φit
(see chapter 13 of Train (2009) for details) and so there is endogeneity.
In order to take into account the existence of brand-specific effects, we estimate equation (15) using first differences:
$$\Delta \log y_{it} = \eta\, \Delta \log p_{it} + \Delta \phi_{it}. \quad (16)$$
We estimate equation (16) using Instrumental Variables (IV) in order to take into account
the endogeneity in prices. We use the Two-Stage Least Squares (2SLS) estimator in which
log pit is instrumented by log pit−2 and log pit−3 . The results are shown in table 6.
Table 6: Results for the IV Estimation

Parameter   Coefficient   St. Error   t-test   P > |t-test|
η           -0.0136115    0.004796    -2.84    0.005

Sargan test χ²(1) = 0.664112
Prob > χ² = 0.4151
The demand coefficient η has the expected sign and it is statistically significant at the
usual significance levels. The Sargan test for overidentifying restrictions shows that the null
hypothesis is not rejected, supporting the validity of the instruments used.
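For completeness, 2SLS with a single endogenous regressor can be written in a few lines. The sketch below is our illustration, not the estimator the authors actually ran; it omits the standard-error and Sargan-test computations.

```python
import numpy as np

def tsls(y, x, Z):
    """Two-stage least squares for one endogenous regressor, as in (16).

    y: differenced log sales; x: differenced log prices;
    Z: instrument matrix with columns log p_{it-2} and log p_{it-3}.
    """
    Zc = np.column_stack([np.ones(len(y)), Z])                 # first stage includes a constant
    x_hat = Zc @ np.linalg.lstsq(Zc, x, rcond=None)[0]         # fitted endogenous regressor
    beta = np.linalg.lstsq(x_hat[:, None], y, rcond=None)[0]   # second stage
    return beta[0]
```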
References
[1] Aguirregabiria, V. (1999), “The Dynamics of Markups and Inventories in Retailing
Firms”, Review of Economic Studies, 66, 275-308
[2] Aguirregabiria, V. and Mira, P. (2009), “Dynamic Discrete Choice Structural Models: A
Survey”, Journal of Econometrics, forthcoming
[3] Ching, A., Imai, S., Ishihara, M. and Jain, N. (2009), “A Practitioner’s Guide to Bayesian
Estimation of Discrete Choice Dynamic Programming Models”, Available at SSRN:
http://ssrn.com/abstract=1398444
[4] Fare, R., Grosskopf, S. and Weber, W. (2004), “The Effect of Risk-Based Capital Requirements on Profit Efficiency in Banking”, Applied Economics, 36, 1731-1743
[5] Imai, S., Jain, N. and Ching, A. (2009), “Bayesian Estimation of Dynamic Discrete
Choice Models”, Econometrica, 77, 1865-1899
[6] Johnson, N., Kotz, S. and Balakrishnan, N. (1994), “Continuous Univariate Distributions”, Wiley, New York, 2nd edition

[7] Magnac, T. and Thesmar, D. (2002), “Identifying Dynamic Discrete Decision Processes”,
Econometrica, 70, 801-816
[8] Nemoto, J. and Goto, M. (1999), “Dynamic Data Envelopment Analysis: Modeling
Intertemporal Behavior of a Firm in the Presence of Productive Inefficiencies”, Economics
Letters, 64, 51-56
[9] Nemoto, J. and Goto, M. (2003), “Measurement of Dynamic Efficiency in Production:
An Application of Data Envelopment Analysis to Japanese Electric Utilities”, Journal of
Productivity Analysis, 19, 191-210
[10] Rust, J. (1994), “Structural Estimation of Markov Decision Processes”, Handbook of
Econometrics, 4, Engle, R.F. and McFadden, D. (Eds.), North-Holland, Amsterdam
[11] Rust, J. (1997), “Using Randomization to Break the Curse of Dimensionality”, Econometrica, 65, 487-516
[12] Sanchez-Mangas, R. (2002), “Pseudo-Maximum Likelihood Estimation of a Dynamic
Structural Investment Model”, Working Paper 02-62 (18), Statistics and Econometrics
Series, Universidad Carlos III de Madrid
[13] Silva, E. and Stefanou, S. (2007), “Dynamic Efficiency Measurement: Theory and Application”, American Journal of Agricultural Economics, 89, 398-419
[14] Slade, M. (1998), “Optimal Pricing with Costly Adjustment: Evidence from Retail
Grocery Prices”, Review of Economic Studies, 65, 87-107
[15] Train, K. (2009), “Discrete Choice Methods with Simulation”, Cambridge University
Press, 2nd Edition