The assignment problem is a special type of linear programming problem and a subclass of the transportation problem. Assignment problems are defined with two sets of inputs, i.e. a set of resources and a set of demands. The Hungarian algorithm can solve assignment problems with precisely defined demands and resources. Nowadays, many organizations and competing companies consider the markets for their products. They use many salespersons to improve their organizations' marketing. Salespersons travel from one city to another for their markets. A key problem in travelling is deciding which city each salesperson should visit at minimum cost. So the travelling assignment problem is a main process for many business functions. Mie Mie Aung | Yin Yin Cho | Khin Htay | Khin Soe Myint, "Minimization of Assignment Problems", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26712.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/26712/minimization-of-assignment-problems/mie-mie-aung
This book's reference is Dominick Salvatore's Managerial Economics, chapter 8, which covers the following topics: linear programming, production process, feasible region, optimal solution, objective function, inequality constraints, nonnegativity constraints, decision variables, binding constraints, slack variable, simplex method, primal problem, dual problem, shadow price, duality theorem and logistics management.
The comprehensive stuttering therapy method was translated by Echo Vietnam (www.echovietnam.org) from the book Comprehensive Stuttering Therapy and published with the permission of the author, Phillip J. Roberts. This is one of many free resources supporting people who stutter in Vietnam. Any quotation from this book must clearly cite the book's source. Any request to republish or reprint it… must be approved by Echo Vietnam, represented by its founder, Trương Minh Sử Nhiên.
This document must absolutely not be used for commercial purposes in any form.
On behalf of the Echo Vietnam team, I wish you early success in public and everyday communication. Success will not be far away for anyone who perseveres in seeking it.
I would also like to thank the author, Phillip J. Roberts, for permitting me and the Echo Vietnam organization to translate and publish this book to serve the Vietnamese stuttering community at large.
Saigon, October 2009
Founder of Echo Vietnam
Trương Minh Sử Nhiên
tmsnhien@gmail.com
The self-therapy method for people who stutter was translated by Echo Vietnam (www.echovietnam.org) from the book Self Therapy for Stutterers by MALCOLM FRASER and published with the permission of the organization www.stutteringhelps.org. This is one of many free resources supporting people who stutter in Vietnam. Any quotation from this book must clearly cite the book's source. Any request to republish or reprint it… must be approved by Echo Vietnam, represented by its founder, Trương Minh Sử Nhiên.
This document must absolutely not be used for commercial purposes in any form.
On behalf of the Echo Vietnam team, I wish you early success in public and everyday communication. Success will not be far away for anyone who perseveres in seeking it.
I would also like to thank the Stuttering Helps organization for permitting me and the Echo Vietnam organization to translate and publish this book to serve the Vietnamese stuttering community at large.
Saigon, 22 October 2010
Founder of Echo Vietnam
Trương Minh Sử Nhiên
tmsnhien@gmail.com
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field of Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Sequences classification based on group technology for flexible manufacturing... (eSAT Journals)
Abstract: Flexible cell formation is based on Group Technology. Group Technology rests on the exploitation of resemblances between products or processes, which makes the identification of product families and machine cells easier. We propose a new approach based on language theory for product family grouping according to their manufacturing sequences. This approach uses linear sequences of the manufactured products, which are assimilated to the words of a language. We have chosen the Levenshtein distance for sequence classification. We compare our method to the Dice-Czekanowski and Jaccard methods and apply the vectorial correlation coefficient as a comparison tool between two hierarchical classifications. Keywords: manufacturing sequences, language theory, hierarchical classification, Group Technology.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVES (Zac Darcy)
Polygonal approximation plays a vital role in ubiquitous applications like multimedia, geographic information systems and object recognition. An extensive number of polygonal approximation techniques for digital planar curves have been proposed over the last decade, but there are no survey papers on recently proposed techniques. A polygon is a collection of edges and vertices. Objects are represented using edges and vertices or contour points (i.e. a polygon). Polygonal approximation represents the object with a smaller number of dominant points (fewer edges and vertices), which reduces computation time and memory usage. This paper presents a comparative study of polygonal approximation techniques for digital planar curves with respect to their computation and efficiency.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes much work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I have been wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure-operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
3. Outline: Distance functions; Cost frontiers; Decomposing cost efficiency; Scale efficiency; Panel data models; Accounting for the production environment; Conclusions. ECON377/477 Topic 4.2
4. Distance functions. Distance functions can be used to estimate the characteristics of multiple-output production technologies in cases where we have no price information and/or it is inappropriate to assume firms minimise costs or maximise revenues. Examples arise when an industry is regulated. Input distance functions tend to be used instead of output distance functions when firms have more control over inputs than outputs, and vice versa. We consider only input distance functions.
5. Distance functions. Assume we have access to cross-sectional data on I firms. An input distance function defined over M outputs and N inputs takes the form diI = dI(x1i, …, xNi, q1i, …, qMi), where xni is the n-th input of firm i; qmi is the m-th output; and diI ≥ 1 is the maximum amount by which the input vector can be radially contracted without changing the output vector.
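The definition displayed on this slide did not survive extraction; in the notation of the surrounding slides it is presumably:

```latex
d_i^I = d^I(x_{1i},\dots,x_{Ni},\, q_{1i},\dots,q_{Mi})
      = \max\left\{\rho : \left(\mathbf{x}_i/\rho\right) \text{ can produce } \mathbf{q}_i\right\} \ge 1
```

Contracting the input vector by the factor diI leaves the output vector producible, which is the radial contraction described above.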
6. Distance functions. The function dI(.) is non-decreasing, linearly homogeneous and concave in inputs, and non-increasing and quasi-concave in outputs. The first step in econometric estimation of an input distance function is to choose a functional form for dI(.). It is convenient to choose a functional form that expresses the log-distance as a linear function of (transformations of) inputs and outputs.
7. Distance functions. For example, if we choose the Cobb-Douglas functional form, then the model becomes log-linear in inputs and outputs, where vi is a random variable introduced to account for errors of approximation and other sources of statistical noise. This function is non-decreasing, linearly homogeneous and concave in inputs if βn ≥ 0 for all n and if the βn sum to one.
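The Cobb-Douglas specification this slide refers to (a reconstruction of the missing display, in the notation already introduced):

```latex
\ln d_i^I = \beta_0 + \sum_{m=1}^{M}\alpha_m \ln q_{mi} + \sum_{n=1}^{N}\beta_n \ln x_{ni} + v_i,
\qquad \sum_{n=1}^{N}\beta_n = 1
```

The summation constraint is the linear-homogeneity condition the slide's final clause alludes to.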
8. Distance functions. It is also quasi-concave in outputs if non-linear functions of the first- and second-order derivatives of diI with respect to the outputs are non-negative. Econometric estimation would be reasonably straightforward were it not for the fact that the dependent variable is unobserved.
9. Distance functions. Some substitution and re-arrangement enables us to obtain a homogeneity-constrained model in which ui ≡ ln diI is a non-negative variable associated with technical inefficiency. Our decision to express ln diI as a linear function of inputs and outputs results in a model that is in the form of the stochastic production frontier.
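The homogeneity-constrained form is presumably obtained by imposing the constraint and moving ln x1i to the left-hand side, which yields a stochastic-frontier-style regression:

```latex
-\ln x_{1i} = \beta_0 + \sum_{m=1}^{M}\alpha_m \ln q_{mi}
            + \sum_{n=2}^{N}\beta_n \ln\!\left(\frac{x_{ni}}{x_{1i}}\right) + v_i - u_i,
\qquad u_i \equiv \ln d_i^I \ge 0
```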
10. Distance functions. This model is discussed in Part 1 of this topic. It follows that we can estimate the parameters of the model using the ML technique that is also discussed in Part 1. A radial input-oriented measure of technical efficiency is TEi = 1/diI = exp(-ui). But there are two common problems in the estimation of distance functions.
11. Distance functions. These problems are: (1) the explanatory variables may be correlated with the composite error term; (2) estimated input distance functions often fail to satisfy the concavity and quasi-concavity properties implied by economic theory. A solution to the first problem is to estimate the model in an instrumental variables framework. A solution to the second problem is to impose regularity conditions by estimating the model in a Bayesian framework.
12. Cost frontiers. When price data are available and it is reasonable to assume firms minimise costs, we can estimate the economic characteristics of the production technology (and predict cost efficiency) using a cost frontier. In the case where we have cross-sectional data, the cost frontier model can be written in the general form ci ≥ c(w1i, w2i, …, wNi, q1i, q2i, …, qMi).
13. Cost frontiers. In this equation, ci is the observed cost of firm i, wni is the n-th input price and qmi is the m-th output. Note that c(.) is a cost function that is non-decreasing, linearly homogeneous and concave in prices. The implication of the equation is that observed cost is greater than or equal to minimum cost. The first step in estimating the relationship is to specify a functional form for c(.).
14. Cost frontiers. The Cobb-Douglas cost frontier model expresses log-cost as a linear function of log input prices and log outputs, where vi is a symmetric random variable representing errors of approximation and other sources of statistical noise, and ui is a non-negative variable representing inefficiency. This function is non-decreasing, linearly homogeneous and concave in prices if the βn are non-negative and sum to one.
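A reconstruction of the missing Cobb-Douglas cost frontier display, using the symbols defined on the previous slides:

```latex
\ln c_i = \beta_0 + \sum_{n=1}^{N}\beta_n \ln w_{ni} + \sum_{m=1}^{M}\alpha_m \ln q_{mi} + v_i + u_i,
\qquad \sum_{n=1}^{N}\beta_n = 1
```

Note that the inefficiency term enters with a positive sign here, since inefficiency raises cost above the frontier.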
15. Cost frontiers. A translog model is obtained in a similar way. Both models can be written in a compact form with deterministic, noise and inefficiency components. A measure of cost efficiency is the ratio of minimum cost to observed cost, which can be easily shown to be CEi = exp(-ui). Check CROB (pp. 267-269), where they present annotated SHAZAM output from the estimation of a half-normal translog cost frontier defined over a single output and three inputs.
16. Decomposing cost efficiency. When we have data on input quantities or cost shares, cost efficiency can be decomposed into technical and allocative efficiency components. One approach involves estimating a cost frontier together with a subset of cost-share equations. We focus on a slightly different decomposition method, estimating a production frontier together with a subset of the first-order conditions for cost minimisation.
17. Decomposing cost efficiency. Consider a single-output Cobb-Douglas production frontier. Minimising cost subject to this technology constraint entails writing out the Lagrangean and setting the first-order derivatives to zero. Taking the logarithm of the ratio of the first and n-th of these first-order conditions yields a set of allocative-efficiency equations for n = 2, …, N.
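The first-order-condition ratio equations the slide refers to, reconstructed for the Cobb-Douglas case (the sign convention matches the next slide: ηni > 0 when input 1 is over-utilised relative to input n):

```latex
\ln\!\left(\frac{x_{1i}}{x_{ni}}\right)
 = \ln\!\left(\frac{\beta_1 w_{ni}}{\beta_n w_{1i}}\right) + \eta_{ni},
\qquad n = 2,\dots,N
```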
18. Decomposing cost efficiency. In this equation, ηni is a random error term introduced to represent allocative inefficiency. It is positive, negative or zero depending on whether the firm over-utilises, under-utilises or correctly utilises input 1 relative to input n. A firm is regarded as being allocatively efficient if and only if ηni = 0 for all n. Observe that inputs appear in ratio form.
19. Decomposing cost efficiency. Thus, a radial expansion in the input vector (an increase in technical inefficiency) will not cause a departure from the first-order conditions. But a change in the input mix (allocative inefficiency) clearly will. We can estimate the N equations by ML under the (reasonable) assumptions that the vis, uis and ηnis are iid as univariate normal, half-normal and multivariate normal random variables, respectively.
21. Decomposing cost efficiency. CROB (p. 271) show that the cost function and its associated system can be written in a form whose error contains separate technical-efficiency and allocative-efficiency terms, where α is a non-linear function of the βn.
22. Decomposing cost efficiency. The term ui/r measures the increase in log-cost due to technical inefficiency. The term Ai – ln r measures the increase due to allocative inefficiency. A measure of cost efficiency is the ratio of minimum cost to observed cost: CEi = CTEi × CAEi, where the component CTEi = exp(–ui/r) is due to technical inefficiency, and the component CAEi = exp(ln r – Ai) is due to allocative inefficiency.
23. Decomposing cost efficiency. We can obtain point predictions for CTEi and CAEi by substituting predictions for ui and ηni into these expressions. If the technology exhibits constant returns to scale (r = 1), then CTEi = TEi = exp(–ui) and CAEi = AEi ≡ exp(–Ai). Thus, CEi = TEi × AEi, which is the familiar expression from Topic 2.
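A minimal numerical sketch of this decomposition. The values of ui, Ai and the returns-to-scale parameter r are illustrative, not taken from CROB:

```python
import math

def cost_efficiency(u_i, A_i, r):
    """Decompose cost efficiency into technical and allocative parts.

    CTE = exp(-u_i / r)    -- cost increase due to technical inefficiency
    CAE = exp(ln r - A_i)  -- cost increase due to allocative inefficiency
    CE  = CTE * CAE        -- overall cost efficiency
    """
    cte = math.exp(-u_i / r)
    cae = math.exp(math.log(r) - A_i)
    return cte, cae, cte * cae

# Illustrative values: u_i = 0.2, A_i = 0.1, r = 1 (constant returns to scale)
cte, cae, ce = cost_efficiency(0.2, 0.1, 1.0)

# Under r = 1 the decomposition collapses to CE = TE * AE = exp(-u_i) * exp(-A_i)
assert abs(cte - math.exp(-0.2)) < 1e-12
assert abs(cae - math.exp(-0.1)) < 1e-12
assert abs(ce - math.exp(-0.3)) < 1e-12
```

The assertions confirm the slide's special case: with constant returns to scale, the cost-efficiency decomposition reduces to the familiar CE = TE × AE.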
24. Decomposing cost efficiency. CROB illustrate the method and present annotated SHAZAM output in Table 10.2 from the estimation of a three-input Cobb-Douglas production frontier and the decomposition of cost efficiency into its two components. For simplicity, they estimate the production frontier in a single-equation framework, although more efficient estimators could be obtained by estimating the frontier in a seemingly unrelated regression framework.
25. Scale efficiency. To measure scale efficiency, we must have a measure of productivity and a method for identifying the most productive scale size (MPSS). In the case of a single-input production function, we can measure productivity using the average product, AP(x). The MPSS is the point of maximum AP(x). The first-order condition for a maximum can be easily rearranged to show that the MPSS is the point where the elasticity of scale is 1 and the firm experiences local constant returns to scale.
26. Scale efficiency. To measure scale efficiency, we set the elasticity of scale to 1 and solve for the MPSS, denoted x*. Scale efficiency at any input level x is the ratio AP(x)/AP(x*). This procedure generalises to the multiple-input case, although a measure of productivity is a little more difficult to conceptualise.
27. Scale efficiency. Think of the input vector x as one unit of a composite input, so that kx represents k units of input. A measure of productivity is the ray average product (RAP). Set the elasticity of scale to 1 and solve for the optimal number of units of the composite input, denoted k*.
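The ray average product the slide defines is, in symbols (a reconstruction of the missing display, with f(.) denoting the production frontier):

```latex
RAP(k) = \frac{f(k\mathbf{x})}{k}
```

This is the multiple-input analogue of the average product AP(x) used in the single-input case.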
28. Scale efficiency. A measure of scale efficiency at input level kx is SE(kx) = RAP(k)/RAP(k*), or, if k = 1, SE(x) = RAP(1)/RAP(k*). A solution can be obtained for a translog functional form and the associated measure of scale efficiency derived.
29. Scale efficiency. If the production frontier takes the translog form, a closed-form scale efficiency measure can be derived in terms of the elasticity of scale and the second-order coefficients.
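A reconstruction of the two missing displays, consistent with the β < 0 discussion on the next slide. The translog production frontier is:

```latex
\ln q_i = \beta_0 + \sum_{n=1}^{N}\beta_n \ln x_{ni}
        + \tfrac{1}{2}\sum_{n=1}^{N}\sum_{m=1}^{N}\beta_{nm}\,\ln x_{ni}\,\ln x_{mi} + v_i - u_i
```

and the associated scale efficiency measure is:

```latex
SE(\mathbf{x}) = \exp\!\left\{\frac{\left[1-\varepsilon(\mathbf{x})\right]^2}{2\beta}\right\},
\qquad \beta = \sum_{n=1}^{N}\sum_{m=1}^{N}\beta_{nm}
```

Since β < 0 under concavity, the exponent is non-positive and SE(x) ≤ 1, with equality when ε(x) = 1 (the MPSS).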
30. Scale efficiency. Now, ε(x) is the elasticity of scale evaluated at x, and β is the sum of the second-order coefficients βnm. If the production frontier is concave in inputs, β will be less than zero and the scale efficiency measure will be less than or equal to one.
34. Panel data models. They also enable us to investigate changes in the underlying production technology over time. A panel data model can be written by adding a subscript 't' to the cross-sectional model to represent time. If we assume the vits and uits are independently distributed, we can estimate the parameters of this model using the methods described in Topic 4.1.
35. Panel data models. A problem with assuming the uits are independently distributed is that we fail to reap any of the benefits listed above. Moreover, for many industries the independence assumption is unrealistic: all other things being equal, we expect efficient firms to remain reasonably efficient from period to period, and we hope that inefficient firms improve their efficiency levels over time. For these reasons, we need to impose some structure on the inefficiency effects.
36. Panel data models. It is common to classify different structures on the inefficiency effects according to whether they are time-invariant or time-varying. One of the simplest structures we can impose on the inefficiency effects is uit = ui, i = 1, …, I; t = 1, …, T, where ui is treated as either a fixed parameter or a random variable. These models are known as the fixed effects model and the random effects model, respectively.
37. Panel data models. The fixed effects model can be estimated in a standard regression framework using dummy variables. The estimated model can only be used to measure efficiency relative to the most efficient firm in the sample, so our estimates may be unreliable if the number of firms is small. The random effects model can be estimated using either least squares or ML techniques.
38. Panel data models. The ML approach involves making stronger distributional assumptions concerning the ui. Estimating models in a random effects framework using the ML method allows us to disentangle the effects of inefficiency and technological change.
39. Panel data models. The likelihood function for this model is a generalisation of the likelihood function for the half-normal stochastic frontier model discussed in Topic 4.1. Formulas for firm-specific and industry efficiencies are also generalisations of the formulas presented in Topic 4.1. The hypothesis testing procedures discussed in Topic 4.1 are also applicable.
40. Panel data models. Models with time-invariant inefficiency effects can be conveniently estimated using FRONTIER and LIMDEP. CROB illustrate this estimation in Table 10.3, which contains annotated FRONTIER output from the estimation of a truncated-normal frontier. Note that significant differences exist between the first-order coefficient estimates reported in this table and those reported in Table 9.6, where no account is taken of the panel nature of the data.
41. Panel data models. Two models that allow for time-varying technical inefficiency are the Kumbhakar model and the Battese and Coelli model, where α, β and η are unknown parameters to be estimated. The Battese and Coelli function involves only one unknown parameter (η), and is less flexible.
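The two missing model displays, in the usual statements of these models and with the parameters named on the slide:

```latex
\text{Kumbhakar:}\qquad u_{it} = \left[1 + \exp\left(\alpha t + \beta t^{2}\right)\right]^{-1} u_i
```

```latex
\text{Battese and Coelli:}\qquad u_{it} = \exp\left\{-\eta\,(t - T)\right\} u_i
```

In both cases the time path of inefficiency is a common scaling of a firm-specific ui; the Battese and Coelli form uses the single parameter η.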
42. Panel data models. A limitation of both functions is that they do not allow for a change in the rank ordering of firms over time. The firm that is ranked n-th in the first time period is always ranked n-th. That is, if ui < uj, then uit < ujt for all t.
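The rank-preservation property can be checked numerically for the Battese and Coelli form. The parameter values here (η = 0.2, T = 5, ui = 0.1 and 0.3) are purely illustrative:

```python
import math

def u_bc(u_i, t, T, eta):
    """Battese and Coelli time-varying inefficiency: u_it = exp(-eta*(t-T)) * u_i."""
    return math.exp(-eta * (t - T)) * u_i

# Two firms with u_i = 0.1 and u_j = 0.3 over T = 5 periods, eta = 0.2
T, eta = 5, 0.2
u_i, u_j = 0.1, 0.3
for t in range(1, T + 1):
    # The scaling factor exp(-eta*(t-T)) is positive and common to all firms,
    # so u_it < u_jt in every period whenever u_i < u_j: ranks never change.
    assert u_bc(u_i, t, T, eta) < u_bc(u_j, t, T, eta)
```

The same argument applies to the Kumbhakar form, since its scaling factor is also positive and common to all firms.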
43. Panel data models. The Kumbhakar and the Battese and Coelli models can both be estimated under the assumption that ui has a truncated normal distribution. Again, the likelihood function is a generalisation of the likelihood function for the half-normal stochastic frontier model, as are the formulas for firm-specific and industry efficiencies. Hypotheses concerning individual coefficients can be tested using a z test or LR test, but an LR test is usually used if there is more than one coefficient in the test.
45. H0: µ = 0 (half-normal inefficiency effects at time period T). CROB present annotated FRONTIER output from the estimation of a frontier in Table 10.4. They are unable to reject either of the null hypotheses that the technological change effect is zero and that η = 0.
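The mechanics of such an LR test can be sketched as follows. The log-likelihood values below are hypothetical, chosen only to illustrate the computation; with one restriction the statistic is compared against the chi-square(1) critical value:

```python
# Hypothetical maximised log-likelihoods under the null and the alternative.
loglik_restricted = -105.2    # restricted model (e.g. H0: mu = 0, half-normal)
loglik_unrestricted = -102.8  # unrestricted truncated-normal model

# Likelihood ratio statistic: LR = -2 * (lnL_restricted - lnL_unrestricted).
lr_stat = -2.0 * (loglik_restricted - loglik_unrestricted)

# One restriction: compare with the chi-square(1) 5% critical value, 3.841.
critical_value = 3.841
reject_h0 = lr_stat > critical_value
```

With these illustrative numbers LR = 4.8 > 3.841, so the restriction would be rejected at the 5% level; CROB's actual Table 10.4 results go the other way for the hypotheses discussed above.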
46. Panel data models
These hypothesis test results suggest that the model is having difficulty distinguishing between output increases due to technological progress and output increases due to improvements in technical efficiency. Several more flexible models are discussed in the efficiency literature. Notably, Cuesta (2000) specifies a model of the form u_it = exp(−η_i(t − T)) u_i, which generalises the Battese and Coelli model by giving each firm its own decay parameter η_i, allowing the temporal pattern of inefficiency effects to vary across firms.
47. Accounting for the production environment
The ability of a manager to convert inputs into outputs is often influenced by exogenous variables that characterise the environment in which production takes place. It is useful to distinguish between non-stochastic variables that are observable at the time key production decisions are made and unforeseen stochastic variables that can be regarded as sources of production risk (events of any type that might lead managers to seek some form of liability insurance).
48. Accounting for the production environment
The simplest way to account for non-stochastic environmental variables is to incorporate them directly into the non-stochastic component of the production frontier. In the case of cross-sectional data, this leads to a model of the form ln q_i = x_i'β + z_i'γ + v_i − u_i, where z_i is a vector of (transformations of) environmental variables and γ is a vector of unknown parameters.
49. Accounting for the production environment
This model has exactly the same error structure as the conventional stochastic frontier model discussed in Topic 4.1. Thus, all the estimators and testing procedures discussed in that part of the topic are available. Our predictions of firm-specific technical efficiency now vary with both the traditional inputs and the environmental variables.
50. Accounting for the production environment
The preferred method to deal with observable environmental variables is to allow them to directly influence the stochastic component of the production frontier. Assume v_i ~ N(0, σ_v²) and u_i ~ N⁺(z_i'δ, σ²), where δ is a vector of unknown parameters, so the mean of the inefficiency distribution depends on z_i.
51. Accounting for the production environment
The inefficiency effects in the frontier model have distributions that vary with z_i, so they are no longer identically distributed. The likelihood function is a generalisation of the likelihood function for the conventional model, as are measures of firm-specific and industry efficiency. The model has also been generalised to the panel data case.
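The effect of letting the inefficiency mean depend on z_i can be illustrated by simulation. Everything here is hypothetical: the δ coefficients, the interpretation of z, and the sample sizes are chosen only to show that harsher environments (larger z) shift the inefficiency distribution upward:

```python
import math
import random

random.seed(1)

def draw_nonneg_normal(mu, sigma):
    """One draw from N(mu, sigma^2) truncated below at zero (rejection)."""
    while True:
        x = random.gauss(mu, sigma)
        if x >= 0.0:
            return x

# Hypothetical delta coefficients: the inefficiency mean for firm i is
# mu_i = delta0 + delta1 * z_i, as in the environmental-mean specification.
delta0, delta1, sigma_u = 0.05, 0.30, 0.10
z = [0.0, 0.5, 1.0]  # e.g. a hypothetical remoteness or regulation index

mean_u = []
for zi in z:
    mu_i = delta0 + delta1 * zi
    draws = [draw_nonneg_normal(mu_i, sigma_u) for _ in range(2000)]
    mean_u.append(sum(draws) / len(draws))

# Expected inefficiency rises with z, so predicted technical efficiency
# exp(-u) falls as the environment becomes less favourable.
monotone = mean_u[0] < mean_u[1] < mean_u[2]
```

This is why the inefficiency effects are no longer identically distributed: each firm faces its own truncation mean µ_i.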
52. Accounting for the production environment
A simple way to account for production risk is to append another random variable to the frontier model to represent the combined effects of any variables that are unobserved at the time input decisions are made. If we assume this random variable has a symmetric distribution, then it is difficult to distinguish it from the noise v_i. Alternatively, if we assume it has a non-negative distribution, it is difficult to distinguish it from the inefficiency effect u_i.
54. The model does not permit substitutability between state-contingent outputs.
55. Accounting for the production environment
One way to overcome the first problem is to assume the composed error term is heteroskedastic. One way to allow for substitution between state-contingent outputs is to estimate a state-contingent stochastic frontier of the form ln q_i = x_i'β_j + v_i − u_i, where β_j is a vector of unknown parameters and v_i and u_i represent noise and inefficiency, respectively (but not risk).
56. Accounting for the production environment
This model is identical to the conventional stochastic frontier model, except that the coefficient vector β_j is permitted to vary across risky states of nature, j = 1, …, J. Estimation is complicated by the fact that states of nature are typically unobserved or data are sparse. This problem can be overcome by estimating the model in a Bayesian mixtures framework, and using this model to identify output shortfalls due to inefficiency and output shortfalls due to adverse conditions.
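The mixtures idea can be sketched without any Bayesian machinery: treat the observed (log) output as a probability-weighted mixture of state-specific frontiers, then compute the posterior probability of each state for an observation. All numbers below (the two β vectors, the state probabilities, σ) are hypothetical:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Hypothetical two-state example: the frontier coefficients differ between a
# "good season" and a "bad season", but the state itself is unobserved.
betas = {"good": (1.0, 0.60), "bad": (0.4, 0.55)}  # (intercept, slope) per state
probs = {"good": 0.7, "bad": 0.3}                  # prior state probabilities
sigma = 0.2                                        # error spread (noise only here)

def mixture_density(ln_q, ln_x):
    """Density of log output as a probability-weighted mixture over states."""
    return sum(
        probs[j] * normal_pdf(ln_q, b0 + b1 * ln_x, sigma)
        for j, (b0, b1) in betas.items()
    )

# Posterior probability that a low-output observation came from the "bad"
# state: this is how the mixture separates adverse conditions from inefficiency.
ln_q, ln_x = 0.9, 1.0
b0, b1 = betas["bad"]
post_bad = probs["bad"] * normal_pdf(ln_q, b0 + b1 * ln_x, sigma) / mixture_density(ln_q, ln_x)
```

Here the observation sits far below the good-state frontier but close to the bad-state frontier, so the mixture attributes the shortfall mainly to the adverse state rather than to inefficiency.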
57. Conclusions
Two other possible methods for estimating multiple-output technologies are not discussed. First, we can use profit frontiers when input and output prices are available and it is reasonable to assume firms maximise profits; methods to estimate profit frontiers are similar to those available for estimating cost frontiers. Second, we can aggregate multiple outputs into a single output measure using index number methods, and estimate the technology in a conventional single-output framework.
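One common index-number choice for the second approach is a Törnqvist output index, which aggregates output quantity changes using average revenue shares as weights. The quantities and prices below are hypothetical, purely to show the computation:

```python
import math

# Hypothetical two-output firm observed against a base firm/period:
# (quantity, price) for each output.
base = {"crops": (100.0, 2.0), "livestock": (50.0, 4.0)}
obs = {"crops": (110.0, 2.1), "livestock": (60.0, 3.9)}

def revenue_shares(data):
    """Revenue share of each output in total revenue."""
    total = sum(q * p for q, p in data.values())
    return {m: q * p / total for m, (q, p) in data.items()}

s_base, s_obs = revenue_shares(base), revenue_shares(obs)

# Tornqvist output index:
# ln Q = sum_m 0.5 * (s_m_base + s_m_obs) * ln(q_m_obs / q_m_base)
ln_q_index = sum(
    0.5 * (s_base[m] + s_obs[m]) * math.log(obs[m][0] / base[m][0])
    for m in base
)
output_index = math.exp(ln_q_index)
```

The resulting scalar index can then serve as the dependent variable in a conventional single-output frontier.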
58. Conclusions
The decision to estimate a distance function, cost frontier, profit frontier or single-output production frontier is one of the many decisions facing researchers who want to estimate efficiency using a parametric approach. Researchers must also make choices concerning functional forms, error distributions, estimation methods and software. The need to make so many choices is often seen as a disadvantage of the parametric approach.
59. Conclusions
We have two simple pieces of advice:
Always make decisions on a case-by-case basis.
Whenever possible, explore alternative models and estimation methods and (formally or informally) assess the adequacy and robustness of the results obtained.