How to Analyze the Results of Linear
Programs—Part 1: Preliminaries
HARVEY J. GREENBERG
Mathematics Department
University of Colorado at Denver
PO Box 173364
Denver, Colorado 80217-3364
In a four-part series, I describe ways to analyze the results of linear programs beyond what is commonly described in textbooks. My intent is to capture the thought process in analysis with two objectives. First, I want to provide a guide to those getting started in applications of linear programming by suggesting useful ways of looking at the results. Second, I want to help create an artificially intelligent environment for the analysis of results by presenting a protocol that a knowledge engineer can use. The former has been in the folklore for decades; the latter is part of a project to develop an intelligent mathematical programming system. This first part of the series contains basic terms and concepts used in the other three parts: price interpretation, infeasibility diagnosis, and forcing substructures.
A great deal of research and development activity in large-scale linear programming (LP) has been devoted to solving problems faster. A medium-size problem by today's standards contains about 5,000 equations and 20,000 variables. Even microcomputer versions can handle thousands of equations and variables, and supercomputers have been used for problems with millions of variables! How can we understand the results? At one level, in the interests of model management, we must verify that the solution obtained makes sense with respect to the problem represented by the linear program.

Copyright © 1993, The Institute of Management Sciences
0091-2102/93/2304/0056$01.25
This paper was refereed.

PROGRAMMING—LINEAR
INTERFACES 23: 4 July-August 1993 (pp. 56-67)
Once we think we have a good run, we must delve into the meaning of a solution. Questions of sensitivity play a direct role, such as What if . . . ? and Why . . . ? For example, we may ask the following. What if the demand for a commodity increases? What if capacity is expanded? What if some resource is made available? Why did this plant not operate? Why is total production so low? Why is the price of some commodity so large? Why does a certain flow pattern occur? Is it preferred to others because of the economic trade-off, or are the flows forced by the constraints?
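These what-if questions can be made concrete on a toy model. The sketch below is my own illustrative example, not taken from the article: the unit profits and capacity figures are invented. It solves a two-variable LP by enumerating vertices of the feasible region (the textbook graphical method) and then answers "What if capacity is expanded?" by re-solving with one more unit of the first resource; the resulting change in the objective is the kind of marginal price the analysis aims to interpret.

```python
# Toy production LP: maximize c.x subject to A.x <= b, x >= 0, two variables.
# Solved by enumerating candidate vertices (intersections of constraint lines).
# All numbers are hypothetical, chosen only to illustrate a what-if question.
from itertools import combinations

def solve_lp(c, A, b):
    """Return (best objective value, best vertex) for max c.x, A.x <= b, x >= 0."""
    # Express every constraint, including non-negativity, as a row a.x <= rhs.
    rows = [(ai, bi) for ai, bi in zip(A, b)]
    rows += [((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0)]
    best = (float("-inf"), None)
    # Candidate vertices: intersections of every pair of constraint lines.
    for (a1, b1), (a2, b2) in combinations(rows, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-9:
            continue  # parallel lines, no intersection
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        # Keep the point only if it satisfies every constraint (feasible vertex).
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in rows):
            val = c[0] * x + c[1] * y
            if val > best[0]:
                best = (val, (x, y))
    return best

c = (3.0, 2.0)                # unit profits of two products (hypothetical)
A = [(1.0, 1.0), (1.0, 3.0)]  # machine-hour and material usage per unit
b = [4.0, 6.0]                # available machine hours and material

val, x = solve_lp(c, A, b)
# "What if capacity is expanded?" -- re-solve with one more machine hour.
val_plus, _ = solve_lp(c, A, [b[0] + 1.0, b[1]])
shadow = val_plus - val       # marginal value of one machine hour
print(val, x, shadow)
```

For a nondegenerate binding constraint, the difference computed this way agrees with the dual price an LP solver would report; re-solving like this is a crude but transparent way to answer one what-if question at a time.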
Textbook wisdom does not go far enough in answering these questions in practical terms (see Gal [1979] for an excellent mathematical treatment). Also, once an answer is obtained in some mathematical way, how can we present the answer to problem owners who might not know linear programming? We must be able to look at different views of linear programs and their pieces, for example, using graphic techniques to present information about flows.
Before we can venture into this world of analysis, we must understand how linear programs are constructed. In this overview, I describe and illustra ...
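To fix notation for the kind of problem under discussion, a linear program in inequality form can be written as follows. This is a standard textbook formulation, not a formula quoted from this article:

```latex
\max_{x \in \mathbb{R}^n} \; c^{\mathsf{T}} x
\quad \text{subject to} \quad
A x \le b, \qquad x \ge 0,
```

where $A$ is an $m \times n$ matrix of constraint coefficients ($m$ equations, $n$ variables), $b$ the right-hand side of resource limits, and $c$ the objective coefficients; the sensitivity questions above concern how the optimal value responds to changes in $b$ and $c$.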
Master of Computer Application (MCA) – Semester 4 MC0079Aravind NC
The document describes mathematical models and provides examples of different types of models. It discusses linear vs nonlinear models, deterministic vs probabilistic models, static vs dynamic models, discrete vs continuous models, and deductive vs inductive vs floating models. It also explains the Erlang family of distributions used in queuing systems and provides the probability density function and cumulative distribution function. Finally, it outlines the graphical method algorithm for solving a linear programming problem with two variables in 8 steps.
This document provides an overview of linear programming, including its history, key components, assumptions, and applications. Linear programming involves maximizing or minimizing a linear objective function subject to linear constraints. It was developed in 1947 and can be used to optimize problems involving allocation of limited resources. The key components of a linear programming problem are the objective function, decision variables, constraints, and parameters. It makes assumptions of proportionality, additivity, continuity, determinism, and finite choices. Common applications of linear programming include production planning, facility location, and transportation problems.
Linear programming is a mathematical modeling technique useful for allocating scarce or limited resources to competing activities based on an optimality criterion. There are four key components of any linear programming model: decision variables, objective function, constraints, and non-negativity assumptions. Linear programming models make simplifying assumptions like certainty of parameters, additivity, linearity/proportionality, and divisibility of decision variables. The technique helps decision-makers use resources effectively and arrive at optimal solutions subject to constraints, but it has limitations if variables are not continuous or parameters uncertain.
Mc0079 computer based optimization methods--phpapp02Rabby Bhatt
This document discusses mathematical models and provides examples of different types of mathematical models. It begins by defining a mathematical model as a description of a system using mathematical concepts and language. It then classifies mathematical models in several ways, such as linear vs nonlinear, deterministic vs probabilistic, static vs dynamic, discrete vs continuous, and deductive vs inductive vs floating. The document provides examples and explanations of each type of model. It also discusses using finite queuing tables to analyze queuing systems with a finite population size. In summary, the document outlines different ways to classify mathematical models and provides examples of applying various types of models.
This document outlines the syllabus for the course GE3151 Problem Solving and Python Programming. It includes 5 units that cover topics like computational thinking, Python data types, control flow, functions, lists, tuples, dictionaries, files and modules. The objectives of the course are to understand algorithmic problem solving, learn to solve problems using Python conditionals and loops, define functions and use data structures like lists and tuples. It also aims to teach input/output with files in Python. The document provides the number of periods (45) and textbooks recommended for the course.
The document describes an assignment given to Md. Mehedi Hasan on the topic of applying numerical methods in computer science engineering. The assignment was given by five students and includes an index listing numerical methods to cover: error analysis, N-R method, interpolation, differentiation and max/min, curve fitting, and integration.
Using Met-modeling Graph Grammars and R-Maude to Process and Simulate LRN ModelsWaqas Tariq
Nowadays, code mobility technology is one of the most attractive research domains. Numerous domains are concerned, many platforms are developed and interest applications are realized. However, the poorness of modeling languages to deal with code mobility at requirement phase has incited to suggest new formalisms. Among these, we find Labeled Reconfigurable Nets (LRN) [9], This new formalism allows explicit modeling of computational environments and processes mobility between them. it allows, in a simple and an intuitive approach, modeling mobile code paradigms (mobile agent, code on demand, remote evaluation). In this paper, we propose an approach based on the combined use of Meta-modeling and Graph Grammars to automatically generate a visual modeling tool for LRN for analysis and simulation purposes. In our approach, the UML Class diagram formalism is used to define a meta-model of LRN. The meta-modeling tool ATOM3 is used to generate a visual modeling tool according to the proposed LRN meta-model. We have also proposed a graph grammar to generate R-Maude [22] specification of the graphically specified LRN models. Then the reconfigurable rewriting logic language R-Maude is used to perform the simulation of the resulted R-Maude specification. Our approach is illustrated through examples.
A review of automatic differentiationand its efficient implementationssuserfa7e73
Automatic differentiation is a powerful tool for automatically calculating derivatives of mathematical functions and algorithms. It works by expressing the target function as a sequence of elementary operations and then applying the chain rule to differentiate each operation. This can be done using either forward or reverse mode. Forward mode calculates how changes in inputs propagate through the function to influence the outputs, while reverse mode calculates how changes in outputs backpropagate to influence the inputs. Both modes require performing the computation twice - once for the forward pass and once for the derivative pass. Careful implementation is required to make automatic differentiation efficient in terms of speed and memory usage.
Master of Computer Application (MCA) – Semester 4 MC0079Aravind NC
The document describes mathematical models and provides examples of different types of models. It discusses linear vs nonlinear models, deterministic vs probabilistic models, static vs dynamic models, discrete vs continuous models, and deductive vs inductive vs floating models. It also explains the Erlang family of distributions used in queuing systems and provides the probability density function and cumulative distribution function. Finally, it outlines the graphical method algorithm for solving a linear programming problem with two variables in 8 steps.
This document provides an overview of linear programming, including its history, key components, assumptions, and applications. Linear programming involves maximizing or minimizing a linear objective function subject to linear constraints. It was developed in 1947 and can be used to optimize problems involving allocation of limited resources. The key components of a linear programming problem are the objective function, decision variables, constraints, and parameters. It makes assumptions of proportionality, additivity, continuity, determinism, and finite choices. Common applications of linear programming include production planning, facility location, and transportation problems.
Linear programming is a mathematical modeling technique useful for allocating scarce or limited resources to competing activities based on an optimality criterion. There are four key components of any linear programming model: decision variables, objective function, constraints, and non-negativity assumptions. Linear programming models make simplifying assumptions like certainty of parameters, additivity, linearity/proportionality, and divisibility of decision variables. The technique helps decision-makers use resources effectively and arrive at optimal solutions subject to constraints, but it has limitations if variables are not continuous or parameters uncertain.
Mc0079 computer based optimization methods--phpapp02Rabby Bhatt
This document discusses mathematical models and provides examples of different types of mathematical models. It begins by defining a mathematical model as a description of a system using mathematical concepts and language. It then classifies mathematical models in several ways, such as linear vs nonlinear, deterministic vs probabilistic, static vs dynamic, discrete vs continuous, and deductive vs inductive vs floating. The document provides examples and explanations of each type of model. It also discusses using finite queuing tables to analyze queuing systems with a finite population size. In summary, the document outlines different ways to classify mathematical models and provides examples of applying various types of models.
This document outlines the syllabus for the course GE3151 Problem Solving and Python Programming. It includes 5 units that cover topics like computational thinking, Python data types, control flow, functions, lists, tuples, dictionaries, files and modules. The objectives of the course are to understand algorithmic problem solving, learn to solve problems using Python conditionals and loops, define functions and use data structures like lists and tuples. It also aims to teach input/output with files in Python. The document provides the number of periods (45) and textbooks recommended for the course.
The document describes an assignment given to Md. Mehedi Hasan on the topic of applying numerical methods in computer science engineering. The assignment was given by five students and includes an index listing numerical methods to cover: error analysis, N-R method, interpolation, differentiation and max/min, curve fitting, and integration.
Using Met-modeling Graph Grammars and R-Maude to Process and Simulate LRN ModelsWaqas Tariq
Nowadays, code mobility technology is one of the most attractive research domains. Numerous domains are concerned, many platforms are developed and interest applications are realized. However, the poorness of modeling languages to deal with code mobility at requirement phase has incited to suggest new formalisms. Among these, we find Labeled Reconfigurable Nets (LRN) [9], This new formalism allows explicit modeling of computational environments and processes mobility between them. it allows, in a simple and an intuitive approach, modeling mobile code paradigms (mobile agent, code on demand, remote evaluation). In this paper, we propose an approach based on the combined use of Meta-modeling and Graph Grammars to automatically generate a visual modeling tool for LRN for analysis and simulation purposes. In our approach, the UML Class diagram formalism is used to define a meta-model of LRN. The meta-modeling tool ATOM3 is used to generate a visual modeling tool according to the proposed LRN meta-model. We have also proposed a graph grammar to generate R-Maude [22] specification of the graphically specified LRN models. Then the reconfigurable rewriting logic language R-Maude is used to perform the simulation of the resulted R-Maude specification. Our approach is illustrated through examples.
A review of automatic differentiationand its efficient implementationssuserfa7e73
Automatic differentiation is a powerful tool for automatically calculating derivatives of mathematical functions and algorithms. It works by expressing the target function as a sequence of elementary operations and then applying the chain rule to differentiate each operation. This can be done using either forward or reverse mode. Forward mode calculates how changes in inputs propagate through the function to influence the outputs, while reverse mode calculates how changes in outputs backpropagate to influence the inputs. Both modes require performing the computation twice - once for the forward pass and once for the derivative pass. Careful implementation is required to make automatic differentiation efficient in terms of speed and memory usage.
- The document discusses compilation analysis and performance analysis of Feel++ scientific applications using Scalasca.
- It presents compilation analysis of Feel++ using examples of mesh manipulation and discusses performance analysis using Feel++'s TIME class or Scalasca instrumentation.
- The document analyzes the laplacian case study in Feel++ using different compilation options and polynomial dimensions and presents results from performance analysis with Scalasca.
LNCS 5050 - Bilevel Optimization and Machine Learningbutest
This document discusses using bilevel optimization and machine learning techniques to improve model selection in machine learning problems. It proposes framing machine learning model selection as a bilevel optimization problem, where the inner level problems involve optimizing models on training data and the outer level problem selects hyperparameters to minimize error on test data. This bilevel framing allows for systematic optimization of hyperparameters and enables novel machine learning approaches. The document illustrates the approach for support vector regression, formulating model selection as a Stackelberg game and solving the resulting mathematical program with equilibrium constraints.
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of explanations’. Analogously to the gradient of abstractions, a gradient of explanations is a sequence of discrete levels of explanation each one refining the previous, varying formalisation, and thus providing progressive evidence for hidden information. Because of this sequential and coherent uncovering of the information that explains a level of abstraction—the heapsort algorithm in our guiding example—the notion of gradient of explanations allows to precisely classify purposes in writing software according to the informal criterion of ‘depth’, and to give a precise meaning to the notion of ‘concreteness’.
Linear Programming Problems {Operation Research}FellowBuddy.com
FellowBuddy.com is an innovative platform that brings students together to share notes, exam papers, study guides, project reports and presentation for upcoming exams.
We connect Students who have an understanding of course material with Students who need help.
Benefits:-
# Students can catch up on notes they missed because of an absence.
# Underachievers can find peer developed notes that break down lecture and study material in a way that they can understand
# Students can earn better grades, save time and study effectively
Our Vision & Mission – Simplifying Students Life
Our Belief – “The great breakthrough in your life comes when you realize it, that you can learn anything you need to learn; to accomplish any goal that you have set for yourself. This means there are no limits on what you can be, have or do.”
Like Us - https://www.facebook.com/FellowBuddycom
AN AI PLANNING APPROACH FOR GENERATING BIG DATA WORKFLOWSgerogepatton
The scale of big data causes the compositions of extract-transform-load (ETL) workflows to grow increasingly complex. With the turnaround time for delivering solutions becoming a greater emphasis, stakeholders cannot continue to afford to wait the hundreds of hours it takes for domain experts to manually compose a workflow solution. This paper describes a novel AI planning approach that facilitates rapid composition and maintenance of ETL workflows. The workflow engine is evaluated on real-world scenarios from an industrial partner and results gathered from a prototype are reported to demonstrate the validity of the approach.
The scale of big data causes the compositions of extract-transform-load (ETL) workflows to grow increasingly complex. With the turnaround time for delivering solutions becoming a greater emphasis,
stakeholders cannot continue to afford to wait the hundreds of hours it takes for domain experts to manually compose a workflow solution. This paper describes a novel AI planning approach that facilitates rapid composition and maintenance of ETL workflows. The workflow engine is evaluated on real-world
scenarios from an industrial partner and results gathered from a prototype are reported to demonstrate the validity of the approach.
Performance Comparision of Machine Learning AlgorithmsDinusha Dilanka
In this paper Compare the performance of two
classification algorithm. I t is useful to differentiate
algorithms based on computational performance rather
than classification accuracy alone. As although
classification accuracy between the algorithms is similar,
computational performance can differ significantly and it
can affect to the final results. So the objective of this paper
is to perform a comparative analysis of two machine
learning algorithms namely, K Nearest neighbor,
classification and Logistic Regression. In this paper it
was considered a large dataset of 7981 data points and 112
features. Then the performance of the above mentioned
machine learning algorithms are examined. In this paper
the processing time and accuracy of the different machine
learning techniques are being estimated by considering the
collected data set, over a 60% for train and remaining
40% for testing. The paper is organized as follows. In
Section I, introduction and background analysis of the
research is included and in section II, problem statement.
In Section III, our application and data analyze Process,
the testing environment, and the Methodology of our
analysis are being described briefly. Section IV comprises
the results of two algorithms. Finally, the paper concludes
with a discussion of future directions for research by
eliminating the problems existing with the current
research methodology.
Surrogate modeling for industrial designShinwoo Jang
We describe GTApprox | a new tool for medium-scale surrogate modeling in industrial design. Compared to existing software, GTApprox brings several innovations: a few novel approximation algorithms, several advanced methods of automated model selection, novel options in the form of hints. We demonstrate the efficiency of GTApprox on a large collection of test problems. In addition, we describe several applications of GTApprox to real engineering problems.
This document provides information about obtaining fully solved assignments from an assignment help service. Students are instructed to send their semester, specialization, and contact details to the provided email address or call the phone number to receive help with their assignments. The document includes sample assignments covering topics in quantitative management, with questions regarding linear programming, inventory management, queuing theory, simulation, game theory, and dynamic programming.
Linear programming class 12 investigatory projectDivyans890
This document provides an introduction to linear programming, including its definition, characteristics, formulation, and uses. Linear programming is a technique for determining an optimal plan that maximizes or minimizes an objective function subject to constraints. It involves expressing a problem mathematically and using linear algebra to determine the optimal values for the decision variables. Common applications of linear programming include production planning, portfolio optimization, and transportation scheduling.
Dimensional analysis means analysis of the dimensions of physical quantities. Dimensional analysis lowers the number of variables in a fluid phenomenon by mixing the some variables to form parameters which have no dimensions.
Linear programming is a mathematical technique used to optimize a linear objective function subject to linear equality and inequality constraints. It has broad applications in business and economics for allocating scarce resources optimally. Some key points:
- George Dantzig developed the simplex method in 1947, making linear programming problems tractable.
- Linear programming is used widely in industries for problems like transportation, production planning, blending, and portfolio selection to maximize profits or minimize costs.
- It provides an objective way to identify bottlenecks and ensure the best use of limited resources like time, labor, and machines.
This document discusses resource optimization and linear programming. It defines optimization as finding the best solution to a problem given constraints. Linear programming is introduced as a mathematical technique to optimize allocation of scarce resources. The key components of a linear programming model are described as decision variables, an objective function, and constraints. Graphical and algebraic methods for solving linear programming problems are also summarized.
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...SSA KPI
The document describes efficient solution methods for two-stage stochastic linear programs (SLPs) using interior point methods. Interior point methods require solving large, dense systems of linear equations at each iteration, which can be computationally difficult for SLPs due to their structure leading to dense matrices. The paper reviews methods for improving computational efficiency, including reformulating the problem, exploiting special structures like transpose products, and explicitly factorizing the matrices to solve smaller independent systems in parallel. Computational results show explicit factorizations generally require the least effort.
1) The document discusses definitions and characteristics of operations research (OR). It provides definitions of OR from several leaders and pioneers in the field that describe OR as applying scientific methods to optimize complex systems.
2) Key characteristics of OR mentioned are that it takes a team approach using quantitative techniques, aims to help executives make optimal decisions, relies on mathematical models, and uses computers to analyze models.
3) Limitations of OR discussed include that it is time-consuming, practitioners may lack industrial experience, and solutions can be difficult to communicate to non-technical executives. Linear programming is introduced as a prominent OR technique.
The document provides information about the syllabus for the Data Analytics (KIT-601) course. It includes 5 units that will be covered: Introduction to Data Analytics, Data Analysis techniques including regression modeling and multivariate analysis, Mining Data Streams, Frequent Itemsets and Clustering, and Frameworks and Visualization. It lists the course outcomes and Bloom's taxonomy levels. It also provides details on the topics to be covered in each unit, including proposed lecture hours, textbooks, and an evaluation scheme. The syllabus aims to discuss concepts of data analytics and apply techniques such as classification, regression, clustering, and frequent pattern mining on data.
The Solution of Maximal Flow Problems Using the Method Of Fuzzy Linear Progra...theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Dear students get fully solved assignments
Send your semester & Specialization name to our mail id :
help.mbaassignments@gmail.com
or
call us at : 08263069601
This document provides information about an algorithms course, including the course syllabus and topics that will be covered. The course topics include introduction to algorithms, analysis of algorithms, algorithm design techniques like divide and conquer, greedy algorithms, dynamic programming, backtracking, and branch and bound. It also covers NP-hard and NP-complete problems. The syllabus outlines 5 units that will analyze performance, teach algorithm design methods, and solve problems using techniques like divide and conquer, dynamic programming, and backtracking. It aims to help students choose appropriate algorithms and data structures for applications and understand how algorithm design impacts program performance.
FAST School of ComputingProject Differential Equations (MTChereCheek752
FAST School of Computing
Project Differential Equations (MT-224)
Due Date: 14th, June 2021. Max Marks: 70
A Brief Literature Review:
We have studied the population growth model i.e., if P represents population. Since the
population varies over time, it is understood to be a function of time. Therefore we use the
notation P (t) for the population as a function of time. If P (t) is a differentiable function,
then the first derivative
dP
dt
represents the instantaneous rate of change of the population
as a function of time, which is proportional to present population in case of the exponential
growth and decay of populations and radioactive substances. Mathematically
dP
dt
∝ P.
We can verify that the function P (t) = P0e
rt satisfies the initial-value problem
dP
dt
= rP, P (0) = P0.
This differential equation has an interesting interpretation. The left-hand side represents
the rate at which the population increases (or decreases). The right-hand side is equal to a
positive constant multiplied by the current population. Therefore the differential equation
states that the rate at which the population increases is proportional to the population at
that point in time. Furthermore, it states that the constant of proportionality never changes.
One problem with this function is its prediction that as time goes on, the population grows
without bound. This is unrealistic in a real-world setting. Various factors limit the rate of
growth of a particular population, including birth rate, death rate, food supply, predators,
diseases and so on. The growth constant r usually takes into consideration the birth and
death rates but none of the other factors, and it can be interpreted as a net (birth minus
death) percent growth rate per unit time. A natural question to ask is whether the population
growth rate stays constant, or whether it changes over time. Biologists have found that in
many biological systems, the population grows until a certain steady-state population is
reached. This possibility is not taken into account with exponential growth. However, the
concept of carrying capacity allows for the possibility that in a given area, only a certain
number of a given organism or animal can thrive without running into resource issues.
• The carrying capacity of an organism in a given environment is defined to be the maxi-
mum population of that organism that the environment can sustain indefinitely.
• We use the variable K to denote the carrying capacity. The growth rate is represented by
the variable r. Using these variables, we can define the logistic differential equation.
dP
dt
= rP
(
1 −
P
K
)
.
1
• An improvement to the logistic model includes a threshold population. The threshold
population is defined to be the minimum population that is necessary for the species
to survive. We use the variable T to represent the threshold population. A differential
equation that incorporates both the threshold population T and carrying capacit ...
The play is Mr. Wilson's first to arrive in New York, and it reached here, via the Yale Repertory Theater, under the sensitive hand of the man who was born to direct it, Lloyd Richards. On Broadway, Mr. Richards has honed ''Ma Rainey's'' to its finest form. What's more, the director brings us an exciting young actor - Charles S. Dutton - along with his extraordinary dramatist. One wonders if the electricity at the Cort is the same that audiences felt when Mr. Richards, Lorraine Hansberry and Sidney Poitier stormed into Broadway with ''A Raisin in the Sun'' a quarter-century ago.
As ''Ma Rainey's'' shares its director and Chicago setting with ''Raisin,'' so it builds on Hansberry's themes: Mr. Wilson's characters want to make it in white America. And, to a degree, they have. Ma Rainey (1886-1939) was among the first black singers to get a recording contract - albeit with a white company's ''race'' division. Mr. Wilson gives us Ma (Theresa Merritt) at the height of her fame. A mountain of glitter and feathers, she has become a despotic, temperamental star, complete with a retinue of flunkies, a fancy car and a kept young lesbian lover.
The evening's framework is a Paramount-label recording session that actually happened, but whose details and supporting players have been invented by the author. As the action swings between the studio and the band's warm-up room - designed by Charles Henry McClennahan as if they might be the festering last- chance saloon of ''The Iceman Cometh'' - Ma and her four accompanying musicians overcome various mishaps to record ''Ma Rainey's Black Bottom'' and other songs. During the delays, the band members smoke reefers, joke around and reminisce about past gigs on a well-traveled road stretching through whorehouses and church socials from New Orleans to Fat Back, Ark.
The musicians' speeches are like improvised band solos - variously fiz ...
https://fitsmallbusiness.com/employee-compensation-plan/
The puzzle of motivation | Dan Pink [Video file]. Retrieved from https://www.youtube.com/watch?v=rrkrvAUbU9Y
Refining the total rewards package through employee input at MillerCoors [Video file]. Retrieved from https://www.youtube.com/watch?v=_I7nv0B4_NU&feature=youtu.be
How to design an employee compensation plan [SlideShare slides]. Retrieved from http://www.slideshare.net/FitSmallBusiness/how-to-design-a-compensation-plan-dave?ref=http://fitsmallbusiness.com/how-to-pay-employees/
Compensation strategies [Video file]. Retrieved from https://youtu.be/U2wjvBigs7w
· Expectations for Power Point Presentations in Units IV and V
I would like to provide information about what needs to be included in presentations. Please review the rubric prior to submitting any assignment. If you don't know where to find this, please contact me.
1. You need a title slide.
2. You need an overview of the presentation slide (slide after the title slide). This is how you would organize a presentation if you were presenting it at work.
3. You need a summary slide (before the reference slide); same reason as above.
4. Please do not forget to cite on slides where you are writing about something related to what you have read. Please consider each slide a paragraph. You can cite on the slides or in the notes. If you do not cite, you will not get credit for the slide.
- Direct quotes should not be used in this presentation as they are not analysis.
5. Remember, all I can evaluate is what you submit, so please consider using notes to explain what you are writing in further detail. Bullets are great and you can use these but then provide more detail in the notes.
6. Graphics - Please include graphics/charts/graphs as this is evaluated in the rubric (quality of the presentation).
7. References - For all references, you need citations. For all citations, you need references. They must match. All must be formatted using APA requirements. Please review the Quick Reference Guide that was posted in the announcements.
Please never hesitate to email me with any questions. If you need further clarification about feedback or if you do not agree with any of the feedback, please contact me. My door is always open.
Assignment 1
Positioning Statement and Motto
Use the provided information, as well as your own research, to assess one (1) of the stated brands (Tesla, SmoothieKing, Suave, or Nintendo) by completing the questions below with an ORIGINAL response to each. At the end of the worksheet, be sure to develop a new ORIGINAL positioning statement and motto for the brand you selected. Submit the completed template in the Week 4 assignment submission link.
Name:
Professor’s Name:
Course Title:
Date:
Company/Brand Selected (Tesla, SmoothieKing, Suave or Nintendo):
1. Target Customers/Users
Who are the target customers for the company/brand? Make sure you tell why you selected each item that you did. (NOTE: DO NO ...
This document provides instructions for students completing a research paper for an introductory radiography course. It outlines requirements for the paper, including length of 3 pages, use of 3 scholarly sources from 2008-present, and APA formatting. Key topics that must be addressed are introduced, including the chosen research topic, importance of the topic, and evidence of research through in-text citations on every page and a reference list. Formatting guidelines specify use of a cover page, introduction, body, and summary. The instructions emphasize accurately citing all sources to avoid plagiarism. Students are encouraged to visit the campus writing center for assistance meeting the standards.
https://www.worldbank.org/en/country/vietnam/overview
-------------- Context ----------------
Vietnam’s development over the past 30 years has been remarkable. Economic and political reforms under Đổi Mới, launched in 1986, have spurred rapid economic growth, transforming what was then one of the world’s poorest nations into a lower middle-income country. Between 2002 and 2018, more than 45 million people were lifted out of poverty. Poverty rates declined sharply from over 70% to below 6% (US$3.2/day PPP), and GDP per capita increased by 2.5 times, standing over US$2,500 in 2018.
In the medium-term, Vietnam’s economic outlook is positive, despite signs of cyclical moderation in growth. After peaking at 7.1% in 2018, real GDP growth in 2019 is projected to slightly decelerate in 2019, led by weaker external demand and continued tightening of credit and fiscal policies. Real GDP growth is projected to remain robust at around 6.5% in 2020 and 2021. Annual headline inflation has been stable for the seven consecutive years – at single digits, trending towards 4% and below in recent years. The external balance remains under control and should continue to be financed by strong FDI inflows which reached almost US$18 billion in 2018 – accounting for almost 24% of total investment in the economy.
Vietnam is experiencing rapid demographic and social change. Its population reached 97 million in 2018 (up from about 60 million in 1986) and is expected to expand to 120 million before moderating around 2050. Today, 70% of the population is under 35 years of age, with a life expectancy of 76 years, the highest among countries in the region at similar income levels. But the population is rapidly aging. And an emerging middle class, currently accounting for 13% of the population, is expected to reach 26% by 2026.
Vietnam ranks 48 out of 157 countries on the human capital index (HCI), second in ASEAN behind Singapore. A Vietnamese child born today will be 67% as productive when she grows up as she could be if she enjoyed complete education and full health. Vietnam’s HCI is highest among middle-income countries, but there are some disparities within the country, especially for ethnic minorities. There would also be a need to upgrade the skill of the workforce to create productive jobs at a large scale in the future.
Over the last thirty years, the provision of basic services has significantly improved. Access of households to modern infrastructure services has increased dramatically. As of 2016, 99% of the population used electricity as their main source of lighting, up from 14 % in 1993. Access to clean water in rural areas has also improved, up from 17% in 1993 to 70% in 2016, while that figure for urban areas is above 95%.
Vietnam performs well on general education. Coverage and learning outcomes are high and equitably achieved in primary schools — evidenced by remarkably high scores in the Program for International Student Assessment (PISA) in 2012 and 2015, ...
HTML WEB Page solutionAbout.htmlQuantum PhysicsHomeServicesAbou.docxpooleavelina
HTML WEB Page solution/About.htmlQuantum PhysicsHomeServicesAboutContact Me
This website gives a detail inward look in quantam physics as it is a evolving field now-a-days and has many upcoming changes that is going to leave the world in shock. There has been a lot of confusion lately related to this topics in people so it is encourage that people visit this website and get to know more about this field and explore the horizons there is yet to come.
HTML WEB Page solution/FirstLastHomePage.htmlQuantum PhysicsHomeServicesAboutContact Me
Definition
Quantum mechanics is the part of material science identifying with the little.
It brings about what may have all the earmarks of being some extremely peculiar decisions about the physical world. At the size of particles and electrons, a significant number of the conditions of old style mechanics, which depict how things move at ordinary sizes and speeds, stop to be helpful. In traditional mechanics, objects exist in a particular spot at a particular time. Be that as it may, in quantum mechanics, protests rather exist in a fog of likelihood; they have a specific possibility of being at point An, another possibility of being at point B, etc.Three revolutionary principles
Quantum mechanics (QM) created over numerous decades, starting as a lot of questionable scientific clarifications of tests that the math of old style mechanics couldn't clarify. It started at the turn of the twentieth century, around a similar time that Albert Einstein distributed his hypothesis of relativity, a different numerical unrest in material science that portrays the movement of things at high speeds. In contrast to relativity, nonetheless, the sources of QM can't be credited to any one researcher. Or maybe, various researchers added to an establishment of three progressive rules that bit by bit picked up acknowledgment and exploratory confirmation somewhere in the range of 1900 and 1930. They are:
Quantized properties:
Certain properties, for example, position, speed and shading, can once in a while just happen in explicit, set sums, much like a dial that "clicks" from number to number. This tested a crucial presumption of old style mechanics, which said that such properties should exist on a smooth, ceaseless range. To portray the possibility that a few properties "clicked" like a dial with explicit settings, researchers begat the word "quantized".
Particles of light:
Light can now and again act as a molecule. This was at first met with unforgiving analysis, as it negated 200 years of trials indicating that light acted as a wave; much like waves on the outside of a quiet lake. Light acts comparatively in that it ricochets off dividers and twists around corners, and that the peaks and troughs of the wave can include or counteract. Included wave peaks bring about more splendid light, while waves that counterbalance produce obscurity. A light source can be thought of ...
https://www.huffpost.com/entry/online-dating-vs-offline_b_4037867
For your initial post, provide a sentence to share which article you are referring to so that you can best communicate with your peers. Include a link to your selection.
· Explain how the argument contains or avoids bias.
i. Provide specific examples to support your explanation.
ii. What assumptions does it make?
· Discuss the credibility of the overall argument.
i. Were the resources the argument was built upon credible?
ii. Does the credibility support or undermine the article’s claims in any important ways?
In response to your peers, provide an additional resource to support or refute the argument your peer makes. Do you agree with their claims of credibility? Are there any other possible bias not identified?
Response #1
Allysa Tantala posted Sep 22, 2019 10:17 PM
Subscribe
The article that I am looking at is Online Dating Vs. Offline Dating: Pros and Cons.It was written by Julie Spira, an online dating expert, bestselling author, and CEO of Cyber-Dating Expert. The name of the article is spot on in describing what it is about. The author goes through the pros and cons of dating online and offline in today’s day and age. The author avoids bias because she looks at both options in both their positive and negative attributes. She comes at the issues from both angles and I believe she does a very good job at remaining unbiased. She states that “if you're serious about meeting someone special, you must include a combination of both online and offline dating in your routine” (Spira, 2013, par. 18). She’s stating that both options have their pros and cons and that really a combination of both is needed to find someone. The only bias I could see anyone pointing out would be that she is a woman, so you do not get the male perspective on these things. That being said, I one hundred percent think she covers all of the questions people may have about online and offline dating in today’s world. The only assumption being made here is that the reader wants to be out in the dating world and they need to know what is best. But, the title of the article is pretty self-explanatory so if someone did not want to know these things, they would not have to waste their time reading it all because they could tell what it would be about by the title.
The resource that she used was herself, and like I stated above, she is an online dating expert, bestselling author, and CEO of Cyber-Dating Expert; so she is more than qualified to give her perspective on these issues. I find her to be credible and thought provoking. Her credibility supports everything the article says and makes the reader feel like they are being told the truth by someone who completely understands all of the pros and cons.
Resource:
Spira, J. (2013, December 3). Online Dating Vs. Offline Dating: Pros and Cons. Retrieved from https://www.huffpost.com/entry/online-dating-vs-offline_b_4037867
Response #2
Jennifer Caforio posted Se ...
https://www.vitalsource.com/products/comparative-criminal-justice-systems-harry-r-dammer-jay-s-v9781285630779
THE ASSIGNMENT IS BASED ON CHAPTER 1 (ONE)
Login : [email protected]
Password: Greekyogurt13!
1
3Defining the Problem
Rigina CochranMPA/593
August 19, 2019
Peter ReevesDefining the Problem
The health care system in Colorado is a composition of medical professionals providing services such as diagnosis, treatment, as well as preventive measures to mental illness and injuries ("Healthcare policy in Colorado - Ballotpedia," 2019). Health care policy involves the establishment and implementation of legislation and other regulations that the states use to manage its health care system effectively. Further, this sector consists of other participants, such as insurance and health information technology. The cost citizens pay for medical care and also the access to quality care influence the overall health care providers in Colorado. Therefore, the need for the creation and implementation of laws that help the state maintain efficiency in the health sector in Colorado.
Problem Statement
The declining standards of medical care within the United States has caused significant concern in the world. Due to these rising concerns, there have been various policies implemented, leading to mixed reactions among the different states. Some of the active policies implemented offer a long-term solution to this problem including Medicaid and Medicare. After acquiring state control, the Republicans dismissed the idea to expand and create medical insurance for Medicaid in Colorado. Sustaining the structure of the health care payroll calls for the deductions from the employees and the employers, which may lead to loss of jobs and increased burden of expenditure (Garcia, 2019).
Identify the Methodology
The main objective of this policy plan is to investigate the role of legislation in the management of the health care sector in the United States. Due to the need for achieving in-depth exploration, this paper uses a combination of both qualitative and quantitative methods of data collection by addressing both practical and theoretical aspects of the research. Based on the answers that the policy requires, choosing survey as the research design. This method involves collecting and analyzing data from a few people who represent the principal group within health care. However, the survey method faces some challenges such as attitudes and perception of the health workers leading to the delimitation of the study. The target population for the study includes the nurses within the health sectors in Colorado. The selection of the participants involved in the use of stratified random sampling.
Identify your Stakeholders
The major stakeholders in the creation and implementation of the policy plan include the legislatures, local government, patients, and other private parties such as the insurance companies. Collectively, these bodies are involved in the makin ...
Avoidant/Restrictive Food Intake Disorder (ARFID) is a feeding disorder characterized by avoidance of food due to sensory characteristics, fear of aversive consequences, or lack of interest in eating. This results in insufficient calorie or nutrient intake leading to issues like weight loss, nutritional deficiencies, or interference with functioning. Treatments that have shown promise for ARFID include family-based treatment involving parents supporting exposure to new foods, cognitive-behavioral therapy with elements like food exposure and relaxation training, and hospital-based refeeding programs, some of which utilize tube feeding for severe cases. However, more research is still needed, as existing studies on treating ARFID are limited and no single approach has been proven
https://www.youtube.com/watch?time_continue=59&v=Bh_oEYX1zNM&feature=emb_logo
BA 325 Pivot Table Assignment Answer Sheet
Name:
Before you do anything fill out your name on the assignment and save your file as BA325 Firstname Lastname (use your actual name).
The table has all of the questions from the DuPont Assignment. Fill in your answers to the questions in the corresponding cell in the Answer column. Below the table there is a spot for the Screen Clippings from both the Practice Assignment, and the DuPont Assignment.
After you have filled out all of the answers and Screen Clippings submit the file to the Assignments folder in D2L.
Q Number
Question
Answer
Q1
How much was American Airlines’ Net Revenues in 2013?
Q2
What was the Return on Equity for Apple in 2015?
Q3
Which company had the highest Net Income and in which year? What was the value?
Q4
Which company had the lowest Net Income and in which year? What was the value?
Q5
How many unique companies in your sample had Net Losses exceeding one billion dollars? Which companies, and what years?
Q6
What was the Sum of the Net Income for all companies in the sample for 2015?
Q7
Which company had the highest total Net Income over the three year period? What was the value?
Q8
Which company had the lowest total Net Income over the three year period? What was the value?
Q9
Which industry had the highest Average Profit Margin over the three year period? What was the value?
Q10
In which year was the Average Profit Margin the highest for the entire sample? What was the value?
Q11
For how many companies do you have Profit Margin ratio data in 2013?
Q12
For what Industry do you have the most Profit Margin ratio data in the sample? What was the value? For that Industry what year was the highest? What was the value?
Q13
Which Industry has the highest Average Asset Turnover over the three year period? What was the value?
Q14
Which of the remaining Industries has the highest Asset Turnover in 2014? What was the value?
Q15
Which Industry has the highest Average Financial Leverage over the three year period? What was the value?
Q16
Which Industry has the lowest Average Financial Leverage that does not include negative numbers in any year? What was the value?
Q17
What is the Average Financial Leverage for the Transportation Industry in 2013?
Note: The answer is odd. You will have to use Data Cleaning to resolve the issue.
Q18
Which Industry has the highest Average Return on Equity over the three year period and which company is the highest within that Industry? What are the values?
Q19
Which two companies in the Public Utilities Industry have the highest Average Return on Equity during the period? What are the values?
Q20
Which Industry had the largest decrease in Average Return on Equity between 2013 and 2014? What was the value?
Q21
Which Industry had the largest increase in Average Return on Equity between 2014 and 2015? What was the value?
Q22
Bonus Question 1: How many industrie ...
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...
if the demand for a commodity increases? What if capacity is expanded? What if some resource is made available? Why did this plant not operate? Why is total production so low? Why is the price of some commodity so large? Why does a certain flow pattern occur? Is it preferred to others because of the economic trade-off, or are the flows forced by the constraints?
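Several of these what-if questions can be answered, at least locally, from the dual prices that accompany an optimal solution. As a minimal sketch (using SciPy's linprog with made-up data, not a model from this article), the shadow price on a demand row estimates the cost effect of a small demand increase:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical mini-model: minimize production cost 2*x1 + 3*x2
# subject to meeting a demand of 4 units:  x1 + x2 >= 4,  x >= 0.
c = np.array([2.0, 3.0])
A_ub = np.array([[-1.0, -1.0]])   # x1 + x2 >= 4, written as -x1 - x2 <= -4
b_ub = np.array([-4.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
cost = res.fun                              # optimal cost at demand = 4
shadow_price = -res.ineqlin.marginals[0]    # cost increase per extra unit of demand
print(cost, shadow_price)
```

Here shadow_price answers "What if the demand for a commodity increases?" in marginal terms: each additional unit of demand raises the minimum cost by that amount (2, the cheaper activity's unit cost), valid only until the increase forces a change of basis.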
Textbook wisdom does not go far enough in answering these questions in practical terms (see Gal [1979] for an excellent mathematical treatment). Also, once an answer is obtained in some mathematical way, how can we present the answer to problem owners who might not know linear programming? We must be able to look at different views of linear programs and their pieces, for example, using graphic techniques to present information about flows.
Before we can venture into this world of
analysis, we must understand how linear
programs are constructed. In this overview,
I describe and illustrate some terms, con-
cepts, and principles that one needs to understand how to analyze LP results. I have
published earlier tutorials on new ap-
proaches to analysis [Greenberg 1978,
1981, 1982]. The references I give here are
not exhaustive of the attention devoted to
analysis (for an extensive bibliography, see Greenberg [1992b]).
LP Structure and Syntax
It is important to understand how linear
programs are formulated in order to de-
velop practical analysis techniques
[Williams 1978]. The rules of LP composi-
tion comprise the syntax of the linear pro-
gram.
Mathematically, we use the algebraic representation, y = Ax and L <= (x, y) <= U. Subject to these equations and bounds, some linear function, like cost, is minimized. I call the x-variables levels of activities, and I call the y-variables logical levels because y is determined logically from x and the equations. The bound constraints typically have L_j = 0 for the level x_j, and many of the (explicit) upper bounds (U_j) are infinite for x's. Positive lower bounds arise to represent minimal levels of operation or contracted shipments. Finite upper bounds arise to represent capacity limits of physical units or market limits of sales.
The canonical form, generally found in textbooks (for example, Dantzig [1963]), is to minimize cx subject to Ax >= b and x >= 0. Bounds can be incorporated into the constraints, and free variables (that is, those allowed to be negative) can be partitioned into their positive and negative parts. Many different algebraic forms are treated in most textbooks and shown to be mathematically equivalent to the canonical form. To reach our form, simply define y = Ax, set the lower bound of y equal to b, and let all upper bounds be infinite. Conceptually, however, it is better to segregate bounds on variables from equations that represent relations among the variables.
The coefficient matrix, A, is highly structured. Often, it decomposes into blocks with special equations or activities that link the blocks in a well-formulated linear program. For example, the blocks could be processes in different regions, and the links could be aggregate resource limits or transshipment activities. Knowledge of such structures can be useful in analysis [Baker 1990 and Welch 1987].
Sometimes the structures are known or
can easily be inferred from the model's
syntax—that is, from the rules for compos-
ing the objects and relations that comprise
the LP. Other times, structures are inferred
by executing a recognition algorithm. One example is recognizing a network embedded in the LP.
Typically, the analysis process has two steps. First, the analyst must work through the relevant portions of the linear program, generally with analytical techniques, to compute implied relations using the LP structure. Second, once the mathematical phase of analysis is completed, the analyst must translate the results so that they will be comprehensible to someone concerned with the analysis, generally with linguistic techniques using the LP syntax. This someone could be a problem owner for whom the linear program is used, such as an engineer or corporate executive. The someone could also be a data-base manager who must become involved if questions of data arise.
In practice, variables tend to fall into
classes. Activity classes can be production,
consumption, transportation, conversion,
capacity expansion, or inventory carrying.
Equation classes can be resource limits, de-
mand requirements, flow balances, or
quality assurance ranges. These lists are not exhaustive, but typically a linear program has fewer than a dozen classes of activities and even fewer classes of equations. What makes a linear program large are the dimensions of basic entities, like regions, materials, and time periods [Glover, Klingman, and Phillips 1990, 1992].
To understand a solution, one must have
a sense of the basic dimensions and classes
of variables. Traditionally, the names of
rows and columns in a linear program con-
tain the underlying syntax.
One way to express an LP model, which
is the way of most textbooks, is first to de-
fine sets, domains, data tables, and vari-
ables, and second to define the constraints.
To illustrate, I shall explain a production-
distribution model.
Given are
(1) A collection of plants distinguished by their locations and processes of operation, which require raw material inputs and produce finished products;
(2) Markets, distinguished by their locations and products; and
(3) Transportation links from plants to
markets.
The data for a specific instance are:
R_ipj = unit amount of raw material i used by process p at plant j;
Y_pjk = unit yield of product k from process p at plant j;
S_i = total supply of raw material i;
COST_OP_pj = unit operation cost of process p at plant j;
K_j = capacity limit of plant j, measured in terms of its total output;
COST_SH_jmk = unit shipping cost of product k from plant j to market m (if there is no link from plant j to market m for product k, the value of COST_SH_jmk = infinity); and
D_mk = demand for product k that must be satisfied in market m.
The variables are:
P_pj = level of production using process p at plant j;
T_jmk = amount of product k sent from plant j to market m; and
COST = total cost of production and transportation.
The LP model is:
Minimize COST subject to: P_pj, T_jmk >= 0;
COST = sum_pj COST_OP_pj P_pj + sum_jmk COST_SH_jmk T_jmk;
A(i): sum_pj R_ipj P_pj <= S_i (Raw material availabilities),
C(j): sum_p P_pj <= K_j (Capacity limits),
B(j, k): sum_p Y_pjk P_pj - sum_m T_jmk = 0 (Balance equations), and
D(m, k): sum_j T_jmk >= D_mk (Demands).
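The four constraint classes can be exercised in code. Here is a minimal sketch in Python (all data values and index sets are hypothetical, chosen only for illustration; the dictionary names mirror R, Y, S, K, D, COST_OP, and COST_SH above) that checks a candidate pair (P, T) for feasibility and computes COST:

```python
# A minimal sketch with hypothetical data; index conventions follow the text.
R = {("W", 2, "N"): 1.0}            # R[i, p, j]: raw material i per unit of process p at plant j
Y = {(2, "N", "T"): 1.0}            # Y[p, j, k]: unit yield of product k
S = {"W": 10.0}                     # S[i]: total raw material supply
K = {"N": 8.0}                      # K[j]: plant capacity
D = {("DE", "T"): 3.0}              # D[m, k]: demand
COST_OP = {(2, "N"): 5.0}           # unit operation cost of process p at plant j
COST_SH = {("N", "DE", "T"): 2.0}   # unit shipping cost

# Candidate activity levels.
P = {(2, "N"): 3.0}                 # P[p, j]
T = {("N", "DE", "T"): 3.0}         # T[j, m, k]

def is_feasible(P, T):
    # A(i): sum_pj R[i, p, j] P[p, j] <= S[i]  (raw material availabilities)
    for i in S:
        if sum(R.get((i, p, j), 0.0) * x for (p, j), x in P.items()) > S[i]:
            return False
    # C(j): sum_p P[p, j] <= K[j]  (capacity limits)
    for j in K:
        if sum(x for (p, jj), x in P.items() if jj == j) > K[j]:
            return False
    # B(j, k): sum_p Y[p, j, k] P[p, j] - sum_m T[j, m, k] = 0  (balances)
    for (_, j, k) in Y:
        made = sum(Y.get((p, j, k), 0.0) * x for (p, jj), x in P.items() if jj == j)
        shipped = sum(x for (jj, m, kk), x in T.items() if (jj, kk) == (j, k))
        if abs(made - shipped) > 1e-9:
            return False
    # D(m, k): sum_j T[j, m, k] >= D[m, k]  (demands)
    for (m, k), d in D.items():
        if sum(x for (jj, mm, kk), x in T.items() if (mm, kk) == (m, k)) < d:
            return False
    return True

COST = (sum(COST_OP[p, j] * x for (p, j), x in P.items())
        + sum(COST_SH[j, m, k] * x for (j, m, k), x in T.items()))
print(is_feasible(P, T), COST)   # -> True 21.0
```

The check is a verifier, not a solver; it is the kind of sanity test one runs on an LP solution before analyzing it.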
Notice that I began with the definition of sets and data tables. In expressing the model, I defined each variable with a symbol (P and T) over domains (set products); then, I wrote the objective and the constraints. This is the algebraic form. Alternatively, I could express the model in schema form (Figure 1).
In the schema form of the model, the
rows have been classified with COST as the
objective row, and the other row types be-
gin with A, B, C, and D. The activities have
been classified as production (P) and trans-
portation (T). The subscripts in the original
problem definition become domains in this
expression. Each domain is a cross product
of sets, usually restricted to only some of
the many products. For example, the transportation activity has three domain sets:
source (plant), destination (market), and
material (product). The distribution net-
work, however, is usually sparse in that
not every plant ships every product to ev-
ery market.
In our example, the sets are as follows:
i = raw material;
p = process;
j = plant (location);
k = finished product; and
m = market location.
When forming the name of a row or column, I distinguish its type by its first character. Then, I specify its domain member, but without the parentheses and commas. For example, consider the transportation activity T(j, m, k) for the particular plant j = S (a code for South), the particular market m = CH (a code for Chicago), and the particular finished product k = T (a code for table). Then, I name the column TSCHT. (I use this name syntax here and I shall use it in the sequel papers.)
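This naming rule is easy to mechanize. A minimal sketch (the function name is mine, not from the paper):

```python
def lp_name(type_char, *members):
    """Compose a row or column name: the type character followed by the
    domain members, with parentheses and commas dropped."""
    return type_char + "".join(str(m) for m in members)

# T(j, m, k) with j = S (South), m = CH (Chicago), k = T (table):
print(lp_name("T", "S", "CH", "T"))   # -> TSCHT
# Row D(m, k) for the Denver table demand:
print(lp_name("D", "DE", "T"))        # -> DDET
```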
Suppose, for example, I specify the following set elements.
i = {S, W}: S means steel, W means wood;
p = {1, 2, 3}: 1 means process 1, 2 means process 2, 3 means process 3;
j = {N, S}: N means North and S means South;
m = {DE, CH}: DE means Denver and CH means Chicago;
k = {C, T}: C means chair and T means table.

          P(p, j)          T(j, m, k)
COST      COST_OP(p, j)    COST_SH(j, m, k)    = MIN
A(i)      R(i, p, j)                           <= S(i)
B(j, k)   Y(p, j, k)       -1                  = 0
C(j)      1                                    <= K(j)
D(m, k)                    1                   >= D(m, k)

Figure 1: The schema form of the production-distribution model offers an alternative view from the algebraic form.

Suppose further that the shipping links are only those shown in Figure 2.
Then, an equation listing for the particular linear program is as follows (where d simply indicates some data value).

Minimize COST subject to:
COST = d P1N + d P1S + d P2N + d P2S + d P3N + d P3S + d TNCHC + d TNDEC + d TNDET + d TSCHT
AS = d P1N + d P1S + d P3N + d P3S <= d
AW = d P2N + d P2S + d P3N + d P3S <= d
BNC = d P1N + d P3N - TNCHC - TNDEC = 0
BNT = d P2N + d P3N - TNDET = 0
BST = d P2S + d P3S - TSCHT = 0
CN = P1N + P2N + P3N <= d
CS = P1S + P2S + P3S <= d
DCHC = TNCHC >= d
DCHT = TSCHT >= d
DDEC = TNDEC >= d
DDET = TNDET >= d
All activity levels >= 0.
Consider, for example, the first column, associated with activity P1N. This is the name of activity P(p, j) for p = 1 and j = N. It uses steel, so it appears in equation AS, which is row A(i) for i = S. The activities that produce with process 2, namely P2N and P2S, each use wood; and, those with process 3 use fixed shares of steel and wood. The capacity equation, CN, limits the total capacities used by the associated production activities, P1N, P2N, and P3N.
An equation listing is not always the
best way to view a linear program, es-
pecially when it is large. I shall describe
some alternative views that support
analysis.
Alternative Views of Linear Programs
The most common view of a linear pro-
gram, found in textbooks, is an algebraic
one. It presents a dictionary for what the
data and variables mean, followed by a
system of equations that represent con-
straints. The sheer size of today's problems
can make an algebraic view confounding
when one is trying to understand patterns
of relationships.
The entire subject of views has been in-
vestigated elsewhere [Greenberg and
Murphy forthcoming]. Here I consider two
views that have been helpful to me in
Figure 2: Shipping links for an instance of the production-distribution model connect the north (N) and south (S) plants to Denver (DE) and Chicago (CH). The labels on the arcs show which products (tables and chairs) each plant can ship to each city.
gaining insight quickly for analysis. The
first is a picture of sign patterns in the LP
matrix, and the second is a directed graph.
Figure 3 shows a picture of the example
product distribution LP. Over the columns,
the activity names are printed vertically,
and the entries are the signs of the nonze-
roes (a blank means the coefficient is zero).
The picture gives us a cognitive view of a
pattern, which is often more useful than
an equation listing.
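Such a picture is easy to generate. Here is a minimal sketch (the matrix fragment and helper name are mine) that prints the sign pattern with column names written vertically, in the style of Figure 3:

```python
def picture(rows, cols, A):
    """Print the sign pattern of a sparse matrix A (a dict keyed by
    (row, col)); column names are printed vertically, as in Figure 3."""
    width = max(len(c) for c in cols)   # tallest vertical column name
    label = max(len(r) for r in rows)   # widest row label
    lines = []
    for h in range(width):              # vertical column headers
        lines.append(" " * label + " " + " ".join(
            c[h] if h < len(c) else " " for c in cols))
    for r in rows:                      # one sign row per equation
        signs = " ".join(
            "+" if A.get((r, c), 0) > 0 else "-" if A.get((r, c), 0) < 0 else " "
            for c in cols)
        lines.append(r.ljust(label) + " " + signs)
    return "\n".join(lines)

# A hypothetical fragment of the product distribution LP.
A = {("BNT", "P2N"): 1.0, ("BNT", "TNDET"): -1.0,
     ("CN", "P2N"): 1.0, ("DDET", "TNDET"): 1.0}
print(picture(["BNT", "CN", "DDET"], ["P2N", "TNDET"], A))
```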
Computer graphics offer us more oppor-
tunities for visual insights, for example,
graph-based views of activity paths from
production through consumption. Graph-
based views can be obtained from a vari-
16. ety of fundamental graphs associated with
a linear program (the ones I use here were
developed by several authors [Choobineh
1991; Glover 1983; Glover, Klingman, and
Phillips 1990, 1992; Greenberg 1978;
Schrage 1981]; alternative graphs, based
        P  P  P  P  P  P  T  T  T  T
        1  1  2  2  3  3  N  N  N  S
        N  S  N  S  N  S  C  D  D  C
                          H  E  E  H
                          C  C  T  T
AS      +  +        +  +
AW            +  +  +  +
BNC     +           +     -  -
BNT           +     +           -
BST              +     +           -
CN      +     +     +
COST    +  +  +  +  +  +  +  +  +  +   = MIN
CS         +     +     +
DCHC                      +
DCHT                               +
DDEC                         +
DDET                            +

Figure 3: A picture of the product distribution linear program shows the sign pattern for a relational view.
on the structured modeling formalism,
were given by Geoffrion [1987, 1989], and
those based on graph grammars were
given by Jones [1990, 1991]).
The particular graph that I use in this series is one that relates activities and equations [Greenberg 1978]. The nodes are the rows and columns, giving a natural division into two node types that makes the graph bipartite.
nonzero in the coefficient matrix, A. The
picture is a view of the adjacency matrix of
this bipartite graph, where each link ap-
pears as either a plus (+) or a minus (-).
There are two ways to account for sign
patterns; either sign each link or orient it.
The former leads to representations of eco-
nomic correlation, which I do not consider
here (see Greenberg, Lundgren, and
Maybee [1989]), and the latter leads to
flow representations, which I now de-
scribe.
Consider an activity that represents an
exchange, where negative coefficients rep-
resent inputs and positive coefficients rep-
resent outputs. Based on this notion of ac-
tivity I/O, orient an arc from its row node
to its column node if the coefficient is neg-
ative, and from the column node to the
row node if it is positive.
Activity input (A_ij < 0): arc from (row i) to [column j]. Activity output (A_ij > 0): arc from [column j] to (row i).
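The orientation rule can be stated compactly in code. A minimal sketch (sparse matrix as a dict keyed by (row, col); the names are mine):

```python
def fundamental_digraph(A):
    """Arcs of the fundamental digraph: a negative coefficient is an
    activity input (arc row -> column); a positive coefficient is an
    activity output (arc column -> row)."""
    arcs = set()
    for (row, col), a in A.items():
        if a < 0:
            arcs.add((row, col))    # input to the activity
        elif a > 0:
            arcs.add((col, row))    # output of the activity
    return arcs

# The 2 x 3 system discussed later in this article: S = 2P - T, D = T/2 - C.
A = {("S", "P"): 2.0, ("S", "T"): -1.0,
     ("D", "T"): 0.5, ("D", "C"): -1.0}
print(sorted(fundamental_digraph(A)))
# the arcs form the chain P -> S -> T -> D -> C
```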
I call this the fundamental digraph, which
can be used with the syntax to give an al-
ternative view of equations, which com-
prise the rows, and activities, which com-
prise the columns. From the fundamental
digraph, there are two projections that of-
fer insights. These are called the row digraph and the column digraph.
The row digraph consists of row nodes (only), and an arc from one row node to another means there is at least one activity with one arc from the first row node, and one arc into the second. Such arcs tend to represent flows, where the activity transforms some basic entity that is represented by the two equations. In the standard transportation problem, for example, the entity is a location and the transformation changes the location of the material that is transported from location i to location k.
If A_ij < 0 and A_kj > 0 (among other nonzeroes), activity j creates an arc from row node i to row node k in the row digraph (entities of i transformed into entities of k).
The column digraph consists of column
nodes (only), and an arc from one column
node to another means there is at least one
equation that is an output of the first activ-
ity and an input to the second. Such arcs
tend to represent an ordering of the activi-
ties.
If A_ij > 0 and A_ik < 0 (among other nonzeroes), equation i creates an arc from column node j to column node k in the column digraph (output of j is input to k).
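Both projections can be computed directly from the coefficient matrix. A sketch (sparse matrix as a dict keyed by (row, col); the names are mine):

```python
def row_digraph(A, rows):
    """Arc (i, k) when some activity j has A[i, j] < 0 (arc out of row
    node i) and A[k, j] > 0 (arc into row node k)."""
    arcs = set()
    cols = {c for (_, c) in A}
    for j in cols:
        outs = {i for i in rows if A.get((i, j), 0) < 0}   # inputs to activity j
        ins = {k for k in rows if A.get((k, j), 0) > 0}    # outputs of activity j
        arcs |= {(i, k) for i in outs for k in ins}
    return arcs

def column_digraph(A, rows):
    """Arc (j, jp) when some equation i is an output of activity j
    (A[i, j] > 0) and an input to activity jp (A[i, jp] < 0)."""
    arcs = set()
    for i in rows:
        outs = {j for (r, j) in A if r == i and A[(r, j)] > 0}
        ins = {j for (r, j) in A if r == i and A[(r, j)] < 0}
        arcs |= {(j, jp) for j in outs for jp in ins}
    return arcs

# The 2 x 3 system S = 2P - T, D = T/2 - C:
A = {("S", "P"): 2.0, ("S", "T"): -1.0,
     ("D", "T"): 0.5, ("D", "C"): -1.0}
print(row_digraph(A, ["S", "D"]))      # flow from supply to demand
print(column_digraph(A, ["S", "D"]))   # production precedes transport precedes consumption
```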
For an ordinary network, the coefficient
matrix is the usual incidence relation, from
sources to destinations. More generally, in
canonical form, negative coefficients corre-
spond to activity inputs and positive coeffi-
cients to outputs. When an activity has
more than one input or more than one
output, the LP is sometimes called a net-
form [Glover 1983] and sometimes called a
process network [Chinneck 1990].
For example, consider the following 2 × 3 system:
S = 2P - T and
D = T/2 - C.
Think of P, T, and C as production, transportation, and consumption activities, respectively; and think of S and D as supply and demand stocks. For example, suppose we produce one unit, transport two units, and consume one unit; that is, P = 1, T = 2, and C = 1. Then, S = D = 0; that is, the stocks are balanced (no excess or shortage). Figure 4 shows all three digraphs for this 2 × 3 system.
The row digraph shows flow from supply (S) to demand (D). The column digraph shows a precedence, where production (P) precedes transportation (T), which precedes consumption (C).
I can also apply these digraphs to portions of an LP, for example, to the portion of the product distribution LP (compare Figure 3) that is shown in Figure 5. This digraph represents a portion pertaining to demand for tables in Denver (row node DDET). The headless arc out of row node DDET denotes a demand requirement, and the tailless arcs into row nodes AW and CN denote availabilities of wood and capacity, respectively.
Fundamental digraph: [P] -> (S) -> [T] -> (D) -> [C]
Row digraph: (S) -> (D)
Column digraph: [P] -> [T] -> [C]

Figure 4: The digraphs for the 2 × 3 system show flows and activity precedence.

Figure 6 shows the row and column digraphs associated with the fundamental digraph in Figure 5. The row digraph gives
a view of flows: wood (AW) and capacity
in the North (CN) are transformed into a
table in the North (BNT), which is trans-
ported to Denver (DDET). The column di-
graph gives a view of activity sequence:
produce a table in the North by process 2 [P2N], then transport the table to Denver [TNDET].
Re-organization of Equations
In any system of equations, such as y
= Ax, I regard the variables on the right (x)
as independent and those on the left (y) as
dependent. In linear programming, the ter-
minology given to dependent variables is basic and to independent variables is nonbasic, and these roles of variables can change from the original expression to a form that results from a solution. This change of role is important to analysis questions because the dependence of y_i on x_j is no longer measured by the original coefficient, A_ij, and dependencies among the x-variables become revealed by the reorganization.
For example, consider the 2 X 3 system.
In its original form, we can make such
statements as the following:
—An increase in production (P) causes an
increase in supply stock (S) at double the
rate;
—A decrease in transportation (T) causes an equal increase in supply stock (S) and a decrease in demand stock (D) at half the rate.
Notice that the causality is from the in-
dependent variable (on the right) to the
dependent variable (on the left). An alge-
braically equivalent system of equations is
obtained by rewriting the original ones, for
example, the following:
P = S/2 + T/2 and
C = T/2 - D.
In this form, I have kept T as an inde-
pendent variable on the right-hand side,
but I exchanged P for S in the first equa-
tion and C for D in the second. Now P and
C are dependent, or basic, variables, and S
and D are independent, or nonbasic, vari-
ables.
In this form, we can make such state-
ments as the following:
—An increase in supply stock (S) causes an
increase in production (P) at half the
rate;
—A decrease in transportation (T) causes a decrease in both production (P) and consumption (C), each at half the rate.
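These statements can be verified numerically. A sketch using exact rational arithmetic (the function name is mine) reorganizes the 2 × 3 system with P and C basic:

```python
from fractions import Fraction as F

# Original 2 x 3 system: S = 2P - T,  D = T/2 - C.
# Re-organized with P and C basic (dependent) and S, T, D nonbasic:
#   P = S/2 + T/2,   C = T/2 - D.
def reorganized(S, T, D):
    P = F(1, 2) * S + F(1, 2) * T
    C = F(1, 2) * T - D
    return P, C

P, C = reorganized(S=F(0), T=F(2), D=F(0))
print(P, C)                      # -> 1 1, matching P = 1, C = 1 in the text

# Check against the original equations: S = 2P - T and D = T/2 - C.
assert 2 * P - F(2) == F(0)
assert F(2) / 2 - C == F(0)

# Marginal rate: a unit increase in S raises P by half.
assert reorganized(F(1), F(2), F(0))[0] - P == F(1, 2)
```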
The roles the variables assume in any re-
organization of the equations determine
Figure 5: The fundamental digraph for a portion of the product distribution LP traces a path from production to demand: (AW) -> [P2N], (CN) -> [P2N], [P2N] -> (BNT) -> [TNDET] -> (DDET).
Figure 6: Row and column digraphs for the portion of the product distribution LP shown in Figure 5 show the flows ((AW), (CN) -> (BNT) -> (DDET)) and the activity sequence ([P2N] -> [TNDET]) associated with the flow trace.
causality relationships: what happens to
the dependent, or basic, variable when an
independent, or nonbasic, variable is
changed. The details of how one obtains
the new system of equations from the orig-
inal system are a matter of computation,
and I do not consider them here. What is
important is to recognize marginal rates of substitution that depend upon the particular roles, basic versus nonbasic, which is
part of the solution information.
Let us consider an example of how this information is used for an analysis question. Suppose a solution to the product distribution LP has activities P2N and TNDET basic, and activity P3N nonbasic (at its lower bound of zero). Note, from Figure 3, that P3N is a substitute for activity P2N; that is, they are substitutes, or competitors, because they use the same inputs (wood and capacity), and they produce the same output (tables in the North). Activity P3N uses an additional input (steel) and produces an additional output (chairs in the North). The question is, What if activity P3N were forced to increase to a positive level?
Figure 7 shows some rates of substitu-
tion, using hypothetical data. Among the
basic variables affected is P2N, whose level
is displaced one-for-one—that is, for each
unit increase in P3N, there is a unit de-
crease in P2N. The level of row AS in-
creases at a rate of 0.4 because activity
P3N uses 0.4 units of steel for each table it
produces (hypothetically). The net effect on wood used (row AW) is a decrease at a rate of 0.4 because P3N uses 0.6 units of wood to produce one table, while P2N uses one unit of wood to produce a table; the displacement of P3N for P2N results in a net decrease of 0.4 units of wood per unit of P3N. The COST row is also affected, where each unit of increase in P3N results in an increase of $9.00.
Other basic variables are affected by an increase in P3N, such as the increase in the level of transportation of chairs produced by P3N with accompanying displacements of other chair transportation and production.
The rates of substitution can be obtained, but one must take care in interpreting their meaning in the presence of a property called degeneracy. Although I shall not consider it in detail here, degeneracy will arise in some of our exercises in the sequels. One form of degeneracy, which affects the use of rates for sensitivity questions, occurs when the level of a basic variable is at its bound, such as zero. When that is the case, additional analysis is required to address what-if questions.

AS = current level + 0.4 P3N + other nonbasic rates
AW = current level - 0.4 P3N + other nonbasic rates
COST = current level + 9 P3N + other nonbasic rates
P2N = current level - P3N + other nonbasic rates

Figure 7: Rates of substitution, from a rewrite of the equations, reveal how a change in the level of activity P3N affects basic variables.
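Mechanically, applying the rates in Figure 7 is a one-line update per basic quantity. A sketch, with made-up current levels (only the four hypothetical rates shown in Figure 7 are used):

```python
# Hypothetical rates of substitution from Figure 7: effect on each
# basic quantity of a unit increase in the nonbasic activity P3N.
RATES = {"AS": +0.4, "AW": -0.4, "COST": +9.0, "P2N": -1.0}

def what_if(current, delta_P3N):
    """New levels of the affected basic quantities if P3N is forced up
    by delta_P3N (all other nonbasic levels held fixed)."""
    return {name: level + RATES[name] * delta_P3N
            for name, level in current.items()}

current = {"AS": 6.0, "AW": 4.0, "COST": 120.0, "P2N": 5.0}   # made-up levels
new = what_if(current, delta_P3N=2.0)
print(new)   # AS up 0.8, AW down 0.8, COST up $18, P2N displaced one-for-one
```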
The information obtained from rates of
substitution directly addresses not only the
paradigm what if . . .? sensitivity ques-
tion, but also other questions of analysis,
such as the meaning of redundancy in the
interests of model management and deeper
understanding of the results.
The Analysis Processes
I conclude our preliminaries with an
overview of the analysis process. Three
types of analysis processes are (1) validity
testing, (2) postoptimal analysis, and (3)
debugging. Validity pertains to how well
the LP represents the world it is intended
to, but I include, in validity testing, ele-
ments of verification: whether what is in
the LP is what is believed to be there. Post-
optimal analysis is probing into the mean-
ing of an optimal solution. This includes
conventional questions of sensitivity, and it
includes some additional analyses that are
unconventional in the sense that they go
beyond textbook definitions. Debugging is
the process of diagnosing the cause of a
failure, for example, an infeasible LP.
The first thing one checks after a solver
has terminated is whether an optimal solu-
tion has been found. It could be that the
solver detected that the linear program is
infeasible or unbounded. This is called a
mechanical failure, and its detection
launches an analysis effort to diagnose the
cause in order to repair it.
Diagnosing the cause of a mechanical
failure is sometimes called debugging. De-
bugging also applies more generally to op-
timal linear programs, such as checking
that the results make sense. To make
sense, one must explain the results in
problem domain terms. Failure to do so
can lead to erroneous conclusions.
Once we have a run that is not a me-
chanical failure, we check some things that
are particular to the model, and I call this
validity testing. One case that occurred had
the following result. All the variables were
zero, contrary to what makes sense for the
problem. I discovered that demands were
inadvertently omitted from the scenario
specification, so a do-nothing solution had
minimum cost. This was easy to detect and
remedy just by looking at the right-hand
sides of the equations.
Other validity tests are not as easy, but
part of the maturation of a model is the
maturation of the personnel that run it.
The tests can become increasingly complex with this maturation, so what comprises the validity test depends upon accumulated experiences with what can go wrong.
A deep validity test, for example, is to
impute price elasticities from scenarios.
This measures how quantities change with
respect to percentage changes in prices.
Suppose one has a sense that the price
elasticity of a product is about 10 per-
cent—that is, if the price doubles, the pro-
duction is expected to increase by 10 per-
cent. If the imputed value is more like 100
percent, something might be wrong with
the run. (To compute an elasticity, there
must be some other run, such as a base
case, against which to compare the cur-
rent run.)
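The imputation itself is simple arithmetic. A sketch (the function name and numbers are mine): compare the current run against a base case and take the percentage change in quantity per percentage change in price.

```python
def imputed_elasticity(base_price, base_qty, new_price, new_qty):
    """Percentage change in quantity per percentage change in price,
    imputed from a base-case run and the current run."""
    pct_price = (new_price - base_price) / base_price
    pct_qty = (new_qty - base_qty) / base_qty
    return pct_qty / pct_price

# Price doubles and production rises 10 percent -> elasticity of 0.10.
e = imputed_elasticity(base_price=50.0, base_qty=200.0,
                       new_price=100.0, new_qty=220.0)
print(round(e, 2))   # -> 0.1
```

A far larger imputed value than expected (say, near 1.0 when roughly 0.1 was anticipated) is the kind of gross deviation the validity test looks for.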
In general, a validity test is a test of so-
lution values with a sense of what they
should be. The test looks for gross devia-
tions from the expected results. This is part
of model management, and it has so far
been passed on from one generation to an-
other by on-the-job training. The time is
undoubtedly right for an in-depth treatment of applied linear programming that
includes a substantive description of model
management.
In sequels to this overview I shall present the following examples of analysis.
—Price interpretation: What a dual price means.
—Infeasibility diagnosis: Why a mechanical failure occurred.
—Forcing substructures: Separating economic trade-offs from forced values.
Collectively, these illustrate some princi-
ples that have been used in practice and
some new ones introduced with the avail-
ability of ANALYZE [Greenberg 1983,
1987, 1988, 1989, 1992a, 1993], a software
system designed to provide computer-
assisted analysis, including rule-based
intelligence.
Acknowledgments
I gratefully acknowledge encouragement
and technical help from Frederic H.
Murphy. I also received valuable com-
ments from John Stone and an anonymous
referee that led to an improved version. In
addition, support for the ongoing project
that produced ANALYZE (among other
things) comes from a consortium of com-
panies: Amoco Oil Company, IBM, Shell
Development Company, Chesapeake Deci-
sion Sciences, GAMS Development Corp.,
Ketron Management Science, and
MathPro, Inc.
References
Baker, T. E. 1990, "Integrating AI/OR/DATABASE technology for production planning and scheduling," Technical report, Chesapeake Decision Sciences, Inc., New Providence, New Jersey.
Chinneck, J. W. 1990, "Formulating processing network models: Viability theory," Naval Research Logistics, Vol. 37, No. 2, pp. 245-261.
Choobineh, F. 1991, "A diagramming technique for representation of linear models," OMEGA International Journal of Management Science, Vol. 19, No. 1, pp. 43-51.
Dantzig, G. B. 1963, Linear Programming and Extensions, Princeton University Press, Princeton, New Jersey.
Gal, T. 1979, Postoptimal Analyses, Parametric Programming, and Related Topics, McGraw-Hill International, New York.
Geoffrion, A. M. 1987, "An introduction to structured modeling," Management Science, Vol. 33, No. 5, pp. 547-588.
Geoffrion, A. M. 1989, "The formal aspects of structured modeling," Operations Research, Vol. 37, No. 1, pp. 30-51.
Glover, F. 1983, "Netform modeling," Draft monograph, School of Business, University of Colorado, Boulder, Colorado.
Glover, F.; Klingman, D.; and Phillips, N. 1990, "Netform modeling and applications," Interfaces, Vol. 20, No. 4, pp. 7-27.
Glover, F.; Klingman, D.; and Phillips, N. V. 1992, Network Models in Optimization and Their Applications in Practice, Wiley-Interscience, New York.
Greenberg, H. J. 1978, "A new approach to analyze information contained in a model," in Energy Models Validation and Assessment, ed. S. I. Gass, NBS Pub. 569, National Bureau of Standards, Gaithersburg, Maryland, pp. 517-524.
Greenberg, H. J. 1981, "Implementation aspects of model management: A focus on computer-assisted analysis," in Energy Policy Planning, eds. B. A. Bayraktar, E. A. Cherniavsky, M. A. Laughton, and L. E. Ruff, Plenum Press, New York, pp. 443-459.
Greenberg, H. J. 1982, "A tutorial on computer-assisted analysis," in Advanced Techniques in the Practice of Operations Research, eds. H. J. Greenberg, F. H. Murphy, and S. H. Shaw, American Elsevier, New York, pp. 212-249.
Greenberg, H. J. 1983, "A functional description of ANALYZE: A computer-assisted analysis system for linear programming models," ACM Transactions on Mathematical Software, Vol. 9, No. 1, pp. 18-56.
Greenberg, H. J. 1987, "ANALYZE: A computer-assisted analysis system for linear programming models," Operations Research Letters, Vol. 6, No. 5, pp. 249-255.
Greenberg, H. J. 1988, "ANALYZE rulebase," in Mathematical Models for Decision Support, eds. G. Mitra, H. J. Greenberg, F. A. Lootsma, M. J. Rijckaert, and H-J. Zimmerman, Proceedings of NATO ASI, July 26-August 6, Springer-Verlag, Berlin, pp. 229-238.
Greenberg, H. J. 1989, "Intelligent user interfaces for mathematical programming," Proceedings of Shell Conference: Logistics: Where Ends Have to Meet, ed. C. Van Rijn, Pergamon Press, Oxford, United Kingdom, pp. 198-223.
Greenberg, H. J. 1992a, "Intelligent analysis support for linear programs," Computers and Chemical Engineering, Vol. 16, No. 7, pp. 659-674.
Greenberg, H. J. 1992b, "A bibliography for the development of an intelligent mathematical programming system," Technical report, Mathematics Department, University of Colorado at Denver.
Greenberg, H. J. 1993, A Computer-Assisted Analysis System for Mathematical Programming Models and Solutions: A User's Guide for ANALYZE, Kluwer, Boston, Massachusetts.
Greenberg, H. J.; Lundgren, J. R.; and Maybee, J. S. 1989, "Extensions of graph inversion to support an artificially intelligent modeling environment," Annals of Operations Research, Vol. 21, pp. 127-142.
Greenberg, H. J. and Murphy, F. H. forthcoming, "Views of mathematical programming models and their instances," Decision Support Systems.
Jones, C. V. 1990, "An introduction to graph-based modeling systems, Part 1: Overview," ORSA Journal on Computing, Vol. 2, No. 2, pp. 136-151.
Jones, C. V. 1991, "An introduction to graph-based modeling systems, Part 2: Graph-grammars and the implementation," ORSA Journal on Computing, Vol. 2, No. 2, pp. 136-151.
Schrage, L. 1981, User's Manual for LINDO, Scientific Press, Palo Alto, California.
Welch, Jr., J. S. 1987, "PAM—A practitioners' approach to modeling," Management Science, Vol. 33, No. 5, pp. 610-625.
Williams, H. P. 1978, Model Building in Mathematical Programming, Wiley-Interscience, New York.