Decision making techniques (MBA operations management)

Course summary by Kim Stephens, class of 2006 [email_address]

1. EBS Decision Making Techniques
2. 1. Decision Analysis <ul><li>There are two common elements to every decision problem: </li></ul><ul><ul><li>A choice has to be made. </li></ul></ul><ul><ul><li>There must be a criterion by which to evaluate the outcome of the choice. </li></ul></ul><ul><li>A pattern of decision/ chance event/ decision/ chance event… is the single most important characteristic of problems that are potentially solvable by the technique of decision analysis. </li></ul>
3. Problem Characteristics <ul><li>The decision maker seeks a technique that matches the problem’s special characteristics. The two principal characteristics of a problem for which decision analysis will be an effective technique are: </li></ul><ul><ul><li>The problem comprises a series of sequential decisions spread over time. These decisions are interwoven with chance elements, with decisions taken in stages as new information emerges. </li></ul></ul><ul><ul><li>The decisions within the sequence are not quantitatively complex. </li></ul></ul>
4. Decision Tree <ul><li>The logical sequence of decisions and chance events is represented diagrammatically in a decision tree, with decision nodes represented by a square (□) and chance nodes by a circle (○), connected by branches representing the logical sequence between nodes. </li></ul><ul><li>At the end of each branch is a payoff. Although the payoff can be non-monetary, Expected Monetary Value (EMV) is the usual criterion for decision making in decision analysis. </li></ul>
5. Carrying Out Decision Analysis <ul><li>Stage 1: Draw the Tree </li></ul><ul><li>Stage 2: Insert Payoffs </li></ul><ul><ul><li>This might include a “gate”, or the cost of proceeding down a particular path: a gate symbol straddles the branch. </li></ul></ul><ul><li>Stage 3: Insert Probabilities </li></ul>
6. Carrying Out Decision Analysis, 2 <ul><li>Stage 4: Roll-back Procedure </li></ul><ul><ul><li>Starts with the payoffs and works backwards to the commencement of the tree. </li></ul></ul><ul><ul><li>Abandon branches with inferior payoffs. The elimination of a branch is denoted by ‖. </li></ul></ul><ul><ul><li>All but the most favorable branch are eliminated. This is the Optimal Path . </li></ul></ul><ul><li>Stage 5: Summarize the Optimal Path and Draw up the Risk Profile . </li></ul>
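The roll-back procedure above can be sketched in a few lines of Python. The tree shape, payoffs and probabilities below are hypothetical, purely for illustration:

```python
# Minimal decision-tree roll-back sketch (hypothetical payoffs/probabilities).
# A chance node is ("chance", [(prob, subtree), ...]); a decision node is
# ("decision", [(label, subtree), ...]); a leaf is a numeric payoff.

def emv(node):
    if isinstance(node, (int, float)):
        return node
    kind, branches = node
    if kind == "chance":
        # EMV of a chance node: probability-weighted average of the branches
        return sum(p * emv(sub) for p, sub in branches)
    # Decision node: keep only the branch with the best EMV (prune the rest)
    return max(emv(sub) for _, sub in branches)

# Launch (60% chance of 100,000 profit, 40% chance of a 30,000 loss)
# versus doing nothing (payoff 0).
tree = ("decision", [
    ("launch", ("chance", [(0.6, 100_000), (0.4, -30_000)])),
    ("do nothing", 0),
])
print(emv(tree))  # EMV = 0.6*100000 + 0.4*(-30000) = 48000
```

Working backwards from the payoffs, the chance node evaluates to 48,000, so "launch" beats "do nothing" and is the optimal path.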
7. Risk Profile <ul><li>… is a table summarizing the possible outcomes on the optimal path, along with their payoffs and probabilities. </li></ul><ul><ul><li>This is not the same thing as listing all outcomes, since the act of choosing the optimal path will have eliminated some possibilities. </li></ul></ul><ul><ul><li>A decision maker may favor a course with a lower EMV but a more attractive risk profile. </li></ul></ul><ul><ul><li>This will reveal whether the EMV incorporates a high-impact, low-probability (HILP) event. </li></ul></ul><ul><ul><li>Where differences are small, a report might contain multiple profiles. </li></ul></ul>
8. Value of Additional Information <ul><li>The Expected Value of Sample Information (EVSI) relates to some specific extra information. </li></ul><ul><ul><li>It is measured as the amount by which the EMV of the decision will increase if this information is available at zero cost . </li></ul></ul><ul><ul><li>This is a guide to the maximum amount that should be paid for a particular piece of information. </li></ul></ul>
9. Value of Additional Information, 2 <ul><li>The Expected Value of Perfect Information (EVPI) relates to any piece of information. </li></ul><ul><ul><li>It is measured as the amount by which the EMV of the decision would increase if perfect information were available. </li></ul></ul><ul><ul><li>Perfect information means that the decision maker is told what the outcome will be at every chance event node. </li></ul></ul><ul><ul><li>Perfect information does not guarantee a particular outcome: it only tells what the outcome will be. </li></ul></ul>
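EVPI is easy to compute from a payoff table. A minimal Python sketch, with a hypothetical two-action, two-state problem:

```python
# EVPI sketch: payoff table for two options under two market states
# (hypothetical numbers). EVPI = EMV with perfect information - best EMV.
payoffs = {"launch": {"good": 100, "bad": -30}, "hold": {"good": 0, "bad": 0}}
probs = {"good": 0.6, "bad": 0.4}

# Best EMV without any extra information: pick the single best action now.
best_emv = max(sum(probs[s] * payoffs[a][s] for s in probs) for a in payoffs)

# With perfect information we pick the best action for each revealed state.
emv_perfect = sum(probs[s] * max(payoffs[a][s] for a in payoffs) for s in probs)

evpi = emv_perfect - best_emv
print(best_emv, emv_perfect, evpi)  # 48.0 60.0 12.0
```

Here no piece of information, however good, is worth more than 12 to this decision maker.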
10. Validity of the EMV Criterion <ul><li>It frequently makes sense to do as well as possible ‘on average’: the foundation of EMV. However, when there are very large payoffs or losses (even with low probability), another criterion may be selected by the decision maker. </li></ul><ul><ul><li>Maximin : At each decision node, the option is selected whose worst possible outcome is better than the worst possible outcomes of the other options. </li></ul></ul><ul><ul><li>Expected Opportunity Loss (EOL) incorporates the value of the excluded alternative, calculated by multiplying lost profit payoffs by the corresponding probabilities. </li></ul></ul>
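Both alternative criteria can be computed directly from a payoff table. A sketch with hypothetical numbers:

```python
# Maximin and Expected Opportunity Loss for a hypothetical payoff table.
payoffs = {"launch": {"good": 100, "bad": -30}, "hold": {"good": 20, "bad": 5}}
probs = {"good": 0.6, "bad": 0.4}

# Maximin: choose the action whose worst-case payoff is largest.
maximin = max(payoffs, key=lambda a: min(payoffs[a].values()))

# EOL: regret in a state = best payoff in that state minus the action's payoff;
# weight each regret by the state's probability.
best_in_state = {s: max(payoffs[a][s] for a in payoffs) for s in probs}
eol = {a: sum(probs[s] * (best_in_state[s] - payoffs[a][s]) for s in probs)
       for a in payoffs}
print(maximin, eol)
```

Here maximin favors "hold" (worst case 5 versus -30), while minimizing EOL favors "launch", illustrating how the criteria can disagree.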
11. Proceeding to Decision <ul><li>Recommendations should be tested against other criteria: </li></ul><ul><ul><li>Is the decision tree an accurate model of the decision faced? </li></ul></ul><ul><ul><li>How much faith can be put in the probability estimates? </li></ul></ul><ul><ul><li>Is EMV an appropriate criterion for the situation? </li></ul></ul><ul><ul><li>How robust (sensitive to small variations in assumptions) is the optimal decision? </li></ul></ul>
12. 2. Advanced Decision Analysis <ul><li>The most difficult aspect of decision analysis is the derivation of the probabilities of chance events. </li></ul><ul><li>The fact that an optimal decision is highly dependent upon one probability is valuable information: </li></ul><ul><ul><li>It suggests that the decision problem has no clear-cut answer. </li></ul></ul><ul><ul><li>It also indicates the area to which attention should be given and further work done. </li></ul></ul>
13. Continuous Probability Distributions <ul><li>Rather than comprising discrete values, a chance node outcome may be a continuous distribution, depicted in a decision tree by a fan. </li></ul><ul><li>Probability in a distribution is notionally determined by measuring areas under the curve. </li></ul><ul><ul><li>Alternatively, one can produce a cumulative distribution – one that indicates on the vertical scale the probability of a variable taking a value more than, or less than, the value indicated on the horizontal scale. </li></ul></ul><ul><ul><li>EMV is calculated upon a selected number of representative values, often bracket medians. </li></ul></ul>
14. Bracket Medians <ul><li>Decide how many representative values are required. </li></ul><ul><ul><li>Rule of thumb: at least five but not more than ten. </li></ul></ul><ul><ul><li>If calculations are computerized, a large number of values can be used for intricate or sensitive distributions. </li></ul></ul><ul><li>Divide the whole range of values into equi-probable ranges (usual although not strictly necessary). </li></ul>
15. Bracket Medians, 2 <ul><li>Select one representative value from each of the ranges by taking the median of the range. This value is the bracket median . </li></ul><ul><li>Let the bracket median stand for all of the values in the range and assign to it the probability of all the values in the range. </li></ul><ul><ul><li>In a normal distribution, the mean is the bracket median for the entire range. </li></ul></ul>
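The bracket-median procedure can be sketched with the standard library's `NormalDist` (Python 3.8+). The Normal(50, 10) demand distribution below is a hypothetical example:

```python
from statistics import NormalDist

# Bracket-median sketch: split a hypothetical Normal(mean=50, sd=10) demand
# distribution into five equi-probable ranges and take the median of each.
dist = NormalDist(mu=50, sigma=10)
n = 5

# Range i covers cumulative probabilities [i/n, (i+1)/n]; its median sits at
# the midpoint (2i+1)/(2n) of that interval, found via the inverse CDF.
medians = [dist.inv_cdf((2 * i + 1) / (2 * n)) for i in range(n)]

# Each bracket median stands for its whole range, with probability 1/n = 0.2.
print([round(m, 1) for m in medians])
```

The middle bracket median equals the mean (50), consistent with the note above that in a normal distribution the mean is the bracket median for the entire range.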
16. Assessment of Subjective Probability <ul><li>The inability to objectively measure the probability does not relegate it to guesswork. </li></ul><ul><li>An estimate might be produced by interviewing experienced personnel to: </li></ul><ul><ul><li>Determine the endpoints, shape and relative probability of outcomes. </li></ul></ul><ul><ul><li>Subdivide ranges so far as it is useful to do so. </li></ul></ul><ul><ul><li>Test for consistency. </li></ul></ul>
17. Revising Probability <ul><li>Bayes’ theorem is used whenever probabilities are revised in light of some new information (e.g., a sample). </li></ul><ul><ul><li>It combines experience ( prior probabilities ) with sample information (numbers) to derive posterior probabilities . </li></ul></ul><ul><ul><li>P(A and B) = P(A) x P(B|A) </li></ul></ul><ul><ul><li>P(A and B) = P(B) x P(A|B) </li></ul></ul><ul><li>The theorem could apply to payoffs as well as to probabilities. </li></ul>
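A Bayesian revision is just a few arithmetic steps. A sketch using the P(V)/P(T|V) notation of the table on the next slide, with hypothetical prior and accuracy figures:

```python
# Bayes' theorem sketch: revising P(V) after observing a test result T
# (hypothetical numbers). Prior: P(V) = 0.3. 'Sample accuracy':
# P(T|V) = 0.8, P(T|not V) = 0.1.
p_v = 0.3
p_t_given_v = 0.8
p_t_given_not_v = 0.1

# Total probability of seeing T at all.
p_t = p_v * p_t_given_v + (1 - p_v) * p_t_given_not_v

# Posterior: P(V|T) = P(V) * P(T|V) / P(T).
posterior = p_v * p_t_given_v / p_t
print(round(p_t, 3), round(posterior, 3))  # 0.31 0.774
```

The observation raises P(V) from the 0.3 prior to a posterior of about 0.77.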
18. A Diagrammatic Approach <ul><li>A Venn diagram – in this case, a square – can be used to represent prior and posterior probabilities: </li></ul><ul><ul><li>Divide the square vertically to represent prior probabilities. Thus, rectangles are created whose areas are proportional to their prior probabilities. </li></ul></ul><ul><ul><li>Divide the rectangles horizontally to represent the ‘sample accuracy’ probabilities. These smaller areas are proportional to the conditional probabilities. </li></ul></ul><ul><ul><li>Areas inside the square can be combined or divided to represent the posterior probabilities. A conditional probability is given not by the area representing the event, but by this area as a proportion of the area that represents the condition. </li></ul></ul>
19. Bayesian Revision <ul><li>Results are conventionally summarized in a table, showing how the calculations are built up. </li></ul><ul><li>The posterior probability is calculated by dividing the term P(V) x P(T|V) in the fifth column by P(T). </li></ul>
20. Utility <ul><li>The EMV criterion can mask large payoffs by averaging them out. A utility function represents monetary payoffs by utilities and allows the EMV approach to be used universally. </li></ul><ul><ul><li>A utility is a measurement that reflects the subjective value of the monetary outcome to the decision maker. </li></ul></ul><ul><ul><li>These are not necessarily scaled proportionately. If a $2 million loss meant bankruptcy where a $1 million loss meant difficulty, the “disutility” of the former may be more than twice that of the latter. </li></ul></ul><ul><ul><li>The roll-back then deals with utility rather than EMV. </li></ul></ul>
21. 3. Linear Programming <ul><li>Linear programming (“LP”) is a technique for solving certain types of decision problems. It is applied to those that require the optimization of some criterion (e.g., maximizing profit or minimizing cost), but where the actions that can be taken are restricted (i.e., constrained). </li></ul><ul><li>Its formulation requires three factors: decision variables, a linear objective function and linear constraints. </li></ul><ul><ul><li>“Linear” means that there are no logarithmic or exponential functions. </li></ul></ul>
22. Application <ul><li>The Simplex method (developed by Dantzig), or variations of it, is used to solve even problems with thousands of decision variables and thousands of constraints. </li></ul><ul><li>Solutions are typically the product of computer applications, although simple problems might be resolved algebraically (simultaneous equations) or graphically. </li></ul>
23. The Solution <ul><li>The optimal point (if there is one) will be one of the “corners” of the feasible region: the northeast (maximization) or southwest (minimization), unless: </li></ul><ul><ul><li>The objective line is parallel to a constraint, in which case the two corner points and all of the points on the line between them are optimal. </li></ul></ul><ul><ul><li>The problem is infeasible (there are no solutions that satisfy all constraints). </li></ul></ul><ul><ul><li>The solution is unbounded (usually a formulation error) and there is an infinite number of solutions. </li></ul></ul>
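For a two-variable LP, the corner-point idea can be implemented directly: intersect constraint lines pairwise, keep the feasible intersections, and evaluate the objective at each. The objective and constraints below are a hypothetical example, not one from the course:

```python
from itertools import combinations

# Corner-point sketch for a hypothetical 2-variable LP:
#   maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# Each constraint is (a, b, c) meaning a*x + b*y <= c; the non-negativity
# constraints are included so every corner is an intersection of two lines.
cons = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the boundary lines a*x + b*y = c, via Cramer's rule."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel lines: no corner here
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

# Keep only intersections that satisfy every constraint (the feasible corners).
feasible = [p for p in (intersect(c1, c2) for c1, c2 in combinations(cons, 2))
            if p and all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)]

best = max(feasible, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])  # optimum at the corner (4, 0), value 12
```

Enumerating corners like this only works for tiny problems; the Simplex method mentioned above is the practical route for anything larger.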
24. Redundancy and Slack <ul><li>One way to save time is to scan the formulation for redundancy in the constraints. Redundancy occurs when one constraint is always weaker than another. The redundant constraint can be omitted. </li></ul><ul><li>In the optimal solution: </li></ul><ul><ul><li>A constraint is tight if all the resource to which the constraint refers is used up. A tight constraint has zero slack. </li></ul></ul><ul><ul><li>A slack constraint will have an excess, usually indicated in the form ‘available minus used’ (which, in a minimization problem, may be a negative number). </li></ul></ul><ul><ul><li>The slack is reported for each constraint. </li></ul></ul>
25. 4. Extending Linear Programming <ul><li>Linear programming can be adapted to solve problems for which the assumptions of LP, such as linearity, are not met. </li></ul><ul><li>The technique can also generate other information that can be used to perform sensitivity analysis. </li></ul><ul><ul><li>It can solve problems in which some or all of the variables can take only whole number values. </li></ul></ul><ul><ul><li>It can be applied to problems where there is uncertainty, as where decision variables have a statistical distribution. </li></ul></ul>
26. Dual Values <ul><li>Each constraint has a dual value that is a measure of the increase in the objective function if one extra unit of the constrained resource were available, everything else being unchanged. </li></ul><ul><ul><li>Dual values are sometimes called shadow prices . </li></ul></ul><ul><ul><li>They refer only to constraints, not to variables. </li></ul></ul><ul><li>These measure the true worth of the resources to the decision maker and there is no reason why they should be the same as the price paid for the resource or its market value. </li></ul>
27. Dual Values, 2 <ul><li>When the slack is non-zero, the dual value is zero. </li></ul><ul><ul><li>If this were not so, then more of the resource could have been used, with a corresponding increase in the objective function. </li></ul></ul><ul><li>The dual value also works in the other direction: indicating a reduction in the objective function for a one-unit reduction in the constrained resource. </li></ul>
28. Reduced Costs <ul><li>Dual values can also be found for the non-negativity constraints, but they are then called “ reduced costs ”. </li></ul><ul><ul><li>If the optimal value of a variable is not zero, then its non-negativity constraint is slack, and the dual value (reduced cost) is zero. </li></ul></ul><ul><ul><li>If the optimal value is zero, then the constraint is tight and will have a non-zero reduced cost (a loss). </li></ul></ul><ul><li>Dual values are marginal values and may not hold over large ranges. Most computer packages give the range for which the dual value holds. This is known as the right-hand side range . </li></ul>
29. Coefficient Ranges <ul><li>A coefficient range shows by how much an objective function coefficient can change before the optimal values of the decision variables change. </li></ul><ul><ul><li>Changing the coefficient value is the same as varying the slope of the objective line. If coefficients change by small amounts, the slope of the line will not be sufficiently different to move away from the optimal point. </li></ul></ul><ul><ul><li>The same small changes will, however, change the value of the optimal objective function. </li></ul></ul><ul><ul><li>These ranges apply to the variation of one coefficient at a time, the others being kept at their original values. </li></ul></ul>
30. Transportation Problem <ul><li>Some LP problems have special characteristics that allow them to be solved with a simpler algorithm. </li></ul><ul><li>The classic transportation problem is an allocation of production to destinations with an objective of minimizing costs. </li></ul><ul><ul><li>The special structure is that all of the variables in all of the constraints have the coefficient ‘0’ or ‘1’. (This is what the book says, but I don’t understand what it means.) </li></ul></ul><ul><li>An assignment problem is like the transportation problem, with the additional feature that the coefficients are all ‘1’. (Likewise.) </li></ul>
31. Other Extensions <ul><li>LP assumes that the decision variables are continuous, that is, they can take on any values subject to the constraints, including decimals and fractions. When the solution requires integer values (or blocks of integer values), a more complex algorithm is required. </li></ul><ul><li>Quadratic programming can contend with objective functions that have squared terms, as may arise with variable costs, economies of scale or quantity discounts. </li></ul><ul><li>Goal programming – an example of a method with multiple objectives – will minimize the distance between feasibility and an objective. </li></ul>
32. Handling Uncertainty <ul><li>The coefficients and constants used in problem formulation are fixed, even if they have been derived from estimates with various degrees of certainty (i.e., the model is deterministic ). </li></ul><ul><ul><li>Parametric programming is a systematic form of sensitivity analysis. A coefficient is varied continuously over a range and the effect on the optimal solution displayed. </li></ul></ul><ul><ul><li>Stochastic programming deals specifically with probability information. </li></ul></ul><ul><ul><li>Chance-constrained programming deals with constraints that do not have to be met all of the time, but only a certain percentage of the time. </li></ul></ul>
33. 5. Simulation <ul><li>Simulation means imitating the operation of a system. </li></ul><ul><li>Although a model may take (scaled-down) physical form, it more often comprises a set of mathematical equations. </li></ul><ul><li>The purpose of a simulation is to test the effect of different decisions and different assumptions. </li></ul><ul><ul><li>These can be tested quickly and without the expense or danger of carrying out the decision in reality. </li></ul></ul>
34. Benefits and Shortcomings <ul><li>Benefits </li></ul><ul><li>Simulation is capable of many applications where the requirements of optimization techniques cannot be met, e.g., linearity of variable inputs. </li></ul><ul><li>Management has an intuitive comfort with simulation that other methodologies may not enjoy. </li></ul><ul><li>Shortcomings </li></ul><ul><li>Significant time and expense are required to develop and then use the model. </li></ul><ul><li>The model may determine the “best” of the formulations tested, but it cannot make decisions or test alternatives other than those specified by the operator. </li></ul><ul><li>Data may not be readily available. </li></ul><ul><li>One must establish the validity of the model. </li></ul>
35. Types of Simulations <ul><li>Deterministic </li></ul><ul><li>… means that the inputs and assumptions are fixed and known: none is subject to probability distributions. </li></ul><ul><li>It is used because of the number and complexity of the relationships involved. </li></ul><ul><li>Corporate planning models are an example of this type. </li></ul><ul><li>Stochastic </li></ul><ul><li>… means that some of the inputs, assumptions and variables are subject to probability distributions. </li></ul><ul><li>The Monte Carlo technique describes a simulation run many times, with the input of particular variables defined by their probabilities in a distribution. The output will often form a distribution, thus defining a range and probability of outcomes. </li></ul>
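A tiny Monte Carlo run can be written with the standard library alone. The stocking problem, distributions and cost figures below are hypothetical, purely to show the technique:

```python
import random

# Monte Carlo sketch (hypothetical stocking problem): daily demand is
# Normal(100, 20); we stock 110 units, earn 5 per unit sold and pay 3 per
# unit left unsold. Run many trials and summarize the outcome distribution.
random.seed(42)  # fixed seed so the run is repeatable

def one_day(stock=110, margin=5, holding=3):
    demand = max(0, random.gauss(100, 20))  # draw demand from its distribution
    sold = min(demand, stock)
    return margin * sold - holding * (stock - sold)

results = sorted(one_day() for _ in range(10_000))
mean = sum(results) / len(results)
p5, p95 = results[500], results[9500]  # rough 5th/95th percentile outcomes
print(round(mean, 1), round(p5, 1), round(p95, 1))
```

The sorted results form exactly the output distribution described above: a range of outcomes with probabilities attached, rather than a single number.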
36. Flowchart <ul><li>A flowchart is an intermediate stage that helps transform a verbal description of a problem into a simulation. </li></ul>(Slide diagram: flowchart symbols for Action, Question, Start/End, Record, File, Flow of Events and Flow of Information.)
37. Controlling the Simulation <ul><li>Even if inputs are provided by random number generation, they could still (also by chance) be unrepresentative. </li></ul><ul><ul><li>To test (only) the effect of different policies, a set of simulations should use the same starting conditions and inputs for each. </li></ul></ul><ul><ul><li>A larger number of simulations would support an inference that an outcome was related to the differing policies rather than the chance selection of inputs. </li></ul></ul>
38. Interpreting the Output <ul><li>The dispersion of results is a proxy for their risk: the probability that an outcome may be higher or lower than outcomes for other policies. </li></ul><ul><li>Risk might also be regarded as the frequency of a result outside certain parameters; e.g., a stock-out may also have negative implications for customer service. </li></ul>
39. Risk Analysis <ul><li>When a stochastic simulation uses the Monte Carlo technique to model cash flows, it is called a risk analysis . This differs from other stochastic simulations only in terms of its application to financial problems. </li></ul><ul><li>A best practice is to plan a series of trial policies structured so that the range of options can be narrowed down until a nearly optimal one can be obtained. </li></ul>
40. 6. Network Planning Techniques <ul><li>The Critical Path Method (CPM) or Programme Evaluation and Review Technique (PERT) may benefit a project through: </li></ul><ul><ul><li>Coordination: getting resources deployed at the right time </li></ul></ul><ul><ul><li>Time management: exploiting the flexibility within the plan </li></ul></ul><ul><ul><li>Issue identification: discovering tasks or constraints that may otherwise be hidden until too late </li></ul></ul><ul><ul><li>Cost control: idle resources can be costly </li></ul></ul>
41. Technique Basics <ul><li>A project is broken down into all of its individual activities. </li></ul><ul><li>The predecessors of each activity are identified. </li></ul><ul><ul><li>Some must be completed before others can start; some can take place at the same time as others. </li></ul></ul><ul><ul><li>A common mistake is to confuse the customary sequence of activities with their logical sequence. </li></ul></ul><ul><li>The sequence of activities is represented graphically, where the lines are activities and circles (called nodes) mark the start and completion of the activities. </li></ul>
42. Technique Basics, 2 <ul><li>(cont.) </li></ul><ul><ul><li>The node at the beginning of a task should be the end node of all activities that are immediate predecessors. </li></ul></ul><ul><ul><li>The node at the end should be the beginning node of all activities that are immediate successors. </li></ul></ul><ul><ul><li>A dummy activity, represented by a dotted line, overcomes the requirement that no task can have the same beginning and end nodes, and avoids depicting a ‘false dependency’ of one task upon another. </li></ul></ul><ul><li>The critical tasks are studied with a view to deploying resources to speed completion or minimize the impacts of over-runs. </li></ul>
43. Analyzing the Network <ul><li>The network is analyzed to determine timings and schedules. There are five stages: </li></ul><ul><ul><li>Estimate activity durations </li></ul></ul><ul><ul><li>Forward pass </li></ul></ul><ul><ul><li>Backward pass </li></ul></ul><ul><ul><li>Calculate float </li></ul></ul><ul><ul><li>Determine the critical path </li></ul></ul>
44. Activity Duration <ul><li>The expected duration of an activity may be based on experience or on subjective estimation. </li></ul><ul><li>Where there is some uncertainty, a sensitivity analysis can be performed. Alternatively, one can obtain optimistic, most likely and pessimistic durations, then </li></ul><ul><ul><li>Expected duration = (Optimistic + (4 x Most Likely) + Pessimistic) / 6 </li></ul></ul><ul><ul><li>Variance (for activity) = ((Pessimistic – Optimistic) / 6)² </li></ul></ul><ul><ul><li>Variance (for project) = Σ (variance of activities on the critical path) </li></ul></ul><ul><ul><li>Standard deviation = √variance (With 95% confidence, expected duration = mean ± 2 std dev) </li></ul></ul>
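The three-point formulae above translate directly into code. The activity estimates below are hypothetical and are assumed to all lie on the critical path:

```python
# PERT three-point estimate sketch. Hypothetical critical-path activities,
# each given as (optimistic, most likely, pessimistic) durations in weeks.
activities = [(2, 4, 9), (3, 5, 10), (1, 2, 3)]

# Expected duration = (O + 4M + P) / 6; activity variance = ((P - O) / 6)^2.
expected = [(o + 4 * m + p) / 6 for o, m, p in activities]
variances = [((p - o) / 6) ** 2 for o, m, p in activities]

# Project mean and variance are the sums over the critical-path activities.
project_mean = sum(expected)
project_sd = sum(variances) ** 0.5

# Roughly 95% confidence interval: mean +/- 2 standard deviations.
low, high = project_mean - 2 * project_sd, project_mean + 2 * project_sd
print(round(project_mean, 2), round(project_sd, 2),
      (round(low, 1), round(high, 1)))
```

Summing variances (rather than standard deviations) is what the project-variance formula above requires; the square root is taken only at the end.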
45. Methodological Assumptions <ul><li>Activity times follow a beta distribution – a unimodal and non-symmetrical distribution with a shape commonly found in activities in large projects. </li></ul><ul><li>Activities are independent. This allows the application of the central limit theorem , which predicts that, as the number of variables in the sum increases, the distribution of the sum becomes increasingly normal. In fact, the activities are probably not independent: </li></ul><ul><ul><li>The cause of delay in one activity may also be a cause of delay in other activities. </li></ul></ul><ul><ul><li>The critical path can change. Reductions in durations may make other activities critical. </li></ul></ul>
46. Forward Pass <ul><li>A forward pass means going through the network activity by activity to calculate for each an ‘earliest start’ and ‘earliest finish’, using the following formulae: </li></ul><ul><ul><li>Earliest finish = Earliest start + Activity duration </li></ul></ul><ul><ul><li>Earliest start = Earliest finish (of previous activity) </li></ul></ul><ul><ul><li>If there are several previous activities, we use the latest of their earliest finish times. </li></ul></ul><ul><li>The notation appears, for example, as (6, 10, 16), indicating that the activity has an earliest start at week 6, a duration of 10 weeks, and an earliest finish at week 16. </li></ul>
47. Backward Pass <ul><li>A backward pass means going through the network activity by activity, starting with the last and finishing with the first, to calculate for each a ‘latest finish’ and ‘latest start’. </li></ul><ul><ul><li>Latest finish = Latest start (of following activity) </li></ul></ul><ul><ul><li>Latest start = Latest finish - Activity duration </li></ul></ul><ul><ul><li>If there are several following activities, we use the earliest of their latest start times. </li></ul></ul><ul><li>The notation for the backward pass is similar to that for the forward pass, indicating Latest start, Float (discussed next) and Latest finish. </li></ul>
48. Calculating Float <ul><li>An activity’s float is the amount of time that it can be delayed without delaying completion of the entire project. </li></ul><ul><ul><li>The difference between its earliest and latest start times </li></ul></ul><ul><ul><li>The difference between its earliest and latest finish times (the same thing) </li></ul></ul>
49. Determining the Critical Path <ul><li>Activities for which a delay in start or completion will delay the entire project are critical . Others are slack . The critical activities define a critical path through the network. </li></ul><ul><li>A critical activity has a float of zero. </li></ul><ul><li>If the total completion time of a project must be shortened, then one or more activities on the critical path have to be shortened. </li></ul>
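The forward pass, backward pass, float and critical path described on the last few slides can be sketched together. The five-activity project below is hypothetical:

```python
# CPM sketch: forward pass, backward pass, float and critical path for a
# hypothetical project given as {activity: (duration, [predecessors])}.
project = {
    "A": (3, []), "B": (5, ["A"]), "C": (2, ["A"]),
    "D": (4, ["B", "C"]), "E": (1, ["C"]),
}

# Forward pass: earliest start = latest 'earliest finish' among predecessors.
es, ef = {}, {}
for act, (dur, preds) in project.items():  # dict order is topological here
    es[act] = max((ef[p] for p in preds), default=0)
    ef[act] = es[act] + dur

# Backward pass: latest finish = earliest 'latest start' among successors.
finish = max(ef.values())
ls, lf = {}, {}
for act in reversed(list(project)):
    succs = [s for s, (_, preds) in project.items() if act in preds]
    lf[act] = min((ls[s] for s in succs), default=finish)
    ls[act] = lf[act] - project[act][0]

# Float = latest start - earliest start; critical activities have zero float.
floats = {a: ls[a] - es[a] for a in project}
critical = [a for a in project if floats[a] == 0]
print(finish, floats, critical)  # project takes 12; critical path A-B-D
```

Note the sketch relies on the activities being listed in a valid precedence order; a general implementation would topologically sort the network first.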
50. Time – Cost Trade-offs <ul><li>Time and cost are usually, but not always, interchangeable: the planner can generally reduce project completion time by spending more money. </li></ul><ul><li>The analysis relating cost to time (cost of shortening by one week, by two weeks, etc.) is known as crashing the network. This is based on a time-cost “curve” for each activity, indicating the cost and time for normal operations and the (crash) time and (crash) cost associated with the absolute minimum duration. </li></ul>
51. Time – Cost Trade-offs, 2 <ul><li>The ratio of (Cost increase / Time saved) is, in effect, the crashing cost per unit of time . </li></ul><ul><li>Only critical activities are crashed, since they are the only ones whose shortening can reduce overall project completion time. An activity is crashed until the greatest possible reduction has been made or until another parallel path becomes critical. </li></ul><ul><li>If there are multiple critical paths, activities in each must be crashed simultaneously, with the sum of their costs calculated per unit time. </li></ul><ul><li>The final result can be graphed in a time – cost trade-off curve. </li></ul>
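The per-unit crashing cost and the "cheapest first" rule can be sketched for a single critical path. The normal/crash times and costs below are hypothetical:

```python
# Crashing sketch: unit crash costs for activities on a single hypothetical
# critical path. The cheapest activity per week saved is crashed first.
critical_path = {  # activity: (normal_time, crash_time, normal_cost, crash_cost)
    "B": (5, 3, 100, 180),
    "D": (4, 3, 120, 200),
}

# Crashing cost per unit of time = (Cost increase) / (Time saved).
unit_cost = {a: (cc - nc) / (nt - ct)
             for a, (nt, ct, nc, cc) in critical_path.items()}

order = sorted(unit_cost, key=unit_cost.get)  # crash cheapest first
print(unit_cost, order)  # B costs 40/week, D costs 80/week -> crash B first
```

As the slide notes, in a real network each crash step must be rechecked: once a parallel path becomes critical, further savings require crashing activities on every critical path at once.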
52. Stages in Network Analysis <ul><li>List all activities in the project. </li></ul><ul><li>Identify the predecessors of each activity. </li></ul><ul><li>Draw the network diagram. </li></ul><ul><li>Estimate activity durations as average times or, if uncertainty is to be dealt with explicitly, as optimistic/ most likely/ pessimistic times. </li></ul><ul><li>Produce an activity schedule by making a forward and then a backward pass through the network. </li></ul><ul><li>Determine the critical path by calculating activity floats. </li></ul>
53. Stages in Network Analysis, 2 <ul><li>Measure uncertainty by calculating activity variances, the variance of the whole project and finally confidence limits for the duration of the project. </li></ul><ul><li>Draw cost/ time curves for each activity. </li></ul><ul><li>Crash the network and construct a cost/ time graph for the project. </li></ul>Activities-Predecessors-Draw-Duration-Passes-Path-Variance-Costs-Crashes = APDDPPVCC = Acting pretty draws dirty passes, pathing very costly crashes.
54. 7. Decisions & Info Technology <ul><li>A decision support system (DSS) is a computer system that produces information specially prepared to support particular decisions. </li></ul><ul><li>Six elements of IT: </li></ul><ul><ul><li>Computers </li></ul></ul><ul><ul><li>Software </li></ul></ul><ul><ul><li>Telecommunications </li></ul></ul><ul><ul><li>Workstations </li></ul></ul><ul><ul><li>Robotics </li></ul></ul><ul><ul><li>Smart Products </li></ul></ul>
55. History of IT in Business <ul><li>1960s : successful mainframe applications for accounting, operations (inventory, customer accounts) and technical operations. MIS was little more than a data dump. </li></ul><ul><li>1970s : recognition of the MIS failure, with various attempts made to convert data to information that managers at different levels could use. </li></ul><ul><li>1980s : introduction of the micro-computer puts computing power in the hands of the end user. </li></ul><ul><li>1990s and beyond : downloading for local analysis, internal linkages (internal resource and information sharing) and external linkages (internet). </li></ul>
56. Challenges for Computerized Management <ul><li>Controlling information: more is not necessarily better </li></ul><ul><li>Keeping information consistent </li></ul><ul><li>Developing analytical abilities </li></ul><ul><li>Avoiding behavioral conflicts (a supportive business culture?) </li></ul><ul><li>Aligning business and IT strategy </li></ul>
57. Benefits <ul><li>Communications : speed and penetration of internal and external communications; interfaces for data transfer </li></ul><ul><li>Efficiency : direct time and cost savings, turnaround </li></ul><ul><li>Functionality : systems capabilities, usability and quality </li></ul><ul><li>Management : reporting provides a form of supervision; gives personnel greater control of their resources </li></ul><ul><li>Strategy : greater market contact; support for company growth; competitive advantage </li></ul>
58. The Right Attitudes <ul><li>Commitment: adequate resources, especially management time, are put into the project </li></ul><ul><li>Realistic expectations </li></ul><ul><li>Motivation: the need for a champion or super-users </li></ul><ul><li>Persistence: detailing the plan, following through on the details and monitoring performance </li></ul><ul><li>Fear: balancing respect for IT against fear of it </li></ul>
59. The Right Approach <ul><li>Top-down : how can IT help to meet corporate objectives? </li></ul><ul><li>Planning based : defining objectives and technologies </li></ul><ul><li>Substantial user involvement </li></ul><ul><li>Flexibility : adapting to a new modus operandi ; contingency planning </li></ul><ul><li>Monitoring : costs and benefits </li></ul><ul><li>Commitment to continuing development : markets and technology are not static </li></ul>
60. Skills and Knowledge <ul><li>Planning : leads to improved communication, resource management and control </li></ul><ul><li>Project planning techniques : critical path </li></ul><ul><li>IT project stages : Feasibility study → Requirements → Specification → Programming → Testing → Running in parallel → Install → Review </li></ul><ul><li>Managing transition : setting objectives, defining tasks, recognizing the impacts of change (especially on personnel), communicating objectives and progress, dealing with unexpected developments </li></ul>
61. The Future <ul><li>Technical progress </li></ul><ul><ul><li>Expert systems, e.g., a medical diagnostic system </li></ul></ul><ul><ul><li>Artificial Intelligence (AI): can deal with less structured issues </li></ul></ul><ul><ul><li>Data mining: looking for patterns </li></ul></ul><ul><ul><li>Decision mapping: discovering the structure of a problem </li></ul></ul><ul><ul><li>Knowledge management, especially as a competitive advantage </li></ul></ul><ul><li>Greater internal and external integration (especially within the value chain) </li></ul><ul><li>IT will be a greater part of corporate strategy </li></ul>