
Paired Comparison Analysis

In paired comparison experiments, the worth or merit of a unit is measured through comparisons against other units.


  1. Paired Comparison Analysis: In paired comparison experiments, the worth or merit of a unit is measured through comparisons against other units.
  2. We rank movies. We rank bands. We rank everything.
  3. Plurality Choice is Simple: Picking 1 of 2 candidates is what we do in major elections. But what about picking between more than 2? Which puppy do you want? Which car should we buy? Which girlfriend should I take to the prom? (nice to be 17, huh?) Which vendor best provides the capabilities in a PPM system?
  4. Too Many Choices Make for Bad Decisions: Simple importance ranking cannot be the source for making the right choice. Weighted ranking hides the underlying importance between individual selection elements. Paired Comparison Analysis is an approach that deals with multi-selection decision making.
  5. History of Paired Comparison: Introduced nearly 150 years ago, the method of paired comparison is perhaps the most straightforward way of presenting items for comparative judgment. With the method, items are presented in pairs to one or more judges: for each pair, the judge selects the item that best satisfies the specified judgment criterion.
  6. Rank Ordering in College Football: Before 2006 the "best teams" were chosen using a formula by the BCS. In 2001 Nebraska was soundly defeated by CU in the final regular-season game, but not in a conference championship game. The BCS still ranked Nebraska above CU, and Nebraska went on to be soundly defeated by Miami. "…division 1A college football uses what may be the most complicated monstrosity on the planet." – Alissa Bauer, 2004
  7. College Football Rank Ordering circa 2003: The BCS system raised or lowered the weight put on votes cast by coaches, sports writers, and even the computers. All these individual ranks are assembled into the BCS ranking. This is called Borda Ranking, from Jean-Charles de Borda, 1781. Jean-Charles de Borda (1733 – 1799) was born in the city of Dax; in 1756 he wrote Mémoire sur le mouvement des projectiles, a product of his work as a military engineer, and for that he was elected to the French Academy of Sciences in 1764.
  8. The Borda Method: Give a certain number of points for each 1st-place ranking, 2nd-place ranking, etc. If there are n alternatives to be ranked, a 1st-place vote is worth n – 1 points, a 2nd-place vote is worth n – 2 points, and so on, all the way to an nth-place vote being worth 0 points.
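A minimal Python sketch of the Borda scoring just described; the candidate names and ballots are invented purely for illustration:

```python
# Minimal sketch of the Borda count: each ballot lists candidates from most
# to least preferred; with n candidates, a 1st-place vote is worth n - 1
# points, a 2nd-place vote n - 2 points, and so on down to 0.
from collections import defaultdict

def borda_count(ballots):
    """Return total Borda points per candidate."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for place, candidate in enumerate(ballot):
            scores[candidate] += (n - 1) - place
    return dict(scores)

# Illustrative ballots (hypothetical voters ranking three alternatives).
ballots = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "C", "B"],
]
print(borda_count(ballots))  # e.g. {'A': 5, 'B': 3, 'C': 1}
```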
  9. The Borda Count: The approach works sensibly in many situations, when all the voters agree on the rankings for all the alternatives. This "sum of ranks" works for the BCS, for elections in the US, and for the Academy Awards. Although there are bitter squabbles over the results at times, there is rarely commentary that there is a fundamental flaw in the Borda methodology.
  10. The Fundamental Flaw Is the Presence of Irrelevant Alternatives: An irrelevant alternative affects the outcome. This is what happened when Milli Vanilli was eliminated from the Grammys for lip-syncing: the next-ranked group should have won, but they didn't, because in Borda, removing an alternative can invert the ordering of the remaining top three.
  11. Irrelevant Alternatives: A proper selection principle holds that the ranking of two alternatives should not be influenced by the placement of other alternatives. Suppose your friend and his family have received a gift certificate for a new car, a Toyota or a Honda. Focused on those two alternatives, the desirability of all other makes should be irrelevant. You're discussing this in the dealer's parking lot and the friend says out loud, "We prefer the Toyota to the Honda. We will go in to pick that one." You're sure all the work is done, so you go home. Later, the friend drives up in a shiny new Honda. You say, "What happened? You preferred the Toyota!" The friend says, "While we were in line, my son heard that Cadillacs have great durability. So we changed from the Toyota to the Honda!" Could the outcome be more ridiculous? It cannot possibly make sense to have the choice between a Honda and a Toyota depend on the road performance of a Cadillac. And yet, that can happen with the Borda count.
  12. The Borda Winner Is the Loser: The Borda method could produce a winner that would lose in a head-to-head comparison against other alternative choices. When Borda proposed his election system, the Marquis de Condorcet pointed out the fatal flaw: a Borda selection can be defeated in a head-to-head contest with other selections in the same population of choices. This head-to-head selection process is the pairwise selection we're after. The Marquis de Condorcet (1743 – 1794) was a French philosopher, mathematician, and early political scientist who devised the concept of a Condorcet method.
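As a worked illustration of Condorcet's objection (the voter counts and candidate names here are hypothetical, not from the slides): with 3 voters ranking A > B > C and 2 voters ranking B > C > A, the Borda count picks B, yet A beats B head-to-head 3 votes to 2.

```python
# Hypothetical illustration of Condorcet's objection: the Borda winner (B)
# loses a head-to-head contest against A.
from itertools import combinations

ballots = 3 * [["A", "B", "C"]] + 2 * [["B", "C", "A"]]

# Borda points: with 3 alternatives, 1st = 2 pts, 2nd = 1 pt, 3rd = 0 pts.
borda = {c: 0 for c in "ABC"}
for ballot in ballots:
    for place, c in enumerate(ballot):
        borda[c] += len(ballot) - 1 - place
print(borda)  # {'A': 6, 'B': 7, 'C': 2} -> B is the Borda winner

# Head-to-head (pairwise) counts tell a different story.
for x, y in combinations("ABC", 2):
    x_wins = sum(ballot.index(x) < ballot.index(y) for ballot in ballots)
    print(f"{x} beats {y} on {x_wins} of {len(ballots)} ballots")
# A beats B on 3 of 5 ballots, so the Borda winner B loses head-to-head.
```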
  13. The Guts of the Flaw: Using integer rankings is the culprit. If choice x is better than choice y and choice y is better than z, the ranking is 1, 2, 3 for x, y, z. The magnitude of the preference is ignored, and this approach never makes the direct comparison between x and z. The same problem occurs in single- and double-elimination tournaments, and of course in our beloved BCS rankings of Big 12 football.
  14. Some Math Behind This Approach: The probability that object j is judged to have more of an attribute than object i is \(\Pr\{X_{ji} = 1\} = \dfrac{e^{\delta_j - \delta_i}}{1 + e^{\delta_j - \delta_i}}\), where \(\delta_i\) is the scale location for object i.
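A small Python sketch of this comparison probability, which is just a logistic function of the difference in scale locations; the delta values below are made up for illustration:

```python
# Sketch of the comparison probability on this slide:
#   Pr{X_ji = 1} = exp(d_j - d_i) / (1 + exp(d_j - d_i))
import math

def prob_j_beats_i(delta_j, delta_i):
    """Probability that object j is judged to have more of the attribute than object i."""
    return math.exp(delta_j - delta_i) / (1.0 + math.exp(delta_j - delta_i))

# Made-up scale locations for illustration.
print(prob_j_beats_i(1.2, 0.4))  # ~0.69: j is noticeably preferred
print(prob_j_beats_i(0.4, 0.4))  # 0.5: equal locations, so a coin flip
```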
  15. Getting out of the dilemma of integer ranking, Borda methods, and all the attendant problems means using a round-robin tournament, where every team plays every other team. This is Paired Comparison Analysis.
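A minimal sketch of the round-robin idea in Python: with n items, every item is paired with every other item exactly once, giving n(n − 1)/2 comparisons. The capability names are invented for illustration.

```python
# Round-robin pairing: every item meets every other item exactly once,
# giving n * (n - 1) / 2 comparisons for n items.
from itertools import combinations

capabilities = ["Reporting", "Scheduling", "Resource management", "Integration"]
pairs = list(combinations(capabilities, 2))

print(len(pairs))  # 6 comparisons for 4 items
for a, b in pairs:
    print(f"{a} vs {b}")
```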
  16. It's a Two-Step Approach to Vendor Selection: Use Paired Comparison Analysis to rank the capabilities of the desired system, then use a weighted comparison to build the Pareto chart of the resulting rankings.
  17. First, Rank the Capabilities: Rank each of the desired capabilities using this tool. The result may be surprising, but it represents how the capabilities are in fact ranked in order of importance using the "round robin" approach.
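The slide's ranking tool itself is not shown here, so the following is only a hedged sketch of how such a round-robin tally might work, building on the pairing above; the capabilities and the stand-in judgment rule are invented for illustration.

```python
# Sketch of a paired-comparison tally: present every pair, record which item
# the judge prefers in each pair, then rank items by number of pairwise wins.
from collections import Counter
from itertools import combinations

capabilities = ["Reporting", "Scheduling", "Resource management", "Integration"]

def rank_by_paired_comparison(items, prefer):
    """prefer(a, b) returns whichever of a, b the judge prefers."""
    wins = Counter({item: 0 for item in items})
    for a, b in combinations(items, 2):
        wins[prefer(a, b)] += 1
    return wins.most_common()  # items sorted by number of pairwise wins

# Invented judgment rule standing in for a human judge.
priority = {"Integration": 3, "Reporting": 2, "Scheduling": 1, "Resource management": 0}
ranking = rank_by_paired_comparison(capabilities, lambda a, b: max(a, b, key=priority.get))
print(ranking)  # e.g. [('Integration', 3), ('Reporting', 2), ('Scheduling', 1), ('Resource management', 0)]
```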
  18. Then Construct a Prioritization Matrix: This approach "sorts" items into their order of importance using paired comparison to prioritize complex or unclear issues where there are multiple criteria for deciding importance, when there is data available to help score criteria and issues, and to gain agreement on priorities and key issues.
  19. Use the Ranking Tool to Compare the Vendors: This tool provides the rank-order selection for each "design element" of capability compared for each vendor. It allows the display of the ranking and ordering of both the capabilities and the vendors.
  20. Step-by-Step Instructions: Build a list of items to be prioritized. Identify the list of criteria used to judge how well each item meets the criteria. Allocate a weighting to each criterion. Select the actual criteria to be used for prioritization. Score each item against the criteria.
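A hedged Python sketch of the weighted comparison step, combining capability weights (as might come from the paired-comparison ranking) with per-vendor scores to rank the vendors; every weight, vendor, and score below is invented for illustration.

```python
# Sketch of the weighted comparison: multiply each vendor's score on a
# capability by that capability's weight, then sum to rank the vendors.
capability_weights = {  # e.g. derived from the paired-comparison ranking
    "Integration": 0.40,
    "Reporting": 0.30,
    "Scheduling": 0.20,
    "Resource management": 0.10,
}

vendor_scores = {  # 1-5 scores of how well each vendor meets each capability
    "Vendor A": {"Integration": 4, "Reporting": 3, "Scheduling": 5, "Resource management": 2},
    "Vendor B": {"Integration": 5, "Reporting": 2, "Scheduling": 3, "Resource management": 4},
    "Vendor C": {"Integration": 2, "Reporting": 5, "Scheduling": 4, "Resource management": 3},
}

weighted = {
    vendor: sum(capability_weights[c] * s for c, s in scores.items())
    for vendor, scores in vendor_scores.items()
}
for vendor, total in sorted(weighted.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{vendor}: {total:.2f}")  # Vendor A: 3.70, Vendor B: 3.60, Vendor C: 3.40
```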
