Decision Support Systems 43 (2007) 618–644
www.elsevier.com/locate/dss

A survey of trust and reputation systems for online service provision

Audun Jøsang a,*, Roslan Ismail b, Colin Boyd b

a Distributed Systems Technology Centre, University of Queensland, Level 7, GP South, UQ Qld 4072, Australia
b Information Security Research Centre, Queensland University of Technology, Brisbane Qld 4001, Australia

Available online 5 July 2006

Abstract

Trust and reputation systems represent a significant trend in decision support for Internet mediated service provision. The basic idea is to let parties rate each other, for example after the completion of a transaction, and use the aggregated ratings about a given party to derive a trust or reputation score, which can assist other parties in deciding whether or not to transact with that party in the future. A natural side effect is that it also provides an incentive for good behaviour, and therefore tends to have a positive effect on market quality. Reputation systems can be called collaborative sanctioning systems to reflect their collaborative nature, and are related to collaborative filtering systems. Reputation systems are already being used in successful commercial online applications. There is also a rapidly growing literature around trust and reputation systems, but unfortunately this activity is not very coherent. The purpose of this article is to give an overview of existing and proposed systems that can be used to derive measures of trust and reputation for Internet transactions, to analyse the current trends and developments in this area, and to propose a research agenda for trust and reputation systems.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Trust; Reputation; Transitivity; Collaboration; E-commerce; Security; Decision

1. Introduction

Online service provision commonly takes place between parties who have never transacted with each other before, in an environment where the service consumer often has insufficient information about the service provider, and about the goods and services offered. This forces the consumer to accept the "risk of prior performance", i.e. to pay for services and goods before receiving them, which can leave him in a vulnerable position. The consumer generally has no opportunity to see and try products, i.e. to "squeeze the oranges", before he buys. The service provider, on the other hand, knows exactly what he gets, as long as he is paid in money. The inefficiencies resulting from this information asymmetry can be mitigated through trust and reputation. The idea is that even if the consumer cannot try the product or service in advance, he can be confident that it will be what he expects as long as he trusts the seller.

* Corresponding author. E-mail addresses: ajosang@dstc.edu.au (A. Jøsang), roslan@uniten.edu.my (R. Ismail), c.boyd@qut.edu.au (C. Boyd).
0167-9236/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.dss.2005.05.019
A trusted seller therefore has a significant advantage in case the product quality cannot be verified in advance.

This example shows that trust plays a crucial role in computer mediated transactions and processes. However, it is often hard to assess the trustworthiness of remote entities, because computerised communication media are increasingly removing us from familiar styles of interaction. Physical encounter and traditional forms of communication allow people to assess a much wider range of cues related to trustworthiness than is currently possible through computer mediated communication. The time and investment it takes to establish a traditional brick-and-mortar street presence provides some assurance that those who do it are serious players. This stands in sharp contrast to the relative simplicity and low cost of establishing a good looking Internet presence, which gives little evidence about the solidity of the organisation behind it. The difficulty of collecting evidence about unknown transaction partners makes it hard to distinguish between high and low quality service providers on the Internet. As a result, the topic of trust in open computer networks is receiving considerable attention in the academic community and the e-commerce industry.

There is a rapidly growing literature on the theory and applications of trust and reputation systems, and the main purpose of this document is to provide a survey of the developments in this area. An earlier brief survey of reputation systems has been published by Mui et al. [50]. Overviews of agent transaction systems are also relevant because they often relate to reputation systems [25,42,38]. There is considerable confusion around the terminology used to describe these systems, and we will try to describe proposals and developments using a consistent terminology in this study. There also seems to be a lack of coherence in this area, as indicated by the fact that authors often propose new systems from scratch, without trying to extend and enhance previous proposals.

Section 2 attempts to define the concepts of trust and reputation, and proposes an agenda for research into trust and reputation systems. Section 3 describes why trust and reputation systems should be regarded as security mechanisms. Section 4 describes the relationship between collaborative filtering systems and reputation systems, where the latter can also be defined in terms of collaborative sanctioning systems. In Section 5 we describe different trust classes, of which provision trust is a class of trust that refers to service provision. Section 6 describes four categories of reputation and trust semantics that can be used in trust and reputation systems, Section 7 describes centralised and distributed reputation system architectures, and Section 8 describes some reputation computation methods, i.e. how ratings are to be computed to derive reputation scores. Section 9 provides an overview of reputation systems in commercial and live applications. Section 10 describes the main problems in reputation systems, and provides an overview of literature that proposes solutions to these problems. The study is rounded off with a discussion in Section 11.

2. Background for trust and reputation systems

2.1. The notion of trust

Manifestations of trust are easy to recognise because we experience and rely on it every day, but at the same time trust is quite challenging to define because it manifests itself in many different forms. The literature on trust can also be quite confusing because the term is being used with a variety of meanings [46]. Two common definitions of trust, which we will call reliability trust and decision trust respectively, will be used in this study.

As the name suggests, reliability trust can be interpreted as the reliability of something or somebody, and the definition by Gambetta [22] provides an example of how this can be formulated:

Definition 1 (Reliability trust). Trust is the subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends.

This definition includes the concept of dependence on the trusted party, and the reliability (probability) of the trusted party, as seen by the trusting party.

However, trust can be more complex than Gambetta's definition indicates. For example, Falcone and Castelfranchi [19] recognise that having high (reliability) trust in a person in general is not necessarily enough to decide to enter into a situation of dependence on that person. In [19] they write: "For example
it is possible that the value of the damage per se (in case of failure) is too high to choose a given decision branch, and this independently either from the probability of the failure (even if it is very low) or from the possible payoff (even if it is very high). In other words, that danger might seem to the agent an intolerable risk." In order to capture this broad concept of trust, the following definition inspired by McKnight and Chervany [46] can be used.

Definition 2 (Decision trust). Trust is the extent to which one party is willing to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible.

The relative vagueness of this definition is useful because it makes it more general. It explicitly and implicitly includes the aspects of a broad notion of trust: dependence on the trusted entity or party; the reliability of the trusted entity or party; utility, in the sense that positive utility will result from a positive outcome and negative utility from a negative outcome; and finally a certain risk attitude, in the sense that the trusting party is willing to accept the situational risk resulting from the previous elements. Risk emerges, for example, when the value at stake in a transaction is high and the probability of failure is non-negligible (i.e. reliability < 1). Contextual aspects, such as law enforcement, insurance and other remedies in case something goes wrong, are only implicitly included in the definition of trust above, but should nevertheless be considered to be part of trust.

There are only a few computational trust models that explicitly take risk into account [23]. Studies that combine risk and trust include Manchala [44] and Jøsang and Lo Presti [32]. Manchala explicitly avoids expressing measures of trust directly, and instead develops a model around other elements such as transaction values and the transaction history of the trusted party. Jøsang and Lo Presti distinguish between reliability trust and decision trust, and develop a mathematical model for decision trust based on more finely grained primitives, such as agent reliability, utility values, and the risk attitude of the trusting agent.

The difficulty of capturing the notion of trust in formal models in a meaningful way has led some economists to reject it as a computational concept. The strongest expression of this view has been given by Williamson [67], who argues that the notion of trust should be avoided when modelling economic interactions, because it adds nothing new, and that well known notions such as reliability, utility and risk are adequate and sufficient for that purpose. According to Williamson, the only type of trust that can be meaningful for describing interactions is personal trust. He argues that personal trust applies to emotional and personal interactions such as love relationships, where mutual performance is not always monitored and where failures are forgiven rather than sanctioned. In that sense, traditional computational models would be inadequate, e.g. because of insufficient data and inadequate sanctioning, but also because it would be detrimental to the relationships if the involved parties were to take a computational approach. Non-computational models for trust can be meaningful for studying such relationships according to Williamson, but developing such models should be done within the domains of sociology and psychology, rather than economics.

2.2. Reputation and trust

The concept of reputation is closely linked to that of trustworthiness, but it is evident that there is a clear and important difference. For the purpose of this study, we will define reputation according to the Concise Oxford dictionary.

Definition 3 (Reputation). Reputation is what is generally said or believed about a person's or thing's character or standing.

This definition corresponds well with the view of social network researchers [20,45] that reputation is a quantity derived from the underlying social network which is globally visible to all members of the network. The difference between trust and reputation can be illustrated by the following perfectly normal and plausible statements:

(1) "I trust you because of your good reputation."
(2) "I trust you despite your bad reputation."

Assuming that the two sentences relate to identical transactions, statement (1) reflects that the relying party is aware of the trustee's reputation, and bases his trust on that.
Statement (2) reflects that the relying party has some private knowledge about the trustee, e.g. through direct experience or an intimate relationship, and that these factors overrule any reputation that a person might have. This observation reflects that trust ultimately is a personal and subjective phenomenon that is based on various factors or evidence, and that some of those carry more weight than others. Personal experience typically carries more weight than second hand trust referrals or reputation, but in the absence of personal experience, trust often has to be based on referrals from others.

Reputation can be considered as a collective measure of trustworthiness (in the sense of reliability) based on the referrals or ratings from members in a community. An individual's subjective trust can be derived from a combination of received referrals and personal experience. In order to avoid dependence and loops it is required that referrals be based on first hand experience only, and not on other referrals. As a consequence, an individual should only give a subjective trust referral when it is based on first hand evidence or when second hand input has been removed from its derivation base [33]. It is possible to abandon this principle, for example when the weight of the trust referral is normalised or divided by the total number of referrals given by a single entity; the latter principle is applied in Google's PageRank algorithm [52], described in more detail in Section 9.5.

Reputation can relate to a group or to an individual. A group's reputation can for example be modelled as the average of all its members' individual reputations, or as the average of how the group is perceived as a whole by external parties. Tadelis' [66] study shows that an individual belonging to a given group will inherit an a priori reputation based on that group's reputation. If the group is reputable, all its individual members will a priori be perceived as reputable, and vice versa.

2.3. A research agenda for trust and reputation systems

There are two fundamental differences between traditional and online environments regarding how trust and reputation are, and can be, used.

Firstly, as already mentioned, the traditional cues of trust and reputation that we are used to observe and depend on in the physical world are missing in online environments, so that electronic substitutes are needed. Secondly, communicating and sharing information related to trust and reputation is relatively difficult, and normally constrained to local communities in the physical world, whereas IT systems combined with the Internet can be leveraged to design extremely efficient systems for exchanging and collecting such information on a global scale.

Motivated by this basic observation, the purposes of research in trust and reputation systems should be to:

(a) Find adequate online substitutes for the traditional cues to trust and reputation that we are used to in the physical world, and identify new information elements (specific to a particular online application) which are suitable for deriving measures of trust and reputation.

(b) Take advantage of IT and the Internet to create efficient systems for collecting that information, and for deriving measures of trust and reputation, in order to support decision making and to improve the quality of online markets.

These simple principles invite rigorous research in order to answer some fundamental questions: what information elements are most suitable for deriving measures of trust and reputation in a given application? How can these information elements be captured and collected? What are the best principles for designing such systems from a theoretic and from a usability point of view? Can they be made resistant to attacks of manipulation by strategic agents? How should users include the information provided by such systems into their decision process? What role can these systems play in the business model of commercial companies? Do these systems truly improve the quality of online trade and interactions? These are important questions that need good answers in order to determine the potential for trust and reputation systems in online environments.

According to Resnick et al. [56], reputation systems must have the following three properties to operate at all:

(1) Entities must be long lived, so that with every interaction there is always an expectation of future interactions.
(2) Ratings about current interactions are captured and distributed.

(3) Ratings about past interactions must guide decisions about current interactions.

The longevity of agents means, for example, that it should be impossible or difficult for an agent to change identity or pseudonym for the purpose of erasing the connection to its past behaviour. The second property depends on the protocol with which ratings are provided; this is usually not a problem for centralised systems, but represents a major challenge for distributed systems. The second property also depends on the willingness of participants to provide ratings, for which there must be some form of incentive. The third property depends on the usability of the reputation system, and on how people and systems respond to it; this is reflected in the commercial and live reputation systems described in Section 9, but only to a small extent in the theoretic proposals described in Sections 8 and 10.

The basic principles of reputation systems are relatively easy to describe (see Sections 7 and 8). However, because the notion of trust itself is vague, what constitutes a trust system is difficult to describe concisely. A method for deriving trust from a transitive trust path is an element which is normally found in trust systems. The idea behind trust transitivity is that when Alice trusts Bob, and Bob trusts Claire, and Bob refers Claire to Alice, then Alice can derive a measure of trust in Claire based on Bob's referral combined with her trust in Bob. This is illustrated in Fig. 1.

[Fig. 1. Trust transitivity principle: Alice's trust in Bob (1), combined with Bob's referral of Claire (2), yields Alice's derived trust in Claire (3).]

The type of trust considered in this example is obviously reliability trust (and not decision trust). In addition there are semantic constraints for the transitive trust derivation to be valid, i.e. Alice must trust Bob to recommend Claire for a particular purpose, and Bob must trust Claire for that same purpose [33].

The main differences between trust and reputation systems can be described as follows: trust systems produce a score that reflects the relying party's subjective view of an entity's trustworthiness, whereas reputation systems produce an entity's (public) reputation score as seen by the whole community. Secondly, transitivity is an explicit component in trust systems, whereas reputation systems usually only take transitivity implicitly into account. Finally, trust systems usually take subjective and general measures of (reliability) trust as input, whereas information or ratings about specific (and objective) events, such as transactions, are used as input in reputation systems.

There can of course be trust systems that incorporate elements of reputation systems and vice versa, so that it is not always clear how a given system should be classified. The descriptions of the various trust and reputation systems below must therefore be seen in light of this.

3. Security and trust

3.1. Trust and reputation systems as soft security mechanisms

In a general sense, the purpose of security mechanisms is to provide protection against malicious parties. In this sense there is a whole range of security challenges that are not met by traditional approaches. Traditional security mechanisms will typically protect resources from malicious users, by restricting access to only authorised users. However, in many situations we have to protect ourselves from those who offer resources, so that the problem in fact is reversed. Information providers can for example act deceitfully by providing false or misleading information, and traditional security mechanisms are unable to protect against this type of threat. Trust and reputation systems on the other hand can provide protection against such threats. The difference between these two approaches to security was first described by Rasmussen and Jansson [53], who used the term hard security for traditional mechanisms like authentication and access control, and soft security for what they called social control mechanisms in general, of which trust and reputation systems are examples.
3.2. Computer security and trust

Security mechanisms protect systems and data from being adversely affected by malicious and non-authorised parties. The effect of this is that those systems and data can be considered more reliable, and thus more trustworthy. The concepts of Trusted Systems and Trusted Computing Base have been used in the IT security jargon (see e.g. Abrams [3]), but the concept of security assurance level is more standardised as a measure of security (see e.g. the UK CESG at http://www.cesg.gov.uk/ or the Common Criteria Project at http://www.commoncriteriaportal.org/). The assurance level can be interpreted as a system's strength to resist malicious attacks, and some organisations require systems with high assurance levels for high risk or highly sensitive applications. In an informal sense, the assurance level expresses a level of public (reliability) trustworthiness of a given system. However, it is evident that additional information, such as warnings about newly discovered security flaws, can carry more weight than the assurance level when people form their own subjective trust in the system.

3.3. Communication security and trust

Communication security includes encryption of the communication channel and cryptographic authentication of identities. Authentication provides so-called identity trust, i.e. a measure of the correctness of a claimed identity over a communication channel. The term "trust provider" is sometimes used in the industry to describe certification authorities (CAs) and other authentication service providers with the role of providing the necessary mechanisms and services for verifying and managing identities. The type of trust that CAs and identity management systems provide is simply identity trust. In the case of chained identity certificates, the derivation of identity trust is based on trust transitivity, so in that sense these systems can be called identity trust systems.

However, users are also interested in knowing the reliability of authenticated parties, or the quality of the goods and services they provide. This latter type of trust will be called provision trust in this study, and only trust and reputation systems (i.e. soft security mechanisms) are useful tools for deriving provision trust.

It can be observed that identity trust is a condition for trusting a party behind the identity with anything more than a baseline or default provision trust that applies to all parties in a community. This does not mean that the real world identity of the principal must be known. An anonymous party, who can be recognised from interaction to interaction, can also be trusted for the purpose of providing services.

4. Collaborative filtering and collaborative sanctioning

Collaborative filtering (CF) systems have similarities with reputation systems in that both collect ratings from members in a community. However, they also have fundamental differences. The assumption behind CF systems is that different people have different tastes, and rate things differently according to subjective taste. If two users rate a set of items similarly, they share similar tastes, and are called neighbours in the jargon. This information can be used to recommend items that one participant likes to his or her neighbours, and implementations of this technique are often called recommender systems. This must not be confused with reputation systems, which are based on the seemingly opposite assumption, namely that all members in a community should judge the performance of a transaction partner or the quality of a product or service consistently. In this sense the term "collaborative sanctioning" (CS) [48] has been used to describe reputation systems, because the purpose is to sanction poor service providers, with the aim of giving them an incentive to provide quality services.

CF takes ratings subject to taste as input, whereas reputation systems take ratings assumed insensitive to taste as input. People will for example judge data files containing film and music differently depending on their taste, but all users will judge files containing viruses to be bad. CF systems can be used to select the preferred files in the former case, and reputation systems can be used to avoid the bad files in the latter case. There will of course be cases where CF systems identify items that are invariant to taste, which simply indicates low usefulness of that result for recommendation purposes. Inversely, there will be cases where ratings that are subject to personal taste are fed into reputation systems. The latter can cause problems, because most reputation systems will be unable to distinguish between variations in service provider performance and variations in the observer's taste, potentially leading to unreliable and misleading reputation scores.
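To make the neighbour notion concrete, the following minimal sketch (our own illustration, not taken from any particular CF implementation; the function names and the choice of cosine similarity are assumptions) compares two users over the items they have both rated:

```python
import math

def cosine_similarity(ratings_a, ratings_b):
    """Similarity of two users over the items both have rated.

    ratings_a, ratings_b: dicts mapping item id -> numeric rating.
    Returns a value in [0, 1]; higher means more similar tastes.
    """
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0
    dot = sum(ratings_a[i] * ratings_b[i] for i in common)
    norm_a = math.sqrt(sum(ratings_a[i] ** 2 for i in common))
    norm_b = math.sqrt(sum(ratings_b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

# Two users who rate films similarly are "neighbours" in the CF jargon.
alice = {"film1": 5, "film2": 1, "film3": 4}
bob   = {"film1": 4, "film2": 2, "film3": 5}
print(cosine_similarity(alice, bob))  # close to 1.0 -> neighbours
```

A recommender system would then suggest to a user the items that his or her most similar neighbours have rated highly, which is exactly the taste-dependent inference that a reputation system deliberately avoids.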
Another important point is that CF systems and reputation systems assume an optimistic and a pessimistic world view respectively. To be specific, CF systems assume all participants to be trustworthy and sincere, i.e. to do their job as best they can and to always report their genuine opinion. Reputation systems, on the other hand, assume that some participants will try to misrepresent the quality of services in order to make more profit, and to lie or provide misleading ratings in order to achieve some specific goal. It can be very useful to combine CF and reputation systems, and Amazon.com, described in Section 9.3.3, does this to a certain extent. Theoretic schemes include Damiani et al.'s proposal to separate between provider reputation and resource reputation in P2P networks [14].

5. Trust classes

In order to be more specific about trust semantics, we will distinguish between a set of different trust classes according to Grandison and Sloman's classification [23] (who use the terms service provision trust, resource access trust, delegation trust, certification trust, and infrastructure trust). This is illustrated in Fig. 2. The highlighting of provision trust in Fig. 2 is done to illustrate that it is the focus of the trust and reputation systems described in this study.

[Fig. 2. Trust classes (Grandison and Sloman [23]): provision trust (highlighted), access trust, delegation trust, identity trust and context trust, each instantiated through a trust purpose.]

• Provision trust describes the relying party's trust in a service or resource provider. It is relevant when the relying party is a user seeking protection from malicious or unreliable service providers. The Liberty Alliance Project (http://www.projectliberty.org/) uses the term "business trust" [5] to describe mutual trust between companies emerging from contract agreements that regulate interactions between them, and this can be interpreted as provision trust. For example, when a contract specifies quality requirements for the delivery of services, then this business trust would be provision trust in our terminology.

• Access trust describes trust in principals for the purpose of accessing resources owned by or under the responsibility of the relying party. This relates to the access control paradigm, which is a central element in computer security. A good overview of access trust systems can be found in Grandison and Sloman [23].

• Delegation trust describes trust in an agent (the delegate) that acts and makes decisions on behalf of the relying party. Grandison and Sloman point out that acting on one's behalf can be considered to be a special form of service provision.

• Identity trust (called "authentication trust" in Liberty Alliance [5]) describes the belief that an agent's identity is as claimed. Trust systems that derive identity trust are typically authentication schemes such as X.509 and PGP [74]. Identity trust systems have been discussed mostly in the information security community, and a brief overview and analysis can be found in Reiter and Stubblebine [54].

• Context trust (called "system trust" in McKnight and Chervany [46]) describes the extent to which the relying party believes that the necessary systems and institutions are in place in order to support the transaction and provide a safety net in case something should go wrong. Factors for this type of trust can for example be critical infrastructures, insurance, the legal system, law enforcement and the stability of society in general.

Trust purpose is an overarching concept that can be used to express any operational instantiation of the trust classes mentioned above. In other words, it defines the specific scope of a given trust relationship. A particular trust purpose can for example be "to be a good car mechanic", which can be grouped under the provision trust class.
Conceptually, identity trust and provision trust can be seen as two layers on top of each other, where provision trust normally cannot exist without identity trust. In the absence of identity trust, it is only possible to have a baseline provision trust in an agent or entity.

6. Categories of trust semantics

The semantic characteristics of ratings, reputation scores and trust measures are important in order for participants to be able to interpret those measures. The semantics of measures can be described in terms of a specificity–generality dimension and a subjectivity–objectivity dimension, as illustrated in Table 1.

Table 1. Classification of trust and reputation measures

             | Specific, vector based | General, synthesised
Subjective   | Survey questionnaires  | eBay, voting
Objective    | Product tests          | Synthesised general score from product tests, D&B rating

A specific measure means that it relates to a specific trust aspect, such as the ability to deliver on time, whereas a general measure is supposed to represent an average of all aspects.

A subjective measure means that an agent provides a rating based on subjective judgement, whereas an objective measure means that the rating has been determined by objectively assessing the trusted party against formal criteria.

• Subjective and specific measures are for example used in survey questionnaires where people are asked to express their opinion over a range of specific issues. A typical question could for example be: "How do you see election candidate X's ability to handle the economy?", with possible answers on a scale 1–5 assumed equivalent to "Disastrous", "Bad", "Average", "Good" and "Excellent". Similar questions could be applied to foreign policy, national security, education and health, so that a person's answers form a subjective vector of his or her trust in candidate X.

• Subjective and general measures are for example used in eBay's reputation system, which is described in detail in Section 9.1. An inherent problem with this type of measure is that it often fails to assign credit or blame to the right aspect or even the right party. For example, if a shipment of an item bought on eBay arrives late or is broken, the buyer might give the seller a negative rating, whereas the post office might have caused the problem. A general problem with all subjective measures is that it is difficult to protect against unfair ratings. Another potential problem is that the act of referring negative general and subjective trust in an entity can lead to accusations of slander. This is less of a problem in reputation systems, because rating a particular transaction negatively is less sensitive than referring negative trust in an entity in general.

• Objective and specific measures are for example used in technical product tests where the performance or the quality of the product can be objectively measured. Washing machines can for example be tested according to energy consumption, noise, washing program features, etc. Another example is to rate the fitness of commercial companies based on specific financial measures, such as earnings, profit, investment, R&D expenditure, etc. An advantage with objective measures is that the correctness of ratings can be verified by others, or generated automatically based on automated monitoring of events.

• Objective and general measures can for example be computed based on a vector of objective and specific measures. In product tests, where a range of specific characteristics is tested, it is common to derive a general score which can be a weighted average of the score of each characteristic. Dun & Bradstreet's business credit rating is an example of a measure that is derived from a vector of objectively measurable company performance parameters.

7. Reputation network architectures

The technical principles for building reputation systems are described in this and the following section. The network architecture determines how ratings and reputation scores are communicated between participants in a reputation system. The two main types are centralised and distributed architectures.
7.1. Centralised reputation systems

In centralised reputation systems, information about the performance of a given participant is collected as ratings from other members in the community who have had direct experience with that participant. The central authority (reputation centre) that collects all the ratings typically derives a reputation score for every participant, and makes all scores publicly available. Participants can then use each other's scores, for example, when deciding whether or not to transact with a particular party. The idea is that transactions with reputable participants are likely to result in more favourable outcomes than transactions with disreputable participants.

Fig. 3 shows a typical centralised reputation framework, where A and B denote transaction partners with a history of transactions in the past, and who consider transacting with each other in the present.

[Fig. 3. General framework for a centralised reputation system: (a) past — agents provide ratings about past transactions to the reputation centre; (b) present — the reputation centre provides reputation scores to potential transaction partners.]

After each transaction, the agents provide ratings about each other's performance in the transaction. The reputation centre collects ratings from all the agents, and continuously updates each agent's reputation score as a function of the received ratings. Updated reputation scores are provided online for all the agents to see, and can be used by the agents to decide whether or not to transact with a particular agent.

The two fundamental aspects of centralised reputation systems are:

(1) Centralised communication protocols that allow participants to provide ratings about transaction partners to the central authority, as well as to obtain reputation scores of potential transaction partners from the central authority.

(2) A reputation computation engine used by the central authority to derive reputation scores for each participant, based on received ratings, and possibly also on other information. This is described in Section 8 below.
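As an illustration only (not taken from any deployed system), these two aspects can be sketched as a toy reputation centre whose rating protocol is a pair of method calls and whose computation engine is pluggable; all names here are our own assumptions:

```python
from collections import defaultdict

class ReputationCentre:
    """Toy central authority: collects ratings (aspect 1) and derives
    publicly available scores with a pluggable engine (aspect 2)."""

    def __init__(self, engine):
        self.engine = engine              # reputation computation engine
        self.ratings = defaultdict(list)  # ratee -> list of (rater, value)

    def submit_rating(self, rater, ratee, value):
        # A real protocol would also verify that a transaction between
        # rater and ratee actually took place.
        self.ratings[ratee].append((rater, value))

    def reputation_score(self, ratee):
        return self.engine([v for _, v in self.ratings[ratee]])

# The engine here is a plain average; Section 8 surveys real alternatives.
centre = ReputationCentre(engine=lambda vs: sum(vs) / len(vs) if vs else None)
centre.submit_rating("A", "B", 1)
centre.submit_rating("C", "B", -1)
print(centre.reputation_score("B"))  # 0.0
```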
7.2. Distributed reputation systems

There are environments where a distributed reputation system, i.e. without any centralised functions, is better suited than a centralised system. In a distributed system there is no central location for submitting ratings or obtaining reputation scores of others. Instead, there can be distributed stores where ratings can be submitted, or each participant simply records the opinion about each experience with other parties, and provides this information on request from relying parties. A relying party, who considers transacting with a given target party, must find the distributed stores, or try to obtain ratings from as many community members as possible who have had direct experience with that target party. This is illustrated in Fig. 4.

[Fig. 4. General framework for a distributed reputation system: (a) past — each agent records its own experiences with others; (b) present — the relying party collects ratings directly from community members.]

The relying party computes the reputation score based on the received ratings. In case the relying party has had direct experience with the target party, the experience from that encounter can be taken into account as private information, possibly carrying a higher weight than the received ratings.

The two fundamental aspects of distributed reputation systems are:

(1) A distributed communication protocol that allows participants to obtain ratings from other members in the community.

(2) A reputation computation method used by each individual agent to derive reputation scores of target parties based on received ratings, and possibly on other information. This is described in Section 8.

Peer-to-Peer (P2P) networks represent an environment well suited for distributed reputation management. In P2P networks, every node plays the role of both client and server, and is therefore sometimes called a servent. This allows the users to overcome the passive role typical of web navigation, and to engage in an active role by providing their own resources. There are two phases in the use of P2P networks. The first is the search phase, which consists of locating the servent where the requested resource resides. In some P2P networks, the search phase can rely on centralised functions. One such example is Napster (http://www.napster.com/), which has a resource directory server. In pure P2P networks like Gnutella (http://www.gnutella.com) and Freenet (http://www.zeropaid.com/freenet), the search phase is also distributed. Intermediate architectures also exist, e.g. the FastTrack architecture used in P2P networks like KaZaA (http://www.kazaa.com), grokster (http://www.grokster.com/) and iMesh (http://imesh.com). In FastTrack based P2P networks there are nodes and supernodes, where the supernodes keep track of other nodes and supernodes that are logged onto the network, and thus act as directory servers during the search phase.

After the search phase, where the requested resource has been located, comes the download phase, which consists of transferring the resource from the exporting to the requesting servent.

P2P networks introduce a range of security threats, as they can be used to spread malicious software, such as viruses and Trojan horses, and easily bypass firewalls. There is also evidence that P2P networks suffer from free riding [4]. Reputation systems are well suited to fight these problems, e.g. by sharing information about rogue, unreliable or selfish participants. P2P networks are controversial because they have been used to distribute copyrighted material such as MP3 music files, and it has been claimed that content poisoning (distributing files with legitimate titles that contain only silence or random noise) has been used by the music industry to fight this problem. We do not defend using P2P networks for illegal file sharing, but it is obvious that reputation systems could be used by distributors of illegal copyrighted material to protect themselves from poisoning.
Many authors have proposed reputation systems for P2P networks [2,13,14,18,24,36,40]. The purpose of a reputation system in P2P networks is:

(1) To determine which servents are most reliable at offering the best quality resources, and

(2) To determine which servents provide the most reliable information with regard to (1).

In a distributed environment, each participant is responsible for collecting and combining ratings from other participants. Because of the distributed environment, it is often impossible or too costly to obtain ratings resulting from all interactions with a given agent. Instead the reputation score is based on a subset of ratings, usually from the relying party's "neighbourhood".

8. Reputation computation engines

Seen from the relying party's point of view, trust and reputation scores can be computed based on own experience, on second hand referrals, or on a combination of both. In the jargon of economic theory, the term private information is used to describe first hand information resulting from own experience, and public information is used to describe publicly available second hand information, i.e. information that can be obtained from third parties.

Reputation systems are typically based on public information in order to reflect the community's opinion in general, which is in line with Definition 3 of reputation. A party who relies on the reputation score of some remote party is in fact trusting that party through trust transitivity [33].

Some systems take both public and private information as input. Private information, e.g. resulting from personal experience, is normally considered more reliable than public information, such as ratings from third parties.

This section describes various principles for computing reputation and trust measures. Some of the principles are used in commercial applications, whereas others have been proposed by the academic community.

8.1. Simple summation or average of ratings

The simplest form of computing reputation scores is to sum the number of positive ratings and negative ratings separately, and to keep the total score as the positive score minus the negative score. This is the principle used in eBay's reputation forum, which is described in detail in [55]. The advantage is that anyone can understand the principle behind the reputation score; the disadvantage is that it is primitive and therefore gives a poor picture of participants' reputation, although this is also due to the way ratings are provided (see Sections 10.1 and 10.2).

A slightly more advanced scheme, proposed in e.g. [63], is to compute the reputation score as the average of all ratings, and this principle is used in the reputation systems of numerous commercial web sites, such as Epinions and Amazon, described in Section 9. Advanced models in this category compute a weighted average of all the ratings, where the rating weight can be determined by factors such as rater trustworthiness/reputation, age of the rating, distance between the rating and the current score, etc.
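Both principles are simple enough to state in a few lines. The following sketch is illustrative only (the function names are ours; eBay's actual system is described in [55] and Section 9.1):

```python
def ebay_style_score(ratings):
    """Running total: positives minus negatives (ratings in {+1, 0, -1})."""
    return sum(1 for r in ratings if r > 0) - sum(1 for r in ratings if r < 0)

def weighted_average(ratings, weights):
    """Weighted average; weights could encode rater reputation or rating age."""
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

print(ebay_style_score([1, 1, -1, 0, 1]))    # 2
print(weighted_average([5, 1], [0.9, 0.1]))  # 4.6: the heavily weighted rating dominates
```

In the weighted variant, the weights might for example decay with the age of a rating, or scale with the rater's own reputation, as the factors listed above suggest.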
8.2. Bayesian systems

Bayesian systems take binary ratings as input (i.e. positive or negative), and are based on computing reputation scores by statistical updating of beta probability density functions (PDF). The a posteriori (i.e. updated) reputation score is computed by combining the a priori (i.e. previous) reputation score with the new rating [29,31,48–51,68]. The reputation score can be represented in the form of the beta PDF parameter tuple $(\alpha, \beta)$ (where $\alpha$ and $\beta$ represent the amount of positive and negative ratings respectively), or in the form of the probability expectation value of the beta PDF, optionally accompanied by the variance or a confidence parameter. The advantage of Bayesian systems is that they provide a theoretically sound basis for computing reputation scores; the only disadvantage is that they might be too complex for average persons to understand.

The beta-family of distributions is a continuous family of distribution functions indexed by the two parameters $\alpha$ and $\beta$. The beta PDF denoted by $\mathrm{beta}(p \mid \alpha, \beta)$ can be expressed using the gamma function $\Gamma$ as:

$$\mathrm{beta}(p \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\, p^{\alpha - 1} (1 - p)^{\beta - 1}, \quad \text{where } 0 \le p \le 1,\ \alpha, \beta > 0, \tag{1}$$

with the restriction that the probability variable $p \ne 0$ if $\alpha < 1$, and $p \ne 1$ if $\beta < 1$. The probability expectation value of the beta distribution is given by:

$$E(p) = \alpha / (\alpha + \beta). \tag{2}$$

When nothing is known, the a priori distribution is the uniform beta PDF with $\alpha = 1$ and $\beta = 1$, illustrated in Fig. 5a. Then, after observing $r$ positive and $s$ negative outcomes, the a posteriori distribution is the beta PDF with $\alpha = r + 1$ and $\beta = s + 1$. For example, the beta PDF after observing 7 positive and 1 negative outcomes is illustrated in Fig. 5b.

[Fig. 5. Example beta probability density functions: (a) the uniform PDF beta(p | 1, 1); (b) the PDF beta(p | 8, 2).]

A PDF of this type expresses the uncertain probability that future interactions will be positive. The most natural approach is to define the reputation score as a function of the expectation value. The probability expectation value of Fig. 5b according to Eq. (2) is $E(p) = 0.8$. This can be interpreted as saying that the relative frequency of a positive outcome in the future is somewhat uncertain, and that the most likely value is 0.8.
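Eqs. (1) and (2) translate directly into code. The following sketch (ours) reproduces the Fig. 5b example, where 7 positive and 1 negative ratings give E(p) = 0.8:

```python
import math

def beta_pdf(p, alpha, beta):
    """Eq. (1): beta(p | alpha, beta), using the gamma function."""
    c = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    return c * p ** (alpha - 1) * (1 - p) ** (beta - 1)

def reputation_score(r, s):
    """Eq. (2) applied to r positive and s negative ratings:
    E(p) = alpha / (alpha + beta), with alpha = r + 1 and beta = s + 1."""
    return (r + 1) / (r + s + 2)

print(reputation_score(7, 1))  # 0.8, as in the Fig. 5b example
print(beta_pdf(0.8, 8, 2))     # the density peaks near p = 0.8
```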
8.3. Discrete trust models

Humans are often better able to rate performance in the form of discrete verbal statements than as continuous measures. This is also valid for determining trust measures, and some authors, including [1,9,10,44], have proposed discrete trust models.

For example, in the model of Abdul-Rahman and Hailes [1] the trustworthiness of an agent x can be referred to as Very Trustworthy, Trustworthy, Untrustworthy or Very Untrustworthy. The relying party can then apply his or her own perception of the trustworthiness of the referring agent before taking the referral into account. Look-up tables, with entries for referred trust and referring party downgrade/upgrade, are used to determine derived trust in x. Whenever the relying party has had personal experience with x, this can be used to determine the referring party's trustworthiness. The assumption is that personal experience reflects x's real trustworthiness, and that referrals about x that differ from the personal experience indicate whether the referring party underrates or overrates. Referrals from a referring party who is found to overrate will be downgraded, and vice versa.

The disadvantage of discrete measures is that they do not easily lend themselves to sound computational principles. Instead, heuristic mechanisms like look-up tables must be used.

8.4. Belief models

Belief theory is a framework related to probability theory, but where the sum of probabilities over all possible outcomes does not necessarily add up to 1, and the remaining probability is interpreted as uncertainty.

Jøsang [29,30] has proposed a belief/trust metric called opinion, denoted by $\omega_x^A = (b, d, u, a)$, which expresses the relying party A's belief in the truth of statement x. Here b, d, and u represent belief, disbelief and uncertainty respectively, where $b, d, u \in [0,1]$ and $b + d + u = 1$. The parameter $a \in [0,1]$, which is called the relative atomicity, represents the base rate probability in the absence of evidence, and is used for computing an opinion's probability expectation value $E(\omega_x^A) = b + au$, meaning that $a$ determines how uncertainty shall contribute to $E(\omega_x^A)$. When the statement x for example says "David is honest and reliable", then the opinion can be interpreted as reliability trust in David. As an example, let us assume that Alice needs to get her car serviced, and that she asks Bob to recommend a good car mechanic. When Bob recommends David, Alice would like to get a second opinion, so she asks Claire for her opinion about David. This situation is illustrated in Fig. 6.

[Fig. 6. Deriving trust from parallel transitive chains: Alice's trust (1) in Bob and in Claire, combined with their referrals (2) of David, yields Alice's derived trust (3) in David.]

When trust and trust referrals are expressed as opinions, each transitive trust path Alice→Bob→David and Alice→Claire→David can be computed with the discounting operator, where the idea is that the referrals from Bob and Claire are discounted as a function of Alice's trust in Bob and Claire respectively. Finally, the two paths can be combined using the consensus operator. These two operators form part of Subjective Logic [30], and semantic constraints must be satisfied in order for the transitive trust derivation to be meaningful [33]. Opinions can be uniquely mapped to beta PDFs, and in this sense the consensus operator is equivalent to the Bayesian updating described in Section 8.2. This model is thus both belief-based and Bayesian.

Yu and Singh [70] have proposed to use belief theory to represent reputation scores. In their scheme, two possible outcomes are assumed, namely that an agent A is trustworthy ($T_A$) or not trustworthy ($\lnot T_A$), and separate beliefs are kept about whether A is trustworthy or not, denoted by $m(T_A)$ and $m(\lnot T_A)$ respectively. The reputation score C of an agent A is then defined as:

$$C(A) = m(T_A) - m(\lnot T_A), \quad \text{where } m(T_A), m(\lnot T_A) \in [0,1] \text{ and } C(A) \in [-1,1]. \tag{3}$$

Without going into detail, the ratings provided by individual agents are belief measures determined as a function of A's past history of behaviours with individual agents as trustworthy or not trustworthy, using predefined threshold values for what constitutes trustworthy and untrustworthy behaviour. These belief measures are then combined using Dempster's rule (the classical operator for combining evidence from different sources), and the resulting beliefs are fed into Eq. (3) to compute the reputation score. Ratings are considered valid if they result from a transitive trust chain of length less than or equal to a predefined limit.
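As a rough illustration of how the Fig. 6 scenario can be computed, the sketch below applies discounting and consensus to (b, d, u) triples. The formulas follow the commonly published forms of the Subjective Logic operators, but the exact operator definitions and all numeric values here should be treated as assumptions of this sketch rather than as the authoritative definitions of [30]:

```python
def discount(trust_in_referrer, referral):
    """Discounting: weaken B's referral by A's trust in B.
    Opinions are (b, d, u) triples with b + d + u = 1 (base rate omitted)."""
    b1, d1, u1 = trust_in_referrer
    b2, d2, u2 = referral
    return (b1 * b2, b1 * d2, d1 + u1 + b1 * u2)

def consensus(op1, op2):
    """Consensus: fuse two independent opinions about the same target."""
    b1, d1, u1 = op1
    b2, d2, u2 = op2
    k = u1 + u2 - u1 * u2
    return ((b1 * u2 + b2 * u1) / k, (d1 * u2 + d2 * u1) / k, (u1 * u2) / k)

# Fig. 6 scenario: Alice discounts Bob's and Claire's referrals about
# David by her trust in each, then fuses the two transitive paths.
via_bob    = discount((0.9, 0.0, 0.1), (0.8, 0.1, 0.1))
via_claire = discount((0.7, 0.1, 0.2), (0.6, 0.2, 0.2))
print(consensus(via_bob, via_claire))  # fused opinion about David
```

Note how discounting never increases belief (b1 * b2 ≤ b2) but always increases uncertainty, while consensus reduces uncertainty by pooling the two bodies of evidence.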
8.5. Fuzzy models

Trust and reputation can be represented as linguistically fuzzy concepts, where membership functions describe to what degree an agent can be described as e.g. trustworthy or not trustworthy. Fuzzy logic provides rules for reasoning with fuzzy measures of this type. The scheme proposed by Manchala [44], described in Section 2, as well as the REGRET reputation system proposed by Sabater and Sierra [59–61], fall in this category. In Sabater and Sierra's scheme, what they call individual reputation is derived from private information about a given agent, what they call social reputation is derived from public information about an agent, and what they call context dependent reputation is derived from contextual information.

8.6. Flow models

Systems that compute trust or reputation by transitive iteration through looped or arbitrarily long chains can be called flow models.
Some flow models assume a constant trust/reputation weight for the whole community, and this weight can be distributed among the members of the community. Participants can then only increase their trust/reputation at the cost of others. Google's PageRank [52], described in Section 9.5, the Appleseed algorithm [73], and Advogato's reputation scheme [39], described in Section 9.2, belong to this category. In general, a participant's reputation increases as a function of incoming flow, and decreases as a function of outgoing flow. In the case of Google, many hyperlinks to a web page contribute to increased PageRank, whereas many hyperlinks from a web page contribute to decreased PageRank for that web page.

Flow models do not always require the sum of the reputation/trust scores to be constant. One such example is the EigenTrust model [36], which computes agent trust scores in P2P networks through repeated and iterative multiplication and aggregation of trust scores along transitive chains, until the trust scores of all agent members of the P2P community converge to stable values.
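A minimal flow-model sketch follows. It is a simplified PageRank-style power iteration (our own illustration, not the actual PageRank, Appleseed, Advogato or EigenTrust algorithm; for simplicity it assumes every node has at least one outgoing link): incoming flow raises a node's score, and each outgoing link dilutes what the node passes on.

```python
def flow_scores(links, damping=0.85, iterations=50):
    """Iteratively propagate reputation along a directed graph until the
    scores stabilise. links: dict mapping node -> list of linked-to nodes."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            share = damping * score[src] / len(targets)  # outgoing flow is split
            for t in targets:
                new[t] += share                          # incoming flow accumulates
        score = new
    return score

# A links to B and C; B and C link back to A: A accumulates the most flow.
print(flow_scores({"A": ["B", "C"], "B": ["A"], "C": ["A"]}))
```

Because the total score is held (approximately) constant, a node here can only gain reputation at the expense of others, which is exactly the constant-weight property described above.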
9. Commercial and live reputation systems

This section describes the most well known applications of online reputation systems. All analysed systems have a centralised network architecture. The computation is mostly based on the summation or average of ratings, but two systems use the flow model.

9.1. eBay's feedback forum

eBay (http://ebay.com/) is a popular auction site that allows sellers to list items for sale, and buyers to bid for those items. The so-called Feedback Forum on eBay gives buyer and seller the opportunity to rate each other (provide feedback, in the eBay jargon) as positive, negative, or neutral (i.e. 1, -1, 0) after completion of a transaction. Buyers and sellers also have the possibility to leave comments like "Smooth transaction, thank you!", which are typical in the positive case, or "Buyers beware!" in the rare negative case. The Feedback Forum is a centralised reputation system, where eBay collects all the ratings and computes the scores. The running total reputation score of each participant is the sum of positive ratings (from unique users) minus the sum of negative ratings (from unique users). In order to provide information about a participant's more recent behaviour, the totals of positive, negative and neutral ratings for three different time windows, (i) past 6 months, (ii) past month, and (iii) past 7 days, are also displayed.
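As a minimal sketch of the scoring rule just described (the tuple layout and function name are assumptions, not eBay's actual schema):

    from datetime import datetime, timedelta

    def feedback_summary(ratings, now):
        """Each rating is a (rater_id, value, timestamp) tuple with
        value in {+1, 0, -1}, mirroring positive/neutral/negative."""
        # Running total: positive raters minus negative raters,
        # each unique user counted once per sign.
        positives = {rater for rater, v, _ in ratings if v == +1}
        negatives = {rater for rater, v, _ in ratings if v == -1}
        score = len(positives) - len(negatives)
        # Tallies for the three recency windows that eBay displays.
        recent = {}
        for label, days in (("past 6 months", 180), ("past month", 30),
                            ("past 7 days", 7)):
            window = [v for _, v, ts in ratings
                      if ts >= now - timedelta(days=days)]
            recent[label] = (window.count(+1), window.count(0),
                             window.count(-1))
        return score, recent

    now = datetime(2006, 1, 1)
    print(feedback_summary([("a", +1, now - timedelta(days=2)),
                            ("b", +1, now - timedelta(days=40)),
                            ("c", -1, now - timedelta(days=100))], now))
    # (1, {'past 6 months': (2, 0, 1), 'past month': (1, 0, 0),
    #      'past 7 days': (1, 0, 0)})

Note how the running total alone conflates very different histories: 100 positives with 10 negatives and 90 positives with none both yield a score of 90, a weakness returned to below.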
There are many empirical studies of eBay's reputation system; see Resnick et al. [57] for an overview. In general the observed ratings on eBay are surprisingly positive. Buyers provide ratings about sellers 51.7% of the time, and sellers provide ratings about buyers 60.6% of the time [55]. Of all ratings provided, less than 1% are negative, less than 0.5% are neutral and about 99% are positive. It was also found that there is a high correlation between buyer and seller ratings, suggesting that there is a degree of reciprocation of positive ratings and retaliation of negative ratings. This is problematic if obtaining honest and fair ratings is a goal, and a possible remedy could be to not let sellers rate buyers.

The problem of ballot stuffing, i.e. that ratings can be repeated many times, e.g. to unfairly boost somebody's reputation score, seems to be a minor problem on eBay, because participants are only allowed to rate each other after the completion of a transaction, which is monitored by eBay. It is of course possible to create fake transactions, but because eBay charges a fee for listing items, there is a cost associated with this practice. However, unfair ratings for genuine transactions cannot be avoided.

The eBay reputation system is very primitive and can be quite misleading. With so few negative ratings, a participant with 100 positive and 10 negative ratings should intuitively appear much less reputable than a participant with 90 positive and no negatives, but on eBay they would have the same total reputation score. Despite its drawbacks and primitive nature, the eBay reputation system seems to have a strong positive impact on eBay as a marketplace. Any system that facilitates interaction between humans depends on how they respond to it, and people appear to respond well to the eBay system and its reputation component.

9.2. Expert sites

Expert sites have a pool of individuals who are willing to answer questions in their areas of expertise, and the reputation systems on those sites are there to rate the experts. Depending on the quality of a reply, the person who asked the question can rate the expert on various aspects of the reply, such as clarity and timeliness.

AllExperts (http://www.allexperts.com/) provides a free expert service for the public on the Internet, with a business model based on advertising. The reputation system on AllExperts uses the aspects Knowledgeable, Clarity of Response, Timeliness and Politeness, where ratings can be given in the interval [1,10]. The score in each aspect is simply the numerical average of ratings received. The number of questions an expert has received is also displayed, in addition to a General Prestige score that is simply the sum of all average ratings the expert has received. Most experts receive ratings close to 10 on all aspects, so the General Prestige is usually close to 10x the number of questions received. It is also possible to view charts of ratings over periods from 2 months to 1 year.
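The sketch below shows one plausible reading of this aggregation; AllExperts does not publish its data model, so the per-question structure and the figures are assumptions:

    # For each answered question, the asker's ratings on the four
    # aspects, each in AllExperts' [1,10] interval (invented data).
    questions = [
        {"Knowledgeable": 10, "Clarity of Response": 9,
         "Timeliness": 10, "Politeness": 10},
        {"Knowledgeable": 9, "Clarity of Response": 10,
         "Timeliness": 10, "Politeness": 10},
    ]

    # The average rating received for each question...
    averages = [sum(q.values()) / len(q) for q in questions]

    # ...and General Prestige as the sum of those averages, which is why
    # it tends toward 10 x the number of questions when ratings sit near 10.
    general_prestige = sum(averages)
    print(averages, general_prestige)   # [9.75, 9.75] 19.5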
AskMe (http://www.askmecorp.com/) is an expert site for a closed user group of companies and their employees, and the business model is based on charging a fee for participating in the AskMe network. AskMe does not publicly provide any details of how the system works.

Advogato (http://www.advogato.org/) is a community of open-source programmers. Members rank each other according to how skilled they perceive each other to be, using Advogato's trust scheme (http://www.advogato.org/trust-metric.html), which in essence is a centralised reputation system based on a flow model. The reputation engine of Advogato computes the reputation flow through a network where the members constitute the nodes and the edges constitute referrals between nodes. Each member node is assigned a capacity between 800 and 1, depending on the distance from the source node, which is owned by Raph Levien, the creator of Advogato. The source node has a capacity of 800, and the further away from the source node, the smaller the capacity. Members can refer each other with the status of Apprentice (lowest), Journeyer (medium) or Master (highest). A separate flow graph is computed for each type of referral. A member gets the highest status for which there is a positive flow to his or her node. For example, if the flow graph of Master referrals and the flow graph of Apprentice referrals both reach member x, then that member will have Master status, but if only the flow graph of Apprentice referrals reaches member x, then that member will have Apprentice status. The Advogato reputation system does not have any direct purpose other than to boost the ego of members, and to be a stimulant for social and professional networking within the Advogato community.
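Advogato's real metric solves a network maximum-flow problem; the deliberately rough sketch below only mimics the two ideas named above, namely capacities that shrink with distance from the source node and status taken as the highest referral level whose flow graph reaches a member. The capacity schedule and the referral graph are invented.

    from collections import deque

    LEVELS = ["Apprentice", "Journeyer", "Master"]   # lowest to highest

    def reaches(referrals, source, member, level,
                capacities=(800, 200, 50, 12, 4, 1)):
        """True if `member` is reached by the flow graph built from
        referrals at `level` or above.  At distance d from the source,
        at most capacities[d] members are admitted (an invented
        schedule), a crude stand-in for real maximum-flow."""
        min_rank = LEVELS.index(level)
        seen, queue = {source}, deque([(source, 0)])
        admitted = [0] * len(capacities)
        while queue:
            node, dist = queue.popleft()
            if dist + 1 >= len(capacities):
                continue                     # beyond the capacity schedule
            for target, lvl in referrals.get(node, ()):
                if (LEVELS.index(lvl) >= min_rank and target not in seen
                        and admitted[dist + 1] < capacities[dist + 1]):
                    admitted[dist + 1] += 1
                    seen.add(target)
                    queue.append((target, dist + 1))
        return member in seen

    def status(referrals, source, member):
        # A member gets the highest status whose flow graph reaches them.
        best = None
        for level in LEVELS:
            if reaches(referrals, source, member, level):
                best = level
        return best

    refs = {"raph":  [("alice", "Master"), ("bob", "Apprentice")],
            "alice": [("carol", "Journeyer")]}
    print(status(refs, "raph", "carol"))   # Journeyer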
9.3. Product review sites

Product review sites have a pool of individual reviewers who provide information for consumers for the purpose of making better purchase decisions. The reputation systems on those sites apply to products as well as to the reviewers themselves.

9.3.1. Epinions

Epinions (http://www.epinions.com/), founded in 1999, is a product and shop review site with a business model mainly based on so-called cost-per-click online marketing, which means that Epinions charges product manufacturers and online shops by the number of clicks consumers generate as a result of reading about their products on Epinions' web site. Epinions also provides product reviews and ratings to other web sites for a fee.

Epinions has a pool of members who write product and shop reviews. Anybody from the public can become a member simply by signing up. The product and shop reviews written by members consist of prose text and quantitative ratings from 1 to 5 stars for a set of aspects such as Ease of Use, Battery Life, etc. in the case of products, and Ease of Ordering, Customer Service, On-Time Delivery and Selection in the case of shops. Other members can rate reviews as Not Helpful, Somewhat Helpful, Helpful, and Very Helpful, and thereby contribute to determining how prominently the review will be placed, as well as to giving the reviewer a higher status. A member can obtain the status Advisor, Top Reviewer or Category Lead (highest) as a function of the accumulated ratings on all his or her reviews over a period. It takes considerable reviewing effort to obtain a status above member, and most members do not have any status.

Category Leads are selected at the discretion of Epinions staff each quarter, based on nominations from members. Top Reviewers are automatically selected every month based on how well their reviews are rated, as well as on the Epinions Web of Trust (see below), where a member can Trust or Block another member. Advisors are selected in the same way as Top Reviewers, but with a lower threshold for review ratings. Epinions does not publish the exact thresholds for becoming Top Reviewer or Advisor, in order to discourage members from trying to manipulate the selection process.

The Epinions Web of Trust is a simple scheme whereby members can decide to either trust or block another member. A member's list of trusted members represents that member's personal Web of Trust. As already mentioned, the Web of Trust influences the automated selection of Top Reviewers and Advisors. The number of members (and their status) who trust a given member will contribute to that member getting a higher status. The number of members (and their status) who block another member will have a negative impact on that member getting a higher status.
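Only the direction of these effects is documented, so the weights and the aggregation in the following sketch are purely hypothetical; it is meant only to show the shape such a computation could take:

    # Hypothetical status weights; Epinions does not publish its formula.
    STATUS_WEIGHT = {"member": 1, "Advisor": 2, "Top Reviewer": 3,
                     "Category Lead": 4}

    def web_of_trust_effect(trusters, blockers):
        """Net Web-of-Trust contribution to a member's standing: members
        who trust count in favour, members who block count against, both
        weighted by the status of the member expressing the opinion."""
        support = sum(STATUS_WEIGHT[s] for s in trusters)
        opposition = sum(STATUS_WEIGHT[s] for s in blockers)
        return support - opposition

    # Two ordinary members and a Category Lead trust this member,
    # while one Top Reviewer blocks him or her.
    print(web_of_trust_effect(["member", "member", "Category Lead"],
                              ["Top Reviewer"]))   # 6 - 3 = 3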
Epinions has an incentive system for reviewers called the Income Share Program, whereby members can earn money. Income Share is automatically determined based on the general use of reviews by consumers. Reviewers can potentially earn as much for helping someone make a buying decision with a positive review as for helping someone avoid a purchase with a negative review. This is important in order not to give an incentive to write biased reviews just for profit. As stated on the Epinions FAQ pages: "Epinions wants you to be brutally honest in your reviews, even if it means saying negative things". The Income Share pool is a portion of Epinions' income. The pool is split among all members based on the utility of their reviews. Authors of more useful reviews earn more than authors of less useful reviews.

The Income Share formula is not specified in detail, in order to discourage attempts to defraud the system. Highly rated reviews will generate more revenue than poorly rated reviews, because the former are more prominently placed, so that they are more likely to be read and used by others. Category Leads will normally earn more than Top Reviewers, who in turn will normally earn more than Advisors, because their reviews per definition are rated and listed in that order.

Providing high quality reviews is Epinions' core value proposition to consumers, and the reputation system is instrumental in achieving that. The reputation system can be characterised as highly sophisticated because of the revenue based incentive mechanism. Where other reputation systems on the Internet only provide immaterial incentives like status or karma, the Epinions system can provide hard cash.

9.3.2. BizRate

BizRate runs a Customer Certified Merchant scheme whereby consumers who buy at a BizRate listed store are asked to rate site navigation, selection, prices, shopping options and how satisfied they were with the shopping experience. Consumers participating in this scheme become registered BizRate members. A Customer Certificate is granted to a merchant if a sufficient number of surveys over a given period are positive, and this allows the merchant to display the BizRate Customer Certified seal of approval on its web site. As an incentive to fill out survey forms, BizRate members get discounts at listed stores. This scheme does not capture the frustrated customers who give up before they reach the checkout, and therefore tends to give web stores a positive bias. This is understandable from a business perspective, because it provides an incentive for stores to participate in the Customer Certificate scheme.

BizRate also runs a product review service similar to Epinions, but which uses a much simpler reputation system. Members can write product reviews on BizRate, and anybody can become a member simply by signing up. Users, including non-members, who browse BizRate for product reviews can vote on reviews as being helpful, not helpful or off topic, and the reputation system stops there. Reviews are ordered according to the ratio of helpful votes over total votes, where the reviews with the highest ratios are listed first. It is also possible to have the reviews sorted by rating, so that the best reviews are listed first. Reviewers do not get any status, and they cannot earn money by writing reviews for BizRate. There is thus less incentive for writing reviews on BizRate than there is on Epinions, but it is uncertain how this influences the quality of the reviews. The fact that anybody can sign up to become a member and write reviews, and that anybody including non-members can vote on the helpfulness of reviews, makes this reputation scheme highly vulnerable to attack. A simple attack could consist of writing many positive reviews for a product and ballot stuffing them so that they get presented first and result in a high average score for that product.
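The ordering rule is simple enough to state directly in code; the review data below is invented, and since BizRate does not document how reviews without any votes are handled, they are simply placed last here:

    # Each review: (text, star_rating, helpful_votes, total_votes).
    reviews = [
        ("Great battery life",           5, 12, 15),
        ("Stopped working after a week", 1, 40, 42),
        ("Average at best",              3,  1,  9),
    ]

    # Default ordering: highest ratio of helpful to total votes first.
    by_helpfulness = sorted(reviews,
                            key=lambda r: r[2] / r[3] if r[3] else 0.0,
                            reverse=True)

    # The alternative ordering: best-rated reviews listed first.
    by_rating = sorted(reviews, key=lambda r: r[1], reverse=True)

    print([r[0] for r in by_helpfulness])   # the 40/42 review comes first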
9.3.3. Amazon

Amazon (http://www.amazon.com/) is mainly an online bookstore that allows members to write book reviews. Amazon's reputation scheme is quite similar to the one BizRate uses. Anybody can become a member simply by signing up. Reviews consist of prose text and a rating in the range 1 to 5 stars. The average of all ratings gives a book its average rating. Users, including non-members, can vote on reviews as being helpful or not helpful. The number of helpful votes as well as the total number of votes are displayed with each review. The order in which the reviews are listed can be chosen by the user according to criteria such as "newest first", "most helpful first" or "highest rating first".

As a function of the number of helpful votes each reviewer has received, as well as other parameters not publicly revealed, Amazon determines each reviewer's rank, and those reviewers who are among the 1000 highest get assigned the status of Top 1000, Top 500, Top 100, Top 50, Top 10 or #1 Reviewer. Amazon has a system of Favourite People, where each member can choose other members as favourite reviewers, and the number of other members who have a specific reviewer listed as a favourite person also influences that reviewer's rank. Apart from giving some members status as top reviewers, Amazon does not give any financial incentives. However, there are obviously other financial incentives external to Amazon that can play an important role. It is for example easy to imagine why publishers would want to pay people to write good reviews for their books on Amazon.

There are many reports of attacks on the Amazon review scheme, where various types of ballot stuffing have artificially elevated reviewers to top reviewer status, or various types of "bad mouthing" have dethroned top reviewers. This is not surprising, due to the fact that users can vote without becoming a member. For example, the Amazon #1 Reviewer is usually somebody who posts more reviews than any living person could possibly write if required to actually read each book, indicating that the combined effort of a group of people, presented as a single person's work, is needed to get to the top. Also, reviewers who have reached the Top 100 rank have reported a sudden increase in negative votes, which reflects that a cat fight is taking place to get into the ranks of top reviewers. In order to reduce the problem, Amazon only allows one vote per registered cookie for any given review. However, deleting that cookie or switching to another computer will allow the same user to vote on the same review again. There will always be new types of attacks, and Amazon needs to be vigilant and respond to new types of attacks as they emerge. However, due to the vulnerability of the review scheme, it cannot be described as a robust scheme.
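The two mechanics described above, the plain average rating and the one-vote-per-registered-cookie rule, can be sketched as follows; the class and field names are invented for illustration, and this is not Amazon's implementation:

    from statistics import mean

    # A book's displayed rating is the plain average of all star ratings.
    print(mean([5, 4, 5, 2, 5]))             # 4.2

    class HelpfulVotes:
        """One vote per registered cookie for any given review."""
        def __init__(self):
            self._votes = {}                 # (cookie_id, review_id) -> bool

        def vote(self, cookie_id, review_id, helpful):
            # A repeated vote from the same cookie on the same review is
            # ignored; deleting the cookie or switching computer defeats
            # this, which is exactly the weakness noted above.
            self._votes.setdefault((cookie_id, review_id), helpful)

        def tally(self, review_id):
            flags = [h for (_, r), h in self._votes.items() if r == review_id]
            return sum(flags), len(flags)    # (helpful votes, total votes)

    votes = HelpfulVotes()
    votes.vote("cookie-a", "rev-1", True)
    votes.vote("cookie-a", "rev-1", True)    # ignored: duplicate cookie
    votes.vote("cookie-b", "rev-1", False)
    print(votes.tally("rev-1"))              # (1, 2)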
9.4. Discussion fora

9.4.1. Slashdot

Slashdot (http://slashdot.org/) was started in 1997 as a "news for nerds" message board. More precisely, it is a forum for posting articles and comments to articles. In the early days, when the community was small, the signal to noise ratio was very high. As is the case with all mailing lists and discussion fora where the number of members grows rapidly, spam and low quality postings emerged to become a major problem, and this forced Slashdot to introduce moderation. To start with, there was a team of 25 moderators, which after a while grew to 400 moderators to keep pace with the growing number of users and the amount of spam that followed. In order to create a more democratic and healthy moderation scheme, automated moderator se-
