1. A Game Theoretical Approach to
Modeling Information
Dissemination in Social Networks
Dmitry Zinoviev, Vy Duong, Honggang Zhang
Mathematics and Computer Science Department
Suffolk University
Boston
2. Actors and Assertions
⏏ Our first paper considers two people engaged in a one-way
communication.
⏏ One person (“S[ender]”) has an assertion Φ that she wants to
share with another person (“R[eceiver]”). Both S and R are actors.
⏏ An assertion is an atomic piece of knowledge.
⏏ R may already have the assertion Φ, but S does not know about it.
⏏ R may have other assertions as well, but is not allowed to share
them.
3. Assertions and Feedback
⏏ Sender S must decide whether to speak (post, publish, etc.) or not
⏏ Publishing can hurt—and so can not publishing
⏏ Receiver R must decide whether to trust S or not, and also
whether to comment on S's post or not to comment
⏏ Commenting can hurt—and so can not commenting
4. Two Actors—Two Policies
⏏ As a result, each actor has two
strategies: to post or not to post and to
comment (if posted) or not to
comment.
⏏ Each actor has to make a choice that
maximizes his/her utility.
⏏ This forms a mathematical game—a
square game with two players and two
strategies per player
⏏ Solve the game—get the strategies!
⏏ What is the utility?
5. Actor's Utility
⏏ Actor's utility is a convex linear combination of three factors:
credibility, popularity, and knowledge:
U_i = τ_i·T_i + π_i·P_i + κ_i·K_i,  where  τ_i + π_i + κ_i = 1  and  0 ≤ T, P, K ≤ N
⏏ T is credibility—the extent to which S trusts R and R trusts S
τ is the importance of trust to the actor
⏏ P is popularity—a measure of “social visibility” of the actor
π is the importance of popularity to the actor
⏏ K is the measure of knowledge
κ is the importance of knowledge to the actor
⏏ N is simply a reasonably large number; in the course of the simulation, T, P, and K are re-normalized as needed
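The utility above is a direct weighted sum, which can be sketched as a small Python function (the numeric values in the example are invented for illustration):

```python
def utility(T, P, K, tau, pi_, kappa):
    """Convex linear combination of credibility T, popularity P, and knowledge K.

    The weights tau, pi_, kappa must sum to 1, as on the slide.
    """
    assert abs(tau + pi_ + kappa - 1.0) < 1e-9, "weights must sum to 1"
    return tau * T + pi_ * P + kappa * K

# Example: an actor who values credibility most (hypothetical numbers).
print(utility(T=10, P=4, K=6, tau=0.6, pi_=0.3, kappa=0.1))  # close to 7.8
```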
6. Personality Types
⏏ Depending on the values of τ, π, and κ, one can define
several personality types; for example:
⏏ “Internet Trolls” have high π and low τ and κ
⏏ “Experts” have medium κ, high τ, and low π
⏏ “Mad Professors” have high κ, medium or low τ, and low π
7. Knowledge
⏏ Actor S's knowledge is a collection of S's assertions; S knows K_S assertions
⏏ The number of assertions in the system, N, is finite and fixed
⏏ Each assertion can be of three types:
⏏ Privately believed to be true—a positive fact (+); S knows F+_S true assertions
⏏ Privately believed to be false—a negative fact (-); S knows F-_S false assertions
⏏ Privately not known to be true or false—a rumor (○); S knows F○_S rumors
8. Rumor Discount
⏏ λ is a rumor discount coefficient:
⏏ λ=0 means that rumors are not included in the total
knowledge
⏏ λ=1 means that rumors are fully included
⏏ The measure of S's knowledge is K (0≤K≤N):
K = F+_S + F-_S + λ·F○_S
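As a quick sanity check, the knowledge measure with the rumor discount can be computed directly (the assertion counts below are made up):

```python
def knowledge(f_true, f_false, f_rumor, lam):
    """K = F+ + F- + lambda * F_rumor, with the rumor discount lam in [0, 1]."""
    return f_true + f_false + lam * f_rumor

# lam = 0 ignores rumors entirely; lam = 1 counts them fully.
print(knowledge(30, 10, 20, 0.0))  # 40.0
print(knowledge(30, 10, 20, 1.0))  # 60.0
print(knowledge(30, 10, 20, 0.5))  # 50.0
```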
9. Knowledge Types
⏏ Depending on the value of k=K/N, one can define several
knowledge types; for example:
⏏ “Ignoramuses” have low k
⏏ “Mediocres” have medium k
⏏ “Gurus” have high k
10. What Is Global Truth?
⏏ The probability of an assertion being globally true is φ
⏏ Only an external oracle (a “God”) knows which particular
assertions are globally true
11. What Is Perceived Truth?
⏏ Upon receiving an assertion, R must assess (or fail to assess) it—that is, calculate the probabilities of Φ being a true assertion (g+), a false assertion (g-), or a rumor (g○); g+ + g- + g○ = 1
⏏ This process is based on:
⏏ R's own knowledge
⏏ R's trust in S
⏏ The probability of the assertion being true by nature
⏏ The sender's opinion of the assertion
⏏ Situations:
⏏ k_R = 0: R is an Ignoramus; he must trust S to the extent of S's credibility
⏏ k_R = 1: R is a Guru; he makes the decision himself, using φ as guidance
⏏ 0 < k_R < 1: R is a Mediocre; he blends the two extreme strategies
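The slides do not spell out the blending rule; a natural linear interpolation between the two extremes (purely hypothetical, for illustration only) would look like this:

```python
def assess_true(k_R, r_S, phi):
    """Hypothetical probability that R assigns to 'the assertion is true'.

    An Ignoramus (k_R = 0) defers to S in proportion to S's credibility r_S;
    a Guru (k_R = 1) relies on the global truth probability phi;
    a Mediocre blends the two extremes linearly in k_R.
    """
    return k_R * phi + (1 - k_R) * r_S

print(assess_true(k_R=0.0, r_S=0.7, phi=0.8))  # 0.7 (pure deference to S)
print(assess_true(k_R=1.0, r_S=0.7, phi=0.8))  # 0.8 (pure reliance on phi)
```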
13. Learning
⏏ After receiving an assertion from S, R will:
⏏ Learn the assertion if he didn't know it before (with probability p1 = 1 - k_R), and give S a popularity credit of 1
⏏ Reassess the assertion if he already knows it but has a different opinion (with probability p2 = f_R(1 - (g-·f-_R + g+·f+_R + g○·f○_R))), and give S a popularity credit of 1; reassessment does not change F_R!
⏏ Ignore the assertion otherwise, and ignore S altogether; we assume that in the absence of popularity credits, S's popularity slowly deteriorates by -δP per communication cycle
⏏ The overall change of the receiver's knowledge is:
ΔK_R = λ(1 - f_R) + (1 - λ)(g- + g+ - f_R(f+_R + f-_R))
⏏ The overall change of the sender's popularity is:
ΔP_S = 1 - f_R(g+·f+_R + g-·f-_R + g○·f○_R)
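The two update formulas on this slide can be transcribed directly into code (the numeric inputs in the example are invented, just to exercise the formulas):

```python
def delta_K_R(lam, f_R, g_minus, g_plus, fp_R, fm_R):
    """Change of the receiver's knowledge:
    dK_R = lam*(1 - f_R) + (1 - lam)*(g- + g+ - f_R*(f+_R + f-_R))."""
    return lam * (1 - f_R) + (1 - lam) * (g_minus + g_plus - f_R * (fp_R + fm_R))

def delta_P_S(f_R, g_plus, g_minus, g_rumor, fp_R, fm_R, fo_R):
    """Change of the sender's popularity:
    dP_S = 1 - f_R*(g+*f+_R + g-*f-_R + g○*f○_R)."""
    return 1 - f_R * (g_plus * fp_R + g_minus * fm_R + g_rumor * fo_R)

# Invented sample values.
print(delta_K_R(lam=0.5, f_R=0.4, g_minus=0.2, g_plus=0.6, fp_R=0.5, fm_R=0.3))
print(delta_P_S(f_R=0.4, g_plus=0.6, g_minus=0.2, g_rumor=0.2,
                fp_R=0.5, fm_R=0.3, fo_R=0.2))
```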
15. Feedback: Receiver's Side
⏏ R can influence his own trust level by providing feedback in the
form of a comment
⏏ If R is a Guru (k_R = 1, he knows all assertions), he can always assess a passed assertion correctly and earn a trust credit of +1
⏏ If R is an Ignoramus (k_R = 0), then we compare R's assessment of an assertion with the oracle's assessment of the assertion; if they match, R gets a trust credit of +1; otherwise, he gets a trust penalty of -1
⏏ Overall:
ΔR_R = k_R + (1 - k_R)·r_S·(2φ - 1)(f+_S - f-_S)
16. Feedback: Sender's Side
⏏ If receiver R sends feedback, he can influence sender S's trust, too
⏏ If R's assessment of an assertion matches S's assessment of the same assertion, S earns a trust credit of +1 (discounted by the receiver's trust level!)
⏏ Otherwise, she gets a trust penalty of -1:
ΔR_S = R_R((1 - 2g- - 2g+)(1 - 2f-_S - 2f+_S) - 2(f+_S·g+ + f-_S·g-))
⏏ The feedback can also change the knowledge distribution of S by
forcing her to reassess her assertions, based on her trust in R:
ΔK_S = r_R(1 - λ)(g- + g+ - (f+_S + f-_S))
⏏ The number of assertions at S will not change
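The three feedback updates of slides 15 and 16 can likewise be transcribed directly (the lowercase r_S and r_R are taken to be the normalized trust levels, and the sample inputs are invented):

```python
def delta_R_R(k_R, r_S, phi, fp_S, fm_S):
    """Receiver's trust update: dR_R = k_R + (1-k_R)*r_S*(2*phi-1)*(f+_S - f-_S)."""
    return k_R + (1 - k_R) * r_S * (2 * phi - 1) * (fp_S - fm_S)

def delta_R_S(r_R, g_minus, g_plus, fm_S, fp_S):
    """Sender's trust update:
    dR_S = r_R*((1-2g- -2g+)*(1-2f-_S -2f+_S) - 2*(f+_S*g+ + f-_S*g-))."""
    return r_R * ((1 - 2 * g_minus - 2 * g_plus) * (1 - 2 * fm_S - 2 * fp_S)
                  - 2 * (fp_S * g_plus + fm_S * g_minus))

def delta_K_S(r_R, lam, g_minus, g_plus, fp_S, fm_S):
    """Sender's knowledge update: dK_S = r_R*(1-lam)*(g- + g+ - (f+_S + f-_S))."""
    return r_R * (1 - lam) * (g_minus + g_plus - (fp_S + fm_S))

# Invented sample values.
print(delta_R_R(k_R=0.3, r_S=0.8, phi=0.8, fp_S=0.5, fm_S=0.2))
print(delta_R_S(r_R=0.6, g_minus=0.2, g_plus=0.6, fm_S=0.2, fp_S=0.5))
print(delta_K_S(r_R=0.6, lam=0.5, g_minus=0.2, g_plus=0.6, fp_S=0.5, fm_S=0.2))
```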
18. The Game
⏏ S and R form a two-player square game
⏏ We assume that in general the game is non-cooperative (S and R do not coordinate their strategies to maximize their joint utility)
⏏ The game has a pure-strategy Nash equilibrium
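To see what a pure-strategy Nash equilibrium of such a 2⨯2 game looks like, here is a minimal brute-force search (the payoff matrices are invented; in the model the payoffs come from the utility updates):

```python
def pure_nash(payoff_S, payoff_R):
    """All strategy pairs (i, j) where neither player gains by deviating alone.

    payoff_S[i][j] / payoff_R[i][j]: payoffs when S plays row i, R plays column j.
    """
    equilibria = []
    for i in range(2):
        for j in range(2):
            s_best = payoff_S[i][j] >= max(payoff_S[k][j] for k in range(2))
            r_best = payoff_R[i][j] >= max(payoff_R[i][k] for k in range(2))
            if s_best and r_best:
                equilibria.append((i, j))
    return equilibria

# Invented payoffs: rows = S {post, stay silent}, columns = R {comment, ignore}.
U_S = [[3, 1], [2, 0]]
U_R = [[2, 0], [0, 1]]
print(pure_nash(U_S, U_R))  # [(0, 0)]: posting and commenting is stable here
```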
19. Case Study: a MOSN
Let's experiment with a simulated massive online
social network (MOSN)
20. Network Design
⏏ The massive online social network (MOSN) is represented as a connected bidirectional graph where nodes represent actors and edges represent “friendship” connections or other information dissemination channels
⏏ 1,000 actors, fully connected (anyone can talk to anyone)
⏏ At each simulation step, exactly two actors talk (still a 2⨯2 game,
not n⨯n)
⏏ The probability of a fact being true is φ=0.8. The actor popularity
decay factor is δP=-0.1. The rumor discount coefficient is λ= 0.5.
The maximum number of facts in the network is N = 2000.
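The simulation loop itself can be sketched as follows (a skeleton only: the game-playing step is stubbed out, the per-step updates are placeholders, and all names are hypothetical):

```python
import random

random.seed(42)

N_ACTORS = 1000
PHI, DECAY_P, LAM, N_FACTS = 0.8, 0.1, 0.5, 2000  # parameters from the slide

# Each actor is reduced here to a popularity score; the real model also
# tracks trust, knowledge, and the tau/pi/kappa utility weights.
popularity = [0.0] * N_ACTORS

for step in range(10_000):  # the paper runs 10,000,000 communications
    s, r = random.sample(range(N_ACTORS), 2)  # pick a sender and a receiver
    # ... play the 2x2 game between s and r and apply the updates here ...
    popularity[s] += 1.0       # placeholder for a popularity credit
    popularity[r] -= DECAY_P   # placeholder for popularity decay
```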
21. Network Population
⏏ Three experiments:
⏏ All “trolls”
⏏ All “experts”
⏏ 50% “trolls,” 50% “experts”
⏏ In each experiment, 1/3 of actors are “ignoramuses,” 1/3 are “mediocres,” and 1/3 are “gurus”
⏏ r_i, p_i, f+_i, f-_i are drawn uniformly at random between 0 and 1
22. Goal
⏏ Execute 10,000,000 random communications (10,000
communications per actor)
⏏ Monitor the distribution of knowledge k and its quality f+ and f-
23. “Trolls”
⏏ The “troll” community converges to the state of “total knowledge”
after a finite number of iterations (“Ignore credibility, talk!”)
24. “Experts”
⏏ The distribution of information in the “expert” community changes
marginally over time (“Think before you say!”)
⏏ Dispersion of the learning speed: some “ignoramuses” and “mediocres” learn faster
25. Difference in Learning Speed
⏏ Actors with lower credibility and lower initial knowledge learn
faster to increase their utility
⏏ Actors with higher credibility or higher initial knowledge learn
slower, because they have less incentive to learn
26. Future Directions
⏏ Study the variability of τ, π, and κ for different actors
⏏ Analyze a full-duplex (two-way) communication scenario where
the actors are both senders and receivers—just finished,
submitted to the Summer Simulation Conference-2010
⏏ Analyze a groupcast (one-to-many) communication scenario that
is more common in massive online social networks
⏏ Collect experimental data that supports the model—so far, done
only for the popularity component
27. Acknowledgment
This research has been supported in part by the
College of Arts and Sciences, Suffolk University,
through an undergraduate research assistantship
grant.