Title: Variation on Preferential Attachment
Abstract
In this talk, I will describe how preferential attachment arises from first principles using game theory. Next, I will extend preferential attachment to a general model that allows for the incorporation of homophily ties in the network. This talk is based on joint work with Prof. Chen Avin, Avi Cohen, Yinon Nahum, Prof. Pierre Fraigniaud, and Prof. David Peleg.
3. Power Law Distributions
Observed in both network and non-network structures.
"Emergence of Scaling in Random Networks" (Barabási and Albert, 1999).
Pr[x > t] ~ L(t) * t^(-β+1), where L is slowly varying:
lim_{t→∞} L(r*t)/L(t) = 1 for every fixed r > 0.
4. History
1925: Udny Yule. 1976: Price. 1999: Barabási. 2006: Chung and Lu.
The classical models give β = 3; the generalizations give β ∈ (2, 3].
Pr[v_t connects to v_i] = d_i / Σ_j d_j
5. Preferential Attachment
• An evolutionary model of networks.
• There are two operations: a node event and an edge event.
• Node event: a new node arrives and connects to the network with a single edge; the endpoint is chosen with probability proportional to degree.
• Edge event: a new edge is added between two existing nodes, each chosen with probability proportional to degree.
6. Our Preferential Attachment
The basic model can easily be extended in many directions. At each time step:
• A node event happens with probability p_t.
• An edge event happens with probability r_t.
• A component event happens with probability q_t.
The probabilities can change over time, but always p_t + q_t + r_t = 1.
In fact, the results can also be proved for hypergraphs.
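The node/edge/component process above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes constant probabilities p, q, r, and since the slide does not spell out the component event, it is assumed here to add a new isolated node.

```python
import random

def generalized_pa(T, p, q, r, seed=0):
    """Simulate T steps of the generalized preferential-attachment process.

    Node event (prob. p): a newcomer attaches by one edge, endpoint chosen
    proportionally to degree. Edge event (prob. r): an edge is added between
    two degree-proportional endpoints. Component event (prob. q): assumed
    here to add a new isolated node.
    """
    assert abs(p + q + r - 1.0) < 1e-9
    rng = random.Random(seed)
    # Node j appears d_j times in `endpoints`, so a uniform draw from it
    # is exactly a degree-proportional draw.
    endpoints = [0, 1]          # start from a single edge between nodes 0 and 1
    edges = [(0, 1)]
    n = 2
    for _ in range(T):
        u = rng.random()
        if u < p:               # node event
            host = rng.choice(endpoints)
            edges.append((n, host))
            endpoints.extend([n, host])
            n += 1
        elif u < p + r:         # edge event
            a, b = rng.choice(endpoints), rng.choice(endpoints)
            edges.append((a, b))
            endpoints.extend([a, b])
        else:                   # component event (assumed: isolated newcomer)
            n += 1
    return n, edges, endpoints
```

The endpoint multiset keeps degree-proportional sampling O(1) per draw, which is the standard trick for simulating preferential attachment.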
7. Theorem
• PA follows a power law with exponent β = 1 + 2/(p + 2r).
Examples:
• r = 1 - 1/log(t), p = 1/log(t): E = n log n edges, sub-linear core.
• p = 1/2, r = 0, q = 1/2.
• p = 0, r = 0.25, q = 0.75: giant component.
• Pushing β over the full domain [2, ∞): p = ε, r = 1/2, q = 1 - ε.
• Full domain (1, 2]: r = 1 - 1/t^a, p = 1/t^a.
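For constant rates, the exponent β = 1 + 2/(p + 2r) is a one-liner, so the parameter choices above can be checked directly (a quick sanity check, not part of the talk):

```python
def beta(p, r):
    """Power-law exponent beta = 1 + 2/(p + 2r), for constant p, r."""
    return 1 + 2 / (p + 2 * r)

print(beta(0.5, 0))     # p=1/2, r=0, q=1/2   -> 5.0
print(beta(0, 0.25))    # p=0, r=1/4, q=3/4   -> 5.0
print(beta(0.5, 1/3))   # the worked example  -> 1 + 12/7 ≈ 2.714
```

Note that as r → 1 and p → 0 the denominator p + 2r → 2, so β → 2, matching the "full domain" examples with time-varying rates.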
8. Example: p = 1/2, r = 1/3, q = 1/6
Let d_i(t) denote the degree of vertex i at time t.
• Node event: dd_i(t)/dt = d_i(t)/(2t).
• Edge event: dd_i(t)/dt = 2*d_i(t)/(2t).
• Component event: dd_i(t)/dt = 0.
Combining the three with their probabilities:
dd_i(t)/dt = (1/2)*d_i(t)/(2t) + (1/3)*d_i(t)/t = (7/12)*d_i(t)/t,
so d_i(t) = (t/i)^(7/12) and β = 1 + 12/7.
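The claimed solution can be sanity-checked numerically: d_i(t) = (t/i)^(7/12) should satisfy d'(t) = (7/12)*d(t)/t. A finite-difference check (with i = 1 for concreteness; this is an illustration, not a proof):

```python
def d(t, i=1.0, c=7/12):
    """Claimed solution d_i(t) = (t/i)^(7/12)."""
    return (t / i) ** c

for t in (10.0, 100.0, 1000.0):
    h = 1e-6 * t
    lhs = (d(t + h) - d(t - h)) / (2 * h)   # central-difference derivative
    rhs = (7 / 12) * d(t) / t               # right-hand side of the ODE
    assert abs(lhs - rhs) < 1e-6 * rhs
```

The check passes for all three values of t, as expected, since (t^c)' = c * t^(c-1) = c * t^c / t.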
10. Game Theory in One Slide
• There are players, each of whom chooses a strategy.
• A profile is an assignment of a strategy to each player.
• A profile determines an outcome, and an outcome determines a payoff for each player.
• The players wish to maximize their (expected) payoff.
11. Game Theory, cont.: Analysis
A Nash equilibrium is a profile such that no single player can gain by changing her strategy unilaterally.
Example (payoffs listed as (row player, column player)):
        A         B
A   (10, 10)   (5, 0)
B   (0, 5)     (0, 0)
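The unilateral-deviation condition is easy to check mechanically for a 2x2 game like the one above (a small illustration of the definition; the names are mine):

```python
# Payoffs (row player, column player); strategies indexed A=0, B=1.
PAYOFF = {
    (0, 0): (10, 10), (0, 1): (5, 0),
    (1, 0): (0, 5),   (1, 1): (0, 0),
}

def is_nash(i, j):
    """True iff neither player gains by deviating unilaterally from (i, j)."""
    row_ok = all(PAYOFF[(i, j)][0] >= PAYOFF[(k, j)][0] for k in (0, 1))
    col_ok = all(PAYOFF[(i, j)][1] >= PAYOFF[(i, k)][1] for k in (0, 1))
    return row_ok and col_ok

print([s for s in PAYOFF if is_nash(*s)])  # -> [(0, 0)]: (A, A) is the unique pure NE
```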
12. Network Formation Game [Fabrikant et al., 2003]
• Player = node.
• Strategy = which nodes to connect to.
• Goal: minimize their average distance.
• The resulting graph is not power law.
13. The Wealth & Recommendation (W&R) Game
• Start with one node v_1.
• At time t, node v_t arrives and proposes to an (existing) host node.
• With probability α, v_t connects to the host (wealth-based), and with probability 1 - α, v_t connects to a random neighbour of the host (recommendation).
• Utility = (expected) degree.
14. How to Play W&R
Each time a node arrives, it chooses a single existing node. It then either connects to that node directly, or receives a recommendation and connects to the recommended node. Ingredients of the W&R game:
• τ: the end of the game.
• α: the wealth parameter.
• Partial information: the degree sequence.
• Utility: maximize the degree.
• Question: what is a Nash equilibrium?
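The W&R process is straightforward to simulate. The sketch below is illustrative and makes two assumptions not fixed by the slides: it starts from a single edge rather than a lone node (so every host already has a neighbour to recommend), and it takes the host-choice strategy as a parameter, shown here with the uniform strategy.

```python
import random

def wr_game(n, alpha, choose_host, seed=0):
    """Simulate the Wealth & Recommendation process for n nodes.

    Node t proposes to a host chosen by choose_host(degrees, rng); with
    probability alpha it connects to the host itself (wealth), otherwise
    to a uniformly random neighbour of the host (recommendation).
    """
    rng = random.Random(seed)
    adj = {0: [1], 1: [0]}      # assumption: start from a single edge
    for t in range(2, n):
        degrees = [len(adj[v]) for v in range(t)]
        host = choose_host(degrees, rng)
        target = host if rng.random() < alpha else rng.choice(adj[host])
        adj[t] = [target]
        adj[target].append(t)
    return adj

uniform = lambda degs, rng: rng.randrange(len(degs))

adj = wr_game(100, alpha=0.5, choose_host=uniform)
assert sum(len(nbrs) for nbrs in adj.values()) == 2 * 99  # n-1 edges in a tree
```

Swapping in a degree-proportional choose_host turns this into the PA strategy profile discussed next.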
15. Strategy
Player v_t's strategy π_t is a probability distribution over the existing nodes.
π_t(d_i, D) is the probability of choosing the node of degree d_i in the degree sequence D = (d_1, ..., d_{t-1}).
A strategy profile is Π = (π_t), t ≥ 1.
16. Examples
• What happens if everyone plays the uniform strategy?
• Is it better to connect to small-degree nodes?
• What if α = 1?
• Is it a Nash equilibrium?
[Figure: example graphs with node degrees labelled, each existing node chosen with probability 1/n]
17. The Preferential Attachment Strategy
The Preferential Attachment (PA) strategy at time t over the degree sequence D = (d_1, ..., d_{t-1}) is:
π_t(d_i, D) = d_i / Σ_j d_j = d_i / (2(t - 2)).
The Preferential Attachment strategy profile is the strategy profile in which all players v_t for t ≥ 5 play the PA strategy.
18. PA and Random Walks
The stationary distribution of a simple random walk equals the PA probability:
Pr[v_t connects to v_i] = Pr[RW visits v_i] = d_i / Σ_j d_j.
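The identity between the random-walk stationary distribution and degree proportions is easy to verify on a small graph (an illustration on a graph of my choosing, using power iteration):

```python
import numpy as np

# A small connected, aperiodic graph (contains a triangle); symmetric adjacency.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
deg = A.sum(axis=1)
P = A / deg[:, None]            # simple-random-walk transition matrix

pi = np.ones(4) / 4             # start from the uniform distribution
for _ in range(1000):           # power iteration converges to stationarity
    pi = pi @ P

assert np.allclose(pi, deg / deg.sum())   # stationary dist. = d_i / sum_j d_j
```

The triangle guarantees aperiodicity, so power iteration converges; on bipartite graphs one would need a lazy walk instead.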
19. Theorem
The Preferential Attachment strategy profile is the only universal Nash equilibrium.
This provides a possible explanation of why preferential attachment occurs in social networks.
20. PA Is a Universal Nash Equilibrium
Proof sketch: suppose the PA profile is played, and v_i changes its strategy to π_i'. At each step t > i, v_i's degree evolves independently of its own strategy:
• v_i starts at degree d = 1.
• d ← d + 1 with probability d/(2(t - 2)).
Hence π_i' does not matter: no deviation from PA changes v_i's expected degree.