Introduction Consensus SPADE Agents Performance Conclusions
Co-Learning: Consensus-based Learning for
Multi-Agent Systems
C. Carrascosa J. Rincón M. Rebollo
VRAIN. Valencian Research Inst. for AI
Univ. Politècnica de València (Spain)
Practical Applications of Agents and Multiagent Systems
L’Aquila 2022
@mrebollo VRAIN
Co-Learning: Consensus-based Learning for Multi-Agent Systems
Problem
Objective
Machine learning (ML) models can be expensive to train on a
single computer
Federated Learning
Distributed set of nodes
Training set divided into subsets
Central server averages the weights
Advantages
Reduction of the computational load
Keeps data private
Consensus Process in Networks
Process to share information on a network, ruled by

x_i(t+1) = x_i(t) + ε Σ_{j∈N_i} [x_j(t) − x_i(t)]

Information from direct neighbors only
[Figure: Olfati consensus — node values (0 to 1) converging over 100 epochs]
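The update above can be sketched in a few lines of plain Python; the 4-node ring and the initial values are illustrative, not taken from the deck:

```python
# Hedged sketch of the discrete-time consensus update
# x_i(t+1) = x_i(t) + eps * sum_{j in N_i} [x_j(t) - x_i(t)]
# The 4-node ring and the initial values are illustrative.

def consensus_step(x, neighbors, eps):
    """One synchronous consensus iteration over all nodes."""
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Ring of 4 nodes; eps below 1/max_degree guarantees convergence.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
x = [0.0, 1.0, 0.5, 0.25]

for _ in range(100):
    x = consensus_step(x, neighbors, eps=0.25)

print(x)  # all entries close to the initial average, 0.4375
```

Note that each node only reads the values of its direct neighbors, matching the locality constraint stated above.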
Model: FL by Consensus
Goal
To learn a global model (W, tr) of weights W for a training set tr.
n identical agents as nodes in a network. Each agent has a NN model (W_i, tr_i), where
W_i = (W_{i,1}, ..., W_{i,k}) are the weight and bias matrices of node i, one per component k
tr_i ⊆ tr is the fragment of the training set assigned to i.
Weights are averaged in the neighborhood using the consensus algorithm (Olfati, 2007).
W_i(t+1) = W_i(t) + ε Σ_{j∈N_i} [W_j(t) − W_i(t)]
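The same consensus step applied componentwise to per-node weight matrices can be sketched as follows; the 2x2 matrices and the two-node network are toy values, not the model trained in the paper:

```python
# Consensus step applied elementwise to each node's weight matrix W_i
# (lists of lists as a stand-in for the NN's weight/bias matrices).

def consensus_weights(W, neighbors, eps):
    """One consensus step over per-node weight matrices."""
    return {
        i: [[Wi[r][c] + eps * sum(W[j][r][c] - Wi[r][c] for j in neighbors[i])
             for c in range(len(Wi[r]))]
            for r in range(len(Wi))]
        for i, Wi in W.items()
    }

W = {0: [[1.0, 0.0], [0.0, 1.0]],
     1: [[0.0, 1.0], [1.0, 0.0]]}
neighbors = {0: [1], 1: [0]}

for _ in range(50):
    W = consensus_weights(W, neighbors, eps=0.4)

print(W[0])  # both nodes converge to the elementwise average [[0.5, 0.5], [0.5, 0.5]]
```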
Model: FL by Consensus
The NNs converge to the average values of W_i
Once adjusted, a new training epoch executes
SPADE Agents
This solution is implemented over the SPADE architecture for multi-agent systems
Co-Learning Algorithm
1: while !doomsday do
2:   for f ← 1, e do
3:     W ← Train(f)
4:   end for
5:   for j ← 1, k do
6:     X_i(0) ← W_j
7:     for t ← 1, c do
8:       Receive X_j(t) from a_i's neighbors
9:       X_i(t+1) ← X_i(t) + ε Σ_{j∈N_i} [X_j(t) − X_i(t)]
10:      Send X_i(t+1) to a_i's neighbors
11:    end for
12:  end for
13: end while
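The loop above can be sketched as runnable Python. Train() is replaced here by a stand-in that nudges a scalar weight, so the structure (e local epochs, then c consensus rounds per component) is what matters, not the numbers:

```python
# Runnable sketch of the co-learning loop: e local training epochs,
# then c consensus rounds for each of the k weight components.
# train() is a toy stand-in for Train(f); all numbers are illustrative.

def train(w):
    """Toy stand-in for Train(f): pull the weight towards 1.0."""
    return w * 0.9 + 0.1

def co_learn(W0, neighbors, e=1, c=30, k=1, eps=0.3, rounds=3):
    W = dict(W0)
    for _ in range(rounds):            # stands in for `while !doomsday`
        for _ in range(e):             # e local training epochs
            W = {i: train(Wi) for i, Wi in W.items()}
        for _ in range(k):             # one consensus run per component
            for _ in range(c):         # c consensus iterations
                W = {i: Wi + eps * sum(W[j] - Wi for j in neighbors[i])
                     for i, Wi in W.items()}
    return W

# Path network a0 - a1 - a2 with different initial weights.
result = co_learn({0: 0.0, 1: 1.0, 2: 2.0}, {0: [1], 1: [0, 2], 2: [1]})
print(result)  # the three agents end with (nearly) identical weights
```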
SPADE Behaviour for FL Consensus
Finite state machine for the co-learning behaviour of the agent
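The deck implements this as a SPADE FSMBehaviour; below is a framework-agnostic sketch of a plausible state cycle. The state names are assumptions, not taken from the slides:

```python
# Framework-agnostic sketch of the agent's finite state machine.
# State names are guesses at the train/consensus/check cycle.

TRANSITIONS = {
    "TRAIN": "CONSENSUS",    # after e local epochs, start exchanging weights
    "CONSENSUS": "CHECK",    # after c consensus rounds, evaluate the model
    "CHECK": "TRAIN",        # not converged yet: run another training phase
}

def run_fsm(start="TRAIN", steps=6):
    """Follow the transition table for a fixed number of steps."""
    trace, state = [start], start
    for _ in range(steps):
        state = TRANSITIONS[state]
        trace.append(state)
    return trace

print(run_fsm())
# ['TRAIN', 'CONSENSUS', 'CHECK', 'TRAIN', 'CONSENSUS', 'CHECK', 'TRAIN']
```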
Network Topologies
Six network topologies studied to identify which one is the best to connect the agents:
Regular 2-d Grid, Triangular Grid, Kleinberg's Navigable Graph, Random Geometric Graph (RGG), Delaunay Triangulation, Gabriel Graph
Degree Distribution
Degree distribution affects the number of messages exchanged
[Figure: degree distributions (freq vs degree) for the six topologies — 2-d Grid, Triangular Grid, Navigable, RGG, Delaunay, and Gabriel graphs — for network sizes n = 25 to 102]
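For the 2-d grid the plotted frequencies can be reproduced in pure Python; a 5x5 grid is used here purely for illustration:

```python
# Degree distribution of a w-by-h 2-d grid graph: fraction of nodes
# with each degree (corners: 2, border: 3, interior: 4).

from collections import Counter

def grid_degree_freq(w, h):
    cells = {(x, y) for x in range(w) for y in range(h)}
    def deg(x, y):
        return sum((x + dx, y + dy) in cells
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    counts = Counter(deg(x, y) for x, y in cells)
    n = len(cells)
    return {d: c / n for d, c in sorted(counts.items())}

print(grid_degree_freq(5, 5))
# {2: 0.16, 3: 0.48, 4: 0.36} — 4 corners, 12 border nodes, 9 interior nodes
```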
Global Performance
Combined with the degree, the path length is another factor that affects the performance of the consensus (not the value reached)
[Figure: average shortest path length and mean degree vs number of nodes (30 to 100) for the six topologies]
Global Performance
Total number of iterations needed for the consensus to complete
[Figure: consensus performance — number of iterations to complete vs number of nodes (30 to 100) for the six topologies]
Network Efficiency
How the networks behave under random or deliberate attacks (by degree)
[Figure: network efficiency E/E_G vs number of nodes removed, under random and targeted (by degree) removal, for the six topologies]
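The efficiency metric behind these plots, E(G) = (1/(n(n−1))) Σ_{i≠j} 1/d(i,j), and its drop after a targeted removal can be sketched in pure Python; the 3x3 grid below is a toy stand-in for the topologies studied:

```python
# Global efficiency via BFS hop distances, and the effect of removing
# the highest-degree node of a small 2-d grid (toy example).

from collections import deque

def bfs_dists(adj, src):
    """Hop distances from src; unreachable nodes are simply absent."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def efficiency(adj):
    n = len(adj)
    total = sum(1.0 / d for u in adj
                for v, d in bfs_dists(adj, u).items() if v != u)
    return total / (n * (n - 1))

def grid(w, h):
    adj = {(x, y): [] for x in range(w) for y in range(h)}
    for x, y in adj:
        for u in ((x + 1, y), (x, y + 1)):
            if u in adj:
                adj[(x, y)].append(u)
                adj[u].append((x, y))
    return adj

g = grid(3, 3)
e_full = efficiency(g)
# Targeted attack: drop the centre, the highest-degree node of this grid.
attacked = {u: [v for v in vs if v != (1, 1)] for u, vs in g.items() if u != (1, 1)}
e_attacked = efficiency(attacked)
print(e_attacked < e_full)  # True: efficiency degrades under the attack
```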
Conclusion
RGG is the best-balanced topology for performance and robustness
Effect of Network Size
Accuracy and loss of the trained model after the co-learning
process
Conclusions
Shares the advantages of federated learning
Distributed aggregation of models
Keeps datasets private
RGG topologies present a good balance between performance and robustness