Distributed federated learning using consensus with intelligent agents over a network. Work presented at the 20th International Conference on Practical Applications of Agents and Multi-Agent Systems, July 2022, L'Aquila (Italy)
Co-Learning: Consensus-based Learning for Multi-Agent Systems
Outline: Introduction · Consensus · SPADE Agents · Performance · Conclusions
C. Carrascosa J. Rincón M. Rebollo
VRAIN. Valencian Research Inst. for AI
Univ. Politècnica de València (Spain)
Practical Applications of Agents and Multiagent Systems
L’Aquila 2022
@mrebollo VRAIN
Problem
Objective
Machine learning (ML) models can be expensive to train on a single computer
Federated Learning
Distributed set of nodes
Training set divided into subsets
Central server averages the weights
Advantages
Reduction of the computational load
Keeps data private
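The server-averaging step can be sketched in a few lines of plain Python; `local_train`, the scalar weights, and the toy data are hypothetical stand-ins for illustration, not the training code used in this work:

```python
# Minimal sketch of centralized federated averaging (FedAvg-style).
# local_train is a hypothetical stand-in for one node's training pass.

def local_train(weights, subset):
    # Toy update: nudge each weight toward the mean of the local subset.
    target = sum(subset) / len(subset)
    return [w + 0.5 * (target - w) for w in weights]

def federated_round(global_weights, subsets):
    # Each node trains on its own fragment of the training set...
    local = [local_train(list(global_weights), s) for s in subsets]
    # ...and the central server averages the resulting weights.
    return [sum(ws[i] for ws in local) / len(local)
            for i in range(len(global_weights))]

subsets = [[1.0, 1.0], [3.0, 3.0]]   # raw data never leaves the nodes
w = federated_round([0.0], subsets)
print(w)  # → [1.0]: the average of the two locally trained weights
```

Only weights travel to the server; the data subsets stay on their nodes, which is what keeps the data private.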
Consensus Process in Networks
Process to share information over a network, ruled by

    xi(t+1) = xi(t) + ε Σ_{j∈Ni} [xj(t) − xi(t)]

Information from direct neighbors only
[Figure: Olfati consensus — node values vs. epoch (0–100), converging to a common value]
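A minimal simulation of this update rule (plain Python; the 4-node ring and the value of ε are illustrative assumptions) shows every node converging to the average of the initial values using only direct-neighbor information:

```python
# Consensus: x_i(t+1) = x_i(t) + eps * sum_{j in N_i} [x_j(t) - x_i(t)]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-node ring
x = {0: 1.0, 1: 0.0, 2: 0.5, 3: 0.9}                      # initial values
eps = 0.2                    # stable when 0 < eps < 1/max_degree

for t in range(100):
    # each node reads only its direct neighbors' current values
    x = {i: x[i] + eps * sum(x[j] - x[i] for j in neighbors[i]) for i in x}

print(x)  # every node ends near 0.6, the average of the initial values
```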
Model: FL by Consensus
Goal
To learn a global model (W, tr) of weights W for a training set tr.
n identical agents as nodes in a network. Each agent has an NN model (Wi, tri), where
Wi = (Wi,1, . . . , Wi,k) are the weight and bias matrices of node i, one per component k
tri ⊆ tr is the fragment of the training set assigned to i
Weights are averaged in the neighborhood using the consensus algorithm (Olfati-Saber, 2007):

    Wi(t+1) = Wi(t) + ε Σ_{j∈Ni} [Wj(t) − Wi(t)]
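Applied per component, the same rule averages whole weight structures; a toy two-node example (flattened lists standing in for the Wi,k matrices, with assumed values):

```python
# Per-component consensus on NN weights:
# W_i(t+1) = W_i(t) + eps * sum_{j in N_i} [W_j(t) - W_i(t)]
eps = 0.4
neighbors = {1: [2], 2: [1]}
W = {1: [[0.2, 0.8], [1.0]],   # node 1: k = 2 weight components
     2: [[0.6, 0.0], [3.0]]}   # node 2

def consensus_step(W):
    # apply the update to every entry m of every component k of node i
    return {i: [[w + eps * sum(W[j][k][m] - w for j in neighbors[i])
                 for m, w in enumerate(Wk)]
                for k, Wk in enumerate(Wi)]
            for i, Wi in W.items()}

for _ in range(50):
    W = consensus_step(W)
print(W[1])  # → component-wise averages [[0.4, 0.4], [2.0]]
```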
Model: FL by Consensus
The NNs converge to the average values of Wi
Once adjusted, a new training epoch executes
SPADE Agents
This solution is implemented on the SPADE architecture for multi-agent systems
Co-Learning Algorithm
1: while !doomsday do
2:     for f ← 1, e do
3:         W ← Train(f)
4:     end for
5:     for j ← 1, k do
6:         Xi(0) ← Wj
7:         for t ← 1, c do
8:             Receive Xj(t) from ai's neighbors
9:             Xi(t+1) ← Xi(t) + ε Σ_{j∈Ni} [Xj(t) − Xi(t)]
10:            Send Xi(t+1) to ai's neighbors
11:        end for
12:    end for
13: end while
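A single-process simulation of this loop (no SPADE messaging; the 3-agent line network, scalar weights, and toy `train` step are assumptions made for brevity):

```python
# Co-learning rounds: e local training epochs, then c consensus steps,
# mirroring the pseudocode above. train() is a toy stand-in for Train(f).
eps, e, c = 0.2, 2, 30
neighbors = {0: [1], 1: [0, 2], 2: [1]}   # 3 agents on a line
targets = {0: 0.0, 1: 1.0, 2: 2.0}        # each agent's local data
w = {i: 0.5 for i in neighbors}           # shared initial weight

def train(wi, target):
    return wi + 0.1 * (target - wi)       # toy local gradient step

for _round in range(200):                 # bounded stand-in for "while !doomsday"
    for _ in range(e):                    # local training epochs
        w = {i: train(w[i], targets[i]) for i in w}
    for _ in range(c):                    # consensus on the weights
        w = {i: w[i] + eps * sum(w[j] - w[i] for j in neighbors[i])
             for i in w}

print(w)  # all agents agree on a weight near 1.0, the joint optimum
```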
SPADE Behaviour for FL Consensus
Finite state machine for the co-learning behaviour of the agent
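A plain-Python sketch of such a state machine (this mirrors the structure of an FSM behaviour but is not SPADE's FSMBehaviour API; the state names are assumptions):

```python
# Toy FSM for the co-learning behaviour: the agent alternates local
# training and neighborhood consensus until the round budget runs out.
TRANSITIONS = {"TRAIN": "CONSENSUS", "CONSENSUS": "TRAIN"}

def run_fsm(rounds):
    state, trace = "TRAIN", []
    for _ in range(2 * rounds):   # one TRAIN + one CONSENSUS per round
        trace.append(state)
        state = TRANSITIONS[state]
    trace.append("STOP")          # terminal state
    return trace

print(run_fsm(2))  # → ['TRAIN', 'CONSENSUS', 'TRAIN', 'CONSENSUS', 'STOP']
```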
Network Topologies
Six network topologies studied to identify which one is the best to connect the agents:
Regular 2-d Grid, Triangular Grid, Kleinberg's Navigable Graph, Random Geometric Graph (RGG), Delaunay Triangulation, Gabriel Graph
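Two of these topologies sketched in plain Python for concreteness (networkx and scipy provide generators for all six; the sizes and radius here are arbitrary assumptions):

```python
import math
import random

def grid_2d(rows, cols):
    # Regular 2-d grid: each cell links to its right and down neighbors.
    right = [((r, c), (r, c + 1)) for r in range(rows) for c in range(cols - 1)]
    down = [((r, c), (r + 1, c)) for r in range(rows - 1) for c in range(cols)]
    return right + down

def random_geometric(n, radius, seed=0):
    # RGG: n random points in the unit square, linked when within radius.
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if math.dist(pts[i], pts[j]) < radius]

print(len(grid_2d(3, 3)))  # → 12 edges in a 3x3 grid
```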
Global Performance
Combined with the degree, the path length is another factor that affects the performance of the consensus (not its value)
[Figures: average shortest path length and mean degree vs. number of nodes (30–100) for the 2d-grid, triangular, RGG, Delaunay, Gabriel, and Kleinberg topologies]
Global Performance
Total number of iterations needed for the consensus to complete
[Figure: consensus performance — number of iterations (50–200) vs. number of nodes (30–100) for the six topologies]
Network Efficiency
How the networks behave under random or targeted (by degree) attacks
[Figures: relative network efficiency E/E_G vs. number of nodes removed (0–100), under random and targeted attacks, for the six topologies]
Conclusion
RGG is the best-balanced topology for performance and robustness
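The robustness measure behind these plots is global efficiency, E = Σ_{i≠j}(1/d_ij) / (n(n−1)); a plain-Python sketch of a targeted (highest-degree) attack on a small assumed graph:

```python
from collections import deque

def efficiency(adj):
    # Global efficiency: mean of 1/shortest-path-length over ordered
    # node pairs (unreachable pairs contribute 0).
    n, total = len(adj), 0.0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:                      # BFS shortest paths from s
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))

def remove_highest_degree(adj):
    # Targeted attack: delete the node with the most links.
    victim = max(adj, key=lambda u: len(adj[u]))
    return {u: [v for v in nbrs if v != victim]
            for u, nbrs in adj.items() if u != victim}

# Assumed toy graph: hub node 0 over the chain 1-2-3-4
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3]}
e0 = efficiency(adj)
e1 = efficiency(remove_highest_degree(adj))
print(e1 / e0 < 1)  # → True: losing the hub lowers relative efficiency E/E_G
```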
Effect of Network Size
Accuracy and loss of the trained model after the co-learning
process
Conclusions
Shares the advantages of federated learning
Distributed aggregation of models
Keeps datasets private
RGG topologies present a good balance between performance and robustness