17 institutions contributed to the questionnaire
10 in-depth interviews
5 case studies
Together, they hold a significant share of the market*
*70% of Retail Banking, 70% of Insurance, and 60% of Wealth Management & Private Banking
Predictive AI vs. Generative AI

Key objectives
  Predictive AI: identify patterns, anomalies, and correlations to predict outcomes
  Generative AI: create new content based on learned patterns and rules

Use cases
  Predictive AI: forecast customer behaviour, manage risk, detect fraud, anticipate market trends (fraud detection is sketched in code after this table)
  Generative AI: generate personalised customer interactions, financial reports, marketing copy, synthetic data

Algorithms used
  Predictive AI: regression, time series analysis, anomaly detection, decision trees
  Generative AI: large language models (e.g. GPT models), variational autoencoders

Training data
  Predictive AI: historical transactions, market data, customer profiles
  Generative AI: financial news, regulatory documents, customer feedback, product descriptions

Limitations
  Predictive AI: requires high-quality data; model interpretability can be challenging
  Generative AI: risk of generating inaccurate or misleading information; potential for misuse
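To make the predictive column concrete, below is a minimal anomaly-detection sketch for the fraud use case, using scikit-learn's IsolationForest on synthetic transaction data. The features (amount and hour of day), the contamination rate, and the data itself are illustrative assumptions rather than anything reported by the surveyed institutions.

```python
# Minimal fraud-detection sketch using anomaly detection (one of the
# predictive AI techniques listed above). All data here is synthetic and
# the feature choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate "normal" card transactions: amount in EUR and hour of day.
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=1000),  # typical amounts
    rng.integers(8, 22, size=1000),                 # daytime hours
])
# A few suspicious transactions: large amounts at unusual hours.
suspicious = np.array([[4500.0, 3], [3200.0, 4], [5100.0, 2]])
X = np.vstack([normal, suspicious])

# Fit an Isolation Forest; contamination is a rough prior on the fraud rate.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

flagged = X[labels == -1]
print(f"Flagged {len(flagged)} of {len(X)} transactions for review")
```

In a real setup the model would be trained on the historical transactions and customer profiles listed under training data, with thresholds tuned against confirmed fraud labels.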
1. RETURN
2. RESOURCES

Benefits vs. roadblocks
Four operating models: Centralised, Centralised with CoE, Coordinated, Decentralised

Resources
  Centralised: all AI expertise is concentrated within the central AI team.
  Centralised with CoE: AI specialists report to both the CoE team and business unit leaders.
  Coordinated: business units have their own AI teams or contract external expertise.
  Decentralised: business units are fully responsible for resourcing their AI projects.

Governance
  Centralised: strict central control over data, algorithms, model selection, and deployment.
  Centralised with CoE: the CoE sets overall AI standards and guidelines.
  Coordinated: the central AI team provides tools, platforms, and best-practice guidelines.
  Decentralised: minimal central guidelines, focused on data privacy and basic ethical standards.

Project initiation
  Centralised: business units submit project proposals; the central team prioritises and executes.
  Centralised with CoE: business units and the CoE collaborate to define use cases.
  Coordinated: business units drive their own AI initiatives; a central "steering committee" ensures alignment and coordination.
  Decentralised: business units independently identify and execute their AI initiatives.
Diagram: centralised vs. decentralised structures, showing direct and functional reporting lines between the CXO, the business units, and the AI team.
70% of employees rarely, if ever, use AI applications in their daily tasks.*
*2024 survey, 1,300 respondents
3. REGULATION

Figure 15: The AI Act's risk-based approach

1. Unacceptable risk (e.g. social scoring): prohibited
2. High risk (e.g. credit scoring, algorithmic trading): conformity assessment
3. Limited risk (e.g. chatbots, robo-advisors): transparency obligation
4. Minimal risk (e.g. budgeting tools, fraud detection): no obligation

Source: European Commission, Sailpeak
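As a reading aid, the sketch below encodes the tiers and obligations from Figure 15 as a simple lookup keyed on the figure's example use cases. It is not a legal classification tool; the function and mapping are assumptions made for illustration only.

```python
# Simplified lookup of AI Act risk tiers, following Figure 15 above.
# The example use cases mirror the figure; this is not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment"
    LIMITED = "transparency obligation"
    MINIMAL = "no obligation"

# Illustrative mapping taken directly from the figure's examples.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "algorithmic trading": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "robo-advisor": RiskTier.LIMITED,
    "budgeting tool": RiskTier.MINIMAL,
    "fraud detection": RiskTier.MINIMAL,
}

def obligation_for(use_case: str) -> str:
    """Return the obligation for a known example use case, else flag it as unclassified."""
    tier = USE_CASE_TIERS.get(use_case.lower())
    return tier.value if tier else "unclassified: assess against the AI Act criteria"

if __name__ == "__main__":
    for case in ("credit scoring", "chatbot", "social scoring"):
        print(f"{case}: {obligation_for(case)}")
```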
Figure 16: AI Act timeline

June 2023: agreement on the AI Act by the European Parliament
March 2024: AI Act adopted by the European Parliament
June 2024: publication of the AI Act
End of 2024: bans on certain AI practices, such as social scoring, are enforced
Mid 2025: high-risk AI systems (e.g. credit scoring) need to comply
Mid 2026: all remaining regulations of the AI Act come into effect

Time to prepare systems, processes, conformity assessments, documentation, etc.

Source: European Commission, Sailpeak
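To illustrate the preparation window, the small sketch below turns the timeline into a checklist of milestones in force on a given date. The concrete calendar dates standing in for "end of 2024", "mid 2025", and "mid 2026" are assumptions for the example, not official deadlines.

```python
# Sketch: which AI Act milestones from Figure 16 apply by a given date.
# The exact dates for "end of 2024" / "mid 2025" / "mid 2026" are assumptions.
from datetime import date

MILESTONES = [
    (date(2024, 6, 30), "Publication of the AI Act"),
    (date(2024, 12, 31), "Bans on certain practices (e.g. social scoring) enforced"),
    (date(2025, 6, 30), "High-risk AI systems (e.g. credit scoring) must comply"),
    (date(2026, 6, 30), "All remaining regulations in effect"),
]

def milestones_in_force(on: date) -> list[str]:
    """Return the milestones already applicable on the given date."""
    return [label for deadline, label in MILESTONES if deadline <= on]

if __name__ == "__main__":
    today = date(2025, 1, 1)  # example reference date
    for label in milestones_in_force(today):
        print("In force:", label)
    upcoming = [(d, l) for d, l in MILESTONES if d > today]
    if upcoming:
        next_deadline, next_label = upcoming[0]
        print(f"Next: {next_label} in {(next_deadline - today).days} days")
```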
4. ETHICS
5. ECOSYSTEM
CONTACT
Alessandra Guion
alessandra@fintechbelgium.be
Nico Vincent
nico@sailpeak.com
