Why you need AI for personalisation and how to build
explainable AI in-house
- to increase trust and secure board backing
Powering personalised recommendations via
explainable AI that you can trust
Agenda
- Carousell’s context in the world of Retail
- Why you need AI for personalised recommendations
- Need for Explainable AI and its advantages
- Strategies to build explainable AI in-house
- Pre-Modeling
- Explainable Modeling
- Post-Modeling
8 markets
Tens of millions of users monthly
Over $900 million USD in valuation
In 8 years, we have grown to become
South East Asia’s #1 classifieds marketplace
Growth phases: Phase I - Foundation; Phase II - Internationalization; Phase III - Verticalization & monetization
Milestones (2012 - 2020):
- Founded Carousell in SG (2012)
- Raised USD800K Series Seed
- Raised USD6M Series A
- Launched in MY, ID, TW
- Launched web platform
- Launched in HK + PH
- Raised USD35M Series B
- Acquired Caarly; launched Autos + Coins & Bumps + Property
- Launched Spotlight + Carousell Protection
- Raised USD85M Series C
- Acquired OLX PH; Naspers invests; received funding from Telenor; merged with 701 Search entities, Mudah, Cho Tot and OneKyat
- Carousell announces US$80M investment from Naver, Mirae Asset-Naver Asia Growth Fund and NH Investment & Securities, at a US$900 million valuation (2020)
Data Science projects @ Carousell
- Content Discovery: Search & Recommendations
- Content Generation: Category, Title & Price suggestions
- Content Moderation: Fraud / Spam detection
- C2C Conversations: Smart Chat Replies
- Ads Targeting: User Segmentation, Keyword Targeting, Dynamic Pricing
Why you need AI for personalisation and how to build
explainable AI in-house
Personalisation
MATCH: matching CONTENT with USERS
Image Source: PNGGuru
AI led Content Understanding
Image Source: PNGGuru
SHALLOW: without AI, only
surface-level understanding
- Attributes
- Metadata
- Statistical figures
- ...
DEEP: AI powered
understanding
- Semantic Space
- Content Associations
- Content Quality
- Attractiveness /
Desirability
- ...
Powers a variety of predictive capabilities
AI led Content Understanding
A sample listing from Carousell
Questions AI can help answer:
Image Understanding
- Image quality?
- If image quality is bad, why?
- How can a seller improve it?
Price Understanding
- Is it priced well?
- How long will it take to sell?
Understanding Content Associations
- Other similar listings?
- Alternate options?
- Complementary items (e.g. AirPods)?
Predicting Transaction Experience
- What’s seller quality?
- Is the seller responsive & trustworthy?
AI led User Understanding
Image Source: Pinterest
NARROW: Without AI - limited
understanding
- Age, Gender, Location
- Browsing History
- Transaction History
- ...
COMPREHENSIVE:
Holistic understanding of
the user
- Likes / Dislikes
- Interests
- Preferences
- Behavioral
Patterns
- Affinities /
Sensitivities
- ...
AI led User Understanding
Image Source: PNGGuru
An example user on Carousell
AI led User Understanding
NARROW:
40 yr Male, browsed_toys,
bought_books, ...
COMPREHENSIVE:
Father, interested in books & toys
for kids in 5-8 & 9-12 age-group
- books: fiction: adventure
& fun stories,
- author: Roald Dahl
- toys: action-figures,
beyblades
- owns a printer
- ...
Image Source: PNGGuru
An example user on Carousell
Questions AI can help answer:
Product Affinity
- What products / product-groups would this user
be interested in?
- What has this user been actively looking to buy recently?
Deal Priorities
- User’s sensitivity to price
- Likes to negotiate? / Dislikes low-balling?
- Payment / Delivery / meetup-location preferences
User Associations
- Look-alike users based on “semantic user
segmentation”
- Seller Recommendations
Monetisation
- Comfort with paid-content
- Would this seller benefit from listing promotion?
Matchmaking (without AI)
BROAD: match
It’s like painting a wall using a
broad brush.
MATCH
Image Source: PNGGuru
AI led Matchmaking
FINE: match
AI provides you with a variety of
colors & brushes to create a
beautiful painting.
Image Source: PNGGuru
Overview of COUPLENET model architecture illustrating the computation of
similarity score for two users. [Image: courtesy of Yi Tay]
… algorithm was able to spot what
mutual interests between users it
thought suggested a good match...
The researchers write that “we believe
‘hidden’ (latent) patterns (such as
emotions and personality) of the users
are being learned and modeled in
order to make recommendations.” ...
their method likely overcomes issues
that crop up on dating sites, like
deceptive self-presentation,
harassment, and bots.
- article link
DEEP CONTENT
UNDERSTANDING
COMPREHENSIVE USER
UNDERSTANDING
ABILITY TO MAKE A FINE
MATCH
Accuracy vs Explainability
Simple rule-based recommendation models
might be less accurate but their predictions are
highly explainable
Sophisticated ML models (esp. based on Neural
Networks) are often more accurate but are also more
opaque and their predictions are harder to explain
Why did I get these recommendations?
Image Sources: PNGGuru, Netflix
Why you need AI for personalisation and how to build
explainable AI in-house
close your eyes and picture a shoe
Image Source: klipartz
Bias in AI
Source: mostly.ai
Examples of bias in AI
Image Source: huffingtonpost
Examples of bias in AI
Image Sources: PCMag, ProPublica
Examples of bias in AI
Impact of bias in AI
Image Sources: osadc, Klipartz
How to avoid bias in AI systems
- Diversity in Data
- Diverse Team
- Identify scenarios with high impact
- Explicit focus on avoiding bias
Image Source: Shutterstock
Though prevention is better than cure, it is almost impossible to build bias-free AI models. Often we don’t know what we don’t know.
Hence the need for “Explainable AI”.
Advantages of explainable AI (XAI)
- Helps debug & improve prediction quality across scenarios & user segments
- Some sectors, such as healthcare or banking, have company-level or even legislative restrictions that allow only explainable systems
- Boosts user trust; a perception of algorithmic bias can damage a company’s or brand’s reputation
- For critical areas like medical diagnosis, providing additional context around a prediction lets users decide whether or not to trust it
All of these help build trust with stakeholders and speed up AI adoption.
XAI refers to methods & techniques in the application of AI technology such that the results of the solution can be understood by humans.
Why you need AI for personalisation and how to build
explainable AI in-house
XAI: Design for explainability
Image Sources: Bahador Khaleghi, Jaime Zornoza
Pre-modelling Explainability
- Exploratory Data Analysis
- Explicitly look for minority segments that might not get
represented well
- E.g. for Carousell we look at “new users” vs “repeat
users”; “head queries” vs “tail queries”
- Explainable feature engineering
- Model predictions would be explained in terms of model
features. If model features are non-intuitive, the
prediction explanation won’t serve its purpose
- Sneak peek into what model predictions might look like
- Use a nearest-neighbor based approach to make predictions on a representative sample of the training dataset and review the FPs & FNs (see the sketch after this list)
- This can indicate whether the data needs to be augmented or re-balanced for specific data segments
Image Source: Bahador Khaleghi
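A minimal sketch of the two pre-modelling checks above. The column names ("user_segment", "label") and file name are hypothetical stand-ins, not Carousell's actual schema; the nearest-neighbor preview uses scikit-learn.

```python
# Illustrative pre-modelling sketch; "user_segment", "label" and the CSV path are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

df = pd.read_csv("training_data.csv")  # hypothetical training dataset

# 1. Check how well minority segments (e.g. "new" vs "repeat" users) are represented.
print(df["user_segment"].value_counts(normalize=True))

# 2. Sneak peek at predictions with a simple nearest-neighbor baseline,
#    then review the false positives / false negatives.
features = [c for c in df.columns if c not in ("label", "user_segment")]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["label"], test_size=0.2, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
preds = knn.predict(X_test)
print(confusion_matrix(y_test, preds))  # FPs & FNs may suggest re-balancing or augmentation
```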
Explainable Modelling
- Adopt Explainable Model Family
- E.g. linear models, tree-based models (see the sketch after this list). Deep NNs should not be the default choice for all ML problems
- Hybrid model architectures that combine “performance” and “explainability”
- Architectures like: deep-kNN, deep weighted averaging
classifier (DWAC), SENN, CEN etc.
- Joint prediction & explanation: E.g. Teaching
Explanations for Decisions (TED) framework etc.
- Other Methodologies
- Using regularization to explicitly constrain explanations of model predictions, to ensure they are right for the right reasons
- This is an active area of research & development. New
approaches are being introduced regularly
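As a minimal sketch of the “explainable model family” point above (not of the hybrid architectures listed), a shallow decision tree on a public scikit-learn dataset yields rules a human can read end-to-end; the dataset is just a stand-in for real recommendation features.

```python
# Sketch: a shallow decision tree trades some accuracy for rules a human can inspect.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)  # stand-in dataset

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned decision rules double as a global explanation of the model.
print(export_text(tree, feature_names=list(X.columns)))
```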
Post-modelling Explainability
- Explaining feature importance for making predictions
- Reflects the significance of each feature in making the prediction
- This is the most popular approach; the most commonly used libraries are SHAP and LIME (see the sketch after this list)
- Using more explainable models to explain Deep NNs
- DeepRed method provides a solution for extracting a
decision tree to approximate a given DNN model
- Explaining the model prediction as a function of a given feature’s values
- Partial Dependence Plot (PDP)
- Individual Conditional Expectation (ICE)
Image Source: Jaime Zornoza
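A minimal post-modelling sketch combining SHAP attributions with a partial dependence plot. It assumes the `shap` package is installed; the model and the public scikit-learn dataset are placeholders, not Carousell’s.

```python
# Sketch: per-feature attributions with SHAP + a partial dependence plot with scikit-learn.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)        # placeholder data
model = GradientBoostingRegressor(random_state=0).fit(X, y)  # placeholder model

# SHAP: how much each feature pushed each prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global view of feature importance

# PDP: how the prediction changes, on average, as one feature varies.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"])
```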
Recommendation Model @ Carousell
User Activity History
HP printer
ink
cartridge
Beyblades
Roald Dahl
Books
Mapping user actions
to explainable “content
interests”
Content
Understanding
- Repeat purchase
frequency
- Content associations
- ...
Explainable Recommendations (a minimal sketch follows below)
- “HP Ink Cartridge”, because the user bought one last month & its repeat-purchase frequency = 1 month
- “Beyblade Stadium”, because it has a high “post-purchase” association with “beyblades”
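A highly simplified sketch of the idea on this slide: map raw user actions to explainable content interests, then attach a human-readable reason to each recommendation. All rules, item names and numbers below are illustrative assumptions, not Carousell’s production logic.

```python
# Sketch: explainable recommendations from activity history + content understanding.
from datetime import date, timedelta

# Content understanding (assumed, per-item knowledge)
REPEAT_PURCHASE_DAYS = {"hp_ink_cartridge": 30}
POST_PURCHASE_ASSOCIATIONS = {"beyblades": ["beyblade_stadium"]}

user_history = [
    {"item": "hp_ink_cartridge", "action": "bought", "date": date.today() - timedelta(days=31)},
    {"item": "beyblades", "action": "bought", "date": date.today() - timedelta(days=10)},
]

recommendations = []
for event in user_history:
    item = event["item"]
    # Rule 1: repeat purchase ("bought X, and X is typically re-bought every N days")
    cycle = REPEAT_PURCHASE_DAYS.get(item)
    if cycle and (date.today() - event["date"]).days >= cycle:
        recommendations.append(
            (item, f"because you bought it ~{cycle} days ago and it is typically re-purchased monthly"))
    # Rule 2: post-purchase association ("people who bought X often buy Y afterwards")
    for related in POST_PURCHASE_ASSOCIATIONS.get(item, []):
        recommendations.append(
            (related, f"because it has a high post-purchase association with '{item}'"))

for item, reason in recommendations:
    print(f"Recommend '{item}' {reason}")
```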