Basic tips for Product Managers on readiness and preparing to build AI products within an Agile startup culture: defining problems, understanding solutions and future considerations.
6. • Understand basic components and workflows
• Get familiar with basic terms, methods and constructs, e.g. supervised, unsupervised, classification, algorithms and models, model evaluation, variance, bias, overfitting, precision, recall
• Listen to podcasts
• Watch the loads of material on YouTube
• Do a short course
ACQUIRE DATA SCIENCE LITERACY
PREPARE BEFORE THE DOCTORS ARRIVE
@johnbfagan
7. Use the Retrospective to make small and continuous improvements:
• Data hygiene & integrity
• Data models
• Transaction ids
• Data flows
• Simple data archiving mechanic
Gradually tune up your Definition of Done.
Consider hiring a data science consultant on a short-term contract.
PREPARE BEFORE THE DOCTORS ARRIVE
GET YOUR DATA READY.
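The "simple data archiving mechanic" and "transaction ids" above could be sketched, for illustration only, as an append-only log of timestamped, id-tagged records; every name here (`archive_event`, the field names, the file path) is my assumption, not something from the deck.

```python
# Illustrative sketch of a simple data archiving mechanic: append each
# event as a JSON line, tagged with a transaction id and a timestamp,
# so raw history is preserved for future model training.
import json
import os
import tempfile
import time
import uuid

def archive_event(path, payload):
    record = {
        "transaction_id": str(uuid.uuid4()),  # stable id for joining data flows later
        "archived_at": time.time(),
        "payload": payload,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["transaction_id"]

path = os.path.join(tempfile.gettempdir(), "events.jsonl")
tx_id = archive_event(path, {"user": "@illizian", "action": "tweet"})
```

Even a mechanic this crude gives a data scientist joining later something concrete to start from.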
12. Defining the problem allows you to align expectations and outcomes. You should spend time collaborating with your team (engineers, data scientists, testers and management) on defining the problem, assumptions & expected outcomes, more so than you would with a classic problem that is solved by CRUD.
Luckily, Machine Learning Mastery has a great template, which I have adapted to agile stories.
https://machinelearningmastery.com/how-to-define-your-machine-learning-problem/
MUST. DEFINE THE PROBLEM
PUT A LOT OF LOVE INTO THIS
13. MACHINE LEARNING MASTERY
Tests the boundaries and re-tests the problem statement and assumptions
Breaks the solution down to a layman level
14. IN ORDER TO become a social media influencer
AS A regular twitter user
I NEED twitter to predict if my draft tweet content will get retweets
GIVEN there is a history of tweets from @illizian
AND some have retweets
AND some do not
WHEN @illizian composes a new tweet
THEN classify whether the tweet is going to get retweets or not
AND ensure the classification model has an accuracy score as a percentage
AND the accuracy score is the number of tweets predicted correctly out of all tweets
AND the specific words used in the tweet matter to the model
AND the specific user that retweets does not matter to the model
AND the number of retweets may matter to the model
AND older tweets are less predictive than more recent tweets
TRANSLATE TO USER STORIES WITH BDD
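To make the story concrete, here is one hedged sketch of how the acceptance criteria could be exercised: a toy bag-of-words classifier where words from previously retweeted tweets score positively, plus the story's accuracy definition (tweets predicted correctly out of all tweets, as a percentage). The classifier, the stop-word list and the sample tweets are all illustrative assumptions, not a real model.

```python
# Toy sketch: words count towards "will be retweeted" if they appeared
# in retweeted history, against it if they appeared in ignored tweets.
from collections import Counter

STOP_WORDS = {"the", "a", "an", "to", "of", "and", "is", "my"}

def tokenize(text):
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def train(history):
    """history: list of (tweet_text, was_retweeted) pairs."""
    retweeted_words, ignored_words = Counter(), Counter()
    for text, was_rt in history:
        target = retweeted_words if was_rt else ignored_words
        target.update(tokenize(text))
    return retweeted_words, ignored_words

def predict(model, draft):
    retweeted_words, ignored_words = model
    score = sum(retweeted_words[w] - ignored_words[w] for w in tokenize(draft))
    return score > 0  # True means "classified as will get retweets"

def accuracy(model, labelled_tweets):
    """The story's metric: correct predictions out of all tweets, as a %."""
    correct = sum(predict(model, t) == rt for t, rt in labelled_tweets)
    return 100.0 * correct / len(labelled_tweets)

history = [
    ("shipped a new feature today", True),
    ("coffee time", False),
    ("new feature demo video", True),
    ("lunch break", False),
]
model = train(history)
print(predict(model, "another new feature shipping soon"))
print(accuracy(model, history))
```

Note how the GIVEN/WHEN/THEN lines map directly onto `train`, `predict` and `accuracy`, which is exactly what makes a BDD story testable.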
15. F1 score - a measure of a test's accuracy; it considers both the precision p and the recall r.
False positives - a test result which wrongly indicates that a particular condition or attribute is present.
False negatives - a test result which wrongly indicates that a particular condition or attribute is absent.
Tradeoffs - an impact mapping milestone is a great way to describe tradeoffs of quality versus time and cost, and to define your Go vs No-Go metrics.
MUST. CARE ABOUT SUCCESS METRICS
WHAT DOES SUCCESS LOOK LIKE?
https://www.productschool.com/blog/product-management-2/great-machine-learning-product-management-google/
https://www.impactmapping.org/
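These metrics are worth seeing in code once: precision is how many flagged tweets were really retweeted, recall is how many retweeted tweets were caught, and F1 is their harmonic mean. The counts in the example run are invented for illustration.

```python
# Standard definitions, written out from true/false positive/negative counts.
def precision(tp, fp):
    """Of everything flagged positive, how much really was positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of everything really positive, how much was flagged."""
    return tp / (tp + fn)

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Example: 40 tweets correctly flagged as "will be retweeted",
# 10 flagged wrongly (false positives), 20 missed (false negatives).
print(precision(40, 10))  # 0.8
print(recall(40, 20))     # ~0.667
print(f1(40, 10, 20))     # ~0.727
```

The harmonic mean is what makes F1 useful as a Go/No-Go metric: it stays low unless precision and recall are both acceptable.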
17. We all have a great solution, the best solution, but machine learning is just one solution among many others.
First create a super dumb baseline model (not AI), e.g.
• 100% certainty each tweet will be RT’d!
• Use the average % of the last 100 tweets that got RT’d
• If any words (excluding stop words) previously got RT’d, then 100% certain the tweet will get RT’d!
You might be surprised that your super simple solution is fit for purpose.
START WITH THE DUMBEST SOLUTION
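The three dumb baselines above fit in a few lines each; a sketch follows, where the function names, stop-word list and sample history are my own illustrative choices. Each baseline returns the probability that a draft tweet will be retweeted.

```python
# The slide's three "dumbest" baselines, as probability-of-retweet functions.
def always_yes(draft, history):
    """Baseline 1: 100% certain every tweet will be retweeted."""
    return 1.0

def recent_rate(draft, history):
    """Baseline 2: the average retweet rate of the last 100 tweets."""
    recent = history[-100:]
    return sum(1 for _, was_rt in recent if was_rt) / len(recent)

def word_match(draft, history, stop_words=frozenset({"the", "a", "an", "to"})):
    """Baseline 3: certain of a retweet if any non-stop word in the draft
    appeared in a previously retweeted tweet, otherwise certain it won't."""
    rt_words = {w for text, was_rt in history if was_rt
                for w in text.lower().split() if w not in stop_words}
    draft_words = set(draft.lower().split()) - stop_words
    return 1.0 if draft_words & rt_words else 0.0

history = [("launch day", True), ("slow monday", False), ("big launch", True)]
print(recent_rate("anything", history))     # 2/3
print(word_match("launch again", history))  # 1.0
```

Any real model now has a concrete bar to clear: if it can't beat `recent_rate` on the accuracy metric from the story, it isn't earning its complexity.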
25. Software is usually static, but data is always changing.
Monitor algorithm performance for drift.
Adapt by understanding, re-fitting, updating, weighting, and learning the changes.
MONITOR. DRIFT
BEHAVIOURS ALWAYS CHANGE.
https://www.semanticscholar.org/paper/Concept-drift-adaptation-for-learning-with-data-Liu/5b105e357936f989cfb46ddd055ea44a2b0aed04
https://machinelearningmastery.com/gentle-introduction-concept-drift-machine-learning/
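One simple way to operationalise "monitor performance for drift", sketched here under my own assumptions (the window size, baseline and tolerance are illustrative, not from the slides): track rolling accuracy over a window of recent predictions and raise a flag when it drops well below the accuracy the model had at deployment.

```python
# Minimal drift monitor: flag when rolling accuracy over the last
# `window` predictions falls more than `tolerance` below `baseline`.
from collections import deque

def drift_monitor(window=100, baseline=0.80, tolerance=0.10):
    recent = deque(maxlen=window)

    def record(prediction, actual):
        recent.append(prediction == actual)
        rolling = sum(recent) / len(recent)
        drifted = len(recent) == window and rolling < baseline - tolerance
        return rolling, drifted

    return record

# Toy run: the model starts out right, then the world changes under it.
record = drift_monitor(window=4, baseline=0.75, tolerance=0.10)
for prediction, actual in [(1, 1), (1, 1), (0, 1), (0, 1)]:
    rolling, drifted = record(prediction, actual)
print(rolling, drifted)  # 0.5 True
```

The drift flag is only the trigger; the adaptation step (re-fitting, re-weighting, updating) still needs a human or a retraining pipeline behind it.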