Surveys show that, on average, only 1 out of 7 A/B tests run by e-commerce companies ends up successful. Lukasz Twardowski, CEO of UseItBetter, explains how some of the most successful online businesses master this process, turning it into an iterative, evidence-led experimentation-at-scale programme.
SAMPLE SIZE – The indispensable A/B test calculation that you’re not making – Zack Notes
If you’re a marketer, it’s very likely that you’ve run an A/B test. It’s also likely that you’ve never calculated the sample size for your tests and instead run them until they reach statistical significance. If so, your strategy is statistically flawed. Respecting sample size requirements means waiting longer for test results, but ignoring them yields false positives and leads to bad decisions.
This deck was created for an email audience, but there are valuable lessons here for anyone who runs A/B tests.
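The sample-size calculation the deck refers to can be sketched with the standard two-proportion formula. This is an illustrative sketch only (function name and defaults are my own, not from the deck), using just the Python standard library:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Visitors needed in EACH arm to detect an absolute lift of `mde`
    over conversion rate `baseline` (two-sided two-proportion test,
    normal approximation)."""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / mde ** 2
    return ceil(n)

# Detecting a lift from 3% to 4% takes thousands of visitors per variant,
# which is why stopping "at significance" on small samples misleads.
n = sample_size_per_variant(0.03, 0.01)
```

Running the number before the test starts, rather than peeking at significance mid-test, is the point the deck makes.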
SXSW 2016 - Everything you think about A/B testing is wrong – Dan Chuparkoff
Everything you've learned about A/B Testing is based on the fundamentally flawed belief that there's one right answer. But the era of mass-market, one-right-answers is over. A/B Testing is our most valuable tool in the battle to create a more engaging web. But our strategy is broken. Don't worry, we can gain a better understanding of our users with a little data science. And we can reinvent A/B Testing... I will show you how.
At Civis Analytics, we specialize in Data Science. From here, we can clearly see that all people are not the same. So why are A/B Tests designed to search for a single solution? In this session I'll show you where A/B Testing is headed next. See you in Austin!
Talks@Coursera - A/B Testing @ Internet Scale – courseratalks
This tech talk will describe how to build an experiment platform that can handle large-scale experiments. The talk will also discuss several best practices in designing and analyzing online experiments learned from companies like Coursera, Microsoft and LinkedIn.
About the Speakers
Ya Xu has been working in the domain of online A/B testing for over 4 years. She currently leads a team of engineers and data scientists building a world-class online A/B testing platform at LinkedIn. She also spearheads taking LinkedIn's A/B testing culture to the next level by evangelizing best practices and pushing for broad-based platform adoption. She holds a Ph.D. in Statistics from Stanford University.
Chuong (Tom) Do currently leads a team of data engineers and analysts in the Analytics team at Coursera, which is responsible for data infrastructure and quantitative analysis in support of the product and business. He completed his Ph.D. in Computer Science at Stanford University in 2009 and worked as a scientist in the personal genetics company 23andMe until 2012, where his research has collectively spanned the fields of machine learning, computational biology, and statistical genetics.
[CXL Live 16] When, Why and How to Do Innovative Testing by Marie Polli – CXL
Innovative testing is risky. If not handled carefully, it can undermine your optimization strategy by creating loopholes that make it impossible to know exactly which part of the change caused the uplift or drop in your conversion rate. At the same time, disruptive methods are needed to break out of the ordinary and take your business to the next level.
The session is a breakdown and overview of the worst and the best of innovative testing, so that when you take a jump into the unknown you know what to expect.
[CXL Live 16] The Grand Unified Theory of Conversion Optimization by John Ekman – CXL
Optimizers love models, and there are plenty of them: the Prospect Awareness scale, the LIFT model, the ResearchXL model. But John grew tired of trying to explain how they all fit together and when to use which model, so he took a shot at creating ”one model to rule them all”. Will he succeed? You will be the judge.
To build a successful A/B testing strategy, you'll need more than just ideas of what to test: you'll need a plan that turns data into a repeatable strategy for producing winning experiments.
[CXL Live 16] You Can’t Make This Stuff Up by Alex Harris – CXL
See how moderated user testing is a proven tactic to gain valuable insights that you could never think of yourself. Learn how these discoveries turned into dramatic results for website growth.
The top reasons and solutions for not getting value out of your A/B tests - some practical tips for designing insightful and correctly instrumented tests.
Mobile presentation - Sydney Online Retailer - 26 Sep 2011 – Craig Sullivan
In this presentation, I use analytics data from our global mobile reach, to illustrate the trends that are driving growth, how to take opportunity from them and what to do with your own site. I present a case for device and user knowledge, to allow you to optimise conversion rates, revenue and delight for visitors.
Condensed testing syrup - @OptimiseorDie @sydney sep 2011 - 4 years of testin... – Craig Sullivan
A summary of my 4 years of A/B and Split testing, with case studies of work, photography guidelines, and key advice on which elements of the page to test for quick wins.
I enjoyed giving this talk at the Online Retailer Conference in Sydney, which is a fine place to visit.
The presentation covers a really good 'pizza' analogy for explaining testing to senior management and budget holders. It then covers how to discover good places to test on your site, and what tools will help you get that data.
Lastly, I explore what worked for me in testing, show some examples of how similar our winners are across the globe and then cover some cross channel testing. The last one here is a big growth area and involves optimising contact centres and channels, using the web as a tool. Some interesting work going on here and I show some of ours, as well as new things in the pipeline.
There are some great resources attached, including a list of remote user testing services and the best 'guides' I could find on 'Conversion Rate Optimisation'. Hope you enjoyed the talk and thank you Sydney.
[CXL Live 16] Beyond Test-by-Test Results: CRO Metrics for Performance & Insi... – CXL
Individual tests drive insights & ROI, but the most sophisticated optimizers look beyond what an individual test is telling them and use data to optimize their overall testing performance.
In this talk, Claire will dive into the specifics of how to track, improve, and drive insight from performance metrics for your conversion program, so you can not only run better tests, but get more out of your investment in CRO.
The Million Dollar Optimization Strategy - Andre Morys - ConversionXL Live 2015 – CXL
"Conversion Optimization" and "Testing" have become the hottest topics on the strategic roadmaps of online marketing managers. Still, many companies suffer from easy-to-solve problems that probably cost a couple of million dollars.
In this talk, André Morys reveals some unknown conversion barriers that are not visible on the website. He shares practical examples and cases showing how effective optimization programs gain far more momentum and drive more optimization ROI than the usual testing efforts do.
[CXL Live 16] SaaS Optimization - Effective Metrics, Process and Hacks by Ste... – CXL
Stephen will be talking on SaaS optimization strategy, including:
- The data, insight and metrics you need to track to identify opportunity
- Flow optimization: from landing page to trial, usage, purchase and retention
- Problems and opportunities: e.g. how do you test a landing page when the sale happens 14 or 30 days later? How do you manage testing across multiple KPIs simultaneously? How do you understand and segment your key product offering?
[Elite Camp 2016] Peep Laja - Fresh Out Of the Oven – CXL
Peep Laja, founder of ConversionXL, talks about some of the original UX research studies ConversionXL Institute has conducted over the last few months: inspiration for effective tests and website changes.
How to Increase Your Testing Success by Combining Qualitative and Quantitativ... – Optimizely
Hiten Shah, President and Co-Founder, KISSmetrics and Crazy Egg
The majority of A/B tests that you run end up failing. Wouldn't it be great if you could increase your chance of success?
In this session Hiten Shah, President and Co-Founder of KISSmetrics and Crazy Egg provides a framework and examples of how to increase your success rate by using both qualitative and quantitative tactics. Learn how to design great experiments by understanding more about your visitors, users and customers.
[CXL Live 16] Growth Hacking BS: Fixing Marketing One Truth at a Time by Morg... – CXL
Growth hacking has exploded in popularity, with companies scrambling to find magical 'hackers' who create massive user bases and revenue out of thin air. Unfortunately, a closer look shows us that much of the hype is just that. To find growth, companies must stop searching for unicorns and do something much less sexy: get back to work.
In this talk I'll cover:
- how companies actually grow online based on studying dozens of fast-growing startups
- why growth is based on companies finding winning tests faster
- how we did that at growth hackers
- how other companies do it (HubSpot, Twitter, more)
- how to start hacking and start growing
- things to avoid, etc.
Myths, Lies and Illusions of AB and Split Testing – Craig Sullivan
What are the common assumptions about A/B (split) testing that are wrong? What lies are told by vendors and consultants, and what have you convinced yourself of? What is illusory, what can you trust, and what is it really all about? 20 top myths, debunked after asking fellow CRO professionals what is on THEIR top list.
Presented at https://www.onlinetestconf.com/program-spring-otc-2020/
Sometimes you’re asked to start testing in a context that is not ideal: you’ve only just joined the project, the test environment is broken, the product is migrating to a new stack, the developer has left, no-one seems quite sure what’s being done or why, and there is not much time.
Knowing where to begin and what to focus on can be difficult and so in this talk I’ll describe how I try to meet that challenge.
I’ll share a definition of testing which helps me to navigate uncertainty across contexts and decide on a starting point. I’ll catalogue tools that I use regularly such as conversation, modelling, and drawing; the rule of three, heuristics, and background knowledge; mission-setting, hypothesis generation, and comparison. I’ll show how they’ve helped me in my testing, and how I iterate over different approaches regularly to focus my testing.
The takeaways from this talk will be a distillation of hard-won, hands-on experience that has given me
* an expansive, iterative view of testing
* a comprehensive catalogue of testing tools
* the confidence to start testing anything from anywhere
#Measurecamp : 18 Simple Ways to F*** up Your AB Testing – Craig Sullivan
An expanded deck of the top 18 blockers to getting successful AB or Multivariate test results. In this deck, you get a complete checklist of the stuff you need to prepare, watch, launch and monitor your testing, so it gets you the *right* conclusions.
A primer on how A/B testing can be set up for success in an e-commerce environment. Includes guidelines on setting up A/B tests, covering hypothesis definition, sample size determination, statistical testing, and avoiding the bias that can creep into any experiment's set-up.
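The statistical-testing step mentioned above is commonly a pooled two-proportion z-test. A minimal sketch follows; the function name and example counts are mine, not from the deck, and it uses only the Python standard library:

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates,
    via the pooled two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)        # shared rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 200/5000 conversions in control vs 250/5000 in the variant:
p = ab_test_p_value(200, 5000, 250, 5000)
```

Note that this p-value is only trustworthy when the sample size was fixed in advance, which is exactly the bias-avoidance point the primer makes.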
Myths and Illusions of Cross Device Testing - Elite Camp June 2015 – Craig Sullivan
A compendium of the most common mistakes and problems people encounter when trying to optimise or split test cross-device experiences (mobile, tablet, desktop, app, TV, etc.)
MozCon 2016! Mind Games: Craft Killer Experiences with 7 Lessons from Cogniti... – Sarah Weise
Slides from Sarah Weise's talk at MozCon 2016. How often are you asked to influence people to click a button? Buy a product? Stay on a page? We like to think of ourselves as logical, yet 95% of our decisions are unconscious. Sarah shares how to weave cognitive psychology concepts into your digital experiences. Steal these persuasive triggers to boost engagement, conversions, leads, and even delight.
MozCon is 3 days of forward-thinking, actionable sessions in SEO, social media, community building, content marketing, brand development, CRO, the mobile landscape, analytics, digital marketing, and more.
Want to book Sarah for your next speaking event?
http://sarahweise.com
Follow Sarah on Twitter @weisesarah
10 A/B Testing Mistakes that Make Your Wallet Cry – Convert.com
When you think about A/B testing, you instantly think about increasing your conversion rate, but that is the most deeply rooted mistake in all of conversion optimization and A/B testing.
Pre-plan your tests and avoid these 10 common mistakes when A/B testing.
Study: The Future of VR, AR and Self-Driving Cars – LinkedIn
We asked LinkedIn members worldwide about their levels of interest in the latest wave of technology: whether they’re using wearables, and whether they intend to buy self-driving cars and VR headsets as they become available. We asked them too about their attitudes to technology and to the growing role of Artificial Intelligence (AI) in the devices that they use. The answers were fascinating – and in many cases, surprising.
This SlideShare explores the full results of this study, including detailed market-by-market breakdowns of intention levels for each technology – and how attitudes change with age, location and seniority level. If you’re marketing a tech brand – or planning to use VR and wearables to reach a professional audience – then these are insights you won’t want to miss.
UX, ethnography and possibilities: for Libraries, Museums and Archives – Ned Potter
These slides are adapted from a talk I gave at the Welsh Government's Marketing Awards for the LAM sector, in 2017.
It offers a primer on UX - User Experience - and how ethnography and design might be used in the library, archive and museum worlds to better understand our users. All good marketing starts with audience insight.
The presentation covers the following:
1) An introduction to UX
2) Ethnography, with definitions and examples of 7 ethnographic techniques
3) User-centred design and Design Thinking
4) Examples of UX-led changes made at institutions in the UK and Scandinavia
5) Next Steps - if you'd like to try out UX at your own organisation
An immersive workshop at General Assembly, SF. I typically teach this workshop at General Assembly, San Francisco. To see a list of my upcoming classes, visit https://generalassemb.ly/instructors/seth-familian/4813
I also teach this workshop as a private lunch-and-learn or half-day immersive session for corporate clients. To learn more about pricing and availability, please contact me at http://familian1.com
3 Things Every Sales Team Needs to Be Thinking About in 2017 – Drift
Thinking about your sales team's goals for 2017? Drift's VP of Sales shares 3 things you can do to improve conversion rates and drive more revenue.
Read the full story on the Drift blog here: http://blog.drift.com/sales-team-tips
SearchLove Boston 2017 | Richard Fergie | You Aren't Doing Science and That's OK – Distilled
We have been trained and encouraged to focus on p-values and statistical significance in every aspect of testing, from PPC to CRO. In this talk, Richard is going to challenge your preconceptions, show how scientific accuracy isn't necessarily the same as commercial success, and demonstrate strategies that are better than waiting for your variation to be declared a winner by your testing platforms. The way you approach data-driven decision-making will never be the same.
Creating a culture that provokes failure and boosts improvement – Ben Dressler
Everyone fails - but not everyone uses failed attempts as a source of learning and improvement. This talk outlines a framework to turn failure into gaining knowledge by understanding IF, HOW and WHY something fails.
A/B testing, optimization and results analysis by Mariia Bocheva, ATD'18 – Mariia Bocheva
While working with data we usually face several problems: we don't have enough data, we have too much data, we don't know what to do with this data.
In this session, I'll show how to make sure you can rely on your data, and share my favorite ideas on how you can use Google Analytics and other tools for A/B testing, optimization and analysis.
You’ll gain a better understanding of what to look at to answer your UX questions, how to run a test properly, and how to evaluate its results.
19 Lessons I learned from a year of SEO split testing – Dominic Woodman
Last year I got a new job and spent the year running all the tests we've done on DistilledODN (an SEO split testing platform).
It's changed my perspective, taught me a huge amount and I'd like to take people through all the different lessons I've learned (19 of them in fact).
That's everything from: What sort of effect do basic SEO changes have? Why is changing your title tags possibly a really risky move? How and when has structured data helped? How important is freshness (and can you fake it)? Does testing change your relationship with a client? Should you put emojis in everything...
SearchLove London 2018 - Dom Woodman - A year of SEO split testing changed ho... – Distilled
If you asked a UX professional whether users prefer one image or two on a blog post, they'd tell you to test it — trying to double guess users is foolish.
Yet for many companies, SEO has no testing at all, just endless reams of best practice and hand waving. Last year I changed role and got the chance to treat SEO differently, running over 50 tests across different websites. This session will give an insight into what worked, and just as importantly, what didn’t.
SearchLove San Diego - Dom Woodman - A Year of SEO Split Testing Changed How ... – Distilled
If you asked a UX professional whether users prefer one image or two on a blog post, they'd tell you to test it — trying to double guess users is foolish.
Yet for many companies, SEO has no testing at all, just endless reams of best practice and hand waving. Last year I changed role and got the chance to treat SEO differently, running over 50 tests across different websites. This session will give an insight into what worked, and just as importantly, what didn’t.
7 ways you are doing your A/B testing wrong by Côme Courteault – TheFamily
As a startup, you should be running tests all the time.
Making sure you are drawing the right conclusions from those tests is also a vital part of this: it helps you get those customers and save precious time.
There are lots of mistakes you can make when A/B testing. In this 45-minute workshop, Côme tells you about them, why they happen, and how to avoid them. He also shares tips and tricks to get your A/B testing right.
Côme Courteault is Growth Hacker at TheFamily. He helps many of our startups build sustainable growth practices, and is also a teacher at our Growth Hacking school.
Design Thinking in the Product Development Process - Product Tank Oxford – AJ Justo
Introduction to the basic secrets that make Design Thinking a great tool for innovation and to enable collaboration. The talk also includes a few exercises on Lateral Thinking.
How a year of SEO split testing changed how I thought SEO worked – Dominic Woodman
I spent a year running all the split tests from DistilledODN, a split testing platform. Here's how an entire year of testing changed how I thought and worked.
Things Could Get Worse: Ideas About Regression Testing – TechWell
Michael Bolton, DevelopSense
Tester, consultant, and trainer Michael Bolton is the coauthor (with James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. Michael is a leader in the context-driven software testing movement with twenty years of experience testing, developing, managing, and writing about software. Currently, he leads DevelopSense, a Toronto-based consultancy.
UX STRAT Online 2020: Dr. Martin Tingley, Netflix – UX STRAT
Over the years, the Netflix UI has evolved from a sparse and static webpage into an immersive, video-centric experience tailored to a variety of platforms. In this talk, I’ll describe the simple but powerful framework that Netflix uses to evolve the product experience: we ask our members, through online A/B tests, which of several possible experiences resonate with them. I’ll also describe the steps we are taking to democratize access to experimentation across the company so that we can explore more ideas and identify those that deliver more value to our members.
8. The industry average hit rate
for A/B testing
=
Provide the benchmark:
EXERCISE 1.
9. The industry average hit rate
for A/B testing
=
14%
Just 1 out of 7 A/B tests
is successful!
http://conversionxl.com/ab-tests-fail/
Provide the benchmark:
EXERCISE 1.
10. King Kong (1933, Dir. Merian Cooper, Ernest Schoedsack)
How to
be the greatest
monkey in the biz
if infinity is not an
option?
14. The currency in which
you pay for A/B tests
is traffic. The more you
have, the more tests
you can run.
15. The currency in which
you pay for A/B tests
is traffic. The more you
have, the more tests
you can run. Never
waste what you have.
16. Shop Direct
Scaled to 101
experiments a month
in two years.
100+ year old company
Etsy
25 releases a day,
most of them are
A/B tests.
A startup launched in 2005
http://www.slideshare.net/danmckinley/design-for-continuous-experimentation
17. Zero Tests Per Month.
Here’s the test idea,
numbers and execution.
Can we proceed?
Let’s meet to
discuss. Maybe
next week?
Looks good.
Will check with Z
and get back to you.
So here’s the test idea,
numbers…
Sorry,
had other priorities.
Can we meet
next week?
Sure! (D***!)
Have you
checked with Z?
Have you…?
Have you…?
18. Ground rules:
1. Test ideas are
subject to prioritization
not approval.
19. evidence
x opportunity size
x strategy
=
priority
Magic formula:
EXERCISE 3.
The worst idea gets tested
if resources are available.
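The prioritization formula on the slide above can be sketched as a simple scoring function. The 0–10 scoring scale, the weights, and the example ideas below are illustrative assumptions, not part of the deck:

```python
def priority(evidence: float, opportunity_size: float, strategy_fit: float) -> float:
    """Multiplicative priority score: a zero in any factor sinks the idea's rank.

    Inputs are assumed to be scored on a 0-10 scale (an illustrative
    convention, not from the deck).
    """
    return evidence * opportunity_size * strategy_fit

# Test ideas are ranked, not approved: every idea gets a score and a
# place in the queue, and even the worst idea runs when traffic frees up.
backlog = {
    "simplify checkout form": priority(8, 9, 10),   # 720
    "add product videos": priority(6, 7, 4),        # 168
    "new homepage hero image": priority(2, 5, 3),   # 30
}
queue = sorted(backlog, key=backlog.get, reverse=True)
```

The multiplicative form matches the slide's "x" chaining: an idea with no evidence, no opportunity, or no strategic fit scores zero no matter how strong the other factors are.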
20. 101 Tests Per Month.
Ok then, we’ll
do this, this
and that test.
Others will wait.
Guys, our
strategy shifted
to checkout
optimization.
Guys, we
need to increase
basket value.
Now this
and that one…
And this…
These two
would work…
Xmas is
coming!
DO NOTHING!
…this, this
and that…
26. If 1 out of 7 tests
wins, what about the
other 6? 5 of them
will be inconclusive.
27. Most tests are inconclusive because:
a) too few users were using the changed
feature for it to get statistical significance.
b) the changed feature had little to do with
metrics used to evaluate the test.
c) there were multiple changes in the same
test and their effects cancelled each other out.
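Cause (a) can be caught before launch with a standard two-proportion sample-size calculation. A minimal sketch using the normal approximation; the baseline rate and lift in the example are assumed numbers for illustration:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base: float, mde_rel: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per variant to detect a relative lift of
    `mde_rel` over baseline conversion `p_base` (two-sided z-test,
    normal approximation)."""
    p_alt = p_base * (1 + mde_rel)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = NormalDist().inv_cdf(power)           # power threshold
    var = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return math.ceil((z_a + z_b) ** 2 * var / (p_alt - p_base) ** 2)

# Detecting a 10% relative lift on a 3% baseline takes tens of thousands
# of users per arm; a feature seen by only a fraction of visitors may
# never get there within the test window.
n = sample_size_per_arm(0.03, 0.10)
```

Running this up front tells you whether the changed feature gets enough exposure to ever reach significance, instead of discovering it after weeks of an inconclusive test.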
28. Complete the sentence:
EXERCISE 4.
You do it to find out
what works and how well.
A/B testing is NOT about __________.
making money
30. … removing a feature
… slowing down the website
…
Cheat: Experiment to
test significance.
Test results show that…
didn’t reduce conversion.
31. … we shouldn’t
waste time on that.
Cheat: Experiment to
test significance.
Test results show that…
32. Cheat: One change
per test. Order matters.
Select products, produce
videos, upload, add links,
launch test
Add links
Select products
Produce videos
…
INCONCLUSIVE
33. … people don’t
click “watch
video” links.
Cheat: Measure against
your hypothesis.
… adding videos
had no impact on
conversion.
INCONCLUSIVE
CONCLUSIVE
Test results show that…
41. A/B test
is launched.
Test results come
back negative.
The idea gets killed,
next test is
launched.
A/B Testing Flow
Fail Fast Approach
42. One failed test doesn’t
make collecting
underpants a bad idea.
43. A/B test
is launched.
Test results come
back negative.
Survey responses
give a clue why.
Users are surveyed
alongside the test.
Respondents’
logs give
another clue.
Respondents
are emailed to
clarify the issue.
The issue is solved,
the test relaunched.
Users’ behaviors
are logged.
Pre-test research
is done.
Example of A/B Testing Flow at Spotify
Prepare for failure.
Courtesy of @bendressler researcher at Spotify
44. The real price you pay
for not researching
why tests fail is the
death of great ideas.
45. User
Testing
Voice of
Customer
I predict
that doing B
will change X
by Y% because
of Z.
Are
Metrics
Good?
Accepted
Rejected
What really
happened?
Insight
and Evidence
Metrics Based
Evaluation
Hypothesis
check
Evidence-Led Flow
Hypothesis Based
A/B Testing
Qual/Quant
Analytics
46. User
Testing
Voice of
Customer
I predict
that doing B
will change X
by Y% because
of Z.
Are
Metrics
Good?
Accepted
Rejected
What really
happened?
Insight
and Evidence
Metrics Based
Evaluation
Hypothesis
check
Evidence-Led Flow
Hypothesis Based
A/B Testing
51. 1. Never waste your traffic. 2.
Many small changes are better
than one big change.
52. 1. Never waste your traffic. 2.
Many small changes are better
than one big change. 3. Even
the smallest change needs an
insight.
53. 1. Never waste your traffic. 2.
Many small changes are better
than one big change. 3. Even
the smallest change needs an
insight. 4. Prepare for failure.
54. 1. Never waste your traffic. 2.
Many small changes are better
than one big change. 3. Even
the smallest change needs an
insight. 4. Prepare for failure.
5. It’s OK to fail if you know
why you failed.
55. 1. Never waste your traffic. 2.
Many small changes are better
than one big change. 3. Even
the smallest change needs an
insight. 4. Prepare for failure.
5. It’s OK to fail if you know
why you failed. 6. Iterate.
56. 1. Never waste your traffic. 2.
Many small changes are better
than one big change. 3. Even
the smallest change needs an
insight. 4. Prepare for failure.
5. It’s OK to fail if you know
why you failed. 6. Iterate. 7. Be
honest.
57. For the sake of this presentation, I assumed that the results of the
7 tests I referred to had been correctly read by people familiar with
terms like statistical significance, confidence intervals, and p-values.
Otherwise, it’s likely that the one winning test was just a phantom.
Disclaimer
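The "phantom winner" risk in the disclaimer is easy to quantify. Assuming 7 independent tests of changes with no real effect, run at the conventional 5% significance level, the chance of at least one spurious "winner" is roughly 30%:

```python
# Probability of at least one false positive across k independent A/B
# tests of changes with no real effect, at significance level alpha.
def phantom_win_prob(k: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** k

p = phantom_win_prob(7)  # roughly 0.30
```

In other words, a 1-in-7 hit rate is uncomfortably close to what pure chance would produce, which is why reading results correctly matters as much as running the tests.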
58. Get in touch:
THE FINAL EXERCISE
Łukasz Twardowski
https://linkedin.com/in/twardowski