A primer on A/B testing and its application in e-commerce. A necessary tool in every product manager's arsenal. Covers the principles behind setting up a good test and the statistical tools required to analyze results.
SAMPLE SIZE – The indispensable A/B test calculation that you’re not making (Zack Notes)
If you’re a marketer, it’s very likely that you’ve run an A/B test. It’s also likely that you’ve never calculated the sample size for your tests and instead run tests until they reach statistical significance. If so, your strategy is statistically flawed. Honoring the sample-size requirement forces marketers to wait longer for test results, but ignoring it produces false positives and leads to bad decisions.
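The calculation itself is straightforward. Here is a minimal sketch of the standard power analysis for a conversion-rate test; the 5% baseline and the hoped-for lift to 6% are illustrative assumptions, not numbers from the deck:

```python
from math import asin, sqrt
from scipy.stats import norm

# Illustrative assumptions: 5% baseline conversion, minimum detectable lift to 6%
baseline, target = 0.05, 0.06
alpha, power = 0.05, 0.80          # 95% confidence, 80% power

# Cohen's h: the standard effect size for comparing two proportions
h = 2 * (asin(sqrt(target)) - asin(sqrt(baseline)))

# Per-variant sample size for a two-sided z-test
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n = (z_alpha + z_beta) ** 2 / h ** 2
print(f"Required sample size per variant: {round(n)}")  # about 4,000 per variant
```

Running the test until each variant has seen this many visitors, rather than stopping at the first significant reading, is what protects against the false positives described above.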
This deck was created for an email audience, but there are valuable lessons for anyone who runs A/B tests.
To build a successful A/B testing strategy, you'll need more than ideas for what to test; you'll need a plan that turns data into a repeatable process for producing winning experiments.
A primer on how A/B testing can be set up for success in an e-commerce environment. Includes guidelines on setting up A/B tests, including hypothesis definition, sample size determination, statistical testing, and avoiding the biases that can creep into any experiment's setup.
Talks@Coursera - A/B Testing @ Internet Scale (courseratalks)
This tech talk will describe how to build an experiment platform that can handle large-scale experiments. The talk will also discuss several best practices in designing and analyzing online experiments learned from companies like Coursera, Microsoft and LinkedIn.
About the Speakers
Ya Xu has been working in the domain of online A/B testing for over 4 years. She currently leads a team of engineers and data scientists building a world-class online A/B testing platform at LinkedIn. She also spearheads taking LinkedIn's A/B testing culture to the next level by evangelizing best practices and pushing for broad-based platform adoption. She holds a Ph.D. in Statistics from Stanford University.
Chuong (Tom) Do currently leads a team of data engineers and analysts in the Analytics team at Coursera, which is responsible for data infrastructure and quantitative analysis in support of the product and business. He completed his Ph.D. in Computer Science at Stanford University in 2009 and worked as a scientist in the personal genetics company 23andMe until 2012, where his research has collectively spanned the fields of machine learning, computational biology, and statistical genetics.
A/B testing best practices, from strategic vision to operational considerations to communication and, finally, expectations management. We need to adhere to fundamental project management, technology, statistics, experimental design, UX design, customer relationship, business, and data principles to ensure that the insights, and hence the decisions, are as trustworthy as possible.
This presentation by Anna Marie Clifton, Product Manager at Yammer, covers the important topics of when to use A/B testing, how to implement it and most importantly, how to measure the results.
The content is aimed at software engineers who want to transition to product management; MBAs with finance/consulting backgrounds who wish to work at high-tech companies as product managers; and project managers, marketers, and designers who are seeking opportunities in product management.
Learn how to use A/B testing to figure out the best product and marketing strategies for your business. Adopt a culture of testing everything from website copy to engagement emails to Facebook ads. Learn through a real SaaS product experiment.
SXSW 2016 - Everything you think about A/B testing is wrong (Dan Chuparkoff)
Everything you've learned about A/B Testing is based on the fundamentally flawed belief that there's one right answer. But the era of mass-market, one-right-answers is over. A/B Testing is our most valuable tool in the battle to create a more engaging web. But our strategy is broken. Don't worry, we can gain a better understanding of our users with a little data science. And we can reinvent A/B Testing... I will show you how.
At Civis Analytics, we specialize in Data Science. From here, we can clearly see that all people are not the same. So why are A/B Tests designed to search for a single solution? In this session I'll show you where A/B Testing is headed next. See you in Austin!
Spotify strives for team autonomy and independence: no team should be blocked by others, and each should be able to move as fast as it can. This autonomy is a challenge for managing a centralised, coordinated experimentation infrastructure and analysis. This is a talk about how we approach A/B testing in a fast-moving company.
Test for Success: A Guide to A/B Testing on Emails & Landing Pages (Optimizely)
Email marketing is a key component to any successful marketing strategy — and it's constantly evolving! That's why testing and optimizing your communications is just as important as the strategy itself.
Sometimes knowing how, what, and when to test can seem overwhelming but don't worry, we've got your back. Join this informative webinar with Jessica Langensand of Marketo and Allison Sparrow of Optimizely to discover:
How to design an effective email A/B test
What to test in your emails and landing pages
Testing ideas you'll want to share with your team!
Controlled Experimentation aka A/B Testing for PMs by Tinder Sr PM (Product School)
Main Takeaways:
-A/B testing: a simple idea that can be simple to apply
-Useful for more than incremental optimization - A/B tests can yield deep insight
-Just test it - A/B tests have the highest ROI of any data activity
A/B Testing for New Product Launches by Booking.com Sr PM (Product School)
Main takeaways:
-There is no one right way of validating a product; A/B testing is just one of them
-Get your product validated qualitatively before validating it quantitatively
-Use holdouts to measure the long-term success of your new products while running A/B tests in parallel
A/B Testing at Pinterest: Building a Culture of Experimentation (WrangleConf)
Presenter: Andrea Burbank, Pinterest
A successful experimentation program consists of much more than mere randomization and measurement. How do you help stakeholders understand the right things to measure, avoid common pitfalls, and learn to rely on A/B tests as the best way to measure a new system or feature? In this talk, Andrea will explain how building a culture of experimentation and the right tools to support it are just as important as the statistics behind the comparisons themselves, and potentially much trickier to get right.
Growth Hacking / Marketing 101: It's about process (Ruben Hamilius)
Outline of the repeatable growth process startups should adopt to do Growth Marketing. Show & tell deck on basic principles and mindsets of Growth hacking for early stage startups.
Presented at the Singtel Group-Samsung Regional Mobile App Challenge 2015 in the Startup Mentorship Programme.
This is a 5-step model for creating a metrics framework for your business & customers, and how to apply it to your product & marketing efforts. The "pirate" part comes from the 5 steps: Acquisition, Activation, Retention, Referral, & Revenue (AARRR!)
Conversion Conference London, Nov 2011 - Multi-Channel Testing (Craig Sullivan)
Cross-channel testing tips, including how to optimise your call tracking, call centres, contact methods, and contact flows. Also how to deflect, manage, or remove cost from contact handling.
This set of slides also covers our optimisation process, results and shows some case studies.
If you're a serious marketer, cross channel is going to be vital for you in 2012.
Tag-it 2016 slides: UX + A/B Testing at Booking.com: Design focused on conver... (Maria Lígia Klokner)
We talk a lot about User Experience Design nowadays, but how do you know that what you designed really works? Having a testing mindset means knowing whether you are designing for the user and not for yourself. Understand how Booking.com uses A/B testing to validate ideas, and what the biggest challenges are of being a designer in that environment. In this presentation we’ll see the advantages of obtaining quick feedback with data, so that we can learn from it, iterate, and try again!
#Measurecamp: 18 Simple Ways to F*** up Your AB Testing (Craig Sullivan)
An expanded deck of the top 18 blockers to getting successful AB or Multivariate test results. In this deck, you get a complete checklist of the stuff you need to prepare, watch, launch and monitor your testing, so it gets you the *right* conclusions.
A/B Testing: You Might Be Driving in the Wrong Direction (Tomasz Borys)
Thue Madsen, Marketing Operations Manager, Kissmetrics (@ThueLMadsen): Thue is the Kissmetrics webinar wizard and Marketing Ops Manager. Before joining forces with Kissmetrics, he was a Lyft driver in SF, which is also how he ended up as a Kissmetrics marketer. Whenever Thue is not trying to automate everything around him, you can find him hiking in the Sierras.
Tomasz Borys, Director of Marketing, Kissmetrics (@tbcali): Tomasz loves dipping his feet in the river while fishing, injuring his thumb while gaming, hacking away at a golf club, and driving demand at Kissmetrics. He’s also the biggest fan of gummy bears.
Table of contents:
1. The optimum strategies for A/B testing
2. How we A/B test at Kissmetrics
3. A/B testing beyond click conversions: having a pulse on the entire funnel and why it’s important
4. The influence social traffic has on the funnel
The optimum strategies for A/B testing:
• Set a Goal
• Baby Steps
• Aim for Statistical Significance
• Never Lose Sight
Baby Steps: People want to move the needle and see results fast by implementing multiple changes at once. But how can you be sure which element had an impact?
Aim for Statistical Significance (getdatadriven.com)
Never Lose Sight: We can get caught up in what we think is best to optimize conversions or traffic… but data doesn’t lie.
How we A/B test at Kissmetrics:
• Sample size of 4,000 and above
• 99% statistical confidence
• Blind eye for 1 week
4,000 and Above Sample Size: A significant amount of data is needed when you’re testing beyond click conversions.
99% Statistical Confidence
Blind Eye for 1 Week: Data can be very erratic for the first several days, so it’s easy to hit the panic button.
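The 99% confidence bar corresponds to testing at alpha = 0.01. Here is a minimal two-proportion z-test sketch; the visitor and conversion counts are invented for illustration:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical results after the one-week "blind eye" period (invented numbers)
conversions = [220, 270]     # original, variant
visitors = [4000, 4000]

p1, p2 = conversions[0] / visitors[0], conversions[1] / visitors[1]
p_pool = sum(conversions) / sum(visitors)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors[0] + 1 / visitors[1]))
z = (p2 - p1) / se
p_value = 2 * (1 - norm.cdf(abs(z)))

# Kissmetrics' 99% confidence bar means alpha = 0.01
print(f"z = {z:.2f}, p = {p_value:.4f}, significant at 99%: {p_value < 0.01}")
```

In this made-up example the variant clears a conventional 95% bar but not the stricter 99% one; a higher confidence threshold trades speed for fewer false positives.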
A/B Testing Beyond Click Conversions: slides compare an original and a variant across the full funnel (signups, JS install, custom data events, opportunities created) and the social ad funnel.
Questions? Tomasz Borys, Director of Marketing, Kissmetrics (@tbcali, tborys@kissmetrics.com). Thue Madsen, Marketing Operations Manager, Kissmetrics (@ThueLMadsen, tmadsen@kissmetrics.com).
Test of significance (t-test, proportion test, chi-square test) (Ramnath Takiar)
The presentation discusses the concept of tests of significance, with worked examples of the t-test, proportion test, and chi-square test.
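As a quick illustration of the tests the presentation covers, here is a sketch on synthetic data using `scipy`; all numbers are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# t-test: compare mean order value between two groups (synthetic data)
a = rng.normal(50, 10, 200)
b = rng.normal(53, 10, 200)
t_stat, p_t = stats.ttest_ind(a, b)

# chi-square test: conversion counts for two variants in a 2x2 contingency table
table = np.array([[220, 3780],    # control: converted, not converted
                  [270, 3730]])   # variant: converted, not converted
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t-test: t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"chi-square: chi2 = {chi2:.2f}, p = {p_chi:.4f} (dof = {dof})")
```

The two-proportion z-test is closely related: on a 2x2 table without continuity correction, the chi-square statistic equals the square of the z statistic.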
How to Run Landing Page Tests On and Off Paid Social Platforms (VWO)
Join us for an exclusive webinar featuring Mariate, Alexandra, and Nima, where we will unveil a comprehensive blueprint for crafting a successful paid-media strategy focused on landing page testing. With escalating costs in paid advertising, understanding how to maximize each visitor’s experience is crucial for retention and conversion.
This session will dive into the methodologies for executing and analyzing landing page tests within paid social channels, offering a blend of theoretical knowledge and practical insights.
The Pearmill team will guide you through the nuances of setting up and managing landing page experiments on paid social platforms. You will learn about the critical rules to follow, the structure of effective tests, optimal conversion duration and budget allocation.
The session will also cover data analysis techniques and criteria for graduating landing pages.
In the second part of the webinar, Pearmill will explore the use of A/B testing platforms. Discover common pitfalls to avoid in A/B testing and gain insights into analyzing A/B test results effectively.
Should UI/UX be gut feeling or data-driven? How do you stand out from tough competition by perfecting your owned assets?
A/B testing is a long grind, but it doesn't have to be tough! Demystify the four-step approach to optimization and what it can bring you.
A/B Testing: Common Pitfalls and How to Avoid Them (Igor Karpov)
Since the initial boom of A/B testing’s popularity in the early 2000s, marketers have learned to apply actual science to marketing and have taken much of the guesswork out of how to get more conversions or purchases. However, after running your first A/B test, you will most likely find yourself faced with questions such as: what is a conclusive result, and what sample size is required?
What are the key drivers for automation? What are the challenges in Agile automation, and how do you deal with them? How do you automate? Who will automate? Which tool should you select: commercial or open source? What should you automate, and which features? Here is what our experience says.
Supercharge your AB testing with automated causal inference - Community Works... (Egor Kraev)
An A/B test consists of splitting the customers into a test and a control group, and choosing a large enough sample size to observe the average treatment effect (ATE) we are interested in, in spite of all the other factors driving outcome variance. With causal inference models, we can do better than that, by estimating the effect conditional on customer features (CATE), thus turning customer variability from noise to be averaged over to a valuable source of segmentation, and potentially requiring smaller sample sizes as a result. Unfortunately, there are many different models available for estimating CATE, with many parameters to tune and very different performance. In this talk, we will present our auto-causality library, which combines the three marvelous packages from Microsoft – DoWhy, EconML, and FLAML – to do fully automated selection and tuning of causal models based on out-of-sample performance, just like any other AutoML package does. We will describe the projects inside Wise currently starting to apply it, and present results on comparative model performance and out-of-sample segmentation on Wise CRM data.
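The core idea of moving from ATE to CATE can be illustrated without any of the libraries mentioned. The following is a toy sketch on synthetic data, not the auto-causality API; the `new_user` feature, the rates, and the effect sizes are all invented:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000

# Synthetic randomized experiment where the treatment helps new users far more
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "new_user": rng.integers(0, 2, n),
})
base = 0.05 + 0.02 * df["new_user"]                  # baseline conversion rate
lift = df["treated"] * (0.005 + 0.03 * df["new_user"])  # heterogeneous effect
df["converted"] = rng.random(n) < (base + lift)

# ATE: the single averaged number a plain A/B readout gives you
ate = (df.loc[df.treated == 1, "converted"].mean()
       - df.loc[df.treated == 0, "converted"].mean())

# CATE by segment: the effect conditional on a customer feature
means = df.groupby(["new_user", "treated"])["converted"].mean().unstack()
cate = means[1] - means[0]

print(f"ATE = {ate:.3f}")
print(f"CATE (tenured users) = {cate[0]:.3f}, CATE (new users) = {cate[1]:.3f}")
```

A plain A/B readout reports only the single ATE; conditioning on the feature reveals that nearly all of the effect is concentrated in one segment, which is the segmentation signal the talk describes. The actual library automates selection and tuning over DoWhy/EconML estimators rather than computing simple group means.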
As many people have been asking how to optimise and conduct effective testing, we will be covering site optimisation this month. We will cover best practices as well as tools that can be used for A/B testing and multivariate testing.
A/B testing is split-testing between two different variants, labeled A and B. This technique allows the advertiser to determine the underperforming and outperforming factors of two separate ads.
4. Longitudinal or pre-post testing is difficult since little of the variance in conversion is explained by product features. Other factors
impacting conversion are:
Price
Weekend/Weekday
Seasonality
Source of Traffic
Availability
Mix of users (distribution bias)
Clarity of product thinking & avoiding snowballing of incorrect insights
Why was conversion for Android version 5.5.6 better than 5.5.5 for the first 3 days?
(Hint: early-adopter bias. Users with stable Wi-Fi who are loyal to the MMT app convert higher than the overall user base)
Why is AB Testing needed?
6. Choosing Alia Bhatt as brand ambassador
A recommended hotel on the top of the listing
Impact of a fix for latency
Increase sign-in rate by increasing the size of the login button
Impact of showing packing list as a notification a day before the flight date
Quiz: What can or cannot be AB tested
AB testing is for low-hanging fruit, not quantum leaps: for those, user testing,
interviews and FGDs, as well as analysis of existing data, are better.
7. Choosing Alia Bhatt as brand ambassador: No
A recommended hotel on the top of the listing: Yes
Impact of a fix for latency: Yes
Increase sign-in rate by increasing the size of the login button: Yes
Impact of showing packing list as a notification a day before the flight date: Tough, but theoretically yes
Quiz: What can or cannot be AB tested
AB testing is for low-hanging fruit, not quantum leaps: for those, user testing,
interviews and FGDs, as well as analysis of existing data, are better.
8. Key Stages of AB Testing
Hypothesis Definition
Metric Identification
Determining Size & Duration
Tooling & Distribution
Invariance Testing
Analyzing Results
9. Almost all AB experiment hypotheses should look something like below:
Eg. 1
H0 (Null/Control): A big login button will not impact user login percentage
H1 (Test): A big login button will significantly increase user login percentage
Eg: 2
H0 (Control): Putting higher user rating hotels at the top of the listing doesn’t change conversion
H1 (Test): Putting higher user rating hotels at the top of the listing changes conversion significantly
It is good to articulate the hypothesis you're testing in simple English at the start of the experiment. The
hypothesis should be phrased in terms of user behaviour, not features. It's okay to skip this as long as
you get the idea.
Hypotheses Definition
10. Counts, eg.
#Shoppers
#Users buying
#Orders
Rates, eg.
Click through Rate
Search to Shopper Rate
Bounce Rate
Probability (a user completes a task), eg.
User Conversion in the funnel
Metric identification (1/2)
11. Consider the following metrics for conversion:
1. #Order/#Visits to listing page
2. #Visitors to TY Page/#Visitors to Listing Page
3. #Visits to TY Page/#Visits to listing page
4. #Orders/#PageViews of listing page
Metric identification (2/2): Quiz
Which of metrics 1–4 are distorted in each of the following cases?
User refreshes the listing page
User breaks the booking into 2
User's TY page gets refreshed
User does a browser back and the page is served from cache
User drops off on details and comes back via a drop-off notification
Omniture is not firing properly on the listing page
12. 1. If showing a summary of hotel USPs on the details page is improving conversion?
2. If a user who purchased with MMT will come back again?
3. If we are sending too many or too few notifications to users?
How can you measure?
13. 1. If showing a summary of hotel USPs on the details page is improving conversion?
A. A simple A/B set-up with and without the feature will help in evaluation
2. If a user who purchased with MMT will come back again?
A. A secondary metric captured by asking buyers this question, or an NPS survey, and comparing results between
variants should give some idea
3. If we are sending too many or too few notifications to users?
A. An indirect metric measured as retained users on the app across the two variants
How can you measure?
14. Size & Duration
Reality | Test output | Probability
Control is better | Control is better | 1−α (confidence level)
Control is better | Test is better | α (significance level)
Test is better | Test is better | 1−β (power)
Test is better | Control is better | β
α, the type-I error, is the probability of rejecting the null when it is true (downside error)
β, the type-II error, is the probability of accepting the null when the test is actually better (opportunity-cost error)
Typical targets: α = 5% and power 1−β = 80%
15. Size & Duration
Size:
• To figure out the sample size required to reach 80% power for the test, use a sample-size calculator
• That many users need to be targeted for the smallest of the test variants being examined
Duration:
• Is an outcome of what percentage of traffic you can direct to the test, plus some minimum-duration considerations
• You might want to limit the percentage exposure of the experiment due to:
• Revenue impacts
• Leaving room for other people to experiment
• Even if the sample size for the required power can be reached in a shorter duration, it is good to reduce the exposure
so that the experiment spans:
• At least 1 weekend and some weekdays
• Low & high discounting periods (if possible)
• Low & high availability periods (if possible)
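The size calculation above can be sketched with the standard two-proportion formula. A minimal sketch in Python, assuming a hypothetical baseline conversion of 2% and a practical significance threshold of 0.5 percentage points:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base, mde, alpha=0.05, power=0.80):
    """Approximate users needed per variant to detect an absolute
    lift of `mde` over a baseline conversion rate `p_base`,
    with a two-sided test at significance `alpha` and the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = p_base, p_base + mde
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / mde ** 2
    return ceil(n)

# baseline conversion 2%, smallest lift we care about: 0.5 percentage points
n = sample_size_per_variant(0.02, 0.005)
```

At a 2% baseline, detecting a 0.5 pp lift at α = 5% and 80% power needs roughly 14,000 users per variant, which is why low-traffic pages take weeks to test.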
16. No Peeking
• It is important not to invalidate the test by taking a decision on insufficient data: stopping as soon as the numbers look
significant inflates the error rates
• Best explained in Evan Miller's blog. The primary idea is that taking duration cues from early data introduces human error into
the measurement
• In case the required sample size is turning out to be very high, a few ways to reduce it are:
• Use a sequential sampling approach (reduces size by as much as 50% in some scenarios)
• Use a Bayesian sampling approach (mathematically intensive)
• Try matching the lowest unit of measurement with the lowest unit of distribution (eg instead of measuring
latency per user, measure latency per hit and distribute the experiment on hits)
• Try moving the experiment allocation closer to the step where there is an actual change (eg assign a payments
experiment on the payment page)
17. Distribution Metric
1. Page Views
2. Cookies
3. Login-ID
4. Device ID
5. IP Address
Tooling & Distribution (1/2)
Which of distribution metrics 1–5 will not be hampered by the following?
User shortlists 2-3 hotels and comes back after a day
User starts search on mobile and books on desktop
User changes browsers on the machine
User logs out and continues with another ID
18. Typical requirements for an AB system are:
Each experiment should support multiple variants (A/B/C…), and each variant can be defined using a combination of
experiment variables
Each user is randomly assigned a variant (as per the distribution percentage). The system ensures users are served a
consistent experience based on their device ID or cookie (other distribution parameters like page view or visit might be
used, but cookie/device ID is the most stable)
Auto-logs the variant that each user is exposed to in an analytics system
There are multiple AB testing systems available from several vendors, or one can be created internally using a tag
manager such as Google Tag Manager
Tooling & Distribution (2/2)
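The consistent-assignment requirement above is typically met by hashing a stable identifier. A minimal sketch in Python; the experiment name and weights are illustrative:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, weights=None):
    """Deterministically bucket a user into a variant. Hashing the
    (experiment, user_id) pair gives each experiment an independent,
    stable split: the same user always sees the same variant."""
    weights = weights or {"A": 0.5, "B": 0.5}
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    cumulative = 0.0
    for variant, w in weights.items():
        cumulative += w
        if bucket <= cumulative:
            return variant
    return variant  # guard against float rounding of the weights

variant = assign_variant("device-123", "big-login-button")  # stable across calls
```

Seeding the hash with the experiment name means a user's bucket in one experiment tells you nothing about their bucket in another, which keeps concurrent experiments independent.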
19. A/A Testing:
Ideally, run one or more A/A tests measuring the same metric you plan to measure in A/B tests, before and after
your test period
Even if the above is not feasible, do run A/A tests regularly to validate the underlying system
Things to test during A/A tests:
Key metrics you measure (like conversion, counts, page views, etc.) and their statistical difference between the
two cohorts at different ratios of test & control
A/A & Invariance Testing
20. Invariance Testing
Identify invariance metrics: metrics that should not change between control & experiment
One of the most basic invariants is the count of users assigned to each group; it is very important to test this
Each of the invariants should be within statistical bounds between the test and control populations
A/A & Invariance Testing
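The user-count invariant above can be checked with a simple two-sided test against the intended split. A sketch in Python; the counts below are made up for illustration:

```python
from statistics import NormalDist

def assignment_imbalance_p(n_control, n_test, expected_ratio=0.5):
    """Two-sided z-test: is the observed control/test split consistent
    with the intended allocation? A small p-value signals a sample
    ratio mismatch, i.e. a broken assignment or logging pipeline."""
    n = n_control + n_test
    p_hat = n_control / n
    se = (expected_ratio * (1 - expected_ratio) / n) ** 0.5
    z = (p_hat - expected_ratio) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 50,450 vs 49,550 users under an intended 50/50 split
p_value = assignment_imbalance_p(50_450, 49_550)
```

A 900-user gap out of 100,000 looks small but is highly unlikely under a true 50/50 split; when this check fails, debug the assignment system before trusting any metric comparison.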
21. 1. Remember the practical significance threshold used in the sample size calculator. That is
the smallest change we care about, so a statistically significant change smaller than the practical significance
threshold is not worth acting on.
2. Choose the distribution & test:
1. Counts: poisson distribution or poisson-mean
2. Rates: poisson distribution or poisson-mean
3. Click-through probability: binomial distribution & t-test (or chi-square test).
Analyzing Results (1/3)
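For a probability metric like conversion, the significance check can be sketched with a pooled two-proportion z-test. A minimal sketch in Python; the conversion counts are hypothetical:

```python
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided pooled z-test for a difference in conversion rates.
    Returns the absolute lift (test minus control) and its p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# control: 280/14,000 converted; test: 350/14,000 converted
lift, p = two_proportion_z(280, 14_000, 350, 14_000)
```

Even when p < 0.05, compare the measured lift against the practical significance threshold before declaring a winner.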
25. A/B/C Setup
A particular type of experiment set-up that is useful where server- and client-side effects might
introduce bias. A few examples:
Measuring the impact of a persuasion message (say "last room left")
User might be positively influenced to convert higher, v/s
Higher latency to fetch the persuasion might reduce conversion
Showing a message "Cheaper than Rajdhani" on flights > 75 mins duration and fare < 3000
User might be positively influenced to convert, v/s
Conversion for cheaper flights (<3000) is generally higher anyway
Showing a USP of the hotel generated from user reviews, eg. guests love this because: "great neighborhood to stay"
User might be positively influenced to convert, v/s
Feature might only be visible on hotels with > X reviews (and hence bookings); there is an innate hotel bias
In these scenarios, it is best to set up 3 variants:
A = feature off (control)
B = feature computed but not shown to users
C = feature computed and shown to users
Comparing B with A isolates the system-side effect (latency, selection bias), while comparing C with B isolates the user-facing effect.
A/B/C Setup
26. AB testing in an organization typically goes through the following stages:
Would encourage you all to help your organization move to the
next stage in the AB testing journey
Best to be in a state where the company culture supports quick prototyping and testing with real users
Solving for multi device (stitching sessions) and other tracking limitations in the set-up
Higher standards of experiment analysis and responsible reporting
Things to Improve
Stages (in increasing maturity): Sanity checks → Testing for conflict resolution → Testing for impact measurement → Testing for hypothesis → Rapid prototyping & testing
27. Definitely read the Evan Miller blog. It summarizes almost everything you need to know.
If keen on getting into more detail on techniques and best practices, take the course on Udacity. Just doing the first chapter
is good enough.
Further Reading