Optimizely recently released Stats Engine, which moves away from the traditional statistics model to a new framework better aligned with how modern businesses operate. In this workshop, we'll walk you through the core trade-offs in A/B testing and how you can use them to decide when to stop running your test.
Workshop 6: Build Your Organization's Optimization Culture | Optimizely
The key output of an effective testing organization is data, but data insights cannot be achieved without the collaborative input of the people that make up the testing organization. Join this session to learn how Optimizely's most successful customers socialize testing and structure their testing organizations.
Optimizely Workshop 1: Prioritize your roadmap | Optimizely
When your testing roadmap includes dozens of ideas (each with unique requirements) and each team member is vying for her idea to be run first, effective prioritization becomes paramount. This session will focus on the considerations, tools and frameworks you can use to make sure your roadmap is appropriately prioritized to meet your goals.
Optimizely Workshop: Mobile Walkthrough | Optimizely
Testing and optimizing your mobile apps can help with shorter development cycles, data-driven decision-making, and higher user conversion rates. In this highly interactive session, we encourage you to bring your app (or a sample app), and we’ll walk through the top-to-tail process for using Optimizely on your mobile app. This training is designed for iOS and Android developers who are looking to use Optimizely on their mobile apps.
Website Redesigns: Why they Fail and How to Ensure Success | Optimizely
Learn how to avoid website redesigns that don’t deliver results.
Website redesigns are a tremendous effort: they take a lot of planning, involve many team members and are very costly. Worst of all, they often fail, meaning the “new” website performs worse than the old one. This presentation covers:
- How common redesign fails really are, and how costly they can be
- The key reasons most redesigns fail to increase sales/conversions
- A better approach to increase chances of redesign success
To learn more about Optimizely, find more info here: https://www.optimizely.com/
To get more inspiration for better website redesigns, visit our blog:
https://blog.optimizely.com/2014/08/14/2-alexa-500-site-redesigns-that-should-inspire-you-to-ab-test/
What does digital marketing maturity look like? How can companies effectively benchmark their experimentation performance? Brooks Bell will share a proven framework that allows you to unlock experimentation success, make more efficient investments and confidently plan for growth.
You will learn:
- The six elements of a successful experimentation program
- How to benchmark your performance
- Proven ways to evolve your digital marketing maturity
Optimizely & Photobox - DON'T PANIC: The No-Confusion Experimentation Startup... | Optimizely
How do you know where to start with experimentation? What if you don’t have enough information, or simply too much to decide where to begin and where to invest your time/effort/money?
In this breakout session we will cover how to cut the BS by treating experimentation as an “internal services startup”, where the customers are the teams in your business: commercial, trading, marketing, product, SEO etc.
You wouldn’t launch a startup by hiring a bunch of people without a tool or an idea to work on, or buy an office and an expensive work-management solution for a three-person team before developing a product and taking it to market. So why treat experimentation that way?
Experimental statistics is only one of the many powerful analytical techniques companies are using to supercharge their experiment ideation, segmentation, and analysis. Check out this content for a refresher on key stats issues and a discussion of how to use data for better tests and bigger wins.
An Experimentation Framework: How to Position for Triple Digit Growth | Optimizely
You’ve done the button color A/B test, you’ve optimized your landing pages for better conversion. What next? At B2B organizations large and small, there is still tremendous potential for experimentation to drive innovation and growth. Learn how Brion’s growth team enables rapid iteration across a variety of different domains, teams, and organizations within Cisco. With an organization of 70,000 employees and many distributed divisions, enabling experimentation can be a complex initiative. Learn the framework for upleveling from random testing to explicit strategy to position your org for triple digit growth.
It’s 2015, and the customer experience is increasingly being driven by techniques like A/B testing and optimization.
Optimizely recently surveyed digital experience owners to learn just how they think about and allocate resources towards their testing and optimization programs.
These slides answer the questions:
- How often do optimization teams run A/B and multivariate tests?
- What are the top benefits that optimization programs are seeing?
- How do optimization teams manage their experiment process?
Optimizely, HEMA & Farfetch - Unlock the Potential of Digital Experimentation... | Optimizely
Consistency is key. But it is hard, very hard. How do you ensure you keep the ball rolling and continue to reinvent experiences for your users? This is where the Optimizely Professional Services Team comes in to act as an extension of your team. These experienced professionals have seen over 100,000 experiences created and will help you unlock the potential of Optimizely's products and digital experimentation.
In this session, our customer Farfetch will join us for a fireside chat to talk about how they extend their team with Optimizely Services Professionals, who understand the challenge of building an experimentation program and who help them excel in their role in the business by helping them prove the value of experimentation.
Build a Winning Conversion Optimization Strategy | Savage Marketing
By Srikant Kotapalli – VWO
A/B testing has traditionally been one of the most commonly used methods to boost conversion rates. Many marketers use it tactically, which gets them some early wins, but they are unable to convert those into repeatable success. Building an effective optimization strategy requires an in-depth understanding of user behaviour, a structured framework, and constantly measuring and improving upon the results. This session will walk you through everything you need to know to transform your A/B testing into a winning optimization engine.
Optimizing Your B2B Demand Generation Machine | Optimizely
If generating demand for your product is a struggle, rest assured that you are in the majority. 63% of marketers say their top challenge is generating traffic and leads. So what's a marketer to do? We say: hypothesize and experiment.
Join us for this free seminar where SurveyMonkey will share how their marketing team experiments and optimizes different parts of their demand gen machine. You'll hear real stories, tactics, and outcomes that will inspire your own demand gen experimentation.
Experimentation as a growth strategy: A conversation with The Motley Fool | Chris Goward
In this on-demand webinar, join Nate Wallingsford—Head of US Marketing Operations & Optimization at The Motley Fool—for a virtual discussion about experimentation at his organization.
Discover how Nate and his team are leveraging experimentation to uncover massive revenue gains and actionable customer insights. And learn how Nate has worked to gain visibility and create excitement around testing.
VWO Webinar: How To Plan Your Optimisation Roadmap | VWO
If your conversion optimization sprints depend on surprise wins, here’s something you should know: “A surprise win might be buried deep in your A/B testing cycle; you might have to wait weeks, maybe months, to see it.”
The good news is that an experimentation roadmap can open up the possibility of seeing those wins a lot faster. This session will help you uncover ways to manage and prioritize testing ideas in a systematic manner and improve your chances of seeing wins faster with your optimization program.
Making Your Hypothesis Work Harder to Inform Future Product Strategy | Optimizely
At Treatwell, each experiment goes beyond improving a single business metric. Experimentation works to evolve their product while enriching customer insights in order to deliver the best digital experience to their users. Join Laura Howard, Lead Product Manager, and Dennis Meisner, Senior Product Analyst, to learn their secret to making their hypothesis work harder and how getting their hypothesis right has improved Treatwell’s funnel progression and order health, as well as helped them make critical decisions on their product experience.
To build a successful A/B testing strategy, you'll need more than just ideas of what to test; you'll need a plan that builds data into a repeatable strategy for producing winning experiments.
Improve your content: The What, Why, Where and How about A/B Testing | introtodigital
A/B testing, also known as split testing, is a user experience research methodology where users are randomly split into two or more groups and shown different versions of the same element. This presentation explains what A/B testing is, why you need it, where you can apply it and how to conduct an A/B test.
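As a concrete sketch of that random split (our illustration, not from the introtodigital deck), deterministic hash bucketing is one common way to implement it; the function name, group labels, and 50/50 split below are all assumptions:

```python
import hashlib

def assign_group(user_id: str, experiment: str, groups=("A", "B")) -> str:
    """Deterministically assign a user to an experiment group.

    Hashing the user id together with the experiment name gives each
    experiment an independent, stable split: the same user always sees
    the same variant on every visit.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return groups[int(digest, 16) % len(groups)]

# The assignment is stable across calls, which keeps each user's
# experience consistent for the life of the experiment.
assert assign_group("user-42", "homepage-cta") == assign_group("user-42", "homepage-cta")
```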
Join us for another #ImpactSalesforceSaturday, a series of online Salesforce Saturday sessions.
We invite all – Developers, Administrators, Group Leaders, and Consultants with advanced, intermediate or beginner level knowledge of Salesforce (Sales Cloud, Service Cloud, Pardot, Marketing Cloud, IoT, CPQ, Einstein, etc.).
Topic: Drum into understanding of prediction builder with NBA
Date and Time: Saturday, October 3, 2020, 07:30 PM to 08:30 PM IST
Speaker: Rajat Jain
Rajat is a Salesforce Einstein Champion. He is 8x Salesforce certified and currently works as a Program Specialist at MTX Group.
Agenda:
1. Introduction
2. Drum into understanding of prediction builder with NBA
A/B Mythbusters: Common Optimization Objections Debunked | Optimizely
For every $92 marketers spend to drive traffic to their website, only $1 is spent on optimizing the experiences visitors encounter when they get there. But with the proven benefits of testing, why aren’t more companies spending on optimization? We’ve compiled a list of common objections to testing and asked digital marketers what they have to say about them.
SXSW 2016 - Everything you think about A/B testing is wrong | Dan Chuparkoff
Everything you've learned about A/B Testing is based on the fundamentally flawed belief that there's one right answer. But the era of mass-market, one-right-answers is over. A/B Testing is our most valuable tool in the battle to create a more engaging web. But our strategy is broken. Don't worry, we can gain a better understanding of our users with a little data science. And we can reinvent A/B Testing... I will show you how.
At Civis Analytics, we specialize in Data Science. From here, we can clearly see that all people are not the same. So why are A/B Tests designed to search for a single solution? In this session I'll show you where A/B Testing is headed next. See you in Austin!
A strong hypothesis is the heart of data-driven product discovery & development. It helps you turn data and insights about your users’ behavior into focused proposals that you’ll take action on.
Check out this very exclusive presentation from Jason G'Sell – Lead Training Consultant – and get a framework to help you and your team form strong experiment hypotheses and come up with the right products and features for your customers.
You’ll learn:
- How and when to introduce experimentation into your product development process
- How to identify the differences between Optimization & Discovery
- How to build successful experiments into your product development lifecycle
How To Build a Winning Experimentation Program & Team | Optimizely ANZ Webinar 8 | Optimizely
Watch Dan Ross, Managing Director for Optimizely ANZ, in our latest webinar from the Experimentation Insights Tour -- "How To Build a Winning Experimentation Program & Team."
View the presentation here: https://optimizely.wistia.com/medias/1o6xy4j0xm
Take Optimizely's Maturity Assessment here: https://www.optimizely.com/maturity-model/
DESCRIPTION: The world’s leading companies utilise experimentation to build a culture that fosters innovation and agility. The key to experimentation is to combine the right tools (software) with the right people and processes.
In this webinar, you will learn:
* Why experimentation is central to competing and innovating
* Areas to assess when building your experimentation capability
* How organisational culture helps scale an experimentation program
About Optimizely:
Optimizely is the world's leading experimentation platform, enabling businesses to deliver continuous experimentation and personalisation across websites, mobile apps and connected devices. Optimizely enables businesses to experiment deeply into their technology stack and broadly across the entire customer experience.
The platform’s ease of use and speed of deployment empower organisations to create and run bold experiments that help them make data-driven decisions and grow faster.
To date, marketers, developers and product managers have delivered over 700 billion experiences tailored to the needs of their customers. Optimizely’s global client base includes Atlassian, eBay, Fox, IBM, The New York Times, LendingClub, Hotwire, Microsoft and many more leading businesses.
To learn more about customer experience optimisation, visit optimizely.com
CRO Webinar: What You're Doing Wrong in Your CRO Program (sharable version) | VWO
In this session, Shiva shares insights from his experience of running conversion rate optimization programs for the past several years. He talks about collaboration, how you can navigate the politics of experimentation, testing to learn and not win, and much more.
Learn the real best practices and pitfalls of experimentation based on scientific research and insights. Hazjier is co-author of three studies on experimentation with Harvard Business School and his work is covered in the book Experimentation Works. This talk will dive into the best practices of experiment design, the role of hierarchy in experimentation teams, and the value of experimentation.
Intuit - How to Scale Your Experimentation Program | Optimizely
Here’s the playbook Intuit uses to increase its experimentation velocity — even when they face traffic limitations.
Mike Loveridge is not new to running experimentation teams. Before Intuit, he built out programs at Ancestry.com, GE, Humana, and CheapOair. He's an expert at making experimentation work at high velocity, even in traffic-challenged situations.
In this webinar, Mike Loveridge shared his best practices for making CRO work at high velocity, key lessons from scaling multiple teams, and why he's bullish on the future of "test and learn".
Meaningful Data - Best Internet Conference 2015 (Lithuania) | Simo Ahava
Here are the slides from my talk titled "Meaningful Data", which I presented at the Best Internet Conference in Vilnius, Lithuania.
I share some of my favorite Google Analytics / Google Tag Manager tweaks, along with a healthy dose of criticism towards the default configuration of our favorite analytics platforms (a phenomenon I call Plug-and-play Analytics).
7 Steps for Applying Big Data Patterns to Decision Making | Wiley
Learn to apply big data patterns to decision-making in order to make better decisions, design a new business model, or redesign current business processes.
From the MarTech Conference in London, UK, October 20-21, 2015. SESSION: The Human Side of Analytics. PRESENTATION: The Human Side of Data - Given by Colin Strong - @colinstrong - Managing Director - Verve, Author of Humanizing Big Data. #MarTech DAY2
- Origins of the "Augmented Intelligence" concept (based on Shyam Sankar's TED talk)
- The top 3 Augmented Intelligence companies, with a deep dive into their products (and a quick look at their business models, without numbers)
- Deep dive into "Augmented Intelligence" technology, using Palantir as an example
- A look at the future of Augmented Intelligence
Internet of Things (IoT) and machine learning are new technology trends that are booming individually: we will look at how to combine these concepts and technologies by layering machine learning on top of IoT data and driving significant insights for clients via specific use cases like predictive maintenance. Let’s look at some state-of-the-art use cases and the benefits delivered in this space to dig deeper into the “art of the possible”.
Lambda architecture for real time big data | Trieu Nguyen
Lambda Architecture in Real-time Big Data Project
Concepts & Techniques “Thinking with Lambda”
Case study in some real projects
Why is the Lambda Architecture a good solution for big data?
Big Data Revolution: Are You Ready for the Data Overload? | Aleah Radovich
Watch the Video here: https://www.youtube.com/watch?v=QYnB94WC9fM&feature=youtu.be
To secure a future for your business, make sure you have a plan for your data. Data tools won't be enough to consolidate and analyze your data for long, so have a plan ready for when that day comes.
Riot Games Scalable Data Warehouse Lecture at UCSB / UCLA | sean_seannery
This is a talk that was given for the Scalable Internet Services Masters-level Computer Science class at UCLA and UCSB. It briefly discusses the server architecture for the game League of Legends before going into depth about how the data warehouse can hold petabytes of player data. Discussion about message queue architecture and scalability occurs along the way.
Scott Porter, VP, Methods, and Aliza Pollack, VP Qualitative, led a session at the Planningness 2015 unconference. Scott works in marketing analytics, and regularly uses Artificial Intelligence algorithms to sort through large quantities of data to find plausible causal models for the interrelation of drivers, outcomes, and intermediate or mediating variables.
In facilitating discussions between marketers and modelers, Scott has realized that the process of gearing up to work with AI can help us think better. Computer algorithms have the advantage when sorting through large amounts of data, but they have their limitations. With current technology, artificial intelligence has to sort through the data it is given--it generally can’t intuit missing data that would be important. However, as humans, we excel at this sort of intuition. Or, at least we can... we have to overcome our human tendency to stop when we've uncovered the first plausible answer.
We shared a structured approach to brainstorming that forces us to push wider to additional context that might be important. These exercises (looking for multiple causes, looking for side effects, looking for missing causes) are the steps we would need to go through in order to select the right data for a computer to have sufficient information to build a quantified model. However, the steps are useful regardless of whether or not we later quantify the model, because the techniques help us push beyond where we would normally stop because we found a single reasonable explanation.
After going through an overview of the theory and the process, we put it into practice. We took turns leading small groups in structured hypotheses sessions to systematically unpack potential complexities of real client challenges shared by members of the session, and brainstorm what information we (or algorithms) will need to better understand potential opportunities.
Speaker note annotations are available within the deck if you download the pdf (orange boxes at the top left of each slide).
A short presentation for beginners introducing Machine Learning: what it is, how it works, the popular Machine Learning techniques and learning models (supervised, unsupervised, semi-supervised, reinforcement learning), and how they work, with various industry use cases and popular examples.
The 2016 CES Report: The Trend Behind the Trend | 360i
Hot off the press, we’re bringing you our annual CES recap report. Our team scoured the showroom floor, and explored the week's hottest topics in social media, to bring you the best of the 2016 International Consumer Electronics & Technology Show.
Since it was introduced in 2014, Stats Engine has served as a fast, powerful, and easy-to-use foundation for tens of thousands of digital experiments. But how exactly does it work?
In this session, we will explain the key differences and advantages of Stats Engine by comparing and contrasting it with a familiar old friend: the t-test.
One of the most commonly asked questions is “when is an MVT experiment or A/B test finished?”
Is it at 30 days...? 100 conversions...? 10,000 visitors...?
The short answer is... it depends.
A/B testing is split-testing between two different variants, labeled A and B. This technique allows the advertiser to determine which of the two ads underperforms and which outperforms.
What You Need to Know for Trustworthy A/B Tests | Minho Lee
Slides from a guest lecture at 프롬, September 4, 2021.
---
Many people say that A/B testing is important.
But what exactly are we trusting when we hand decisions over to an A/B test?
An A/B test is not a magic tool that produces results just because you run it.
This talk looks at what further thought is needed to get experiment results you can trust.
A primer on A/B testing and its application in ecommerce. A necessary tool in every product manager's arsenal. Covers the principles behind setting up a good test and the statistical tools required to analyze results.
Can I Test More Than One Variable at a Time? Statisticians answer some of th... | MarketingExperiments
A/B testing on the Web has become incredibly sophisticated in the last few years. New software makes it easier than ever to have a test up and running on your site. Still, a software program can only take you so far, and many marketers find themselves with questions.
In our next Web clinic, statisticians and testing experts from the MECLABS research lab will be answering some of the most common questions associated with online testing:
• Can I test more than one variable at a time?
• What is a multivariate test?
• Is multivariate testing better than an A/B split test?
• Which page element(s) should I test?
SAMPLE SIZE – The indispensable A/B test calculation that you’re not making | Zack Notes
If you’re a marketer, it’s very likely that you’ve run an A/B test. It’s also likely that you’ve never calculated the sample size for your tests and instead run tests until they reach statistical significance. If so, your strategy is statistically flawed. Respecting sample size requires marketers to wait longer for test results, but ignoring it will yield false positives and lead to bad decisions.
This deck was created for an email audience, but there are valuable lessons for anyone who runs A/B tests.
A primer on how A/B testing can be set up for success in an e-commerce environment. Includes guidelines for setting up A/B tests, covering hypothesis definition, sample size determination, statistical testing and avoiding the bias that can creep into any experiment's setup.
Should UI/UX be gut feeling or data-driven? How can you stand out from tough competition by perfecting your owned assets?
A/B testing is a long grind, but it does not have to be tough! Demystify the 4-step approach to optimization and what it can bring you.
Critical Checks for Pharmaceuticals and Healthcare: Validating Your Data Inte... | Minitab, LLC
Watch online at: https://hubs.ly/H0hswm60
Organizations in the pharmaceutical and health sectors are being asked by regulators to:
- Apply more complete methods to validate analytical techniques and measurement systems, known as Data Integrity
- Monitor and evaluate the performance of production processes, otherwise called Statistical Process Control (SPC)
In this presentation you will learn how to:
- Improve the precision and accuracy of analytical techniques, using Minitab's tools for Gage R&R, Gage Linearity and Bias studies and Design of Experiments
- Select the relevant control charts and capability analyses for data that does and does not follow the normal distribution
The presentation will explain how data integrity and process monitoring are critical to each other for regulatory compliance. If the data is not healthy, the evaluation of the process could also be incorrect.
You will finish with the confidence to use more sophisticated statistical techniques, in particular for data integrity.
Download Invesp’s The Essentials of Multivariate & AB Testing | Duy, Vo Hoang
We highly recommend that you implement the different ideas in this blog post through AB testing. Use the guide to conduct AB testing and figure out which of these ideas in the article works for your website visitors and which don’t. Download Invesp’s “The Essentials of Multivariate & AB Testing” now to start your testing program on the right foot.
Data-Driven Product Management by Shutterfly Director of Product | Product School
Main Takeaways:
- How to set the company for growth and success through KPIs
- How to learn, iterate and grow through A/B testing
- How to use the dashboard to focus and succeed with your product
Similar to Optimizely Workshop: Take Action on Results with Statistics (20)
Clover Rings Up Digital Growth to Drive Experimentation | Optimizely
Clover's Digital Growth team is responsible for optimizing the merchant's digital experience and they rely on experimentation to guide digital decision-making. This enables them to quickly learn and measure what changes deliver the best outcomes for users.
Join us with Lead Product Manager of Growth, Monil Shah, to learn how Clover:
- Increased digital conversions amongst merchants with an investment in experimentation
- Grew experiment velocity by 4x after replacing Adobe Target
- Designed a framework to efficiently capture and prioritize test ideas, and roll out winners
Atlassian's Mystique CLI, Minimizing the Experiment Development Cycle | Optimizely
Mystique CLI is an Atlassian-developed CLI for Optimizely Web. It is a multi-phase project that is currently focusing on improving the development cycle for growth engineers. Currently, Mystique is the standard for developing web experiments at Atlassian, and is capable of a wide variety of operations utilizing Optimizely's REST API. This includes creating, updating, testing, and duplicating experiments/personalization campaigns, as well as "promoting" these entities between Optimizely projects for different environments (e.g. from QA => Prod). It has significantly reduced manual overhead and decreased development time by up to 95% for particular actions.
Autotrader Case Study: Migrating from Home-Grown Testing to Best-in-Class Too... | Optimizely
Autotrader's Product and Engineering teams were ahead of the curve many years ago when they built a home-grown solution for leveraging feature flags to support server-side testing. Over the years, the industry eventually caught up and surpassed this proprietary tooling and the team had a choice to make: Re-invest into the local solution or completely retool. In this case study, Scott Povlot, Principal Technical Architect, and Seth Stuck, Director of R&D Analytics, will discuss their journey in selecting and then migrating to their next generation of experimentation tooling. They will discuss selection criteria, pros and cons, and outline how they made the migration to Optimizely successful, along with lessons learned along the way.
Zillow + Optimizely: Building the Bridge to $20 Billion Revenue | Optimizely
Join Jason Tabert, Senior CRO Marketing Specialist, and learn how Zillow is using Optimizely’s experimentation, personalization and integrations to help grow their revenue to $20 billion by helping their customers cross the real estate chasm from despair to delight.
The Future of Optimizely for Technical Teams | Optimizely
Optimizely has been reimagining the future of progressive delivery and experimentation, improving every part of the platform to empower technical teams to build, ship, and iterate faster. Learn about the latest enhancements to Optimizely Full Stack and the Optimizely Data Platform, and get a sneak peek at the upcoming roadmap.
Empowering Agents to Provide Service from Anywhere: Contact Centers in the Ti... | Optimizely
The coronavirus pandemic has pushed contact center leaders to accelerate technology adoption and empower their teams to work remotely. Join this session with State Farm, Salesforce, and Optimizely to learn how contact centers can adapt quickly and successfully in the time of COVID.
Our new normal has accelerated eCommerce trends by 4-6 years. The Optimizely team shares how experimentation can help retailers fast forward their online sales strategy with Microsoft Dynamics 365 Commerce.
Building an Experiment Pipeline for GitHub’s New Free Team Offering | Optimizely
In April 2020, GitHub announced a new Free for Teams plan. Behind the scenes, the engineering team was also setting up an experiment pipeline and an integration with Optimizely. In this session, we will take a peek at the process of setting up the integration, learning about the behavior of this new Free for Teams customer segment, and the next steps for this experiment pipeline.
AMC Networks Experiments Faster on the Server Side | Optimizely
Speeding up innovation only matters if it helps you drive positive outcomes. At AMC, experimentation enables the product and platform teams to challenge their assumptions, maximize impact, and evaluate ideas as painted door tests before investing in significant development. A commitment to test everything across 9 platforms fueled their search for the most scalable solution.
In this session, you'll learn how to:
- Leverage server-side testing to experiment quickly
- Scale across web, mobile, and OTT applications
- Determine when client-side testing is more efficient
Evolving Experimentation from CRO to Product Development | Optimizely
An obsession with data, efficiency, and delivering incredible customer experiences are just a few things that the CNN Consumer Science and Software Engineering teams have in common. Simple A/B testing practices evolved into a culture of experimentation, sparking new development practices across the organization. Learn how they drive results across their entire platform from websites to mobile apps.
Overcoming the Challenges of Experimentation on a Service Oriented Architecture | Optimizely
Growing from an early stage startup to a national leader in financial literacy is no small feat, and there are a ton of lessons that we have learned at Greenlight as we have grown. Long gone are the days when we would ship something and cross our fingers hoping that it makes some kind of impact on our customers. Now we’re in a world where we can learn ahead of time how much impact a feature will have on the business, before we even launch! In today’s conversation, we’ll discuss how we use Optimizely’s feature flags in our microservice architecture using Optimizely Agent while keeping user IDs and context synchronized.
This session will cover:
- How we set up Optimizely Agent and use it in a Kubernetes deployment
- How we created a user-aliasing service
- How we access Optimizely both on the frontend and in the backend services
- How to build a full stack feature
- How to manage the rollout using Optimizely’s feature flags
How The Zebra Utilized Feature Experiments To Increase Carrier Card Engagemen... | Optimizely
A/B testing is an essential element in any product manager's playbook. However, having the freedom and flexibility to customize testing based on what the data is saying often requires a lot of time and effort, particularly when it comes to engineering resources. Optimizely offers a flexible approach to experimentation through the use of feature testing, which provides more customization options without the additional development effort typically required to implement these feature optimizations. Megan Bubley, a Senior Product Manager at The Zebra, will share her experience working with Optimizely’s feature tests to create a results page where users can compare multiple auto insurance options driven by actual user needs, as well as her experience customizing the experience based on device platform.
Kick Your Assumptions: How Scholl's Test-Everything Culture Drives Revenue | Optimizely
Amy Vetter, Consumer Experience Manager, Direct To Consumer, Europe, will walk you through some of the tests that she and her team run across the Scholl brand. Amy will highlight surprise learnings and how to remove the fear of failing. The team is empowered to test everything possible that will allow the customer to get the best experience and also support the brand’s goal for more revenue and customer data.
At Charles Schwab, they have a mantra of viewing the world through their client’s eyes. When it comes to building digital experiences and running experiments, winning isn’t just about moving metrics; it’s also about improving customer experience. Sara Tresch, SVP of Digital Services at Schwab will be discussing how Schwab designs products and experiments with a client-first mindset.
Shipping to Learn and Accelerate Growth with GitHub | Optimizely
Will 2020 mark the shift to a remote-first world in the long run? For GitHub, a distributed workforce is nothing new. Join Sha Ma, VP of Engineering, and Gregory Ceccarelli, Director of Data Science, to learn how they built and scaled a successful experimentation program. They'll share their experience implementing Optimizely across timezones, a remote workforce, and a new business model.
In this session, you'll learn how to:
- Optimize UX for a freemium business model
- Use data to deliver customer-centered products
- Scale experimentation and accelerate growth
Test Everything: TrustRadius Delivers Customer Value with Experimentation | Optimizely
When done right, experimentation can help you validate the product you’re building and create winning customer experiences. And it doesn’t take a big engineering team to make this happen.
TrustRadius, the most trusted review site for business technology, uses experimentation to build an online community through website and server-side experimentation. The small but mighty TrustRadius team runs experiments throughout the buyer’s journey to engage different user personas and understand outcomes in real-time.
Watch the webinar recording featuring Rilo Stark, product manager at TrustRadius, and Jack Peden, senior software engineer, to understand their data-driven experimentation strategy and how TrustRadius uses Optimizely Web and Full Stack products to tailor experiences to different customer segments and mitigate risk through A/B/N and painted door tests.
In this session, you will learn: how to embed feature flagging sitewide to deliver safer, faster releases, best practices for implementing feature flags in a services-oriented architecture, and the latest enhancements you need to help your team recover faster when ship happens.
Newly appointed Optimizely CTO, Lawrence Bruhmuller, will kick off Developer Summit discussing the new normals in software development. After decades of leading and scaling engineering teams for high growth startups and large tech companies, Lawrence has seen the same problems crop up repeatedly for technical teams. There is a new way of delivering software that makes it possible to move fast and get it right. That new way is Progressive Delivery & Experimentation. When Progressive Delivery & Experimentation are used together, you have an efficient system for validating both quality and customer engagement across your development lifecycle. Lawrence will discuss the key principles driving software development innovation, how our engineering team puts this into practice, and the success he’s seen at other companies.
Practical Use Case: How Dosh Uses Feature Experiments To Accelerate Mobile De... | Optimizely
Engineering organizations know to anticipate bugs when they are about to launch a new product, but what tools can they use to reduce the blast radius and mitigate potential risks? Now, companies are thinking about preventative methods and safeguards they can put in place to make sure they deliver frictionless experiences to their customers with measurable results. In this session, you'll learn how to use feature flags and experiments across your stack (including mobile apps) to safely release meaningful features to your customers.
Techniques to optimize the PageRank algorithm usually fall into two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, which share the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance; final ranks of chain nodes can be easily calculated. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
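As a rough sketch of the first optimization above (skipping vertices that have already converged), here is a minimal Python power-iteration loop; the example graph, damping factor, and tolerance are placeholder assumptions, and real implementations such as STICD combine this with the other techniques described:

```python
def pagerank_skip_converged(adj, d=0.85, tol=1e-8, max_iter=100):
    """Power iteration that freezes vertices whose rank has converged.

    adj: dict mapping each vertex to its list of out-neighbours
         (assumed to have no dangling vertices, as in the STICD setting).
    """
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    in_nbrs = {v: [] for v in adj}
    for u, outs in adj.items():
        for v in outs:
            in_nbrs[v].append(u)
    active = set(adj)  # vertices still being updated
    for _ in range(max_iter):
        new = {}
        for v in adj:
            if v not in active:          # skip already-converged vertices
                new[v] = rank[v]
                continue
            s = sum(rank[u] / len(adj[u]) for u in in_nbrs[v])
            new[v] = (1 - d) / n + d * s
            if abs(new[v] - rank[v]) < tol:
                active.discard(v)        # freeze this vertex from now on
        rank = new
        if not active:
            break
    return rank

# Tiny example graph with no dangling vertices.
g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank_skip_converged(g))
```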
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... | John Andrews
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Optimizely Workshop: Take Action on Results with Statistics
1. Take Action on Results with Statistics
An Optimizely Online Workshop
Statistician: Leonid Pekelis
2. Optimizely’s Stats Engine is designed to work with you, not against you, to provide results which are reliable and accurate, without requiring statistical training. At the same time, by knowing some statistics of your own, you can tune Stats Engine to get the most performance for your unique needs.
3. After this workshop, you should be able to answer…
1. Which two A/B Testing pitfalls inflate error rates when using classical statistics, and are avoided with Stats Engine?
2. What are the three tradeoffs in an A/B Test? And how are they related?
3. How can you use Optimizely’s results page to best tune the tradeoffs to achieve your experimentation goals?
4. We will also preview how to choose the number of goals and variations for your experiment.
6. • A) The original, or baseline version of content that you are testing through a variation.
• B) Metric used to measure impact of control and variation.
• C) The control group’s expected conversion rate.
• D) The relative percentage difference of your variation from baseline.
• E) The number of visitors in your test.
Which is the Improvement?
7. • A) Control and Variation: the original, or baseline version of content that you are testing through a variation.
• B) Goal: metric used to measure impact of control and variation.
• C) Baseline conversion rate: the control group’s expected conversion rate.
• D) Improvement: the relative percentage difference of your variation from baseline.
• E) Sample size: the number of visitors in your test.
9. A procedure for classical statistics (a.k.a. “T-test”, a.k.a. “Traditional Frequentist”, a.k.a. “Fixed Horizon Testing”)
Farmer Fred wants to compare the effect of two fertilizers on crop yield.
1. Chooses how many plots to use (sample size).
2. Waits for a crop cycle, collects data once at the end.
3. Asks “What are the chances I’d have gotten these results if there was no difference between the fertilizers?” (a.k.a. p-value). If p-value < 5%, his results are significant.
4. Goes on, maybe to test irrigation methods.
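In code, Fred’s fixed-horizon procedure boils down to a single significance check after all the data is in. A minimal sketch with scipy and made-up yield numbers:

```python
from scipy import stats

# Crop yield per plot after one full cycle (made-up numbers).
fertilizer_a = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]
fertilizer_b = [4.6, 4.3, 4.8, 4.2, 4.5, 4.9, 4.1, 4.4]

# Step 3: the chance of seeing a difference at least this large if the
# fertilizers were identical -- the p-value.
t_stat, p_value = stats.ttest_ind(fertilizer_a, fertilizer_b)
print(f"p-value = {p_value:.3f}")

# Step 4: significant at the 5% level only if p_value < 0.05 -- and,
# crucially, Fred computes this exactly once, at the chosen sample size.
```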
10. 1915: Data is expensive. Data is slow. Practitioners are trained.
2015: Data is cheap. Data is real-time. Practitioners are everyone.
Classical statistics were designed for an offline world.
11. The modern A/B Testing procedure is different:
1. Start without a good estimate of sample size.
2. Check results early and often. Estimate ROI as quickly as possible.
3. Ask “How likely was my testing procedure to give a wrong answer?”
4. Test many variations on multiple goals, not just 1.
5. Iterate. Iterate. Iterate.
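Step 2 is exactly what breaks classical statistics: peeking at a t-test repeatedly and stopping at the first significant result inflates the false positive rate well beyond 5%. A small A/A simulation (our sketch; all parameters arbitrary) shows the effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peeking_test(n_peeks=20, batch=500, p=0.10):
    """One A/A test (identical variants), re-checked after every batch."""
    a = rng.binomial(1, p, size=n_peeks * batch)
    b = rng.binomial(1, p, size=n_peeks * batch)
    for i in range(1, n_peeks + 1):
        _, p_value = stats.ttest_ind(a[: i * batch], b[: i * batch])
        if p_value < 0.05:
            return True  # declared a (false) winner and stopped early
    return False

false_positive_rate = np.mean([peeking_test() for _ in range(500)])
print(f"false positive rate with peeking: {false_positive_rate:.0%}")  # well above 5%
```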
23. What % of my winners & losers do I expect to be false positives?
Classical statistics guarantee <= 5% false positives per test, but in general we can’t say without knowing how many other goals & variations were tested.
Answer: C) With 30 A/B Tests at a 5% threshold, we expect 30 × 5% = 1.5 false positives. If those tests declare 3 winners and losers, that is 1.5 / 3 = a 50% chance of a wrong conclusion!
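The arithmetic behind that answer, as a quick sketch using the counts from the slide:

```python
# Expected share of wrong conclusions across many tests at a 5% threshold.
n_tests = 30
false_positive_rate = 0.05
declared_conclusions = 3                  # e.g. 2 winners + 1 loser

expected_false_positives = n_tests * false_positive_rate            # 1.5
chance_conclusion_wrong = expected_false_positives / declared_conclusions

print(f"Expected false positives: {expected_false_positives}")      # 1.5
print(f"Chance a conclusion is wrong: {chance_conclusion_wrong:.0%}")  # 50%
```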
24. 1. Which two A/B Testing pitfalls inflate error rates when using
classical statistics, and are avoided with Stats Engine?
2. What are the three tradeoffs in an A/B Test? And how are they
related?
3. How can you use Optimizely’s results page to best tune the
tradeoffs to achieve your experimentation goals?
After this workshop you should be able to answer …
25. 1. Which two A/B Testing pitfalls inflate error rates when using classical statistics, and are avoided with Stats Engine?
A. Peeking, and mistaking the “False Positive Rate” for the “Chance of a Wrong Conclusion.”
After this workshop, you should be able to answer …
31. Where is the error rate on Optimizely’s results page?
[Screenshot: the results page, panels I–IV]
Statistical Significance = “chance of a right conclusion” = 100 × (1 − False Discovery Rate)
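Expressed as a one-line helper (a trivial sketch of the relationship the slide states):

```python
def displayed_significance(false_discovery_rate: float) -> float:
    """Statistical significance as shown on the results page: 100 * (1 - FDR)."""
    return 100 * (1 - false_discovery_rate)

print(displayed_significance(0.10))  # an FDR of 10% displays as 90.0 significance
```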
38. At any number of visitors, the higher the error rate you allow, the smaller the improvement you can detect.
[Diagram: error rates, runtime, and improvement & baseline CR are inversely related]
39. At any error rate threshold, stopping your test earlier means you can only detect larger improvements.
[Diagram: error rates, runtime, and improvement & baseline CR are inversely related]
40. For any improvement, the lower the error rate you want, the longer you need to run your test.
[Diagram: error rates, runtime, and improvement & baseline CR are inversely related]
41. What does this look like in practice?
Average visitors needed to reach significance with Stats Engine (baseline conversion rate = 10%):

Significance threshold (error rate)   5% improvement   10% improvement   25% improvement
95 (5%)                               62 K             14 K              1,800
90 (10%)                              59 K             12 K              1,700
80 (20%)                              53 K             11 K              1,500
42. At ~1 K visitors per day:
Average visitors needed to reach significance with Stats Engine (baseline conversion rate = 10%):

Significance threshold (error rate)   5% improvement   10% improvement   25% improvement
95 (5%)                               62 K             14 K              1,800
90 (10%)                              59 K             12 K              1,700
80 (20%)                              53 K             11 K              1,500 (1 day)
43. At ~10 K visitors per day:
Average visitors needed to reach significance with Stats Engine (baseline conversion rate = 10%):

Significance threshold (error rate)   5% improvement   10% improvement   25% improvement
95 (5%)                               62 K             14 K              1,800
90 (10%)                              59 K             12 K              1,700
80 (20%)                              53 K             11 K (1 day)      1,500
44. At ~50 K visitors per day:
Average visitors needed to reach significance with Stats Engine (baseline conversion rate = 10%):

Significance threshold (error rate)   3% improvement   5% improvement   10% improvement
95 (5%)                               190 K            62 K             14 K
90 (10%)                              180 K            59 K             12 K
80 (20%)                              160 K            53 K (1 day)     11 K
45. At > 100 K visitors per day:
Average visitors needed to reach significance with Stats Engine (baseline conversion rate = 10%):

Significance threshold (error rate)   3% improvement   5% improvement   10% improvement
95 (5%)                               190 K            62 K             14 K
90 (10%)                              180 K            59 K             12 K
80 (20%)                              160 K (1 day)    53 K             11 K
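For comparison with the tables above, here is a classical fixed-horizon sample-size sketch using statsmodels. It will not reproduce the Stats Engine averages exactly (Stats Engine is sequential), but it exhibits the same inverse relationships between error rate, improvement, and runtime:

```python
# Classical fixed-horizon sample size for a two-proportion z-test.
# For comparison only: Stats Engine is sequential, so the tables above
# are computed differently and the exact numbers will not match.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_cr = 0.10
solver = NormalIndPower()

for rel_improvement in (0.05, 0.10, 0.25):
    variant_cr = baseline_cr * (1 + rel_improvement)
    effect = proportion_effectsize(variant_cr, baseline_cr)
    for alpha in (0.05, 0.10, 0.20):        # the error-rate thresholds above
        n_per_arm = solver.solve_power(effect_size=effect, alpha=alpha,
                                       power=0.8, alternative="two-sided")
        print(f"improvement {rel_improvement:.0%}, error rate {alpha:.0%}: "
              f"~{int(n_per_arm):,} visitors per arm")
```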
46. 1. Which two A/B Testing pitfalls inflate error rates when using
classical statistics, and are avoided with Stats Engine?
2. What are the three tradeoffs in an A/B Test? And how are they
related?
3. How can you use Optimizely’s results page to best tune the
tradeoffs to achieve your experimentation goals?
After this workshop, you should be able to answer …
47. 1. Which two A/B Testing pitfalls inflate error rates when using
classical statistics, and are avoided with Stats Engine?
2. What are the three tradeoffs in an A/B Test? And how are they
related?
A. Error Rates, Runtime, and Effect Size. They are all inversely
related.
After this workshop, you should be able to answer …
58. … or a lot worse: a +0.2% improvement on an 8% baseline conversion rate can demand > 100 K visitors even at a > 99% significance threshold.
[Diagram: error rates, runtime, and improvement & baseline CR are inversely related]
Iterate, iterate, iterate!
59. Seasonality & Time Variation
Your experiments will not always show the same improvement over time, so run A/B Tests for at least one business cycle appropriate to that test and your company.
60. 1. Which two A/B Testing pitfalls inflate error rates when using
classical statistics, and are avoided with Stats Engine?
2. What are the three tradeoffs in an A/B Test? And how are they
related?
3. How can you use Optimizely’s results page to best tune the
tradeoffs to achieve your experimentation goals?
After this workshop, you should be able to answer …
61. 1. Which two A/B Testing pitfalls inflate error rates when using classical statistics, and are avoided with Stats Engine?
2. What are the three tradeoffs in an A/B Test? And how are they related?
3. How can you use Optimizely’s results page to best tune the tradeoffs to achieve your experimentation goals?
A. Adjust your timeline. Accept a higher / lower error rate. Admit an inconclusive result.
After this workshop, you should be able to answer …
62. Review
1. Which two A/B Testing pitfalls inflate error rates when using classical statistics, and are avoided with Stats Engine?
A. Peeking, and mistaking the “False Positive Rate” for the “Chance of a Wrong Answer.”
2. What are the three tradeoffs in an A/B Test?
B. Error Rates, Runtime, and Effect Size. They are all inversely related.
3. How can you use Optimizely’s results page to best tune the tradeoffs to achieve your experimentation goals?
C. Accept a higher / lower error rate. Adjust your timeline. Admit an inconclusive result.
64. Stats Engine is more conservative when
there are more goals that are not affected by
a variation.
So, adding a lot of “random” goals will slow
down your experiment.
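Stats Engine controls the false discovery rate across goals and variations. A standard (non-sequential) way to see why unaffected goals make it more conservative is the Benjamini-Hochberg procedure; the sketch below is an illustration, not Optimizely’s actual implementation, and the p-values are invented:

```python
# Benjamini-Hochberg FDR control: adding "random" goals raises the bar.
# Illustration only -- Stats Engine's sequential procedure differs in detail.
def benjamini_hochberg(p_values, fdr=0.05):
    """Return indices of hypotheses declared significant at the given FDR."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    cutoff_rank = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= fdr * rank / m:   # BH step-up condition
            cutoff_rank = rank
    return sorted(order[:cutoff_rank])

real_effect = [0.02]                     # one goal the variation truly moves
print(benjamini_hochberg(real_effect))   # [0]: significant on its own

# The same goal plus nine unaffected "random" goals:
noise = [0.30, 0.55, 0.21, 0.80, 0.47, 0.66, 0.09, 0.73, 0.38]
print(benjamini_hochberg(real_effect + noise))   # []: no longer significant
```

This is exactly the “cost of exploration” the tips on the next slide warn about.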
65. Tips & Tricks for using Stats Engine with multiple goals and variations
• Ask: which goal is most important to me? Make it the primary goal (its significance is not impacted by the other goals).
• Run large A/B or multivariate tests without fear of finding spurious results, but be prepared for the cost of exploration.
• For maximum velocity, only test the goals and variations you believe will have the highest impact.
67. Review
1. Which two A/B Testing pitfalls inflate error rates when using classical statistics, and are avoided with Stats Engine?
A. Peeking, and mistaking the “False Positive Rate” for the “Chance of a Wrong Answer.”
2. What are the three tradeoffs in an A/B Test?
B. Error Rates, Runtime, and Effect Size. They are all inversely related.
3. How can you use Optimizely’s results page to best tune the tradeoffs to achieve your experimentation goals?
C. Accept a higher / lower error rate. Adjust your timeline. Admit an inconclusive result.