Workshop at MeasureCamp Amsterdam about building a data-driven test strategy. Where can you test? What should you test? How do you analyze the results?
Don’t Redesign Your Website in the Dark: Master the redesign process with cus... | Optimizely
- How to assemble your redesign dream team
- How to navigate the HIPPO and keep the focus on your customer
- Guidelines for validating a website redesign using customer data
The Optimizely Experience Keynote by Matt Althauser - Optimizely Experience L... | Optimizely
In this keynote of the Optimizely Experience London, Matt Althauser (GM, Optimizely Europe) shows where Optimizely started in 2010 and how the product has evolved since.
During his talk, fellow team members explained these additional features in more detail. Features include:
- Drag & Drop WYSIWYG editor
- Mobile
- API
- Audiences
- Balanced Content Delivery Network (CDN)
7 Habits of Highly Effective Personalisation Teams | Dan Ross from Optimizely | Optimizely
Learn about the 7 Habits of Highly Effective Personalisation Teams | Dan Ross, Managing Director | Optimizely
In this learning session, Dan Ross will talk from experience about what it takes to make an organisation a champion at personalisation. You will return to your team with clear action items to upgrade your organisation into a personalisation powerhouse.
UNDERSTAND whether your current optimisation programme is as mature as you think, or whether you are just scratching the surface
CREATE the ‘dream team’ that can reach your personalisation goals on an ongoing basis
RETHINK and improve your audience strategy
Learn more at optimizely.com/resources
Dan is a Silicon Valley veteran and has led various Go-to-Market teams at four tech companies. An Aussie by birth (in spite of his American accent), he's returning home to grow Optimizely's Australian and New Zealand presence. In his spare time, Dan can be found attempting random hobbies like flying planes, triathlons or mountain biking.
UX Analytics and Experimentation for eCommerce Growth | VWO
The primary challenge for eCommerce businesses is to get people on their website. The next challenge is to get them to purchase products. Digital marketers work tirelessly to understand the entire customer journey from discovery to purchase and find ways to influence customer intent.
If the website does not offer a great experience to users (from the beginning), these efforts will not translate into a commensurate growth in revenue.
In this session, Narayan Keshavan from Dell Technologies will focus on the importance of UX Analytics and Experimentation as key enablers for eCommerce revenue growth. Based on his experience in this area, he will outline some key principles that are necessary to effectively leverage experimentation to offer a superior experience to users and enhance the business KPIs. He will also use specific A/B tests to articulate the significance of these principles.
Watch Dan Ross, Managing Director for Optimizely ANZ, in our latest webinar from the Experimentation Insights Tour -- “7 Habits of Highly Effective Personalisation Organisations”
Watch the webinar here: https://optimizely.wistia.com/medias/cun66mnkwt
Take Optimizely's Maturity Assessment here: https://www.optimizely.com/maturity-model/
DESCRIPTION: Create a data-driven culture and affect business decisions at the broader company level. When most people think of experimentation or testing, they think of sales and marketing.
However, to do real customer experience optimisation, you need to think about all the ways your customers are interacting with you.
The right mix to support building your programme into a centre of excellence is critical: you need a team that helps create a data-driven culture.
Watch this webinar so you can:
* Think more deeply about the future of your program and the makeup of your team
* Consider which hard and soft skill sets your testing organisation needs
* Build a well-rounded optimisation team that is visible, sustainable, and efficient
About Optimizely
Optimizely is the world's leading experimentation platform, enabling businesses to deliver continuous experimentation and personalisation across websites, mobile apps and connected devices. Optimizely enables businesses to experiment deeply into their technology stack and broadly across the entire customer experience.
The platform’s ease of use and speed of deployment empower organisations to create and run bold experiments that help them make data-driven decisions and grow faster.
To date, marketers, developers and product managers have delivered over 700 billion experiences tailored to the needs of their customers. Optimizely’s global client base includes Atlassian, eBay, Fox, IBM, The New York Times, LendingClub, Hotwire, Microsoft and many more leading businesses.
To learn more about customer experience optimisation, visit optimizely.com
Optimize Everything: A framework for solving your BIGGEST Problems Through O... | Optimizely
What problem are you trying to solve? In this session we'll introduce a supremely simple, road-tested framework for achieving desired outcomes in every part of your business through data. The framework, called Problem Solution Mapping (PSM), will be brought to life using real-world examples that were ultimately delivered and validated through testing and personalization.
Workshop 6: Build Your Organization's Optimization Culture | Optimizely
The key output of an effective testing organization is data, but data insights cannot be achieved without the collaborative input of the people who make up the testing organization. Join this session to learn how Optimizely's most successful customers socialize testing and structure their testing organizations.
How to Reduce Churn with Better Product Adoption | Amity
In the age where product-led businesses are beating their competition, product adoption reigns king. The more your users and customers get out of your product, the less likely they are to churn.
Ty Magnin from Appcues tells you exactly how Customer Success Managers can reduce churn with:
• Stronger customer onboarding
• Strategic lifecycle nudges
• Feature discovery
Optimizely Workshop: Mobile Walkthrough | Optimizely
Testing and optimizing your mobile apps can help with shorter development cycles, data-driven decision-making, and higher user conversion rates. In this highly interactive session, we encourage you to bring your app (or a sample app), and we’ll walk through the top-to-tail process for using Optimizely on your mobile app. This training is designed for iOS and Android developers who are looking to use Optimizely on their mobile apps.
Experimental statistics is only one of the many powerful analytical techniques companies are using to supercharge their experiment ideation, segmentation, and analysis. Check out this content for a refresher on key stats issues and a discussion on how to use data for better tests and bigger wins.
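As a quick refresher on the stats issues the content covers, the workhorse of A/B test analysis is the two-proportion z-test. The sketch below is plain Python (standard library only), and the conversion counts are made-up for illustration:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Control: 200 conversions out of 10,000; variant: 250 out of 10,000.
z, p = ab_test_z(200, 10_000, 250, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative numbers the lift clears the conventional p < 0.05 bar; in practice you would also fix the sample size in advance rather than peeking at results as they accumulate.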
Getting Started with Server-Side Testing | Optimizely
One of the most difficult aspects of deep experimentation ― which requires a full stack solution and server-side testing ― is laying a solid foundation for success. Join Optimizely and WiderFunnel to learn best practices for going beyond client-side testing, and implementing a full stack experimentation strategy to drive results on the entire customer journey.
-How to identify your key success metrics, such as customer retention and lifetime value
-How to integrate experimentation into your product roadmap
-How to start testing on your full customer journey
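Server-side testing replaces in-browser, cookie-based bucketing with deterministic assignment in your backend. The sketch below is not Optimizely's actual SDK API, just a minimal illustration of the hashing technique full stack tools typically rely on: hashing a user ID together with the experiment name gives every server the same stable assignment, with no shared state:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")):
    """Deterministically bucket a user into a variant.

    Hashing user_id + experiment name yields a stable assignment
    on any server, with no cookies or shared state required.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000            # 0..9999
    slice_size = 10_000 // len(variants)
    return variants[min(bucket // slice_size, len(variants) - 1)]

# The same user always lands in the same variant for a given experiment.
print(assign_variant("user-42", "checkout-flow"))
```

Because assignment depends only on the inputs, the same user can be bucketed consistently from a web server, a mobile backend, or a batch job, which is what makes full-journey experimentation possible.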
What does digital marketing maturity look like? How can companies effectively benchmark their experimentation performance? Brooks Bell will share a proven framework that allows you to unlock experimentation success, make more efficient investments and confidently plan for growth.
You will learn:
- The six elements of a successful experimentation program
- How to benchmark your performance
- Proven ways to evolve your digital marketing maturity
Retention is critical to the survival of every business. You can spend time and money acquiring users, but without a solid retention strategy, you’re throwing that money away. The retention playbook, tested with examples from growth-stage companies, will provide you with a framework for diagnosing and improving your retention at all stages of the customer lifecycle.
Qualtrics Vocalize Product Tour: An Inside Look at the Future of Voice of the... | Qualtrics
Discover how easy it is to capture your customer’s voice across channels, analyze trends, and take action in real-time – all with an intuitive, point and click interface. Qualtrics Vocalize is changing the world of VoC and making it easier than ever.
Cultivating a Culture of Experimentation | Optimizely
By harnessing insights from experimentation, people across your organization can contribute ideas and decisions that take the customer experience to new levels. To take advantage of this, forward-thinking organizations are getting everyone involved in experimentation. These slides will share how General Assembly is cultivating a culture of experimentation and the impact it’s making company-wide.
Featuring speakers from Apperian, Clearhead, j2 Global Communications, Western Governors University
Tiffany Early, Director, Digital Branding, Apperian
Ryan Garner, Co-Founder and Executive Vice President, Clearhead
Matthew Vandewouwer, Marketing Manager, j2 Global Communications
Steve Petersen, Senior Website Marketing Coordinator, Western Governors University
Are you redesigning your website in the next year? Many industry experts agree that it’s best to develop a website iteratively, using A/B testing and optimization, but some websites are so rusty that only a major overhaul seems to be the solution. It can be daunting to navigate the critical process of a redesign: rising costs, missed deadlines, and plummeting conversion rates instead of anticipated lifts. Learn from the experts on how to use customer interviews, closed-loop sales feedback, web analytics, and A/B testing to determine the optimal mix of content, navigation, and calls-to-action for your site. Get tips on how to drive results in spite of tight timelines, limited resources, CMS restrictions, and small datasets.
Website Redesigns: Why they Fail and How to Ensure Success | Optimizely
Learn how to avoid website redesigns that don’t deliver results.
Website redesigns are a tremendous effort: they take a lot of planning, involve many team members and are very costly. Worst of all, they often fail, meaning that the “new” website performs worse than the old one. Watch this webinar to learn:
- How common redesign fails really are, and how costly they can be
- The key reasons most redesigns fail to increase sales/conversions
- A better approach to increase chances of redesign success
To learn more about Optimizely, find more info here: https://www.optimizely.com/
To get more inspiration for better website redesigns, visit our blog:
https://blog.optimizely.com/2014/08/14/2-alexa-500-site-redesigns-that-should-inspire-you-to-ab-test/
Optimizely Workshop 1: Prioritize your roadmap | Optimizely
When your testing roadmap includes dozens of ideas (each with unique requirements) and each team member is vying for her idea to be run first, effective prioritization becomes paramount. This session will focus on the considerations, tools and frameworks you can use to make sure your roadmap is appropriately prioritized to meet your goals.
Listening to the Voice of the Customer in an Omnichannel World | ZOOM International
Voice of the Customer captures your customers' perceptions of needs and wants and is the vital first step in providing superior customer service. Learn how you can collect customer feedback from multiple sources like email, IVR and SMS.
An Experimentation Framework: How to Position for Triple Digit Growth | Optimizely
You’ve done the button color A/B test, and you’ve optimized your landing pages for better conversion. What next? At B2B organizations large and small, there is still tremendous potential for experimentation to drive innovation and growth. Learn how Brion’s growth team enables rapid iteration across a variety of domains, teams, and organizations within Cisco. With an organization of 70,000 employees and many distributed divisions, enabling experimentation can be a complex initiative. Learn the framework for upleveling from random testing to explicit strategy to position your org for triple digit growth.
A strong hypothesis is the heart of data-driven product discovery & development. It helps you turn data and insights about your users’ behavior into focused proposals that you’ll take action on.
Check out this exclusive presentation from Jason G'Sell – Lead Training Consultant – and get a framework to help you and your team form strong experiment hypotheses and come up with the right products and features for your customers.
You’ll learn:
- How and when to introduce experimentation into your product development process
- Identifying the differences between Optimization & Discovery
- Building successful experiments in your product development lifecycle
Revealing Behavior: Web Analytics Strategy 101 | Ravi Singh
This talk is about web analytics strategy, managing an in-house analytics program, and leveraging analytics to optimize your product.
Analytics involves measuring, testing and storytelling.
But its real purpose is to TAKE ACTION to improve, based on insights from well-interpreted data.
If you won’t take action, don’t bother with analytics.
Unveiling Our All-New Enhancement Request Model and Customer Support Portal | SAP Ariba
Our totally redesigned support portal is ready to go, with a newly developed enhancement request (ER) solution coming soon. In this session we’ll demonstrate our new ER model and how customers can influence the SAP Ariba solution road map by submitting and voting on ERs and innovation requests as a community. No more ER black hole! You will also get an overview of our totally redesigned support portal and hear from our support team about exciting updates and what they mean to you.
SplitMetrics answers burning questions on mobile A/B testing | SplitMetrics
SplitMetrics team members answer frequently asked questions about the SplitMetrics app store A/B testing platform and the mobile A/B testing process itself, covering the most burning topics and providing best practices, insights and actionable tips.
Online dialogues and conversion optimization (Online Tuesday, Feb 9, 2010) | Bart Schutz
Online conversion optimization: theory and practice. From mass optimization to segmentation and 1-on-1 dialogues.
How to build groups of customer profiles, based on their (historical) behavior and collect the knowledge of how to communicate better and better with them.
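The profile-grouping idea described above can be made concrete with a few behavioral rules. The sketch below is illustrative only; the field names and thresholds are assumptions, not taken from the talk:

```python
def segment(profile: dict) -> str:
    """Assign a customer to a segment using simple behavioral rules.

    `profile` fields (visits, orders, days_since_last_visit) are
    hypothetical; a real program would derive them from analytics data.
    """
    if profile["orders"] >= 3:
        return "loyal"
    if profile["orders"] >= 1:
        # Recent repeat buyers get different messaging than lapsing ones.
        if profile["days_since_last_visit"] <= 30:
            return "repeat-prospect"
        return "lapsing"
    return "browser" if profile["visits"] > 1 else "new"

print(segment({"visits": 5, "orders": 2, "days_since_last_visit": 12}))
```

Segments like these become the audiences for increasingly tailored dialogues, from broad messaging for new visitors down to near 1-on-1 communication with loyal customers.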
Conversion Optimization Framework to Build Sustainable and Repeat Growth | Tushar Purohit
The goal of this presentation on the Conversion Optimization Framework is to remove the guesswork from the conversion optimization process. It provides a comprehensive analysis for anyone interested in optimization, with a specific methodology to produce consistent results.
Transforming Customer and Client Outcomes Through Engaging User Experiences | DOYO Live
User experience is a huge buzzword in the design world right now, but what does it really mean? The truth is it means lots of things, and can best be thought of as a philosophy for creating engaging experiences for digital points-of-contact. In my talk, I’ll introduce you to tools, best practices, and approaches to design that leverage user goals and needs to build better products of all stripes.
Guiseppe Getto, Ph.D. is a college professor based in North Carolina and is President and Co-Founder of Content Garden, Inc., a digital marketing and UX consulting firm.
He consults with a broad range of organizations who want to develop better customer experiences, better writing, better content, better SEO, better designs, and better reach for their target audience. He has taught at the college level for over ten years. During that time, he has also consulted and formed service-learning partnerships with many non-profits and businesses, from technical writing firms to homeless shelters to startups.
Techniques, tools and examples of integrated marketing to apply in 2014.
Presented as a webinar by Dave Chaffey at the Smart Insights Digital Marketing summit.
[Webinar] The Scalable Way: Unlocking Data To Drive Great Customer Experience... | VWO
Watch this webinar to understand how some of the leading Fortune 2000 organizations have built a robust and scalable process to turn data into insights, and to use insights to elevate their CX leading to drastically higher conversions and in turn Revenue/ROI.
Today’s sales forces are increasingly mobile, tech-savvy and open to untraditional learning methods. Over the past few years, sales enablement has evolved into a multi-faceted and multi-channel-based blended learning approach. This sales university case study explores common practices found in training SAP’s approximately 6,500 quota carriers. Review the benefits and challenges of implementing virtual and digital learning strategies and what you need for these strategies to succeed. Learn how to align corporate strategy, a go-to-market approach, and learning offerings with innovative ways to market learning and create a learning culture in sales. Leave the session with SAP’s top five hands-on best practices to use immediately in your organization.
Speaker: Malte Bong-Schmidt and Christine Shaw, SAP
3. Driving Faster Innovation with SAP_Andy HoSing Yee Khoo
Learn4Success in Malaysia - Kuala Lumpur 27 April 2017
Business survival in the digital age is dependent on innovation. To compete with traditional competition, innovation has always been important, and now with the new wave of disruption we see in industry, it is elevated to essential. How do you ensure that your users can keep up with the latest developments, and get to learn new technologies before these are widely deployed?
5 tips als je nu wilt starten met digital marketing analyticsAvanade Nederland
Uit onderzoek van de DDMA blijkt dat 47 procent van de Nederlandse ondernemingen het onderbuikgevoel en ervaring als belangrijke factoren zien in het besluitvormingsproces. Maar liefst 90 procent geeft daarnaast aan dat marketeers meer en meer kennis in huis moeten hebben op het gebied van data, dataverzameling en data analyse. Hoe ga jij als digital marketeer hiermee aan de slag? Wij geven jou 5 tips om vandaag nog aan de slag te gaan met digital marketing analytics. Daarnaast gebruiken we praktijkvoorbeelden om je te laten zien hoe je met analytics nieuwe afzetmarkten en doelgroepen kunt ontdekken.
Customer experience management (CxM) platforms are technology enablers within wider digital transformation programs. They help brands to deliver those right-time experiences at key decision points in the customer journey to drive engagement. With over a decade of lessons learned delivering CxM platforms for major brands, this session provides practical and pragmatic advice using concrete platform examples spanning the financial, media, news and entertainment, healthcare, and insurance sectors. How do you take a platform-first approach to CxM? What does good look like? How do you incentivize adoption? What works well? What should be avoided? This session shares war stories on the good, the bad and the ugly documented as a collection of useful and useable design principles for delivering sustainable CxM platforms.
3. Driving faster innovation with SAP_Anne KohSing Yee Khoo
Business survival in the digital age is dependent on innovation. To compete with traditional competition, innovation has always been important, and now with the new wave of disruption we see in industry, it is elevated to essential. How do you ensure that your users can keep up with the latest developments, and get to learn new technologies before these are widely deployed?
Discover how incorporating services into your project lifecycle can help maximize your project value and mitigate risks. This deck focuses on how you can make the biggest impact with your project to drive business success using SAP Hybris solutions.
For more from SAP Hybris, please visit: https://hybris.com/en/products/expert-services
3. Driving Faster Innovation with SAP - Oviani NataliaSing Yee Khoo
Business survival in the digital age is dependent on innovation. To compete with traditional competition, innovation has always been important, and now with the new wave of disruption we see in industry, it is elevated to essential. How do you ensure that your users can keep up with the latest developments, and get to learn new technologies before these are widely deployed?
Similar to Workshop data driven test strategy (20)
CRO is supposed to be really easy. Everyone can set up an A/B-test in the WYSIWYG editors, the testing tool does all the difficult computations for you, and it will tell you if you have found a winner. It's child's play, right? Wrong! WYSIWYG editors are very error prone (especially across different browsers), and to really analyse and interpret A/B-test results correctly you need a basic understanding of statistics.
This presentation will help you understand:
-The importance of Test Power
-How to correctly set up an A/B-test
-How to analyse test results yourself
-The difference between Frequentist and Bayesian statistics
-How to decide to implement a variation
34. Scientific literature: verified (2nd party). What do we know from the scientific literature? In general about decision-making processes, and specifically about the type of products sold.
107. So the p-value only tells you: how unlikely it is that you found this result, given that the null hypothesis is true (that there is no difference between the conversion rates).
I am Annemarie Klaassen and I work as an analytics and optimization expert at Online Dialogue. I studied at Tilburg University, where I completed my master's in Leisure Studies and Marketing Management. I have a real passion for data and traveling. I actually just returned from a trip to NY, so I'm a bit jetlagged. Hopefully you won't notice it too much.
We work at OD: a conversion rate optimization agency in Utrecht. Our goal is to grow businesses by improving their conversion rate.
There are a couple more conversion rate optimization agencies in the Netherlands, but our USP is the combination of analytics and psychology.
We combine data insights with psychological insights for evidence based growth.
We do this for a bunch of clients in the Netherlands and also for some pretty cool international clients.
For most of them we do high-velocity testing, which means we run multiple tests per week.
P: Potential
I: Impact
P: Power
E: Ease
Where does the attention go on the page? Which elements are and aren't used?
P: Potential
I: Impact
P: Power
E: Ease
The first thing you do is map out all the different page types you have on your website, then look at the weekly unique visitors you have on that page type and the conversions through that page as well.
Then you determine whether the pages have enough test power – based on these numbers.
Now you might wonder what test power actually means. Well..
Frequentist testing is very much like a court trial in the US.
The null hypothesis says that the defendant is innocent
and the alternative hypothesis says that the defendant is guilty.
We then present evidence, or in other words, collect data.
Then, we judge this evidence and ask ourselves the question, could the data plausibly have happened by chance if the null hypothesis were true?
If the data were likely to have occurred under the assumption that the null hypothesis were true, then we would fail to reject the null hypothesis, and state that the evidence is not sufficient to suggest that the defendant is guilty.
If the data were very unlikely to have occurred, then the evidence raises more than a reasonable doubt about the null hypothesis, and hence we reject the null hypothesis.
This judging of evidence is done with the p-value.
If you test against a significance level of 90%, you accept a 10% false positive rate: in 10% of the tests where there is no real difference, you will still declare a winner.
If you test with a power of 80%, then in 20% of the tests where there actually is a winner, you won't detect it.
The test power is the likelihood that an experiment will detect an effect when there is an effect to be detected. You want to make sure you can find the winning variation in the collected data.
The power depends on 3 elements: the sample size (so on how much traffic you run your test), the effect size (that means the actual uplift in conversion) and the chosen significance level.
If you visit Abtestguide.com you can calculate the power of a test given the number of visitors and conversions and the expected uplift. In this case you have 10,000 visitors per variation and 1,000 conversions in the control, and you expect an uplift of 5%. This results in a power of only 65%, which is not very high: you will only detect a winner 65% of the time when there is one to be detected.
A ground rule is a power of at least 80%. To increase the power of a test you can do 3 things:
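As a rough sketch of the kind of calculation such a power calculator performs, here is a two-proportion power approximation in Python (standard library only). Note that the calculator's exact settings (one- vs two-sided test, default significance level) are an assumption here, so the numbers need not match the 65% quoted on the slide:

```python
from statistics import NormalDist

def ab_test_power(n_per_arm, base_rate, rel_uplift, alpha=0.10, one_sided=True):
    """Approximate power of a two-proportion z-test (normal approximation).

    n_per_arm:  visitors in each variation
    base_rate:  conversion rate of the control
    rel_uplift: expected relative uplift of the variation (e.g. 0.05 for +5%)
    """
    nd = NormalDist()  # standard normal distribution
    p1 = base_rate
    p2 = base_rate * (1.0 + rel_uplift)
    # Standard error of the difference between the two conversion rates
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm) ** 0.5
    # Critical z-value for the chosen significance level
    z_alpha = nd.inv_cdf(1 - alpha) if one_sided else nd.inv_cdf(1 - alpha / 2)
    # Probability of clearing the critical value when the uplift is real
    # (ignores the negligible chance of significance in the wrong direction)
    return nd.cdf((p2 - p1) / se - z_alpha)

# Slide example: 10,000 visitors per arm, 10% baseline conversion, 5% uplift
print(round(ab_test_power(10_000, 0.10, 0.05), 3))
# Doubling the sample size raises the power, as described below
print(round(ab_test_power(20_000, 0.10, 0.05), 3))
```

The same formula shows directly why sample size, effect size and significance level are the three levers: each one moves either the distance between the distributions or the bar they have to clear.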
You can increase the sample size, i.e. the number of visitors in your experiment. If you double the test duration (so you get 20,000 visitors and 2,000 conversions), the distributions of the 2 variations lie further apart. Hence, the power increases to 85.4%.
The other element is effect size: how much uplift do you expect from your variation? If you expect an uplift of 10% instead of 5%, your test power increases immensely.
You need to be aware what kind of uplift can be expected from an A/B-test. You learn this by doing a lot of experiments, but it's quite rare to find a winning variation with an uplift higher than 10%. Most of the time it's not higher than 5%. This of course also depends on the type of test you're doing. If you only change a headline you probably won't get a 10% uplift.
P: Potential
I: Impact
P: Power
E: Ease
You can look at different segments in your data, look at click behavior per variation, time on page and other micro conversions.
What are the main ways of analysing A/B-tests then?
The most common approach to analysing A/B-tests is the t-test (which is based on frequentist statistics).
But, over the last couple of years Bayesian statistics have grown in popularity.
I will try to explain both in a bit.
We will start with frequentist statistics.
Frequentist testing is very much like a court trial in the US.
The null hypothesis says that the defendant is innocent
and the alternative hypothesis says that the defendant is guilty.
We then present evidence, or in other words, collect data.
Then, we judge this evidence and ask ourselves the question, could the data plausibly have happened by chance if the null hypothesis were true?
If the data were likely to have occurred under the assumption that the null hypothesis were true, then we would fail to reject the null hypothesis, and state that the evidence is not sufficient to suggest that the defendant is guilty.
If the data were very unlikely to have occurred, then the evidence raises more than a reasonable doubt about the null hypothesis, and hence we reject the null hypothesis.
It's a mnemonic to remember what to do.
I will give an example how this translates to an A/B-test.
When you use a t-test you first state a null hypothesis. You calculate the p-value and decide to reject the null hypothesis or not. So you try to reject the hypothesis that the conversion rates are the same.
So, suppose you did an experiment and the p-value of that test was 0.01. The p-value tells you that there is a 1% chance of observing a difference as large as the one you observed, even if the two means are identical.
The p-value is very low, so the null hypothesis gets rejected.
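As an illustration of this frequentist evaluation, here is a minimal two-proportion z-test in Python (a common stand-in for the t-test when comparing conversion rates). The visitor and conversion counts are hypothetical, not the deck's actual data:

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    # Probability of a difference at least this large, in either direction
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: 1,000/10,000 conversions in A vs 1,120/10,000 in B
p = two_proportion_p_value(1000, 10_000, 1120, 10_000)
print(f"p-value = {p:.4f}")
```

If the resulting p-value is below your significance threshold, you reject H0; otherwise you fail to reject it, exactly like the court-trial verdict above.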
The other challenge with using frequentist statistics is that an A/B-test can only have 2 outcomes: you either have a winner or no winner.
And the focus is on finding those real winners. You want to take as little risk as possible.
This is not so surprising if you take into account that t-tests have been used in a lot of medical research as well. Of course you don't want to bring a medicine to the market if you're not 100% sure that it won't make people worse or kill them. You don't want to take any risk whatsoever.
But businesses aren’t run this way. You need to take some risk in order to grow your business.
If you take a look at this test result you would conclude that there is no winner, that it shouldn't be implemented, and that the measured uplift in conversion rate wasn't enough. So you will see this as a loser and move on to another test idea.
However, there seems to be a positive movement (the measured uplift is 5%), but it isn’t big enough to recognize as a significant winner. You probably only need a few more conversions.
If frequentist statistics confronts us with these kinds of challenges, what's the alternative?
Well as I said earlier, the most common approach to analysing A/B-tests is the t-test (which is based on frequentist statistics).
But, over the last couple of years more and more software packages (like VWO and Google Optimize) are switching to Bayesian statistics.
And that’s not without reason, because using Bayesian statistics makes more sense, since it better suits how businesses are run and I will show you why.
So, when you use Bayesian statistics to evaluate your A/B-test, there is no difficult statistical terminology involved anymore. There's no null hypothesis, no p-value or z-value et cetera. It just shows you the measured uplift and the probability that B is better than A. Easy, right?
Everyone can understand this.
Based on the same numbers of the A/B-test we showed you earlier, you have an 89.1% chance that B will actually be better than A.
Probably every manager would understand this and will like these odds.
Recently we turned this Bayesian Excel calculator into a webtool as well. It's free for everyone to use.
If you visit this URL you can input your test data and calculate! It will return the chance that B outperforms A.
When using a Bayesian A/B-test evaluation method you no longer have a binary outcome like the t-test does.
A test result won’t tell you winner / no winner, but a percentage between 0 and 100% whether the variation performs better than the original.
In this example 89.1%.
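That "probability that B beats A" can be estimated with a small Monte Carlo simulation over Beta posteriors, which is roughly how such Bayesian A/B calculators work. This sketch assumes uniform Beta(1, 1) priors and uses hypothetical counts, not the actual test data behind the 89.1% figure:

```python
import random

random.seed(7)  # fixed seed so the simulation is reproducible

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=200_000):
    """Monte Carlo estimate of P(rate_B > rate_A), uniform Beta(1, 1) priors."""
    wins = 0
    for _ in range(draws):
        # Draw a plausible conversion rate for each variation from its posterior
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / draws

# Hypothetical counts: a measured 5% uplift that a t-test would call "no winner"
print(prob_b_beats_a(1000, 10_000, 1050, 10_000))
```

The result is a single, manager-friendly percentage rather than a reject/fail-to-reject verdict.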
The question that remains is: is this enough to be implemented?
What you can do is make a risk assessment. You can calculate what the results mean in terms of revenue.
When the client decides to implement the variation, they have a 10.9% chance of a drop in revenue of 200,000 euros over 6 months (with an average order value of 175).
But on the other hand, they also have an 89.1% chance that the variation is actually better and brings in nearly 650,000 euros.
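This risk assessment boils down to a simple expected-value calculation over the two scenarios from the slides (an 89.1% chance of roughly 650,000 euros upside versus a 10.9% chance of a 200,000 euro drop):

```python
def revenue_bet(p_win, upside, downside):
    """Expected value of implementing the variation, given two scenarios:
    win probability p_win with revenue gain `upside`, else a loss of `downside`."""
    p_lose = 1.0 - p_win
    return p_win * upside - p_lose * downside

# Scenario figures from the slides, over a 6-month horizon
ev = revenue_bet(0.891, 650_000, 200_000)
print(f"Expected value of implementing: {ev:,.0f} euros")
# Expected value of implementing: 557,350 euros
```

A positive expected value doesn't remove the 10.9% downside risk; it just makes the bet explicit.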
You can show this table to your boss and ask whether they would place the bet.
Well that depends on a couple of things. If you would implement a test variation with a probability of 51% then you’re not doing much better than just flipping a coin. The risk of implementing a losing variation is quite high.
Depending on the type of business you may be more or less willing to take risks. If you are a start-up you might want to take more risk than a full-grown business, but still, we don't really like the chance of losing money, so what we see with our clients is that most need at least a probability of 70%.
But it also depends on the type of test. If you only changed a headline, the risk is lower than when you need to implement a new functionality on the page, which consumes much more resources. Hence, you will need a higher probability.
The purpose of A/B-testing is of course to add direct value, but we still want to learn about user behavior. If you really want to learn from user behavior you need to test very strictly (say with >95%). Otherwise you only have a hunch, but you don't have proof.
We take these numbers as a ballpark. If the test has a probability lower than 70% we won't count it as a learning. If the percentage lies between 70 and 85% we see it as an indication something is there, but we need a retest to confirm the learning.
Anything between 85 and 95% is a very strong indication, so we would do follow-up tests on other parts of the website to see if it works there too. And the same as with a t-test: when the chance is higher than 95% we see it as a real learning.
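These ballpark thresholds could be written down as a small decision rule. The labels and cut-offs below follow the slides, but treat them as guidelines rather than hard rules:

```python
def classify_result(p_b_beats_a):
    """Map a Bayesian win probability to the ballpark decision labels above."""
    if p_b_beats_a >= 0.95:
        return "real learning"
    if p_b_beats_a >= 0.85:
        return "strong indication - follow up elsewhere on the site"
    if p_b_beats_a >= 0.70:
        return "indication - retest to confirm"
    return "no learning"

print(classify_result(0.891))
# prints "strong indication - follow up elsewhere on the site"
```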
So even though you would implement the previous test, it doesn’t prove the stated hypothesis. It shows a strong indication, but to be sure the hypothesis is true you need follow-up tests to confirm this learning.