The presentation discussed managing experiments and feature flags across Optimizely and a software application. It began with an experimentation maturity curve showing increasing levels of experimentation from executional to a culture of experimentation. Examples were given of how Optimizely was used at different levels from managing datafiles to consolidating projects and increasing automated testing. Takeaways included passing datafiles between front-end and back-end for performance, caching datafiles in memcache, and improving quality through easy user testing and automated tests.
BDD Scenarios in a Testing & Traceability Strategy (Webinar 19/02/2021) - Gáspár Nagy
We are inviting you to join our free webinar to see a case study of a real project developed with Behaviour Driven Development (BDD).
Our product, "SpecSync for Azure DevOps" has been developed with BDD. The functionalities are specified as BDD scenarios that can be verified as automated tests. But BDD scenarios alone would not be enough for us to meet our quality expectations, so there are other tests and quality considerations as well -- all these fit into our agile testing and traceability strategy.
In this webinar Gaspar Nagy gives you a walkthrough of the quality considerations and tests of the product by focusing on the following questions:
• What kind of tests are needed? How to decide what is the right way to specify and test a concrete function?
• What kind of feedback can we get from the different tests and how? Do they form a test automation pyramid?
• How are the requirements expressed in BDD scenarios and how are they connected to the other development artifacts?
• BDD, tests automation and continuous integration tips & tricks
The webinar focuses on the general testing and automation challenges, so people with or without coding skills are both welcome. At the end of the webinar there will be a Q&A, so you can also ask your own questions. The webinar recording will be made available for everyone who registered.
Accelerating Your Test Execution Pipeline - SmartBear
Learn how to accelerate your test execution pipeline with test frameworks, automation and parallel testing from our very own Bria Grangard, Product Marketing Manager.
Automated acceptance tests (AATs) can be expensive and deliver little value if not done right, and doing them right is not easy. Done well, though, they provide enormous benefit, and they are critical as software takes over the world and manual regression testing becomes infeasible. This presentation walks through the key benefits.
DevOps for Data Science on Azure - Marcel de Vries (Xpirit) and Niels Zeilema... - GoDataDriven
The typical organizational model is that teams are in constant flux: they are created for a piece of work, are responsible only for the change, and are not empowered (or lack the trust) to run products. A high-performance organization model lets teams take full responsibility for cost, compliance and security, and own their own incidents. This improves quality, lowers change failure rates, reduces costs and leads to happier employees. DevOps is about creating with the end in mind, cross-functional autonomous teams and end-to-end responsibility: you build it, you run it; you break it, you fix it. This means you want to automate everything in a CI/CD pipeline. Roll forward, don't roll back.
DevOps principles also play an important role in a data-driven maturity model: continuous prototyping, plus a data mindset and data skills for everybody. In a data science workflow, combining input data and deriving the model features usually requires most of the work and many iterations before it's done, so implement features one by one. Start with a baseline model and compare it against more complex models to see whether the additional complexity is worth the performance gain.
The result of a data scientist's work is a trained model, which has four components: input data, derived features, the chosen model type and hyperparameters. A trained model is always the combination of the data and the code. So where do you run this trained model? Model management versions the code but not the data; a model-management server stores hyperparameters, performance metrics, metadata and trained models. In a data science pipeline there are two components to deploy: the application and the trained model. So the pipeline is split into parts: a build pipeline, a train pipeline and a deploy pipeline. Mapped to Azure components, a complete pipeline looks largely like this: an Azure DevOps build pipeline, an Azure ML training pipeline and an Azure DevOps release pipeline.
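The role of the model-management server described above (record hyperparameters, performance metrics and metadata for each trained model, then pick a winner against the baseline) can be sketched as a minimal in-memory registry. This is an illustrative stand-in, not the Azure ML API; all names here are made up:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One trained model: data version, hyperparameters, and evaluation metrics.
    A trained model is data + code, so the data version is tracked explicitly."""
    name: str
    version: int
    data_version: str
    hyperparameters: dict
    metrics: dict = field(default_factory=dict)

class ModelRegistry:
    """Minimal stand-in for a model-management server."""
    def __init__(self):
        self._records = []

    def register(self, record: ModelRecord) -> None:
        self._records.append(record)

    def best(self, metric: str) -> ModelRecord:
        """Return the record with the highest value for `metric`."""
        return max(self._records, key=lambda r: r.metrics.get(metric, float("-inf")))

# Start with a baseline model, then compare a more complex candidate against it:
# the extra complexity is only worth keeping if the metric actually improves.
registry = ModelRegistry()
registry.register(ModelRecord("churn", 1, "2021-05-01", {"model": "logreg"}, {"auc": 0.71}))
registry.register(ModelRecord("churn", 2, "2021-05-01", {"model": "xgboost", "depth": 6}, {"auc": 0.74}))
print(registry.best("auc").version)
```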
Team Foundation Server - Tracking & Reporting - Steve Lange
Comprehensive presentation detailing reporting and tracking capabilities of Team Foundation Server. Focuses on Excel workbooks and Reporting Services, but touches on other technologies as well.
Accelerating Your Test Execution Pipeline - SmartBear
Our very own Bria Grangard will take you through the ways in which you can speed up your testing process. Check it out to learn about test frameworks, automation, parallel testing and more.
Test and Behaviour Driven Development (TDD/BDD) - Lars Thorup
In this introduction to Test Driven Development (TDD) or Behaviour Driven Development (BDD) we give a high level description of what it is and why it is useful for developers. Then we go into some details on stubs and mocks, test data, UI testing, SQL testing, JavaScript testing, web services testing and how to start doing TDD/BDD on an existing code base.
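The stubs-and-mocks topic mentioned above can be illustrated with a small, self-contained example using Python's standard `unittest.mock`. The checkout function and gateway are invented for illustration, not taken from the talk:

```python
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """System under test: receives its payment gateway as a dependency,
    which is what makes it easy to substitute a test double."""
    if cart_total <= 0:
        return "nothing to charge"
    gateway.charge(cart_total)
    return "charged"

# A mock stands in for the real gateway, so the test runs fast, in isolation,
# and can verify behaviour: which calls were made, with which arguments.
gateway = Mock()
result = checkout(49.99, gateway)

gateway.charge.assert_called_once_with(49.99)  # behaviour verification
print(result)
```

The same `Mock` object can also be used as a stub by pre-setting return values (e.g. `gateway.charge.return_value = "ok"`) when the test cares about state rather than interactions.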
Upgrade to SharePoint 2010, Shai Petel, SharePoint Conference Las Vegas Sep 2009 - KWizCom Team
KWizCom's Shai Petel discusses upgrading to SharePoint 2010, using his experience of upgrading KWizCom's SharePoint List Forms Extension Feature to SharePoint 2010 as an example
Writing less code with Serverless on AWS at AWS Community Day DACH 2021 - Vadym Kazulkin
The purpose of Serverless is to let you focus on writing the code that delivers business value and offload the undifferentiated heavy lifting to the cloud providers or SaaS vendors of your choice. Today’s code quickly becomes tomorrow’s technical debt, even if you make the perfect decision today. The less you own, the better off you are from a maintainability point of view. In this talk I go through examples of various Serverless architectures on AWS in which you glue together different Serverless managed services, relying mostly on configuration and significantly reducing the amount of code written to perform the task. Own less, build more!
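In the architectures the talk describes, the only code you own is often a small function body; routing, auth and scaling live in managed services. A minimal sketch of such a function, written as an AWS Lambda Python handler invoked locally with an API Gateway-style event (the event shape and names are illustrative assumptions):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler: the business logic is the only code owned;
    everything around it (HTTP routing, scaling, retries) is configuration."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a hand-built API Gateway-style event (context unused).
response = handler({"queryStringParameters": {"name": "AWS Community Day"}}, None)
print(response["statusCode"])
```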
How Optimizely Scaled its REST API with asyncio - Optimizely
For developers, an awesome product isn’t everything; at least, that is what we found out. More than a product, you need a platform. But what is a platform? Learn tips and tricks for building a public API using the latest and greatest tools: OpenAPI, Python 3 and asyncio.
This presentation was given by Optimizely engineers Nick DiRienzo & Vinay Tota at PyBay 2017.
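The reason asyncio helps a REST API scale is that request handlers are mostly I/O-bound, so the event loop can overlap their waits. A minimal sketch with simulated I/O (the resource names are illustrative, not Optimizely's actual API):

```python
import asyncio

async def fetch_resource(name, delay):
    """Stand-in for an I/O-bound call (database query, upstream service)."""
    await asyncio.sleep(delay)
    return {"resource": name}

async def handle_request():
    # The event loop overlaps the waits: three 0.1 s calls complete in
    # roughly 0.1 s total, rather than 0.3 s run sequentially.
    return await asyncio.gather(
        fetch_resource("projects", 0.1),
        fetch_resource("experiments", 0.1),
        fetch_resource("audiences", 0.1),
    )

results = asyncio.run(handle_request())
print([r["resource"] for r in results])
```

`asyncio.gather` preserves argument order in its result list, which keeps response assembly deterministic even though the underlying waits overlap.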
Lightning talks on best practices for product and engineering teams to experiment everywhere in their applications.
First presented at Optimizely's user conference, Opticon18 on September 12th, 2018.
This webinar lays the foundation for your PHP app. If you have at least one year of PHP experience, this webinar explains the key building blocks for creating and maintaining enterprise-class applications, mobile services, and third-party libraries. It covers: what makes mission-critical PHP different (including cloud-based solutions); how to maintain your PHP stack; how to ensure code security; and what to do when your system goes down.
Our team just released Keptn (https://keptn.sh/), an open source framework for event-based, automated continuous operations in cloud-native environments. In this session, we will talk about WHY we built Keptn, HOW we implemented it (Architecture) and where we want the community to take it.
How The Economist with Cloud BI and Looker have improved data-driven decision... - Looker
This session by The Economist Group, Cloud BI Ltd and Looker explores the challenges of data-driven decision making and how powerful the approach can be. Hear how the solution was implemented quickly and evolved in the cloud and the benefits of being able to see and understand customer preferences through a 360-degree view.
Testing for Logic App Solutions | Integration Monday - BizTalk360
In this Integration Monday session, Mike discussed the challenges and approaches for some of the common testing scenarios when delivering integration solutions with Microsoft Azure.
Grokking TechTalk #30: From App to Ecosystem: Lessons Learned at Scale - Grokking VN
When we were faced with the challenge of going from one to multiple apps, we had to make significant changes to the way we did frontend development. Learn about the tooling and architecture we use to manage a suite of apps, and how you can apply the same principles to your own frontend.
Speaker: Kristian Randall - Frontend Engineering Manager @ Axon
A quick overview of the Visual Studio 2012 Profiler and profiling tools: the importance of the profiling methods (sampling, instrumentation, memory, concurrency, …), how to run a profiling session, how to profile a unit test or load test, how to use the API, and a few samples.
Building A Product Assortment Recommendation Engine - Databricks
Amid the increasingly competitive brewing industry, the ability of retailers and brewers to provide optimal product assortments for their consumers has become a key goal for business stakeholders. Consumer trends, regional heterogeneities and massive product portfolios combine to scale the complexity of assortment selection. At AB InBev, we approach this selection problem through a two-step method rooted in statistical learning techniques. First, regression models and collaborative filtering are used to predict product demand in partnering retailers. The second step involves robust optimization techniques to recommend a set of products that enhance business-specified performance indicators, including retailer revenue and product market share.
With the ultimate goal of scaling our approach to over 100k brick-and-mortar retailers across the United States and online platforms, we have implemented our algorithms in custom-built Python libraries using Apache Spark. We package and deploy production versions of Python wheels to a hosted repository for installation to production infrastructure.
To orchestrate the execution of these processes at scale, we use a combination of the Databricks API, Azure App Configuration, Azure Functions, Azure Event Grid and some custom-built utilities to deploy the production wheels to on-demand and interactive Databricks clusters. From there, we monitor execution with Azure Application Insights and log evaluation metrics to Databricks Delta tables on ADLS. To create a full-fledged product and deliver value to customers, we built a custom web application using React and GraphQL which allows users to request assortment recommendations in a self-service, ad-hoc fashion.
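The two-step method described above (predict product demand, then optimize the assortment against business KPIs) can be sketched with a toy second step: a greedy selection under a shelf-space budget. This is a deliberately simplified stand-in for the robust optimization AB InBev uses; all products and numbers are invented:

```python
def recommend_assortment(predicted_demand, margins, shelf_slots):
    """Step 2 sketch: given step-1 demand predictions, pick the products that
    maximize expected revenue (demand * margin) under a shelf-space budget.
    A greedy ranking, standing in for the robust optimization used in practice."""
    ranked = sorted(predicted_demand,
                    key=lambda p: predicted_demand[p] * margins[p],
                    reverse=True)
    return ranked[:shelf_slots]

# Invented step-1 model output for one retailer.
demand = {"lager": 120, "ipa": 80, "stout": 30, "pilsner": 95}
margin = {"lager": 1.0, "ipa": 1.8, "stout": 1.5, "pilsner": 1.1}
print(recommend_assortment(demand, margin, shelf_slots=2))
```

A real formulation would add constraints (category coverage, supplier agreements) and robustness to demand-forecast error, which is why an optimization solver replaces the greedy ranking at scale.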
Optimus XPages: An Explosion of Techniques and Best Practices - Teamstudio
Are you starting a new XPages project, but not sure it’s going to be done right the first time? Do you have an existing application that doesn’t seem to have that “X” Factor? In this webinar, John Jardin demonstrates how XPages developers can apply proven techniques and best practices to take their applications to a game-changing level.
You'll learn how to:
-Rapidly develop responsive applications,
-Improve user experience and response times with background and multi-threaded operations,
-Keep your XPages lightweight with code injection,
-Create scheduled tasks the XPages way,
-And much more.
Building a Real-Time Security Application Using Log Data and Machine Learning... - Sri Ambati
Building a Real-Time Security Application Using Log Data and Machine Learning- Karthik Aaravabhoomi
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
MLOps and Reproducible ML on AWS with Kubeflow and SageMaker - Provectus
Looking to implement MLOps using AWS services and Kubeflow? Come and learn about machine learning from the experts of Provectus and Amazon Web Services (AWS)!
Businesses recognize that machine learning projects are important, but these projects go beyond just building and deploying models, which is where most organizations stop. Successful ML projects entail a complete lifecycle involving ML, DevOps, and data engineering, and are built on top of ML infrastructure.
AWS and Amazon SageMaker provide a foundation for building infrastructure for machine learning while Kubeflow is a great open source project, which is not given enough credit in the AWS community. In this webinar, we show how to design and build an end-to-end ML infrastructure on AWS.
Agenda
- Introductions
- Case Study: GoCheck Kids
- Overview of AWS Infrastructure for Machine Learning
- Provectus ML Infrastructure on AWS
- Experimentation
- MLOps
- Feature Store
Intended Audience
Technology executives & decision makers, manager-level tech roles, data engineers & data scientists, ML practitioners & ML engineers, and developers
Presenters
- Stepan Pushkarev, Chief Technology Officer, Provectus
- Qingwei Li, ML Specialist Solutions Architect, AWS
Feel free to share this presentation with your colleagues and don't hesitate to reach out to us at info@provectus.com if you have any questions!
REQUEST WEBINAR: https://provectus.com/webinar-mlops-and-reproducible-ml-on-aws-with-kubeflow-and-sagemaker-aug-2020/
Lightning talks on best practices for product and engineering teams to experiment everywhere in their applications.
Originally given at Optimizely's conference: Opticon on October 17th, 2017.
Clover Rings Up Digital Growth to Drive Experimentation - Optimizely
Clover's Digital Growth team is responsible for optimizing the merchant's digital experience and they rely on experimentation to guide digital decision-making. This enables them to quickly learn and measure what changes deliver the best outcomes for users.
Join us with Lead Product Manager of Growth, Monil Shah, to learn how Clover:
- Increased digital conversions amongst merchants with an investment in experimentation
- Grew experiment velocity by 4x after replacing Adobe Target
- Designed a framework to efficiently capture and prioritize test ideas, and roll out winners
Learn the real best practices and pitfalls of experimentation based on scientific research and insights. Hazjier is co-author of three studies on experimentation with Harvard Business School and his work is covered in the book Experimentation Works. This talk will dive into the best practices of experiment design, the role of hierarchy in experimentation teams, and the value of experimentation.
Atlassian's Mystique CLI, Minimizing the Experiment Development Cycle - Optimizely
Mystique CLI is an Atlassian developed CLI for Optimizely Web. It is a multi-phase project that is currently focusing on improving the development cycle for growth engineers. Currently, Mystique is the standard for developing web experiments at Atlassian, and is capable of a wide variety of operations utilizing Optimizely's REST API. This includes creating, updating, testing, and duplicating experiments/personalization campaigns, as well as "promoting" these entities between Optimizely projects for different environments (e.g. from QA => Prod). It has significantly reduced manual overhead and decreased development time by up to 95% for particular actions.
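The "promotion" step described above (copying an experiment from a QA project to a Prod project via the REST API) can be sketched as a pure payload transformation. The field names below are illustrative assumptions, not Optimizely's actual REST schema:

```python
def promote_experiment(experiment, target_project_id):
    """Sketch of promoting an experiment between environments: copy the
    definition, retarget it at the destination project, and drop
    server-assigned fields so the API treats it as a new entity.
    Field names are hypothetical, not Optimizely's real schema."""
    READ_ONLY = {"id", "created", "last_modified", "status"}
    promoted = {k: v for k, v in experiment.items() if k not in READ_ONLY}
    promoted["project_id"] = target_project_id
    return promoted

# Invented QA-side experiment payload.
qa_experiment = {
    "id": 123, "project_id": 111, "key": "new_checkout", "status": "running",
    "variations": [{"key": "control"}, {"key": "treatment"}],
}
prod_payload = promote_experiment(qa_experiment, target_project_id=222)
print(sorted(prod_payload))
```

The transformed payload would then be POSTed to the destination project; keeping the transformation pure makes it easy to test without touching the API.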
Autotrader Case Study: Migrating from Home-Grown Testing to Best-in-Class Too... - Optimizely
Autotrader's Product and Engineering teams were ahead of the curve many years ago when they built a home-grown solution for leveraging feature flags to support server-side testing. Over the years, the industry eventually caught up and surpassed this proprietary tooling and the team had a choice to make: Re-invest into the local solution or completely retool. In this case study, Scott Povlot, Principal Technical Architect, and Seth Stuck, Director of R&D Analytics, will discuss their journey in selecting and then migrating to their next generation of experimentation tooling. They will discuss selection criteria, pros and cons, and outline how they were able to make the migration to Optimizely successful and lessons learned along the way.
Zillow + Optimizely: Building the Bridge to $20 Billion Revenue - Optimizely
Join Jason Tabert, Senior CRO Marketing Specialist, and learn how Zillow is using Optimizely’s experimentation, personalization and integrations to help grow their revenue to $20 billion by helping their customers cross the real estate chasm from despair to delight.
The Future of Optimizely for Technical Teams - Optimizely
Optimizely has been reimagining the future of progressive delivery and experimentation, improving every part of the platform to empower technical teams to build, ship, and iterate faster. Learn about the latest enhancements to Optimizely Full Stack and the Optimizely Data Platform, and get a sneak peek at the upcoming roadmap.
Empowering Agents to Provide Service from Anywhere: Contact Centers in the Ti... - Optimizely
The coronavirus pandemic has pushed contact center leaders to accelerate technology adoption and empower their teams to work remotely. Join this session with State Farm, Salesforce, and Optimizely to learn how contact centers can adapt quickly and successfully in the time of COVID.
Our new normal has accelerated eCommerce trends by 4-6 years. The Optimizely team shares how experimentation can help retailers fast forward their online sales strategy with Microsoft Dynamics 365 Commerce.
Building an Experiment Pipeline for GitHub’s New Free Team Offering - Optimizely
In April 2020, GitHub announced a new Free for Teams plan. Behind the scenes, the engineering team was also setting up an experiment pipeline and an integration with Optimizely. In this session, we will take a peek at the process of setting up the integration, learning about the behavior of this new Free for Teams customer segment, and the next steps for this experiment pipeline.
AMC Networks Experiments Faster on the Server Side - Optimizely
Speeding up innovation only matters if it helps you drive positive outcomes. At AMC, experimentation enables the product and platform teams to challenge their assumptions, maximize impact, and evaluate ideas as painted door tests before investing in significant development. A commitment to test everything across 9 platforms fueled their search for the most scalable solution.
In this session, you'll learn how to:
Leverage server-side testing to experiment quickly
Scale across web, mobile, and OTT applications
Determine when client-side testing is more efficient
Evolving Experimentation from CRO to Product Development - Optimizely
An obsession with data, efficiency, and delivering incredible customer experiences are just a few things that the CNN Consumer Science and Software Engineering teams have in common. Simple A/B testing practices evolved into a culture of experimentation, sparking new development practices across the organization. Learn how they drive results across their entire platform from websites to mobile apps.
Overcoming the Challenges of Experimentation on a Service Oriented Architecture - Optimizely
Growing from an early stage startup to a national leader in financial literacy is no small feat, and there are a ton of lessons that we have learned at Greenlight as we have grown. Long gone are the days where we would ship something and cross our fingers hoping that it makes some kind of impact on our customers. Now we’re in a world where we can learn ahead of time how much impact a feature will have on the business, before we even launch! In today’s conversation, we’ll discuss how we use Optimizely’s feature flags in our microservice architecture using Optimizely Agent while keeping user IDs and context synchronized.
This session will cover:
How we set up Optimizely Agent and use it in a kubernetes deployment
How we created a user-aliasing service
How we access Optimizely both on the frontend and in the backend services
How to build a full stack feature
How to manage the rollout using Optimizely’s feature flags
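Keeping user IDs and context synchronized across microservices, as the session describes, requires every service to derive the same stable identifier before asking Optimizely Agent for a decision; otherwise the same user can land in different buckets. A hedged sketch of a deterministic aliasing scheme (the hashing scheme and namespace are invented, not Greenlight's actual service):

```python
import hashlib

def alias_user_id(raw_id, namespace="example-app"):
    """Sketch of a user-aliasing service: map whatever identifier a service
    holds to one stable alias, so every service requesting a flag decision
    buckets the same user identically. Scheme is illustrative only."""
    digest = hashlib.sha256(f"{namespace}:{raw_id}".encode()).hexdigest()
    return f"user-{digest[:16]}"

# Frontend and backend each hold the same raw device ID, so both derive the
# same alias, and a decision service keyed on that alias stays consistent.
frontend_alias = alias_user_id("device-abc-123")
backend_alias = alias_user_id("device-abc-123")
print(frontend_alias == backend_alias)
```

Because the alias is a pure function of the raw ID, the services never need to coordinate at request time; a lookup table is only needed when one user has several raw IDs to merge.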
How The Zebra Utilized Feature Experiments To Increase Carrier Card Engagemen... - Optimizely
A/B testing is an essential element in any product manager's playbook. However, having the freedom and flexibility to customize testing based on what the data is saying often requires a lot of time and effort, particularly when it comes to engineering resources. Optimizely offers a flexible approach to experimentation through the use of feature testing, which provides more customization options without the additional development effort typically required to implement these feature optimizations. Megan Bubley, a Senior Product Manager at The Zebra, will share her experience working with Optimizely’s feature tests to create a results page where users can compare multiple auto insurance options driven by actual user needs, as well as her experience customizing the experience based on device platform.
Making Your Hypothesis Work Harder to Inform Future Product Strategy - Optimizely
At Treatwell, each experiment goes beyond improving a single business metric. Experimentation works to evolve their product while enriching customer insights in order to deliver the best digital experience to their users. Join Laura Howard, Lead Product Manager, and Dennis Meisner, Senior Product Analyst, to learn their secret to making their hypothesis work harder and how getting their hypothesis right has improved Treatwell’s funnel progression and order health, as well as helped them make critical decisions on their product experience.
Kick Your Assumptions: How Scholl's Test-Everything Culture Drives Revenue - Optimizely
Amy Vetter, Consumer Experience Manager, Direct To Consumer, Europe, will walk you through some of the tests that she and her team run across the Scholl brand. Amy will highlight surprise learnings and how to remove the fear of failing. The team is empowered to test everything possible that will allow the customer to get the best experience and also support the brand’s goal for more revenue and customer data.
At Charles Schwab, they have a mantra of viewing the world through their client’s eyes. When it comes to building digital experiences and running experiments, winning isn’t just about moving metrics, it’s also about improving customer experience. Sara Tresch, SVP of Digital Services at Schwab will be discussing how Schwab designs products and experiments with a client-first mindset.
Shipping to Learn and Accelerate Growth with GitHub - Optimizely
Will 2020 mark the shift to a remote-first world in the long run? For GitHub, a distributed workforce is nothing new. Join Sha Ma, VP of Engineering, and Gregory Ceccarelli, Director of Data Science, to learn how they built and scaled a successful experimentation program. They'll share their experience implementing Optimizely across timezones, a remote workforce, and a new business model.
In this session, you'll learn how to:
Optimize UX for a freemium business model
Use data to deliver customer-centered products
Scale experimentation and accelerate growth
Test Everything: TrustRadius Delivers Customer Value with Experimentation - Optimizely
When done right, experimentation can help you validate the product you’re building and create winning customer experiences. And it doesn’t take a big engineering team to make this happen.
TrustRadius, the most trusted review site for business technology, uses experimentation to build an online community through website and server-side experimentation. The small but mighty TrustRadius team runs experiments throughout the buyer’s journey to engage different user personas and understand outcomes in real-time.
Watch the webinar recording featuring Rilo Stark, product manager at TrustRadius, and Jack Peden, senior software engineer, to understand their data-driven experimentation strategy and how TrustRadius uses Optimizely Web and Full Stack products to tailor experiences to different customer segments and mitigate risk through A/B/N and painted door tests.
In this session, you will learn: how to embed feature flagging sitewide to deliver safer, faster releases, best practices for implementing feature flags in a services-oriented architecture, and the latest enhancements you need to help your team recover faster when ship happens.
3. Agenda
1. Integrating Analytics the Right Way
2. Going Deeper with Heap and Optimizely
3. Segmenting Results with Audiences
4. Experimenting in a DevOps World
5. Server-Side Testing in a Serverless World
6. Managing Your Full Stack Experiments From Within Your Own Repository
7. Optimizing the Performance of Client-Side Experimentation
8. How Optimizely Uses Full Stack
9. Q&A
4. Integrating Analytics the Right Way
Rocky McGredy, Solutions Engineer, Optimizely
Ali Baker, Technical Support Engineer, Optimizely
5. Agenda
1. Why Integrate?
2. How Integrations Work
3. Implementation Challenges
4. Solutions/Best Practices
6. Why Integrate?
• Our results are best for finding winning variations
• Analytics platforms contain historical user data and reporting
• Knowing when a user is in a variation is useful
21. Integrating With Full Stack
• Similar to custom analytics
• Notification listeners
• Use first-party data
22. Key Learnings
• Integrations send experiment decision data
• Consider independent factors: timing, tag managers, reporting
• Run a test to validate data
• Use your debugging tools
• We’re here to help!
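The listener mechanism behind these integrations can be sketched without the SDK. The `NotificationCenter` below is a simplified stand-in for the one shipped with the Optimizely SDKs, and the callback payload is invented for illustration; the real Python SDK registers callbacks through its own `notification_center.add_notification_listener` API.

```python
# Minimal sketch of the notification-listener pattern used to forward
# experiment decisions to an analytics platform.

class NotificationCenter:
    """Simplified stand-in for the SDK's notification center."""

    def __init__(self):
        self._listeners = []

    def add_notification_listener(self, callback):
        self._listeners.append(callback)

    def send_notifications(self, experiment_key, variation_key, user_id):
        # The SDK fires this whenever a user is bucketed into a variation.
        for callback in self._listeners:
            callback(experiment_key, variation_key, user_id)

forwarded = []

def forward_to_analytics(experiment_key, variation_key, user_id):
    # In a real integration this would call your analytics SDK,
    # e.g. tagging the user's session with the decision.
    forwarded.append({"experiment": experiment_key,
                      "variation": variation_key,
                      "user": user_id})

center = NotificationCenter()
center.add_notification_listener(forward_to_analytics)
center.send_notifications("checkout_test", "variation_b", "user-42")
print(forwarded[0]["variation"])  # variation_b
```

Because the listener receives the decision at the moment it is made, the analytics event carries the experiment and variation keys without any separate tracking call.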
23. Diving Deeper with Heap and Optimizely
Taylor Udell, Lead Solutions Architect, Heap
24. What is Heap? A behavioral analytics platform that has revolutionized collecting and managing data without implementing any tracking calls
30. Going Deeper than Your Goal Metrics
Heap helps you understand the why behind your results without slowing your team down.
31. Key Benefits of Using Heap + Optimizely in Your Stack
1. Develop more hypotheses
2. Deploy tests and changes without delaying for tracking code
3. Go deeper than goal metrics
34. Custom Analytics Integrations
• For building your own analytics integrations on top of Optimizely X Web
• Great for sending Optimizely data to 3rd-party analytics platforms
35. Custom Analytics Integration Extensions
• Build reusable ‘plugins’ that can be added to your experiments
• Create visitor segments based on the pre-defined Optimizely audiences
36. How does it work?
• Create custom attributes that correspond with your audiences
• Add them to an experiment
• Create audiences matching the segments you care about
• Build the custom analytics extension and add it to the experiment
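As a rough sketch of the idea, the extension boils down to computing one segment attribute per audience and forwarding those alongside the decision. The audience names and conditions below are made up for illustration.

```python
# Sketch of mirroring experiment audiences as analytics segments.
# Each audience is represented here as a simple predicate over the
# user's attributes; names/conditions are illustrative only.

audiences = {
    "mobile_visitors": lambda user: user.get("device") == "mobile",
    "returning_buyers": lambda user: user.get("purchases", 0) > 0,
}

def segment_attributes(user):
    # One boolean attribute per audience. These are what a custom
    # analytics extension would send to the analytics platform so
    # results can be segmented by the same audiences used for targeting.
    return {name: check(user) for name, check in audiences.items()}

user = {"device": "mobile", "purchases": 3}
print(segment_attributes(user))
# {'mobile_visitors': True, 'returning_buyers': True}
```

Because the segments are derived from the same audience definitions used for targeting, results in the analytics tool line up with the experiment's own audience breakdowns.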
39. Why is this awesome?
• You can re-purpose audiences that are already used for targeting
• No additional costs
• Works automatically across the entire project
40. Resources
• GitHub repository containing the code sample used in this presentation: https://github.com/michal-optimizely/audience_segment_builder
• Documentation for Custom analytics extensions
• Documentation for Custom Attributes
• How to: Segmenting experiment results
41. Experimenting in a DevOps World
Joy Scharmen, Director of DevOps, Optimizely
“All life DevOps is an experiment. The more experiments you make the better.” -Ralph Waldo Emerson, sort of
54. What is serverless (or FaaS) anyway?
• Easier: Run your backend code without concern for provisioning, managing, or scaling your own server architecture
• Cheaper*: Ephemeral resources - only pay for your event-driven code execution time
• Flexible: Manage functions as microservices
55. How does it work?
Use case: image processing
Event (image uploaded to file storage) → Function (container provisioned, code executes) → Output (hilarious meme)
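The event/function/output flow can be sketched as a minimal handler. The `handler(event, context)` signature follows the AWS Lambda convention for Python, but the payload keys and return value here are invented for the example.

```python
# Hypothetical FaaS handler for the image-processing use case:
# an upload event triggers the function, which returns its output.

def handler(event, context=None):
    # Event payload: a file-storage notification for an uploaded image.
    # (The "object_key" field is an invented example key.)
    key = event["object_key"]
    # The platform provisions the container and runs this code on demand;
    # you only pay while it executes.
    return {"status": "processed", "meme": f"{key} + caption"}

print(handler({"object_key": "cat.jpg"}))
```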
56. Serverless: Not just for hobbyists
• Encode media files from S3
• Streamline real-time processing of interdependent data sets
• Lowered costs by ~66% with a serverless vending machine loyalty service
59. Serverless + Full Stack: Stateless + Stateless
Benefits of Full Stack:
• Stateless - no network requests for decisioning
• Remote configuration of variables
• Test anything in code!
Drawbacks when using FaaS:
• Stateless - each run is basically a new instance
• No easy way to cache the datafile/client object
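One common mitigation for the caching drawback is to stash the initialized client in module scope, so that "warm" invocations of the same container reuse it and only cold starts pay the initialization cost. This is a sketch: `create_client` is a stub standing in for fetching the datafile and constructing the real Optimizely client.

```python
# Sketch of reusing an initialized client across warm serverless
# invocations via a module-level global.

import time

_client = None  # survives between invocations while the container is warm

def create_client():
    # Stand-in for the expensive step: fetch the datafile from the CDN,
    # then build the SDK client from it.
    return {"created_at": time.time()}

def get_client():
    global _client
    if _client is None:  # cold start: initialize once
        _client = create_client()
    return _client

def handler(event, context=None):
    client = get_client()  # warm invocations skip re-initialization
    return client

first = handler({})
second = handler({})
print(first is second)  # True while the container stays warm
```

Each cold start still re-fetches the datafile, so pairing this with a short datafile TTL or a webhook-driven refresh keeps decisions both fast and current.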
60. How does it work w/ Optimizely?
Event → Function w/ Optimizely SDK installed (server provisioned and scaled by cloud provider; SDK initialized on each run → LATENCY) → Output
72. Put it into practice w/ Alexa: My Puppy Store Daily Deal Alexa Skill
1. Upload code to Lambda
2. “Alexa, ask Puppy Store for a daily deal!” - the Alexa event executes
3. Check if a cached datafile exists; else, get the JSON datafile from the CDN/API (Akamai CDN or REST API)
4. Send back the variation response: “Save on squeaky toys!”
5. Events are sent back to Optimizely
6. Results appear on the dashboard
76. Performance testing is a kind of experimentation
You have...
• Hypotheses: “Increasing image compression will increase session length”
• Independent variables: image compression magnitude
• Dependent variables: session length
• Results interpretation: should we roll out any variation to 100% of traffic?
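In that framing, a toy analysis might look like the following; the session-length numbers are invented for the example.

```python
# Image-compression magnitude is the independent variable; session
# length (seconds) is the dependent variable. Data is made up.

sessions = {
    "control":   [110, 95, 102, 98, 107],    # low compression
    "variation": [121, 118, 109, 115, 112],  # high compression
}

def mean(xs):
    return sum(xs) / len(xs)

lift = mean(sessions["variation"]) - mean(sessions["control"])
# Interpretation step: roll out only if the variation shows a clear lift.
# A real experiment would also check statistical significance before
# ramping to 100% of traffic.
roll_out = lift > 0
print(round(lift, 1), roll_out)  # 12.6 True
```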
83. Metrics
● DO: Endeavor to measure impact on users
○ first [contentful] paint, via browser
○ first meaningful paint, via you
○ key business metrics
● DON’T:
○ Use any single metric, like the document’s load event
85. Recommended initial configuration. Test what works best for your product!
• Placement: first script element in the document, in the <head>
• Attributes: none (synchronous)
• Resource hints:
○ If the script element is in HTML: none
○ Otherwise: <link rel="preload" as="script" href="...url">
○ If using cross-origin targeting, preload the iframe document
87. In summary
1. Experimentation is an effective tool for informing perf-impacting decisions. Use it!
2. Focus on high-level business metrics. Low-level metrics are supplementary.
3. Attend the perf workshop tomorrow at noon. It will teach tips for going fast.
88. References
● Steve Souders, “I <3 image bytes”: https://www.stevesouders.com/blog/2013/04/26/i/
● Ilya Grigorik, “Chrome’s preloader delivers a ~20% speed improvement!”: https://plus.google.com/+IlyaGrigorik/posts/8AwRUE7wqAE
● Tony Gentilcore, “The WebKit PreloadScanner”: http://gent.ilcore.com/2011/01/webkit-preloadscanner.html
● Philip Walton, “User-centric Performance Metrics”: https://developers.google.com/web/fundamentals/performance/user-centric-performance-metrics
● Addy Osmani, “Preload, Prefetch and Priorities in Chrome”: https://medium.com/reloading/preload-prefetch-and-priorities-in-chrome-776165961bbf
89. Managing Your Full Stack Experiments From Within Your Own Repository
Travis Beck, Software Engineer, Optimizely
94. Testing the New Feature
attributes = {'plan': 'basic', 'language': 'en'}
enabled = optimizely_client.is_feature_enabled('turbo_mode', user_id, attributes)
if enabled:
    # feature implementation
... later ...
optimizely_client.track('task_complete', user_id, attributes)
tags = {'value': 100}
optimizely_client.track('completion_time', user_id, attributes, tags)
95. What do we have to create in Optimizely?
attributes = {'plan': 'basic', 'language': 'en'}
enabled = optimizely_client.is_feature_enabled('turbo_mode', user_id, attributes)
if enabled:
    # feature implementation
... later ...
optimizely_client.track('task_complete', user_id, attributes)
tags = {'value': 100}
optimizely_client.track('completion_time', user_id, attributes, tags)
Walking through the code, we need to create:
• Attributes
• Feature
• Events
• + Experiment (since we’re testing the Feature)
• + Audience (for targeting)
100. Questions you may be asking
• Do I have to have constant access to Optimizely to do basic application development?
• Can we keep this metadata that is fundamental to our code running properly closer to the code itself?
101. optimizely-cli
A command-line tool for managing your Optimizely data.
Every serious developer-focused service needs a command-line interface.
Built entirely on top of the Optimizely v2 REST API.
Works well for Full Stack. May work for Web.
102. Setup
$ opti init
OAuths with your Optimizely account once and links your code with a specific Optimizely project.
103. Pulling Experiment Data
$ opti pull
Pulls all Optimizely data and writes it to an optimizely/ directory as YAML files.
104. Pushing Back Experiment Data
$ opti push
Detects changes to your experiments and pushes the modified experiments back to Optimizely.
105. Advantages
• Scriptable - automate changes to Optimizely in scripts
• Code review - make important modifications as a pull request in your own repo
• Historical record - use a webhook or update on a schedule to track changes over time
106. Try it out
Install: pip install optimizely-cli
Repository: https://github.com/optimizely/optimizely-cli
Take a look at the code for good v2 REST API examples.
107. Journey Up Mt. Experimentation
Ali Rizvi, Software Engineer
Mike Ng, Software Engineer
136. Reaching the next stage
• Consolidate projects
• Use environments across all projects
• Clean up experiments and feature flags
• Increase automated test coverage of all experiment paths
137. Takeaways
Performance
- Pass the datafile between frontend and backend
- Cache the datafile in memcache; you can also cache the Optimizely instance if appropriate
Quality
- Make it easy for users to QA your features and tests
- Write automated tests for the different forks/paths created for experiments
Productivity
- Make it easy for developers to run experiments with wrapper/convenience methods
- Always include a logger with the implementation
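A minimal sketch of such a wrapper, combining the last two productivity takeaways: a convenience method that defaults to "off" on failure and always logs the decision. `StubClient` stands in for a real Optimizely instance; the wrapper name and behavior are illustrative.

```python
# Convenience wrapper around the feature-flag check, with a logger
# built in, per the takeaways above.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("experiments")

class StubClient:
    """Stand-in for an initialized Optimizely client."""
    def is_feature_enabled(self, feature_key, user_id, attributes=None):
        return feature_key == "turbo_mode"

optimizely_client = StubClient()

def feature_enabled(feature_key, user_id, attributes=None):
    """Check a flag, defaulting to off and always logging the decision."""
    try:
        enabled = optimizely_client.is_feature_enabled(
            feature_key, user_id, attributes)
    except Exception:
        log.exception("flag check failed for %s; defaulting to off",
                      feature_key)
        return False
    log.info("user %s: %s=%s", user_id, feature_key, enabled)
    return enabled

print(feature_enabled("turbo_mode", "user-42", {"plan": "basic"}))  # True
```

Because every call site goes through the wrapper, developers get consistent logging and a safe default for free, which keeps experiment code paths easy to debug in production.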