1st in the "Rewriting the Rules of Perfomance Testing" series. Scott Barber and Dan Bartow discuss ways load and performance teams have "cheated" in the past due to constraints that are eliminated with new cloud-based approaches to testing.
Continuous Automated Testing - CAST Conference Workshop, August 2014 (Noah Sussman)
CAST 2014 New York: The Art and Science of Testing
The Association for Software Testing www.associationforsoftwaretesting.org
COURSE DESCRIPTION
Automated tools provide test professionals with the capability to make relevant observations even in the fastest-paced environments. Automated testing is also a powerful tool for improving communication between software engineers. This is important because good communication is a prerequisite for growing a great software engineering organization.
This workshop will explore the continuous testing of software systems. Special focus will be given to the situation where the engineering team is deploying code to production so frequently that it is not possible to perform deep regression testing before each release.
People who participate in this course will learn pragmatic automated testing strategies like:
* Data analysis on the command line with find, grep and wc.
* Network analysis with Chrome Inspector, Charles and netcat.
* Using code churn to predict hotspots where bugs may occur.
* Putting stack traces in context with automated SCM blame emails.
* Using statsd to instrument a whole application.
* Testing in production.
* Monitoring-as-testing.
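The first topic above, command-line data analysis with find, grep and wc, can be sketched in a few lines. The log directory, file name and log format below are made up purely for illustration:

```shell
# Fabricated log layout for illustration only.
mkdir -p logs
printf 'GET /a 200\nGET /b 500\nGET /c 500\n' > logs/access.log

# Locate every log file under logs/ ...
find logs -name '*.log'

# ... then count how many requests returned a 500 error.
grep ' 500' logs/access.log | wc -l    # prints 2
```

The same three tools compose into surprisingly capable ad hoc analysis pipelines, which is why they keep appearing in testing workflows like this one.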
Technical level: participants should have some familiarity with the command line and with editing code using a text editor or IDE. Familiarity with Git, SVN or another version control system is helpful but not required. Likewise some knowledge of Web servers is helpful but not required. It is desirable for participants to bring laptops.
BIO
From 2010 to 2012 Noah was a Test Architect at Etsy. He helped build Etsy's continuous integration system, and has helped countless other engineers develop successful automated testing strategies. These days Noah is an independent consultant in New York. He is passionate about helping engineers understand and use automated tools as they work to scale their applications more effectively.
Software Entomology, or Where Do Bugs Come From? (Noah Sussman)
An internal training talk that Michelle D'Netto and I periodically give for Customer Support representatives at Etsy. Introduces advanced software quality concepts such as the halting problem, the impossibility of complete testing, and the extreme difficulty of discovering all of the significant bugs in one's own software. Winds up by encouraging anyone responsible for online customer experience to envision themselves as a participant observer embedded in the rapidly evolving culture of the Web.
Discount Usability Testing for Agile Teams (Ben Carey)
A talk from Agile Roots in 2010. You can't get the whole picture, or much context, from the slides alone.
The last part of the talk addressed how you'll be remembered, and what your legacy looks like, in a social-media-based world.
It would be unfortunate if your last status update were the one shown in a Facebook wall post.
Video from the talk will be posted later.
Velocity Conference: Building a Scalable, Global SaaS Offering: Lessons from ... (Intuit Inc.)
QuickBooks Online is the No. 1 small-business cloud accounting solution worldwide. In this session we discussed how we built a highly scalable, global SaaS offering and the lessons learned along the way.
SOASTA CloudTest offers free functional test automation, combining the power of Selenium with the ease of a visual testing environment and the scale of the cloud, for users of the leading cloud-enabled test automation platform.
SOASTA's tens of thousands of tests in customer labs and production environments have uncovered issues ranging from code-level bugs to problems in third-party services. Testing early, often and at real scale is the only way to be fully prepared.
Closing the Mobile App Quality Gap
Past Webinar
Archived (originally presented February 7th, 2013)
94% of companies today lack fundamental capabilities to validate end user success with mobile apps. Major gaps in mobile testing skills and tools have both surfaced the need for change and driven exciting innovation and opportunity.
This SOASTA webinar explores how test and development managers can take advantage of the mobile transformation to build world-class solutions that bridge testing gaps, compress delivery cycles and optimize app quality.
Industry veterans and SOASTA experts Fred Beringer and Jason Slater will explore:
The business impact of poor mobile quality
Areas to focus on quality processes for the greatest impact
The latest updates to SOASTA’s mobile platform
Heads Up Display to further accelerate test development
Validating what matters, from performance to partial images
Test automation that withstands operating system & device updates
Collecting real mobile user intelligence to complete the cycle
The mobile quality gaps won’t close without action.
Fast, Strong & Nimble Mobile Performance Testing (SOASTA)
Sept. 18, 2012 webinar on mobile performance testing. Mark Tomlinson (former LoadRunner PM) and Dan Bartow (SOASTA VP of Product Management) provide an overview of planning and executing mobile testing that measures for Fast (front-end) and Strong (back-end) mobile systems. A recording of the event will be available here: http://www.soasta.com/knowledge-center/webinars/
How to Measure the Business Impact of Web Performance (SOASTA)
If your site were one second slower, how many of your visitors would bounce?
If your site were one second faster, how many additional orders would you receive?
Bottom line: Do you know what one second of latency is worth to your business?
Traditional approaches to performance monitoring are fatally flawed. They measure performance only in a silo, telling you how long key actions took but not putting that information into a context you can use to improve the one metric that ultimately matters: revenue. Bridging this gap requires the collection of performance and business data together, and then analyzing this data using the proper analytic methods.
Using modern Real User Monitoring (RUM) techniques, Buddy Brewer will show you how to quantify the impact even one second of latency has on key business metrics like bounce and conversion rate.
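The kind of analysis Brewer describes, joining latency data with a business metric, can be roughed out with ordinary command-line tools. This is a toy sketch, not a RUM product; the CSV below is fabricated sample data with one `page_load_ms,bounced` row per visit:

```shell
# Fabricated sample data: page load time in ms, and whether the visit bounced (1/0).
cat <<'EOF' > rum.csv
900,0
1200,0
1800,0
2400,1
3100,1
3500,1
EOF

# Bucket visits by whole seconds of latency, then print the bounce rate per bucket.
awk -F, '{ b = int($1 / 1000); n[b]++; bounced[b] += $2 }
         END { for (s in n) printf "%d-%ds: %.0f%% bounce\n", s, s + 1, 100 * bounced[s] / n[s] }' rum.csv | sort
```

Even this toy breakdown exposes the pattern the webinar argues for: bounce rate climbing with each additional second of latency.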
Helps create awareness of how to maintain software quality every step of the way, walking you through the software life cycle and pointing out the best practices you should follow to develop and release a high-quality product.
Performance Testing in New Contexts, Oredev (Eric Proegler)
Virtualization, cloud deployments, and cloud-based tools have challenged and changed performance testing practices. Today's performance tester can summon tens of thousands of virtual users from the cloud in a few minutes, at a cost far lower than the expensive on-premise installations of yesteryear.
Meanwhile, systems under test have changed more. Updated software stacks have increased the complexity of scripting and performance measurement, but the biggest changes are in the nature and quantities of resources powering the systems. Interpreting resource usage when resources are shared on a private virtualization platform is exceedingly difficult. Understanding resources when they live in a large public cloud is impossible.
Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of testers to continually optimize the value of their work. It is the process of three mutually supportive activities—learning, test design, and test execution—done in parallel. With skill and practice, exploratory testers typically uncover an order of magnitude more problems than when the same amount of effort is spent on procedurally scripted testing. All testers conduct exploratory testing in one way or another, but few know how to do it systematically to obtain the greatest benefits. Even fewer can articulate the process. Jon Bach looks at specific heuristics and techniques of exploratory testing that will help you get the most from this highly productive approach. Jon focuses on the skills and dynamics of exploratory testing, and how it can be combined with scripted approaches.
OpenNebulaConf 2013 - Monitoring of OpenNebula Installations, by Florian Heigl (OpenNebula Project)
The complexity of a typical OpenNebula installation brings a special set of challenges on the monitoring side. In this talk, I will show monitoring of the full stack, from the physical servers to the storage layer and the ONE daemon. Providing an aggregated view of this information allows you to see the real impact of a given failure. I would also like to present a use case for a “closed-loop” setup where new VMs are automatically added to the monitoring without human intervention, allowing for an efficient approach to monitoring the services an OpenNebula setup provides.
Bio:
I’ve been into virtualization and storage for a long time, and I like the amount of abstraction OpenNebula offers. Professionally I have been a Unix systems administrator for most of my working life. I’ve also done systems integration and monitoring work on the Check_MK project. Now I’m one of the very few Nagios experts in Germany who aren’t working for one of the 3-5 leading Nagios outfits, and as such I’m able to speak freely about what I think works best for users. My strength is simply sitting down and listening to what people really need.
Coaching Teams in Creative Problem Solving (Flowa Oy)
Agile has helped teams collaborate and organize work better. That’s great: better teamwork and a better understanding of the work definitely help a team do the right things. Agile has also led the way toward technical practices such as Continuous Integration and Delivery, Test-Driven Development and the SOLID architecture principles. Great: these things definitely help the team do things right.
Then again, most of the time in software projects goes into problem solving and similar creative acts, and agile has relatively little to offer in these areas. Currently, agile is about neither creativity nor problem solving.
This coaching circle session will focus on the creative core of software development: solving novel, original and broad problems more creatively and effectively, all the time. I will introduce some principles and tools I’ve found useful when helping people solve hard problems and find creative solutions.
Feedback Loops Between Tooling and Culture (Chris Winters)
A discussion of how the tools technologists create impact culture, and how culture impacts those tools. Not really a standalone presentation, but hopefully useful.
Observability for Emerging Infra (What Got You Here Won't Get You There) (Charity Majors)
Distributed systems, microservices, containers and schedulers, polyglot persistence... modern infrastructure patterns are fluid and dynamic, chaotic and transient. So why are we still using LAMP-stack-era tools to debug and monitor them? We'll cover some of the many shortcomings of traditional metrics and logs (and of APM tools backed by metrics or logs), and show how complexity is their kryptonite. So how do we handle the coming complexity Armageddon? What are the implications for teams, for roles, and for the way we build and ship software? Let's talk about the industry-wide shifts underway from metrics to events, from monitoring to observability, and from caring about the system as a whole to the health of each and every request.
Ho Chi Minh City Software Testing Conference January 2015
Software Testing in the Agile World
Website: www.hcmc-stc.org
Author: Lee Copeland
Over the years writers have defined testing as a process of finding, a process of evaluating, a process of measuring, a process of improving. For a quarter of a century we as testers have been focused on the internal process of testing, while generally disregarding its real purpose. The real purpose of testing is to create information. James Bach nailed it when he wrote, “The ultimate reason testers exist is to provide information that others on the project use to create things of value.” That is why testing exists — to provide information of value. So, when managers complain that testing “costs too much” perhaps they are really trying to say, “I’m not getting enough valuable information to justify the cost of testing.” When testers say “my management doesn’t see the value in our work” perhaps they are really trying to say, “My management doesn’t value the information I’m providing to them.” To prove our worth, to increase the value of testing, we must first focus on testing’s purpose — providing valuable information — not its process. Join Lee as he discusses why quantifying the value of testing is difficult work — perhaps that’s why we concentrate so much on testing process—that’s much easier. But until we do this difficult work, until we prove our worth through quantifying our contribution, we should expect the bombardments to continue.
This webinar looks at performance metrics such as load time, time to interact, page size, page composition, and adoption of performance best practices.
Techniques, Tips & Tools for Mobile App Testing (SOASTA)
Today, mobile app testing expertise is in high demand and offers an exciting career path in test/QA. However, the recent Future of Testing study, sponsored by TechWell, noted that the biggest challenge in mobile―just behind having enough time to test―is expertise. Brad Johnson shares how companies from banking to retail use data from real production users, continuous integration frameworks, cloud-based testing platforms, and real mobile devices to help ensure every user experiences top-rated performance—all the time. Brad shares insight about what to test for mobile, when to first automate, and a metric that will drive real change. Explore how organizations are communicating across teams and improving developer-to-tester collaboration with new approaches. Testers need to develop new skills ranging from software coding requirements to data science. Take away tips and ideas to impact your company, enhance your skill set, and propel your career with exciting options and new challenges.
Metrics, Metrics Everywhere (But Where the Heck Do You Start?) (SOASTA)
Not surprisingly, there’s no one-size-fits-all performance metric (though life would be simpler if there were). Different metrics will give you different critical insights into whether or not your pages are delivering the results you want — both from your end user’s perspective and ultimately from your organization’s perspective. Join Tammy Everts, and walk through various metrics that answer performance questions from multiple perspectives. You’ll walk away with a better understanding of your options, as well as a clear understanding of how to choose the right metric for the right audience.
"Mobile Test Coverage: It's not just about the devices!"
With the proliferation of mobile devices, there is a renewed discussion on Test Coverage as it relates to mobile functional testing. Many of our customers have taken a fresh look at their mobile strategy with a renewed focus on their test coverage strategy. What do they discover? That test coverage is not just about devices. Join us for an hour and we will walk you through the key areas that you need to focus on to keep your mobile strategy covered.
Webinar: What the Top eCommerce Companies Know About Their Web & Mobile Performance (SOASTA)
Companies with a foothold in eCommerce know that slow load times mean user abandonment. This can hurt not only revenue but also, in the long term, the company's image. That makes it all the more striking how many eCommerce websites still operate without any monitoring or optimization at all.
In this webinar you will learn:
the risks you expose yourself to when working with unsuitable, undersized, or no solutions at all
how to scale a test internationally, quickly and without friction, to get a realistic picture
how real user monitoring reveals your customers' exact load and latency times, and where you can optimize
what role real-time information plays in preventing customer abandonment as it happens
how to generate test scenarios from real user journeys
how the SOASTA Platform gives you control, visibility and eCommerce confidence
Register now to learn from the performance experts, and to hear from customers who have already used the SOASTA Platform to unlock revenue potential and prevent customer abandonment.
Get Ready for Changes to Load Testing
It's time to step up your load testing game! We've invited performance industry veterans Mark Tomlinson and Brad Johnson to join us to share market trends and tips so you can get caught up on the latest in the world of performance testing.
We will also dive into what's coming next in performance testing. Many companies are still using load testing practices designed for a bygone era. But websites and apps have changed development forever, resulting in the need for a new approach.
In this webinar, you'll learn:
- What's breaking down the load testing status quo?
- How apps are changing the performance testing game
- How Centers of Excellence survive with DevOps and the agile release cycle
Join us and we'll talk about the road ahead in Load Testing, and what you need to make sure you're ready for this change.
DACH Webinar - Protecting Your Brand Image: Lessons from the Facebook Crash (SOASTA)
The recent Facebook & Instagram outages not only fueled growing skepticism among social media users, they also aggravated further problems, such as how these brands are viewed and perceived. Outages and slow-loading websites damage a company's reputation.
Long-term doubts about the company take root in users' minds, leading to user abandonment and falling revenue. There are lessons to learn today if you want to keep winning customers and enjoying a good reputation: performance is everything when it comes to your applications.
Register for the SOASTA webinar and learn:
why website and app performance demand a new approach to testing,
how a contextual view gives you better insight into your customers' expectations and online experiences,
how to protect yourself from the fallout of a website crash
Join us to learn how to tune your web performance by combining synthetic, real-user, and competitive benchmarking metrics to give you the most complete dataset needed to optimize your site – and beat your competitors.
You will learn:
- Choosing the right tool for the job
- Using competitive benchmarking data
- Mining key performance analytics that matter
- Putting performance in the context of your business
Join us for this webinar that will introduce you to the latest mobile testing technology and processes implemented by Forbes Fortune 5 Companies and the Top 10 Internet Retailers, reducing time to market and giving back valuable time to your business with every test cycle.
With the implementation of leading technology, people and processes, our customers have turned taxing four-week test cycles into simple overnight automation.
Give us an hour and let us show you the seven steps on the path to successful Mobile Test Automation.
Topics we will cover will include:
1. Know your User
2. Know your App
3. Know your Matrix
4. Know your Devices
5. Know your plan to Automate
6. Know your Performance
7. Know your Edge
It's all about conversion. Every e-commerce business that cares about improving revenue is focused on optimizing its website to improve customer experience.
However, most companies still lack the ability to create realistic website performance tests due to limitations in their current test methods.
In this webinar you'll learn:
1) How to tie business metrics (ROI) with website performance metrics and real user data
2) How to build performance tests that will model user behavior on your site
3) How to correlate data analytics so you can troubleshoot bottlenecks to improve performance
Load Testing on Demand – CloudTest on Demand webinar presentation – SOASTA
SOASTA CloudTest on Demand is fast, expert help with acute or looming performance problems in your web or mobile applications – all as a fully managed service.
Join our live webinar and learn:
How load tests can be completed end to end within just a few days – regardless of size, geography, and complexity
How real-time analysis during the load test makes it possible to optimize while the test is still running
How the deep expertise of our performance engineers lifts you to a new level of performance
Accelerate Web and Mobile Testing for Continuous Integration and Delivery – SOASTA
Accelerating Web and Mobile Testing for Continuous Delivery
Automated load and performance testing of your web and mobile apps can ensure quality throughout the application lifecycle. Automated and continuous testing can increase the speed and accuracy of application readiness, and eliminate time-consuming, error-prone manual processes.
In this webinar, led by SOASTA experts, you will learn:
• How to create a continuous load and performance testing framework
• How to trigger testing every time code changes are delivered
• How to use TouchTest for mobile apps functional testing
• How to use CloudTest for load testing
Testing mobile apps is different. There are more form factors, more combinations, more complexity and more users. You need a checklist to be sure you don't overbuild or under test. SOASTA and Utopia have the experience and technology you need to be successful.
Join this free webinar and learn:
The most common mobile app issues
Missed areas like app interrupts, poor connections and device settings
When to automate for functionality and performance
Technology for end-to-end mobile testing
How to collect mobile user information for continuous improvement
Utopia Solutions Founder and CTO, Lee Barnes and the SOASTA team will share customer experiences and demonstrations that will help you cross off every critical element of your mobile testing checklist.
How To Use Jenkins for Continuous Load and Mobile Testing with SOASTA & Cloud... – SOASTA
How to use Jenkins for Continuous Load Testing and Mobile Automation
Today’s rapid development pace demands continuous testing, and Jenkins, the leading open source automation platform, has emerged as the hub of continuous delivery. SOASTA and CloudBees have tapped Jenkins to enable more test types and approaches that utilize cloud and agile processes for higher quality apps.
Join this free webinar and learn:
How to use Jenkins for continuous delivery and load testing of mobile applications
How to incorporate cloud resources into your development and test environments
Using the largest global test cloud for load generation
CloudBees’ on premise, in the cloud and hybrid solutions for continuous delivery with Jenkins
SOASTA’s Jenkins plugins for testing with real mobile devices and tracking performance baselines
Experts from both companies will share stories and demonstrations that will help you implement a continuous approach to quality.
Reducing 3rd party content risk with Real User Monitoring – SOASTA
Trusting 3rd party content providers without full visibility puts your web and mobile business at risk with single points of failure (SPOF), outages and serious performance bottlenecks outside your control. Real User Monitoring (RUM) empowers you to set appropriate Service Level Agreements (SLAs) and delivers indisputable facts to keep your providers honest. But, you also need to know what to expect.
In this webinar you’ll learn:
Common third-party services and how to measure them with RUM
Using synthetic monitoring services to know what to expect
Understanding and testing for SPOF
Setting reality-based SLAs with your providers
Information sharing for full accountability
Join Web Performance veteran Cliff Crocker for this free webinar on a hot issue.
Tis The Season: Load Testing Tips and Checklist for Retail Seasonal Readiness – SOASTA
‘Tis the Season – Holiday 2014 eCommerce Quality Checklist
Past Webinar
Archived (originally presented June 26th, 2014)
This year, your holiday traffic will increase 15% or more, and 50% of the users will be mobile. Recent research shows 71% of your revenue comes from multi-channel users, so if you haven’t started planning, you’re already behind. Leading retailers are preparing for Holiday ’14 and testing their production sites for multi-channel access to 115% of capacity, or beyond! If you’re not one of them, your plans are incomplete.
Cover your risks. Join Tenzing and SOASTA experts as they discuss the must-do checklist for peak performance.
In this webinar you’ll learn:
How to align your Marketing and Quality plans
How to cover the multichannel user experience
How to test early in the lab and fully in production
How to optimize end-to-end site speed and performance
When to freeze for the winter
Don’t miss this opportunity to “shop early” and see how the leading retailers are already beating the odds with cloud testing.
Modern Load Testing: Move Your Load Testing from the Past to the Present – SOASTA
Load testing approaches of the past support application delivery of the past. Times have changed. Today’s leading companies do more testing in less time with higher coverage of their web and mobile applications, every day.
In this webinar you’ll learn:
- Why user experience is king
- How to do front-to-back performance testing for mobile and web apps
- How to deploy web and mobile load tests with global scale and distribution
- How live production testing is enabled by real-time analysis and control
- How real user monitoring drives test creation and guides production testing
The time is now to move your testing from the past to the present! Join us for tips and tricks to get you there.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This talk covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
GraphRAG is All You Need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... – Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf – 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Changing rules 1_stopcheating_slideshare
1. SOASTA Webinar Series
Rewriting the Rules of Performance Testing
CLOUD TESTING RULE 1: Stop Cheating and Start Running Realistic Tests
2. BC (Before Cloud): We Worked With What We Had…
Before the web, when apps served hundreds, there was… (circa 1991)
When apps peaked at thousands, we had a few more options (turn of the 21st century)
“Virtual Users” were a valuable commodity: 1 VU = $1,200!
Yet many were left wanting. Untested websites, 2011: 75%
3. Necessity Led to Workarounds
How we’ve “cheated” to get the job done
1) Modified “Think Time” to stretch VUs
Example: 2 virtual users ≠ 1 user divided in 2
2) Extrapolated results based on small lab tests
Educated or assisted guessing is no match for measuring at real scale
4. Necessity Led to Workarounds
How we’ve “cheated” to get the job done
3) Tested pages or assets in a silo, ignoring the realistic pace and flow of user behavior
This optimizes limited test hardware, but disregards session state, caching, etc.
4) Accepted blind spots by focusing on limited, single metrics (e.g. response time)
Without complete end-to-end views, everything’s a black box
5. Let’s Look at the NEW RULES
Establishing Accuracy and Realism
Scott Barber
6. 1) Modifying Think Time: The Wrong Way
“If all you have is a hammer, everything looks like a nail”
-- Bernard Baruch
To Cheat a Software License
• We did what we had to so we could generate some semblance of load
• We often found real and serious performance issues
• Compared to *not* cheating, we added value
• But they were often not the “right” ones
• We still couldn’t simulate production, and we still got burned
Stretch Limited Hardware
• We had the same issue with hardware, so we overloaded what we had
• Again, we found real and serious performance issues
• Again, it increased value, but again, we rarely found the “right” issues
• And, again, we got burned in production
7. 1) Modifying Think Time: The Right Way
The only way to simulate production…
…is to simulate production.
Users Think… and Type
• Guess what? They all do it at different speeds!
• Guess what else? It's your job to figure out how to model and script those varying speeds
Determine how long they think:
• Log files
• Industry research
• Observation
• Educated guess/Intuition
• Combinations are best
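The "model those varying speeds" advice can be sketched in code. A minimal, hypothetical example (the distribution and its parameters are illustrative, not from the deck): instead of a fixed pause, sample each virtual user's think time from a skewed distribution fitted to your own log-file observations.

```python
import random

# Hypothetical think-time model: real users pause for varying lengths
# of time, so sample from a skewed (lognormal) distribution instead of
# using a constant. mu/sigma are illustrative; fit them to your logs.
def think_time(mu=1.5, sigma=0.6, floor=0.5, ceiling=60.0):
    """Return a randomized think time in seconds (lognormal, clamped)."""
    t = random.lognormvariate(mu, sigma)
    return max(floor, min(t, ceiling))

# In a virtual-user loop you would sleep for think_time() between steps
# rather than a fixed pause:
#   fetch(page); time.sleep(think_time()); fetch(next_page)
samples = [think_time() for _ in range(10_000)]
print(min(samples), max(samples), sum(samples) / len(samples))
```

The clamp keeps outliers from stalling a virtual user forever while still preserving the long right tail that fixed think times hide.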
8. 1) Modifying Think Time: The Right Way
When you get it wrong, it’s… Frightening. When you get it right, it’s… Not Frightening.
9. 2) Extrapolating Capacity: The Wrong Way
Extrapolating performance test results is black magic
DON’T DO IT
Unless you are, or were trained by, Connie Smith, Ph.D.
The most common type of bad extrapolation…
• 1 leg of an n-leg system ≠ 1/nth capacity
• Fractional virtual resources ≠ fractional capacity
Other types of bad extrapolation...
• Faster processors in production ≠ faster response time
• More resources ≠ faster response time
• Any extrapolation that presumes linear correlations
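A toy queueing model makes the "no linear correlations" point concrete (this M/M/1 illustration is my addition, not the deck's method): mean response time grows hyperbolically with utilization, so results measured at low load cannot be scaled linearly to predict high load.

```python
# Illustrative M/M/1 queueing formula: R = S / (1 - rho), where S is
# the pure service time and rho the utilization. Response time grows
# hyperbolically, which is why linear extrapolation gets burned.
def response_time(service_time_s: float, utilization: float) -> float:
    """Mean response time of an M/M/1 queue."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_s / (1.0 - utilization)

S = 0.1  # 100 ms of pure service time
for rho in (0.5, 0.8, 0.9, 0.95):
    print(f"utilization {rho:.0%}: response time {response_time(S, rho):.2f}s")
# Going from 50% to 95% utilization is under 2x more load, but
# response time grows 10x (0.2s -> 2.0s): nothing linear here.
```

Real systems are messier than M/M/1, which only strengthens the slide's point: measure, don't extrapolate.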
10. 2) Measuring Capacity: The Right Way
Realistically, there are 3 ways to predict capacity
Trust your gut & cross your fingers
• Gut feelings are sometimes very accurate
• They can also cost you your job
Reverse cross-validate
• Use post-release production data to modify & re-measure test environment
• Use new results to make predictions for prod
• Check new predictions vs. reality; revise, repeat
Find a way to run some tests in the actual production environment
• You can learn a lot from loads below expected peak
• A few hours of scheduled maintenance in the middle of the night can change *everything*
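The reverse cross-validation loop above can be sketched numerically. This is a deliberately simplified illustration (the numbers and the single scalar calibration factor are my assumptions; real calibration is rarely this simple):

```python
# Hypothetical reverse cross-validation sketch: calibrate the lab
# environment against observed production data, then re-predict.
def calibration_factor(prod_observed: float, lab_measured: float) -> float:
    """Ratio of production behavior to lab behavior for the same load."""
    return prod_observed / lab_measured

def predict_prod(lab_result: float, factor: float) -> float:
    """Scale a new lab measurement by the current calibration factor."""
    return lab_result * factor

# Iteration 1: at 1,000 users, lab showed 0.8s but production showed 1.2s.
factor = calibration_factor(prod_observed=1.2, lab_measured=0.8)  # 1.5

# A new lab test at 2,000 users measures 1.4s; predict production:
prediction = predict_prod(1.4, factor)  # 2.1s
print(f"predicted production response time: {prediction:.1f}s")

# After the next release, compare the prediction against reality,
# recompute the factor, and repeat: predict -> check -> revise.
```

The value is in the loop, not the arithmetic: each release tightens the calibration between the test environment and production.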
11. 3) Modeling User Flows: The Wrong Way
You can’t test everything…
…the possibilities are literally endless.
Implementing functional use cases or scenarios…
• Will have you scripting until the sun explodes, AND
• Will regularly miss “easy” stuff by choosing and prioritizing poorly
Picking the most common, or most “important” flow
• Is unlikely to catch the worst performance issues
• Is likely to lead the application to be “hyper-tuned” for that scenario
• Is likely to yield unwanted surprises
13. 3) Modeling User Flows: The Right Way
Tell lots of little lies? …No! FIBLOTS:
Frequent: common activities (get from logs)
Intensive: e.g. resource hogs (get from developers/admins)
Business critical: even if these activities are both rare and not risky
Legal: SLAs, contracts, and other stuff that will get you sued
Obvious: what the users will see and are most likely to complain about; what is likely to earn you bad press
Technically risky: new technologies, old technologies, places where it’s failed before, previously under-tested areas
Stakeholder mandated: don’t argue with the boss (too much)
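A FIBLOTS-style workload model ultimately becomes a weighted mix of user flows that virtual users draw from. A minimal sketch (the flow names and weights are hypothetical, not from the deck):

```python
import random

# Hypothetical workload mix: each flow earned its place via a FIBLOTS
# criterion, and its weight approximates how often real users do it.
WORKLOAD = {
    "browse_catalog": 0.40,  # Frequent (from logs)
    "search":         0.25,  # Frequent + Obvious
    "checkout":       0.15,  # Business critical
    "account_signup": 0.10,  # Technically risky (recently rewritten)
    "bulk_export":    0.05,  # Intensive (resource hog)
    "admin_report":   0.05,  # Stakeholder mandated
}

def pick_flow(rng: random.Random) -> str:
    """Choose the next user flow for a virtual user, weighted by mix."""
    flows, weights = zip(*WORKLOAD.items())
    return rng.choices(flows, weights=weights, k=1)[0]

rng = random.Random(42)
picks = [pick_flow(rng) for _ in range(10_000)]
print({f: picks.count(f) / len(picks) for f in WORKLOAD})
```

Note that rare-but-critical flows (checkout, bulk_export) stay in the mix with small weights rather than being dropped, which is exactly the trap the "most common flow" approach falls into.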