Throughout the years, Lightning Talks have been a popular part of the STAR conferences. If you’re not familiar with the concept, Lightning Talks consist of a series of five-minute talks by different speakers within one presentation period. Lightning Talks are the opportunity for speakers to deliver their single biggest bang-for-the-buck idea in a rapid-fire presentation. And now, lightning has struck the STAR keynotes. Some of the best-known experts in testing—James Bach, Jon Bach, Michael Bolton, Jennifer Bonine, Hans Buwalda, Bob Galen, John Fodeh, Dawn Haynes, Geoff Horne, and Griffin Jones—will step up to the podium and give you their best shot of lightning. Get ten keynote presentations for the price of one—and have some fun at the same time.
How to Deliver the Right Software (Specification by Example) – Asier Barrenetxea
A talk about Specification by Example: the problems it tries to tackle and how to solve them.
I gave this talk at findmypast.com during the company’s weekly “lunch and learn” meeting.
This is a new version of my previous presentation about “Specification by Example”:
http://www.slideshare.net/AsierBarrenetxea1/specification-by-example-33594438
In chess, the word blunder means a very bad move by someone who should know better. Even though functional test automation has been around for a long time, people still make some very bad moves and serious blunders. The most common misconception in automation is thinking that manual testing is the same as automated testing. This misguided thinking accounts for most of the blunders in system-level test automation. Dorothy Graham takes you on a tour of these blunders, including the Stable-Application Myth (you can’t start automating until the application is stable), Inside-the-Box Thinking (automating only the obvious test execution), and the Project/Non-Project Dilemma (either failing to treat automation like a project, by not funding or resourcing it, or treating automation as only a project). Other blunders include Testing-Tools-Test, Silver Bullet, Automating the Wrong Thing, Who Needs GPS, How Hard Can It Be, and Isolationism. New skills, approaches, and objectives are needed, or you’ll end up with inefficient automation, high maintenance costs, and wasted effort. Join Dot to discover how you can avoid these common blunders and achieve valuable test automation.
03 – Chomu prohramisty ne testuiut (Why Programmers Don’t Test) – Yurii Chulovskyi – IT Event 2013 – Igor Bronovskyy
Have you ever run into the situation where, at a retrospective, you agree that you will write tests, and everyone understands the benefit they would bring, but even after that, tests get written only very rarely?
This presentation is an attempt to find the causes, and a reflection on the question: “What technical knowledge, and what changes in the technical and social environment, are needed to improve tests?”
Yurii Chulovskyi
http://itevent.if.ua/lecture/chomu-programisti-ne-testuyut
Use Model-Based Testing to Navigate the Software Forest – TechWell
Even seemingly simple software systems can be a dense forest of intersecting logical pathways which may leave you wondering if your testing was robust enough. Traditional test cases are flawed since they only execute the pathways the tester considered at the time the test case was written, and they will execute the same way—every time and without variation. Jon Fetrow shows how, using model-based testing, you can create a map of your software forest and answer the question “Did you test enough?” Jon discusses the use of models to catch defects in the requirements and design phase by helping visualize requirements interactions, and how to use models to aid in test case development. He demonstrates how his team implemented an automation test framework based on models, integrated the model-based tests into their continuous integration test approach, and incorporated the models as part of the requirements trace matrix. Discover how a model-based test automation framework can address shortcomings of traditional test cases—both manual and automated.
Specification by example and agile acceptance testing – gojkoadzic
Specification by example and agile acceptance testing, a presentation given to HSBC developers on 21/09/09. For more info see http://specificationbyexample.com
The 7 minute accessibility assessment and app rating system – Aidan Tierney
"Is it accessible?" When a formal assessment isn't possible, here's a framework for rapid user-centered testing with a five-star rating system that non-specialists will understand.
Build Your Mobile App Quality and Test Strategy – TechWell
Let’s build a mobile app quality and testing strategy together. Whether you have a web, hybrid, or native app, building a quality and testing strategy means (1) knowing what data and tools you have available to make agile decisions, (2) understanding your customers and your competitors, and (3) testing your app under real-world conditions. Jason Arbon guides you through the latest techniques, data, and tools to ensure the awesomeness of your mobile app quality and testing strategy. Leave this interactive session with a strategy for your very own app—or one you pretend to own. The information Jason shares is based on data from Appdiff’s next-gen mobile app testing platform, lessons from Applause/uTest’s crowd, text mining hundreds of millions of app store reviews, and in-depth discussions with top mobile app development teams.
Based on hands-on Agile experience acquired over multiple delivery projects and Agile coaching/consulting assignments, Vatsala and Aman share common Agile Testing dilemmas, and possible solutions, tying them to the principle of "moving testing upstream".
Presented at Next Generation Testing conference in Bangalore, India (July 2014).
Accessibility Support Baseline: Balancing User Needs Against Test Effort – Aidan Tierney
A guided conversation about the accessibility support baseline and an opportunity to find out what others are supporting and to share thoughts and experiences. The support baseline is the term WCAG uses for the set of user technologies that an application is expected to work with, specifically the combinations of assistive technology (AT), operating system (OS), and, where relevant, browser and device. WCAG doesn’t define what needs to be in the baseline because it depends on your users and on the technologies available to them.
Is testing only one combination for web and one for mobile sufficient? Changes in desktop screen reader usage, mobile fragmentation, and frequent updates to OS and AT may mean that it isn’t. But there are hundreds if not thousands of potential combinations. How do we balance support for these combinations against testing effort?
We will discuss:
• Who or what does a baseline impact?
• Variables to account for in a baseline
• Effort and costs
• Assistive Technology usage data
• Organizational challenges to supporting AT
• Sample web and mobile support baselines.
David Hayman – Say What? Testing a Voice Activated System – EuroSTAR 2010 – TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Say What? Testing a Voice Activated System by David Hayman. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
It Seemed a Good Idea at the Time: Intelligent Mistakes in Test Automation – TechWell
Some test automation ideas seem very sensible at first glance but contain pitfalls and problems that can and should be avoided. Dot Graham describes five of these “intelligent mistakes”—1. Automated tests will find more bugs quicker. (Automation doesn’t find bugs, tests do.) 2. Spending a lot on a tool must guarantee great benefits. (Good automation does not come “out of the box” and is not automatic.) 3. Let’s automate all of our manual tests. (This may not give you better or faster testing, and you will miss out on some benefits.) 4. Tools are expensive so we have to show a return on investment. (This is not only surprisingly difficult but may actually be harmful.) 5. Because they are called “testing tools,” they must be tools for testers to use. (Making testers become test automators may be damaging to both testing and automation.) Join Dot for a rousing discussion of “intelligent mistakes”—so you can be smart enough to avoid them.
Better Security Testing: Using the Cloud and Continuous Delivery – TechWell
Even though many organizations claim that security is a priority, that claim doesn’t always translate into supporting security initiatives in software development or test. Security code reviews often are overlooked or avoided, and when development schedules fall behind, security testing may be dropped to help the team “catch up.” Everyone wants more secure development; they just don’t want to spend time or money to get it. Gene Gotimer describes his experiences with implementing a continuous delivery process in the cloud and how he integrated security testing into that process. Gene discusses how to take advantage of the automated provisioning and automated deploys already being implemented to give more opportunities along the way for security testing without schedule disruption. Learn how you can incrementally mature a practice to build security into the process—without a large-scale, time-consuming, or costly effort.
Specification-by-Example: A Cucumber Implementation – TechWell
We've all been there. You work incredibly hard to develop a feature and design tests based on written requirements. You build a detailed test plan that aligns the tests with the software and the documented business needs. When you run the tests against the software, it all falls apart because the requirements were updated without everyone being informed. But help is at hand. Enter behavior-driven development and Cucumber, a tool for running automated acceptance tests. Join Mary Thorn as she explores the nuances of Cucumber and shows you how to implement specification-by-example, behavior-driven development, and agile acceptance testing. By fostering collaboration for implementing active requirements via a common language and format, Cucumber bridges the communication gap between business stakeholders and implementation teams. If you experience developers not coding to requirements, testers not getting requirements updates, or customers who feel out of the loop and don't get what they ask for, be here!
Tune Agile Test Strategies to Project and Product Maturity – TechWell
For optimum results, you need to tune an agile project’s test strategies to fit the different stages of project and product maturity. Testing tasks and activities should be lean enough to avoid unnecessary bottlenecks and robust enough to meet your testing goals. Exploring what “quality” means for various stakeholder groups, Anna Royzman describes testing methods and styles that fit best along the maturity continuum. Anna shares her insights on strategic ways to use test automation, when and how to leverage exploratory testing as a team activity, ways to prepare for live pilots and demos of the real product, approaches to refine test coverage based on customer feedback, and techniques for designing a production “safety net” suite of automated tests. Leave with a better understanding of how to satisfy your stakeholders’ needs for quality, and a roadmap for tuning your agile test strategies.
New Testing Standards Are on the Horizon: What Will Be Their Impact? – TechWell
The history of testing standards has not always been auspicious. Testing standards documents have been expensive to obtain, limited in scope, inflexible in expectations, and inconsistent. However, they contain important lessons learned from experienced practitioners—if a tester is willing to overcome the obstacles to get to the useful information. A set of new international standards is coming. These new standards are tailorable, consistent, and comprehensive in scope. In addition, they will be freely available (some are already). Claire Lohr provides a complete roadmap to all of the available—or soon-to-be-available—testing-related standards. Learn where to go for testing process guidelines, complete definitions of all test design techniques, full examples of test documentation (for both agile and traditional projects), and free international standards documents. Take away a “start-up guide” for how different types of projects can use the new standards along with valuable tips and practical lessons you can get from these standards.
In today’s competitive world, more and more HTML5 applications are being developed for mobile and desktop platforms. Spotify has partnered with world-renowned organizations to create high quality apps to enrich the user experience. Testing a single application within a few months can be a challenge. But it's a totally different beast to test multiple world-class music discovery apps every week. Alexander Andelkovic shares insights into the challenges they face coordinating all aspects of app testing to meet their stringent testing requirements. Alexander describes an agile way to use the Kanban process to help out. He shares lessons learned including the need for management of acceptable levels of quality, support, smoke tests, and development guidelines. If you are thinking of starting agile app development or want to streamline your current app development process, Alexander’s experience gives you an excellent starting point.
Testers have been taught they are responsible for all testing. Some even say “It’s not tested until I run the product myself.” Eric Jacobson thinks this old school way of thinking can hurt a tester’s reputation and—even worse—may threaten team success. Learning to recognize opportunities where you may NOT have to test can eliminate bottlenecks and make you everyone’s favorite tester. Eric shares eight patterns from his personal experiences where not testing was the best approach. Examples include patches for critical production problems that can’t get worse, features that are too technical for the tester, cosmetic bug fixes with substantial test setup, and more. Challenge your natural testing assumptions. Become more comfortable with approaches that don’t require testing. Eliminate waste in your testing process by asking, “Does this need to be tested? By me?” Take back ideas for managing not testing, including using lightweight documentation for justification. Not testing may actually be a means to better testing.
Build Your Own Performance Test Lab in the Cloud – TechWell
Many cloud-based performance and load testing tools claim to offer “cost-effective, flexible, pay-as-you-go pricing.” However, the reality is often neither cost-effective nor flexible. With many vendors, you will be charged whether or not you use the time (not cost effective), and you must pre-schedule test time (not always when you want and not always flexible). In addition, many roadblocks are thrown up—from locked-down environments that make it impossible to load test anything other than straightforward applications, to firewall, security, and IP spoofing issues. Join Leslie Segal to discover when it makes sense to set up your own cloud-based performance test lab, either as a stand-alone or as a supplement to your current lab. Learn about the differences in licensing tools, running load generators on virtual machines, the real costs, and data about various cloud providers. Take home a road map for setting up your own performance test lab—in less than twenty-four hours.
Innovations in Test Automation: It’s Not All about Regression – TechWell
Although classic test automation, which usually focuses on regression testing, has its place in testing, there is much more you can do to improve testing productivity and its value to the project and your organization. Through experience-based examples, video clips, and demonstrations, John Fodeh shares one company’s innovation journey to improve its test automation practice. John illustrates how they learned to apply automated “test monkeys” that explore the software in new ways each time a test is executed. Then, he describes how the test team uses weighted probability tables to increase each test’s “intelligence” factor. Find out how they implemented model-based testing to improve automation effectiveness and how this practice led to the even more valuable behavior-driven testing approach they employ today. With these and other alternative approaches you, too, can get more mileage from your automation efforts. Join John to get inspired and start your own journey of innovation with new ideas that enhance your test automation strategy.
Planning Your Agile Testing: A Practical Guide – TechWell
Traditional test plans are incompatible with agile software development because we don't know all the details about all the requirements up front. However, in an agile software release, you still must decide what types of testing activities will be required—and when you need to schedule them. Janet Gregory explains how to use the Agile Testing Quadrants, a model identifying the different purposes of testing, to help your team understand your testing needs as you plan the next release. Janet introduces you to alternative, lightweight test planning tools that allow you to plan and communicate your big picture testing needs and risks. Learn how to decide who does what testing—and when. Determine what types of testing to consider when planning an agile release, the infrastructure and environments needed for testing, what goes into an agile “test plan,” how to plan for acquiring test data, and lightweight approaches for documenting your tests and recording test results.
Keynote: Lean Software Delivery: Synchronizing Cadence with Context – TechWell
Daily, we are told that adopting agile, PaaS, DevOps, crowdsourced testing, or any of the myriad of current buzzwords will help us deliver better software faster. However, for the majority of software development organizations, naïve agile transformations that don’t look beyond the needs of developers will fail to produce the promised results. Mik Kersten says that instead of focusing on development alone to transform our software delivery, we must acknowledge the different contexts and mismatched cadences that define the work of business analysts, developers, testers, and project managers. For example, a developer working in an agile team may deliver code every two weeks, but the performance testing group may need more time for its work, while the operations group has a planned release cycle of once per quarter. To achieve optimum flow, which is the goal of end-to-end lean delivery, we must identify the different cadences of each group and interconnect the collaborators and their work—requirements, development, testing, and deployment.
Introducing Mobile Testing to Your Organization (TechWell)
Mobile is an integral part of our daily lives, and if it’s not already part of your business model, it soon will be. When that happens, will you be ready to tackle the demands of testing web and native mobile apps? From the perspective of a test lead, Eric Montgomery describes the challenges Progressive Insurance, a company with a strong web presence, recently faced—learning new technologies, transforming the approach of testers from PC-based to mobile-based, and working with testing tools in a market that has yet to see a definitive leader emerge. Learn from Eric's experiences and return to your job with ideas on training web testers to be mobile testers. Take back proven techniques for testing mobile devices, ways of choosing devices for test, methods of sharing information, approaches for developing a sense of community among testers, criteria for choosing tools from the available market, and strategies for keeping up with rapid technology changes.
Pay Now or Pay More Every Day: Reduce Technical Debt Now! (TechWell)
Is your team missing delivery dates? Is your velocity inconsistent from sprint to sprint? Are customers complaining about defects or the time it takes to add new features? These are signs that you are mired in technical debt, a metaphor that describes the long-term costs of doing something in a quick and dirty way and not going back to clean up the mess. Fadi Stephan shares a technical debt management approach to help you make prudent decisions on how much effort to invest in reducing technical debt. Discover ways to measure the quality of your current code base and determine the cost of eventual rework hanging over your system. Learn how to engage executives and get buy-in on a debt removal plan that will improve system design, increase the quality of your code, and return your team to high productivity. If you are burdened with technical debt, the choice is to pay now or continue paying more every day, forever.
Stakeholders always want to release when they think we’ve finished testing. They believe we have discovered “all of the important problems” and “verified all of the fixes”—and now it’s time to reap the rewards. However, as testers we can still assist in improving software by learning about problems after the code has gone live—especially if it’s a website. Jon Bach explores why and how eBay maintains a post-ship site quality mindset in which testers continue to learn from live A/B testing, operational issues, customer sentiment analysis, discussion forums, and customer call patterns—just to name a few. Jon explains what eBay’s Live Site Quality team learns every day about what they just released to production. Take away new ideas on what you can do to test and improve value—even after you’ve shipped.
The nature of exploration, coupled with the ability of testers to rapidly apply their skills and experience, make exploratory testing a widely used test approach—especially when time is short. Unfortunately, exploratory testing often is dismissed by project managers who assume that it is not reproducible, measurable, or accountable. If you have these concerns, you may find a solution in a technique called session-based test management (SBTM), developed by Jon Bach and his brother James to specifically address these issues. In SBTM, testers are assigned areas of a product to explore, and testing is time boxed in “sessions” that have mission statements called “charters” to create a meaningful and countable unit of work. Jon discusses—and you practice—the skills of exploration using the SBTM approach. He demonstrates a freely available, open source tool to help manage your exploration and prepares you to implement SBTM in your test organization.
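A sketch of the “countable unit of work” idea behind SBTM: time-boxed session sheets roll up into simple, reportable metrics. The field names and the test/bug/setup percentage breakdown below are illustrative assumptions, not the official session-sheet format or any tooling Jon uses.

```python
# Minimal sketch of rolling up SBTM session sheets into countable metrics.
# Field names and the TBS (test/bug/setup) percentages are illustrative.

def summarize_sessions(sessions):
    """Roll up time-boxed sessions into totals and a TBS breakdown."""
    total_minutes = sum(s["minutes"] for s in sessions)

    def weighted(key):
        # Time-weighted percentage of effort spent on this activity.
        spent = sum(s["minutes"] * s[key] / 100 for s in sessions)
        return round(100 * spent / total_minutes, 1)

    return {
        "sessions": len(sessions),
        "total_minutes": total_minutes,
        "charters": sorted({s["charter"] for s in sessions}),
        "test_pct": weighted("test_pct"),    # test design and execution
        "bug_pct": weighted("bug_pct"),      # bug investigation and reporting
        "setup_pct": weighted("setup_pct"),  # session setup
    }

sessions = [
    {"charter": "Explore checkout with invalid cards",
     "minutes": 90, "test_pct": 70, "bug_pct": 20, "setup_pct": 10},
    {"charter": "Explore search result paging",
     "minutes": 60, "test_pct": 50, "bug_pct": 40, "setup_pct": 10},
]
print(summarize_sessions(sessions))
```

Because each session is a fixed-size, charter-scoped unit, totals like these give managers the reproducibility and accountability that exploratory testing is often assumed to lack.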
Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of testers to continually optimize the value of their work. It is the process of three mutually supportive activities done in parallel: learning, test design, and test execution. With skill and practice, exploratory testers typically uncover an order of magnitude more problems than when the same amount of effort is spent on procedurally scripted testing. All testers conduct exploratory testing in one way or another, but few know how to do it systematically to obtain the greatest benefits. Even fewer can articulate the process. Jon Bach looks at specific heuristics and techniques of exploratory testing that will help you get the most from this highly productive approach. Jon focuses on the skills and dynamics of exploratory testing, and how it can be combined with scripted approaches.
Automated frameworks are undoubtedly important in an agile environment and as part of the day-to-day testing process. Here are some insights to guide any automation project.
You name the testing topic, and Alan Page has an opinion on it, hands-on practical experience with it—or both. Spend the morning with Alan as he discusses a variety of topics, trends, and tales of software engineering and software testing. In an interactive format loosely based on discovering new testing ideas—and bringing new life to some of the old ideas—Alan shares experiences and stories from his twenty-year career as a software tester. Topics may include philosophical rants about code coverage and test pass rates; thoughts on the developer/tester relationship and quality ownership; and insights on test leadership and the real future of test. Join Alan for a unique opportunity to participate in intriguing discussions about testing that will expand your testing knowledge, give you the insight you need to grow your own career, and help your organization succeed.
Saksham Sarode - Innovation Through Introspection - EuroSTAR 2012 (TEST Huddle)
EuroSTAR Software Testing Conference 2012 presentation on Innovation Through Introspection by Saksham Sarode. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
In which Professor Koopman talks about why embedded software is often bad, why machine learning will make it more complicated... and why embedded software is critically important.
How do you know if you have too much process, too little, or just the right amount? If you ignore process completely, unpredictability and chaos can follow. If you define the process to the nth degree and follow it religiously, the work grinds to a halt. Janet Gregory shares her experiences about how to find the tastiest balance of process and creativity for your projects and organization. She proposes that a formally defined process is sometimes necessary, but that it should be the exception. Explore with Janet the many variables—team size, complexity, criticality, organization structure, and culture—you must assess to find just the right balance. Learn how to make existing processes better by adding visibility to the process, getting team members’ input, and adapting documentation you need. Learn how to transform complicated processes into simpler ones—such as reporting a simple “thumbs up” or “thumbs down”—and go home with new tools to sprinkle on just enough process.
Successful Test Automation: A Manager’s View (TechWell)
Many organizations invest substantial time and effort in test automation but do not achieve the significant returns they expected. Some blame the tool they used; others conclude test automation just doesn't work in their situation. The truth, however, is often very different. These organizations are typically doing many of the right things, but they are not addressing key issues that are vital to long-term test automation success. Describing the most important issues that you must address, Mark Fewster helps you understand and choose the best approaches for your organization—no matter which automation tools you use. We’ll discuss both management issues—responsibilities, automation objectives, and return on investment—and technical issues—testware architecture, pre- and post-processing, and automated comparison techniques. If you are involved with managing test automation and need to understand the key issues in making test automation successful, join Mark for this enlightening tutorial.
5 reasons you'll love to hate Agile Development (Arin Sime)
This is a presentation that Arin Sime of AgilityFeat gave at the 2013 Innovate Virginia conference on five reasons why you will love to hate agile development. He presents five areas that, as an agile coach, he has often seen teams struggle with when moving to agile methods. For each area, Arin discusses why you should try it anyway and suggests strategies for tackling the problems head on.
Ben Walters - Creating Customer Value With Agile Testing - EuroSTAR 2011 (TEST Huddle)
EuroSTAR Software Testing Conference 2011 presentation on Creating Customer Value With Agile Testing by Ben Walters. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Do you ever feel you have lost confidence in your own abilities? Why does this happen? Isabel Evans spends a lot of time painting. Someone once commented, “Why are you doing this, when you are not very good at it?” Gradually she stopped drawing and painting, intimidated by a conventional vision of what good art should look like. At the same time, she experienced a parallel loss of confidence in her professional abilities. Attempting creative pursuits like drawing and painting is essential to our cognitive, emotional, and creative abilities, and she began to understand the correlation between her creative activities and her confidence. Making errors, being wrong, failing: that is a generous gift we receive when we practice outside our skill level. By staying in a comfort zone and repeating successes, we stagnate. As Isabel started to create again, she thought, “I don’t feel good at it, but I do feel good doing it.” The difference was that she was learning and having ideas. Through re-engaging with failure, together with the comradeship of friends and colleagues, including at Women Who Test, Isabel has regained confidence in her professional abilities and been able to reboot her career and her joy. Join Isabel to share a journey from self-perceived failure to recovery and renewed learning.
Instill a DevOps Testing Culture in Your Team and Organization (TechWell)
The DevOps movement is here. Companies across many industries are breaking down siloed IT departments and federating them into product development teams. Testing and its practices are at the heart of these changes. Traditionally, IT organizations have been staffed with mostly manual testers and a limited number of automation and performance engineers. To keep pace with development in the new “you build it, you own it” environment, testing teams and individuals must develop new technical skills and even embrace coding to stay relevant and add greater value to the business. DevOps really starts with testing. Join Adam Auerbach as he explains what DevOps is and how it relates to testing. He describes how testing must change from top to bottom and how to assess your own environment to identify improvement opportunities. Adam dives into practices like service virtualization, test data management, and continuous testing so you can understand where you are now and identify steps needed to instill a DevOps testing culture in your team and organization.
Test Design for Fully Automated Build Architecture (TechWell)
Imagine this … As soon as any developed functionality is submitted into the code repository, it is automatically subjected to the appropriate battery of tests and then released straight into production. Setting up the pipeline capable of doing just that is becoming more and more common and something you need to know about. But most organizations hit the same stumbling block—just what IS the appropriate battery of tests? Automated build architectures don't always lend themselves well to the traditional stages of testing. In this hands-on tutorial, Melissa Benua introduces you to key test design principles—applicable to organizations both large and small—that allow you to take full advantage of the pipeline's capabilities without introducing unnecessary bottlenecks. Learn how to make highly reliable tests that run fast and preserve just enough information to let testers and developers determine exactly what went wrong and how to reproduce the error locally. Explore ways to reduce overlap while still maintaining adequate test coverage. Take back ideas about which test areas could benefit from being combined into a single suite and which areas could benefit most from being broken out altogether.
System-Level Test Automation: Ensuring a Good Start (TechWell)
Many organizations invest a lot of effort in test automation at the system level but then have serious problems later on. As a leader, how can you ensure that your new automation efforts will get off to a good start? What can you do to ensure that your automation work provides continuing value? This tutorial covers both theory and practice: Dot Graham explains the critical issues for getting a good start, and Chris Loder describes his experiences in getting good automation started at a number of companies. The tutorial covers the most important management issues you must address for test automation success, particularly when you are new to automation, and how to choose the best approaches for your organization—no matter which automation tools you use. Focusing on system-level testing, Dot and Chris explain how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts to promote success, what you can realistically expect in benefits, and how to report them. They explain—for non-techies—the key technical issues that can make or break your automation effort. Come away with your own clarified automation objectives and a draft test automation strategy to use to plan your own system-level test automation.
Testing Transformation: The Art and Science for Success (TechWell)
Technologies, testing processes, and the role of the tester have evolved significantly in the past few years with the advent of agile, DevOps, and other new approaches. It is critical that we testing professionals evaluate ourselves and continue to add tangible value to our organizations. In your work, are you focused on the trivial or on real game changers? Jennifer Bonine describes critical elements that help you artfully blend people, process, and technology to create a synergistic relationship that adds value. Jennifer shares ideas on mastering politics, maneuvering core vs. context, and innovating your technology strategies and processes. She explores how new processes can be introduced in an organization, what the role of organizational culture is in determining the success of a project, and how you can know what tools will add value vs. simply adding overhead and complexity. Jennifer reviews critically needed tester skills and discusses a continual learning model to evolve your skills and stay relevant. This discussion can lead you to technologies, processes, and skills you can stake your career on.
We’ve all been there. We work incredibly hard to develop a feature and design tests based on written requirements. We build a detailed test plan that aligns the tests with the software and the documented business needs. And when we put the tests to the software, it all falls apart because the requirements were changed without informing everyone. Mary Thorn says help is at hand. Enter behavior-driven development (BDD), and Cucumber and SpecFlow, tools for running automated acceptance tests and facilitating BDD. Mary explores the nuances of Cucumber and SpecFlow, and shows you how to implement BDD and agile acceptance testing. By fostering collaboration for implementing active requirements via a common language and format, Cucumber and SpecFlow bridge the communication gap between business stakeholders and implementation teams. In this workshop, practice writing feature files with the best practices Mary has discovered over numerous implementations. If you experience developers not coding to requirements, testers not getting requirements updates, or customers who feel out of the loop and don’t get what they ask for, Mary has answers for you.
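The mechanism that makes Cucumber and SpecFlow work is binding plain-language Given/When/Then steps to executable code. The real tools supply this machinery through their own APIs; the hand-rolled registry and runner below are purely a pedagogical sketch of the idea, with an invented shopping-cart scenario.

```python
import re

# Pedagogical sketch of the Given/When/Then binding idea behind Cucumber
# and SpecFlow. The step registry, runner, and scenario are all invented
# for illustration; real tools provide this machinery.

STEPS = []

def step(pattern):
    """Register a step definition keyed by a regex over the step text."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

def run_scenario(lines, context):
    """Match each plain-language step to its binding and execute it."""
    for line in lines:
        text = re.sub(r"^(Given|When|Then|And)\s+", "", line)
        for pattern, fn in STEPS:
            match = pattern.fullmatch(text)
            if match:
                fn(context, *match.groups())
                break
        else:
            raise AssertionError(f"No step definition for: {line}")

@step(r"a cart with (\d+) items")
def given_cart(ctx, n):
    ctx["items"] = int(n)

@step(r"I add (\d+) more items")
def when_add(ctx, n):
    ctx["items"] += int(n)

@step(r"the cart contains (\d+) items")
def then_contains(ctx, n):
    assert ctx["items"] == int(n)

context = {}
run_scenario([
    "Given a cart with 2 items",
    "When I add 3 more items",
    "Then the cart contains 5 items",
], context)
```

Because the scenario text stays readable by business stakeholders while the bindings live in code, the same file serves as both the requirement and the acceptance test, which is the communication bridge the abstract describes.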
Develop WebDriver Automated Tests—and Keep Your Sanity (TechWell)
Many teams go crazy because of brittle, high-maintenance automated test suites. Jim Holmes helps you understand how to create a flexible, maintainable, high-value suite of functional tests using Selenium WebDriver. Learn the basics of what to test, what not to test, and how to avoid overlapping with other types of testing. Jim includes both philosophical concepts and hands-on coding. Testers who haven't written code should not be intimidated! We'll pair you up to make sure you're successful. Learn to create practical tests dealing with advanced situations such as input validation, AJAX delays, and working with file downloads. Additionally, discover when you need to work together with developers to create a system that's more easily testable. This tutorial focuses primarily on automating web tests, but many of the same concepts can be applied to other UI environments. Demos and labs will be in C# and Java using WebDriver. Leave this tutorial having learned how to write high-value WebDriver tests—and stay sane while doing so.
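The brittleness described above usually traces back to timing: AJAX-driven pages render asynchronously, so tests that assert immediately fail intermittently. The snippet below is a Selenium-free sketch of the explicit-wait pattern that addresses this; in real WebDriver code you would reach for the library's own wait support rather than rolling your own, and the simulated page here is invented for illustration.

```python
import time

# Selenium-free sketch of the explicit-wait pattern: poll a condition
# until it holds or a timeout elapses, instead of asserting immediately.

def wait_until(condition, timeout=5.0, poll=0.1,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = clock() + timeout
    while True:
        value = condition()
        if value:
            return value
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        sleep(poll)

# Simulated page: the "element" appears only on the third poll,
# standing in for an AJAX-rendered control.
attempts = {"count": 0}

def element_present():
    attempts["count"] += 1
    return "element" if attempts["count"] >= 3 else None

found = wait_until(element_present, timeout=2.0, poll=0.01)
print(found)
```

Waiting on an observable condition rather than sleeping a fixed interval is what keeps suites both fast (no padding) and reliable (no race with the page).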
Eliminate Cloud Waste with a Holistic DevOps Strategy (TechWell)
Chris Parlette maintains that renting infrastructure on demand is the most disruptive trend in IT in decades. In 2016, enterprises spent $23B on public cloud IaaS services. By 2020, that figure is expected to reach $65B. The public cloud is now used like a utility, and like any utility, there is waste. Who's responsible for optimizing the infrastructure and reducing wasted expenses? It’s DevOps. The excess expense, known as cloud waste, comprises several interrelated problems: services running when they don't need to be, improperly sized infrastructure, orphaned resources, and shadow IT. There are a few core tenets of DevOps—holistic thinking, no silos, rapid useful feedback, and automation—that can be applied to reducing your cloud waste. Join Chris to learn why you should include continuous cost optimization in your DevOps processes. Automate cost control, reduce your cloud expenses, and make your life easier.
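To make the "services running when they don't need to be" category concrete, here is a back-of-the-envelope sketch of parking a non-production instance outside working hours. The hourly rate and schedule are assumed numbers, not figures from the talk.

```python
# Back-of-the-envelope cloud-waste arithmetic: a non-production instance
# needed only during working hours can be parked the rest of the week.
# The rate and schedule below are assumptions for illustration.

HOURS_PER_WEEK = 24 * 7   # 168 hours in a week
WORK_HOURS = 5 * 10       # weekdays only, 10 hours per day
RATE = 0.20               # assumed cost in dollars per instance-hour

always_on = HOURS_PER_WEEK * RATE
scheduled = WORK_HOURS * RATE
savings = 1 - scheduled / always_on
print(f"${always_on:.2f}/wk vs ${scheduled:.2f}/wk -> {savings:.0%} saved")
```

Even this toy calculation shows why automated scheduling is a core cost-control lever: the instance delivers the same value for roughly a third of the spend.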
Transform Test Organizations for the New World of DevOps (TechWell)
With the recent emergence of DevOps across the industry, testing organizations are being challenged to transform themselves significantly within a short period of time to stay meaningful within their organizations. It’s not easy to plan and approach these changes considering the way testing organizations have remained structured for ages. These challenges start from foundational organizational structures and can cut across leadership influence, competencies, tools strategy, infrastructure, and other dimensions. Sumit Kumar shares his experience assisting various organizations to overcome these challenges using an organized DevOps enablement framework. The framework includes radical restructuring, turning the tools strategy upside down, multidimensional workforce enablement supported by infrastructure changes, redeveloped collaboration models, and more. From his real-world experiences, Sumit shares tips for approaching this journey and explains the roadmap for testing organizations to transform themselves to lead quality in DevOps.
The Fourth Constraint in Project Delivery—Leadership (TechWell)
All too often, the triple constraints—time, cost, and quality—are bandied about as if they are the be-all, end-all. While they are important, leadership—the fourth and larger underpinning constraint—influences the first three. Statistics on project success and failure abound, and these measurements are usually taken against the triple constraints. According to the Project Management Institute, only 53 percent of projects are completed within budget, and only 49 percent are completed on time. If so many projects overrun budget and are late, we can’t really say, “Good, fast, or cheap—pick two.” Rob Burkett talks about leadership at every level of a team. He shares his insights and stories gleaned from his years of IT and project management experience. Rob speaks to some of the glaring difficulties in the workplace in general and some specifically related to IT delivery and project management. Leave with a clearer understanding of how to communicate with teams and team members, and gain a better understanding of how you can be a leader—up and down your organization.
Resolve the Contradiction of Specialists within Agile Teams (TechWell)
As teams grow, organizations often draw a distinction between feature teams, which deliver the visible business value to the user, and component teams, which manage shared work. Steve Berczuk says that this distinction can help organizations be more productive and scale effectively, but he recognizes that not all shared work fits into this model. Some work is best handled by “specialists,” that is, people with unique skills. Although a team composed entirely of T-shaped people is ideal, certain skills are hard to come by and are used irregularly across an organization. Since these specialists often need to work closely with teams, rather than working from their own backlog, they don’t fit into the component team model. The use of shared resources presents challenges to the agile planning model. Steve Berczuk shares how teams such as those providing infrastructure services and specialists can fit into a feature+component team model, and how variations such as embedding specialists in a scrum team can both present process challenges and add significant value to both the team and the larger organization.
Pin the Tail on the Metric: A Field-Tested Agile Game (TechWell)
Metrics don’t have to be a necessary evil. If done right, metrics can help guide us to make better forward-looking decisions, rather than being used for simply managing or monitoring. They can help us identify trade-offs between options for what to do next, rather than serving as punitive or, worse, purely managerial measures. Steve Martin won’t be giving the Top Ten List of field-tested metrics you should use. Instead, in this interactive mini-workshop, he leads you through the critical thinking necessary for you to determine what is right for you to measure. First, Steve explores why you want to measure something—whether it’s for a team, a portfolio, or even an agile transformation. Next, he provides multiple real-life metrics examples to help drive home concepts behind characteristics of good and bad metrics. Finally, Steve shows how to run his field-tested agile game—Pin the Tail on the Metric. Take back this activity to help you guide metrics conversations at your organization.
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams (TechWell)
A hierarchy is an organizational network that has a top and a bottom, and where position is determined by rank, importance, and value. A holarchy is a network that has no top or bottom and where each person’s value derives from their ability, rather than position. As more companies seek the benefits of agile, leaders need to build and sustain delivery capability while scaling agile without introducing unnecessary process and overhead. The Agile Performance Holarchy (APH) is an empirical model for scaling and sustaining agility while continuing to deliver great products. Jeff Dalton designed the APH by drawing from lessons learned observing and assessing hundreds of agile companies and teams. The APH helps implement a holarchy—a system composed of interacting organizational units called holons—centered on a series of performance circles that embody the behaviors of high performing agile organizations. Jeff describes how APH provides guidelines in the areas of leadership, values, teaming, visioning, governing, building, supporting, and engaging within an all-agile organization. Join Jeff to see what the APH is all about and how you can use it in your team and organization.
A Business-First Approach to DevOps Implementation (TechWell)
DevOps is a cultural shift aimed at streamlining intergroup communication and improving operational efficiency for development and operations groups. Over time, inclusion of other IT groups under the DevOps umbrella has become the norm for many organizations. But even broadening the boundaries of DevOps, the conversation has been largely devoid of the business units’ place at the table. A common mistake organizations make while going through the DevOps transformation is drawing a line at the IT boundary. If that occurs, a larger, more inclusive silo within the organization is created, operating in an informational vacuum and causing operational inefficiency and goal misalignment. Sharing his experiences working on both sides of the fence, Leon Fayer describes the importance of including business units in order to align technology decisions with business goals. Leon discusses inclusion of business units in existing agile processes, benefits of cross-departmental monitoring, and a business-first approach to technology decisions.
Databases in a Continuous Integration/Delivery Process (TechWell)
DevOps is transforming software development with many organizations adopting lean development practices, implementing continuous integration (CI), and performing regular continuous deployment (CD) to their production environments. However, the database is largely ignored and often seen as a bottleneck in the DevOps process. Steve Jones discusses the challenges of database development and why many developers find the database to be an impediment to the CD process. Steve shares the techniques you can use to fit a database into the DevOps process. Learn how to store database code in a version control system, and the differences between that and application code. Steve demonstrates a CI process with SQL code and uses automated testing frameworks to check the code. Steve then shows how automated releases with manual gates can reduce the stress and risk of database deployments while ensuring consistent, reliable, repeatable releases to QA, UAT, and production.
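One common way to fit a database into CI, in the spirit of the techniques described above, is versioned migrations: change scripts live in version control and are applied in order, with the applied version recorded in the database itself. The sketch below uses SQLite and an invented table and script list purely for illustration; production teams typically use a dedicated migration tool rather than hand-rolling this.

```python
import sqlite3

# Sketch of versioned database migrations for CI: scripts are applied in
# order, and the database records which version it has reached, so the
# runner is safe to re-run. Table and scripts are invented examples.

MIGRATIONS = [
    (1, "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customers ADD COLUMN email TEXT"),
    (3, "CREATE INDEX idx_customers_email ON customers(email)"),
]

def migrate(conn):
    """Apply any migrations newer than the database's recorded version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()
    return conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(migrate(conn))  # applies all pending migrations
print(migrate(conn))  # idempotent: a second run applies nothing new
```

Because the runner is idempotent, the same scripts can flow unchanged from a CI build database through QA and UAT to production, which is exactly the consistent, repeatable release path the abstract calls for.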
Mobile Testing: What—and What Not—to Automate (TechWell)
Organizations are moving rapidly into mobile technology, which has significantly increased the demand for testing of mobile applications. David Dangs says testers naturally are turning to automation to help ease the workload, increase potential test coverage, and improve testing efficiency. But should you try to automate all things mobile? Unfortunately, the answer is not always clear. Mobile has its own set of complications, compounded by a wide variety of devices and OS platforms. Join David to learn what mobile testing activities are ripe for automation—and those items best left to manual efforts. He describes the various considerations for automating each type of mobile application: mobile web, native app, and hybrid applications. David also covers device-level testing, types of testing, available automation tools, and recommendations for automation effectiveness. Finally, based on his years of mobile testing experience, David provides some tips and tricks to approach mobile automation. Leave with a clear plan for automating your mobile applications.
Cultural Intelligence: A Key Skill for Success (TechWell)
Diversity is becoming the norm in everyday life. However, introducing global delivery models without a proper understanding of intercultural differences can lead to difficulty, frustration, and reduced productivity. Priyanka Sharma and Thena Barry say that in our diverse world, we need teams with people who can cross these boundaries, communicate effectively, and build the diverse networks necessary to avoid problems. We need to learn about cultural intelligence (CI) and cultural quotient (CQ). CI is the ability to relate and work effectively across cultures. CQ is the cognitive, motivational, and behavioral capacity to understand and respond to beliefs, values, attitudes, and behaviors of individuals and groups. Together, CI and CQ can help us build behavioral capacities that aid motivation, behavior, and productivity in teams as well as individuals. Priyanka and Thena show how to build a more culturally intelligent workplace with tools and techniques from Leading with Cultural Intelligence, as well as content from the Hofstede cultural model. In addition, they illustrate the model with real-life experiences and demonstrate how they adapted in similar circumstances.
Turn the Lights On: A Power Utility Company's Agile Transformation (TechWell)
Why would a century-old utility with no direct competitors take on the challenge of transforming its entire IT application organization to an agile methodology? In an increasingly interconnected world, the expectations of customers continue to evolve. From smart meters to smart phones, IoT is creating a crisis point for industries not accustomed to rapid change. Glen Morris explains that pizzas can now be tracked by the minute and packages at every stop, and customers expect the same customer service model to exist for all industries—including power. Glen examines how to create momentum and transform non-IT-focused industries to an agile model. If you are struggling with gaining traction in your pursuit of agile within your business, Glen gives you concrete, practical experiences to leverage in your pursuit. Finally, he communicates how to gain buy-in from business partners who have no idea or concern about agile or its methodologies. If your business partners look at you with amusement when you mention the need for a dedicated Product Owner, join Glen as he walks you through the approaches to overcoming agile skepticism.
Scale: The Most Hyped Term in Agile Development Today (TechWell)
Scrum is everywhere. More than 90 percent of agile teams use it. But for many organizations wanting to scale agile, one team using Scrum is not enough. Dave West says the Nexus Framework, created by Ken Schwaber, the co-creator of Scrum, provides an exoskeleton for Scrum. Nexus allows multiple teams to work together to produce an integrated increment regularly. It addresses the key challenges of scaling agile development by adding new yet minimal events, artifacts, and roles to the Scrum framework. Dave discusses Nexus, addresses its boundaries, and explains what else is needed for agile to thrive in an organization. Dave explores how organizations have transitioned to agile, and examines their successes and challenges in implementing Scrum, how they envision scaling with Nexus, and goals for creating a Scrum Studio.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Key Trends Shaping the Future of Infrastructure – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains come only when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
- Execution from the test manager
- Orchestrator execution result
- Defect reporting
- SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
- State of global ICS asset and network exposure
- Sectoral targets and attacks, as well as the cost of ransom
- Global APT activity, AI usage, actor and tactic profiles, and implications
- Rise in volumes of AI-powered cyberattacks
- Major cyber events in 2024
- Malware and malicious payload trends
- Cyberattack types and targets
- Vulnerability exploit attempts on CVEs
- Attacks on counties – USA
- Expansion of bot farms – how, where, and why
- In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
- Why are attacks on smart factories rising?
- Cyber risk predictions
- Axis of attacks – Europe
- Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Keynote: Lightning Strikes the Keynotes
1. KW3
Keynote
5/1/2013 4:30:00 PM
Lightning Strikes the Keynotes
Presented by: Lee Copeland, Software Quality Engineering
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
2. Lee Copeland
With more than thirty years of experience as an information systems professional at commercial and nonprofit organizations, Lee Copeland has held technical and managerial positions in applications development, software testing, and software process improvement. Lee has developed and taught numerous training courses on software development and testing issues and is a well-known speaker with Software Quality Engineering. Lee presents at software conferences in the United States and abroad. He is the author of the popular reference book, A Practitioner’s Guide to Software Test Design.
3. 5/13/2013
Lightning Strikes the Keynotes
Moderated by Lee Copeland (lee@sqe.com)
Featuring: James Bach, Jennifer Bonine, Dawn Haynes, Jon Bach, Hans Buwalda, Michael Bolton, Bob Galen, Geoff Horne, John Fodeh, Griffin Jones
8. Error Message
[Two screenshots: Via Rail, between Montreal and Toronto, 2007]
But I can’t contact my… oh, never mind.
10. Why you shouldn’t let an unsupervised algorithm choose your sponsored links (1) and (2)
[Screenshots: Vimeo’s web page, Spring 2010]
11. Why you shouldn’t let an unsupervised algorithm choose your sponsored links (3)
[Screenshots: Vimeo’s web page, Spring 2010; Google Chrome]
22. Misconceptions of Test Automation (and keyword testing)
Hans Buwalda

"Automation is easy, no need to think about it much"
Comments:
- I'm still waiting to see my first "easy" automation project
- Development is hard, testing is harder, automated testing is the hardest
- If you can't do automation well, be ready to lose time and money
- If you can do automated testing well... you're in an enviable position
23. "Test automation means automating manual tests"
Comments:
- A car is not the same as a carriage with an engine
- Good automated testing is not the same as good automation of good manual testing
- Automating manual test designs tends to be cumbersome, uninspiring, maintenance sensitive, and hard to scale
- How you organize and design your tests is the main driver for automation success

"Keywords is a method"
Comments:
- Keywords are not much more than a format to write tests in, in itself not much different from (good) coding of test cases
- Some of the worst tests I have seen were keyword tests
- I do believe however that keywords are just about the only way to go for big and complex projects (in addition to exploratory testing). They just need a method
- In my approach, "test modules" play a central role in organizing tests, to achieve effective test development and successful automation
24. Example of a method with keywords: "Action Based Testing"

[Diagram: a Test Module Plan drives Test Module 1 through Test Module N; each module has its own Objectives and Tests, the tests are composed of Actions, and the actions drive the AUTOMATION layer.]

Sample interaction test (low level; columns: action, window, control, value or expected property):

  enter       log in    user name   jdoe
  enter       log in    password    StarEast
  check prop  log in    ok button   enabled = true

Sample business test (high level; log in takes user/password, rent car takes first/last/brand/model, check bill takes last/total):

  log in      jdoe    StarEast
  rent car    John    Renter    Ford        Escape
  rent car    John    Renter    Chevrolet   Volt
  check bill  Renter  140.42
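As a minimal sketch, the business test on this slide (log in, rent car twice, check the bill) could be executed by a small action-word dispatcher. Nothing here reflects Hans's actual framework: the `RentalSession` class, the dispatcher, and the rental rates (75.00 and 65.42, chosen so the 140.42 bill check passes) are all invented for illustration.

```python
# Hypothetical action-word interpreter for the slide's business test.
# Rental rates are made up so the sample bill check passes.
PRICES = {("Ford", "Escape"): 75.00, ("Chevrolet", "Volt"): 65.42}

class RentalSession:
    def __init__(self):
        self.user = None
        self.bills = {}          # last name -> running total

    # --- action implementations (a real one would drive the UI) ------
    def log_in(self, user, password):
        self.user = user

    def rent_car(self, first, last, brand, model):
        self.bills[last] = self.bills.get(last, 0.0) + PRICES[(brand, model)]

    def check_bill(self, last, total):
        actual = self.bills.get(last, 0.0)
        assert abs(actual - float(total)) < 0.005, \
            f"expected {total}, got {actual:.2f}"

# Map action words (as written in the test) to method names.
ACTIONS = {"log in": "log_in", "rent car": "rent_car", "check bill": "check_bill"}

def run_test(session, rows):
    """Each row is (action word, *arguments); dispatch to the session."""
    for action, *args in rows:
        getattr(session, ACTIONS[action])(*args)

business_test = [
    ("log in",     "jdoe", "StarEast"),
    ("rent car",   "John", "Renter", "Ford", "Escape"),
    ("rent car",   "John", "Renter", "Chevrolet", "Volt"),
    ("check bill", "Renter", "140.42"),
]

run_test(RentalSession(), business_test)
```

The point of the format is visible even in this toy: the test rows stay readable by non-programmers, while all technical interfacing lives behind the action implementations.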
" to do automated testing you need to
be a good programmer "
Comments:
The focus should be on testing and test design, not on programming tests
A good tester is not automatically a good programmer, or vice versa
I don't believe we should replace all testers with programmers "in test"
A good programmer can contribute greatly to an airplane control system,
but is not automatically a good airline pilot
Even automation itself is a profession, quite different from regular
programming
22
25. "If there are automation problems, tests should be debugged"

[The business test from the previous slide is shown again: log in (jdoe, StarEast), rent car (John Renter, Ford Escape), rent car (John Renter, Chevrolet Volt), check bill (Renter, 140.42).]

Comments:
- "Thou should never debug tests"
- If observed results are not the expected results, it is not an automation problem; either the tester or the developer was off
- If the application under test's UI isn't accepting your input, run lower-level tests first
- If actions aren't working, test them and make them better before running your tests with them
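The advice to exercise actions at a lower level before trusting them can be sketched as a self-test for a single action. Everything here is illustrative and not from any real framework: the `enter` action simulates a form as a dictionary, and the self-test deliberately feeds it a bad control name to confirm it fails loudly.

```python
# Hypothetical sketch of "test the actions before running tests with them":
# each action word gets its own low-level check, so a later business-test
# failure points at the application, not at the test plumbing.

def enter(fields, control, value):
    """Action: type a value into a named control (form simulated as a dict)."""
    if control not in fields:
        raise KeyError(f"no such control: {control}")
    fields[control] = value

def action_self_test():
    """Low-level test of the 'enter' action itself."""
    form = {"user name": "", "password": ""}
    enter(form, "user name", "jdoe")
    assert form["user name"] == "jdoe"
    try:
        enter(form, "user nane", "jdoe")      # typo must fail loudly
    except KeyError:
        pass
    else:
        raise AssertionError("bad control name was silently accepted")

action_self_test()   # run before trusting any test that uses 'enter'
```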
" you need an ROI analysis to
determine which tests to automate "
Comments:
$
$ $$ $ $ $
$ $ $ $$
$ $ $ $
$ $
One of the most commonly found statements on test automation
I consider In my view, in a good automated testing effort (my definition
of 'good'), automation is a secondary practical matter
may have to address technology issues to interface with a system under test
good automation is cheap and re-usable, and has a high payoff in time and money
I would rather see an ROI on the tests than on their automation
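For contrast, the break-even arithmetic an automation ROI claim relies on is trivial to write down; the hard (and, per the slide, often misleading) part is estimating the inputs. The numbers below are invented for illustration.

```python
# Back-of-envelope break-even for automating one test.
# All costs are in the same unit (here: minutes of effort); the
# example figures are made up.
import math

def break_even_runs(build_cost, manual_cost_per_run, automated_cost_per_run):
    """Smallest number of runs after which automation is cheaper than manual."""
    saving = manual_cost_per_run - automated_cost_per_run
    if saving <= 0:
        return None               # automation never pays back
    return math.ceil(build_cost / saving)

# e.g. 8 hours to automate, 30 min per manual run, 3 min per automated run:
runs = break_even_runs(480, 30, 3)    # -> 18 runs to break even
```

Note how sensitive the answer is to guesses like build cost and maintenance (omitted here entirely), which is exactly why the slide argues the calculation can be more harmful than helpful.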
26. "Automated tests are dumb"
Comments:
- They often are very mechanical and boring, in particular when based one-on-one on requirements or specifications, but they don't have to be
- Distinguish between an analytical activity ("what to test") and a creative activity ("how to test"), and be mean...
- It is the responsibility of the testers to ensure tests are not dumb; automation is not an excuse
- Lame tests are not likely to find interesting bugs

"Homework": are these misconceptions?
- "Test automation is the same as programming"
- "The most important activity in an automation effort is selecting a tool"
- "Automation is most suitable for regression testing"
- "Test automation is a technical challenge"
- "If you use keywords your test automation will be successful"
- "To have more automation, you just need more people"
36. WREST: Workshop on Regulated Software Testing
Software subject to review by an internal or external regulatory body

Purpose and Format
- Share and generate ideas and techniques
- Provide a forum for people interested in the topic
- Participation is free and open to all
- LAWST-style rules of engagement

37. Workshop Structure
- Facilitated
- Series of experiential presentations and group discussions
- Atmosphere is collaborative, supportive, and constructive
- Focus on the practical and useful

More Information
- Next WREST: Friday, May 3rd, 2013, hosted by SQE here at STAREAST
- Contact: Karen N. Johnson, John McConda, Griffin Jones
- Website: wrestworkshop.com
38. Our Thanks To
James Bach, Jennifer Bonine, Dawn Haynes, Jon Bach, Hans Buwalda, Michael Bolton, Bob Galen, Geoff Horne, John Fodeh, Griffin Jones