These are the latest slides for Jonathan's and my talk on Testing Katas. They cover using katas to learn testing skills and include an example kata that we have run in-house.
This session will cover the theoretical and practical aspects of surveys and survey design. Topics will include: pre-survey design, actualizing a survey, analyzing the results, why and when surveys are appropriate, determining the survey audience, existing Yale surveys, and some survey tools popular at Yale including SurveyMonkey, ClassesV2, QuestionMark, Sharepoint, Qualtrics, and ITS-provided solutions.
Staff from the Stat Lab will touch on some of the salient topics from their Survey Design workshop (free to the Yale community) and discuss the expertise available to the Yale community with quantitative and qualitative (text) analysis and tools such as SPSS Text Analysis for Surveys and NVivo.
Presenters will be Themba Flowers, Manager of the Social Science Stat Lab and Scott Matheson, Web Manager at the Yale University Library.
At UNC Chapel Hill, the User Experience and Assessment department regularly runs usability tests to inform our decision making and prioritize our users’ perspectives as we make changes. But there are more things to test than there are hours in the day. Our projects have a variety of stakeholders who are very interested in improving their services, and we found ourselves with a long list of tests we wanted to run.
To catch up, we adapted Harvard Libraries’ Test Fest model: five tests run simultaneously, with five participants rotating through the set of tests. Over a span of two hours, we completed 25 individual usability tests. In this one event, we caught up on much of our testing backlog.
This session will outline how we planned and executed Test Fest and what we learned from using this approach. We’ll also discuss how we approached analyzing the large amount of qualitative data that was gathered during testing, via affinity diagrams and lots of post-it notes.
The focus of this session is on our methodologies with an aim to include time for attendees to discuss how they would have approached the backlog, setting up Test Fest, and analyzing the data.
Presentation at Empirical Librarians 2018 in Knoxville, TN.
Overcoming the Obstacles, Pitfalls, and Dangers of Unit Testing - Stephen Ritchie
Have you ever bumped into a wall with your automated tests? Many developers bump into various roadblocks and hurdles when writing test code. Are your test methods starting to fail because the code-under-test uses DateTime.Now? Are your automated integration tests failing because the database they integrate with keeps changing? Do you have an explosion of test methods, with the ratio of test code to code-under-test way too high? Is your effort to refactor and improve code overwhelmed by the time it takes to rewrite all those failing unit tests?
This presentation is about clearing away automated testing obstacles, avoiding common pitfalls, and staying away from dangerous practices.
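The DateTime.Now problem described above is usually cleared away by injecting a clock instead of reading the system time directly. A minimal sketch of the idea in Python (the class and names are illustrative, not from the talk; the .NET equivalent would inject an interface around DateTime.Now):

```python
from datetime import datetime

class Greeter:
    """Toy class whose behaviour depends on the time of day."""
    def __init__(self, clock=datetime.now):
        # The clock is injected, so tests can pin "now" instead of
        # depending on the real system clock.
        self._clock = clock

    def greeting(self):
        return "Good morning" if self._clock().hour < 12 else "Good afternoon"

# Production code uses the default (real) clock: Greeter().greeting()
# A test pins the clock to a known instant, so it never flakes:
nine_am = lambda: datetime(2024, 1, 1, 9, 0)
assert Greeter(clock=nine_am).greeting() == "Good morning"
```

The same seam also works for the database problem: inject the dependency, substitute a controlled one in tests.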
One of my recent endeavours has been to create a "career model" for the testers within my organisation. I sat in my office at home and designed "the testing wheel". I wanted it to be simple, inclusive and offer questions but few answers.
Careers are long and winding. My own career has not been a linear progression, so building a linear model seemed wrong to me. But, this brought me into conflict with the ideas of others.
My experience report will show the highs and lows of this model after introducing it into the wild: how people still tried to measure and rank with it, how mad people got when I refused to answer the questions it posed, and finally how it spread and did battle with wider organisations' career models.
Machine Learning is a branch of artificial intelligence that lets us use algorithms that operate on data to determine behaviour, patterns, preferences, and so on.
Apache Mahout is an open-source library that implements a variety of Machine Learning algorithms, which can be used to build a recommendation engine to drive purchases.
An introduction to testing in large, distributed organizations that are practicing agile methods: ideas, practices, and tools to help develop open communication and deal with cultural differences, both within an organization and across continents, specifically related to testing activities.
STARCANADA 2013 Keynote: Lightning Strikes the Keynotes - TechWell
Throughout the years, Lightning Talks have been a popular part of the STAR conferences. If you’re not familiar with the concept, Lightning Talks consist of a series of five-minute talks by different speakers within one presentation period. Lightning Talks are the opportunity for speakers to deliver their single biggest bang-for-the-buck idea in a rapid-fire presentation. And now, lightning has struck the STAR keynotes. Some of the best-known experts in testing—Jon Bach, Michael Bolton, Fiona Charles, Janet Gregory, Paul Holland, Griffin Jones, Keith Klain, Gerard Meszaros, and Nate Oster—will step up to the podium and give you their best shot of lightning. Get ten keynote presentations for the price of one—and have some fun at the same time.
Artificial Intelligence lecture notes: summarized notes on expert systems, inference mechanisms, and related topics, intended for reading and self-study.
Gain a deeper understanding of what Exploratory Testing (ET) is, the essential elements of the practice with practical tips and techniques, and finally, ideas for integrating ET into the cadence of an agile process.
Michael Bolton - Heuristics: Solving Problems Rapidly - TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Heuristics: Solving Problems Rapidly by Michael Bolton. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Embedding Clinical standards in research workshop - James Malone
My slides on a talk titled "Standards and ontologies - changing the value proposition" given at the "Clinical Standards in Research: Development, Implementation and Curation" workshop, EMBL-EBI, Cambridge, October 2016.
Modern Perspectives on Recommender Systems and their Applications in Mendeley - Kris Jack
Presentation given for one of Pearson's Data Research teams. It motivates the use of recommender systems, describes common approaches to building and evaluating them and gives examples of how they are used in Mendeley. Thanks to Maya Hristakeva for creating some of the slides.
Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of testers to continually optimize the value of their work. It is the process of three mutually supportive activities done in parallel: learning, test design, and test execution. With skill and practice, exploratory testers typically uncover an order of magnitude more problems than when the same amount of effort is spent on procedurally scripted testing. All testers conduct exploratory testing in one way or another, but few know how to do it systematically to obtain the greatest benefits. Even fewer can articulate the process. Jon Bach looks at specific heuristics and techniques of exploratory testing that will help you get the most from this highly productive approach. Jon focuses on the skills and dynamics of exploratory testing, and how it can be combined with scripted approaches.
Test Automation using UiPath Test Suite - Developer Circle Part-3 - 07262022.pdf - Diana Gray, MBA
Deep Dive into Test Suite capabilities with Hands-on Workshop
In Part 3 of the Test Automation using UiPath Test Suite – Developer Series, we will deep dive into Test Suite capabilities with a hands-on workshop covering:
- RPA Testing
- Application Testing (Web Application, Windows Application, SAP)
- Mobile Testing
- API Testing
- Code Coverage
- Data-Driven Testing
Speakers: Atul Trikha, Sreenivasa Adathakula
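Data-driven testing, one of the workshop topics above, is a language-agnostic idea: one test body driven by many data rows. A plain Python sketch (illustrative only, not UiPath code; the function under test is hypothetical):

```python
def normalize_phone(raw):
    """Toy system under test: strip punctuation from a phone number."""
    return "".join(ch for ch in raw if ch.isdigit())

# Data-driven testing: the test logic is written once,
# and each data row below becomes one test case.
cases = [
    ("(919) 555-0100", "9195550100"),
    ("919.555.0100",   "9195550100"),
    ("9195550100",     "9195550100"),
]

for raw, expected in cases:
    actual = normalize_phone(raw)
    assert actual == expected, f"{raw!r}: got {actual!r}, want {expected!r}"
```

In UiPath Test Suite the same pattern is expressed by binding a test case to an external data source rather than a Python list.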
I believe that our existing models of testing are not fit for purpose – they are inconsistent, controversial, partial, proprietary and stuck in the past. They are not going to support us in the rapidly emerging technologies and approaches. The certification schemes that should represent the interests and integrity of our profession don’t, and we are left with schemes that are popular, but have low value, lower esteem and attract harsh criticism. My goal in proposing the New Model is to stimulate new thinking in this area.
eurostarconferences.com
testhuddle.com
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that lead to closing the deal.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by Rik Marselis and me from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We finished with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Communications Mining Series - Zero to Hero - Session 1 - DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Testing katas - Try before you buy
1. Testing Katas: Try Before you Buy
Emma Armstrong
@EmmaATester
www.taooftesting.co.uk
Jonathan Watts
@whatie
2. Today’s Session
• What’s this session all about?
• What are katas and why use them for testing?
• Some background on using oracles for testing
• The kata
• Introduce the objective
• Testing the application
• Debrief session
• Just ask questions as we go through
3. What are katas?
• Repeated practical exercises to ingrain behaviours until they become
instinctual
• Allow you to cover various topics and situations in a safe learning
environment
• Encourage knowledge sharing and collaboration
4. Why use katas for testing?
• Deliberate practice
• Learning and development – Tacit skills
• Consistency
• Knowledge sharing
• Quality is everyone’s responsibility
5. Let’s try it then…
• Introduce the topic
• Get into groups
• Test the application as a group
• Report back on your findings
• Group discussion
6. Heuristics and Oracles
A heuristic is a fallible method for solving a
problem or learning something
Testing examples include:
• Brute force
• Testing at the boundaries
• Input methods
• Starvation
• Working backwards
• Common sense / Educated guess
• Rule of thumb
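To make one of these heuristics concrete, here is a minimal sketch of "testing at the boundaries". The `clamp` function is a hypothetical example invented for illustration; it is not part of the kata application:

```javascript
// Hypothetical function under test: clamps a value into [min, max].
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

// Boundary-value heuristic: probe at, just below, and just above each edge.
const boundaryCases = [
  { input: -1, expected: 0 },  // just below the lower boundary
  { input: 0,  expected: 0 },  // on the lower boundary
  { input: 1,  expected: 1 },  // just above the lower boundary
  { input: 9,  expected: 9 },  // just below the upper boundary
  { input: 10, expected: 10 }, // on the upper boundary
  { input: 11, expected: 10 }, // just above the upper boundary
];

for (const { input, expected } of boundaryCases) {
  const actual = clamp(input, 0, 10);
  console.log(`clamp(${input}) = ${actual} (expected ${expected})`);
}
```

The point of the heuristic is that defects cluster at edges, so a handful of values around each boundary often finds more than many values in the middle.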
7. Heuristics and Oracles
“An oracle is a heuristic principle or mechanism by which someone recognizes a problem.”
Michael Bolton, 2010
Examples:
• Requirements
• Laws and standards
• Previous versions
• Comparable products
• Marketing claims
• Scientific models
• Maths
• Experts
• Your experience
When do you use this?
• Bug evaluation
• Reporting
• Test design
They are fallible
8. The kata - The consistency heuristic
“We expect the system to be consistent with systems that are in some
way comparable. This includes other products in the same product line;
competitive products, services, or systems; or products that are not in
the same category but which process the same data; or alternative
processes or algorithms.”
Michael Bolton
Examine the calculator web application from a black box perspective
using the consistency heuristic and document your findings.
For example, compare the behaviour of our calculator to another,
equivalent application and see where the inconsistencies are.
http://bit.ly/1v4Nw6R (http://emmaarmstrong.github.io/CalculatorJavaScript/Calc.html)
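If you wanted to automate part of that comparison, a consistency check might look like the sketch below. The `calculate` function here is a stand-in for the kata calculator (the real one is driven through its web UI), and JavaScript's own arithmetic plays the role of the comparable product:

```javascript
// Stand-in for the kata calculator, with a deliberate, illustrative flaw:
// if the inputs arrive as strings, '+' concatenates instead of adding.
function calculate(a, op, b) {
  if (op === '+') return a + b;
  if (op === '-') return a - b;
  throw new Error(`unsupported operator: ${op}`);
}

// Comparable "product": the host language's arithmetic on parsed numbers.
function reference(a, op, b) {
  const x = Number(a), y = Number(b);
  return op === '+' ? x + y : x - y;
}

// Consistency heuristic: flag every case where the two disagree.
const cases = [[1, '+', 2], ['1', '+', '2'], [0.1, '+', 0.2], [5, '-', 3]];
for (const [a, op, b] of cases) {
  const actual = calculate(a, op, b);
  const expected = reference(a, op, b);
  if (actual !== expected) {
    console.log(`Inconsistent: ${a} ${op} ${b} -> ${actual}, comparable product says ${expected}`);
  }
}
```

Note the `0.1 + 0.2` case: both sides report `0.30000000000000004`, so the check passes even though the answer is surprising. That is exactly why the slide says oracles are fallible: a comparable product can share the same defect.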
9. Debrief
• What did you find?
• How did you go about finding them?
10. Where do we go from here?
• There is also a development
opportunity for those people
running the katas
• We are aligning them with other
educational activities at Red Gate
• We are sharing this talk and the
katas we have run with anyone
who wants them
• Come and join in our katas
11. Questions?
The calculator
https://github.com/EmmaArmstrong/CalculatorJavaScript
Code katas
http://codekata.com/
http://codingkata.net/
http://dev.red-gate.com/category/blog/code-katas/
Test katas
http://www.soapui.org/Testing-Katas/what-are-testing-katas.html
http://testing-challenges.org/tiki-index.php
http://weekendtesting.com/
Test heuristics
http://testobsessed.com/wp-content/uploads/2011/04/testheuristicscheatsheetv1.pdf
Thank you for your time
Editor's Notes
About
Emma Armstrong
Jonathan Watts
Red Gate
What are katas? Kata is a Japanese word describing exercises that are repeated until they become instinctual. Dave Thomas, co-author of The Pragmatic Programmer, first coined the term for code katas, and testing katas are no different. We are talking about exercises done in groups, pairs, or even individually that allow you to practise something and then learn from others' experiences through discussion. We are not suggesting repeating the same exercise unchanged, but running different exercises, or the same exercise with different constraints.
We believe this form of kata can be used to help learn any skill, but especially testing: testing should happen at all stages of a project and should be instinctual, with testers knowing what to do and adapting their approach to the context they are in.
Why do katas? As we know, complete testing is rarely possible, especially once you have more than about two lines of code. The testing tasks at hand can seem infinite, so katas provide a way of enabling deliberate practice.
Maintaining product quality across multiple teams. We work in cross-functional teams, which means some of the knowledge transfer of testing skills that might have occurred through being immersed in a dedicated testing team does not happen, so to develop expertise we need to facilitate it (instructional expertise) in a different way.
If you Google "Agile and quality", the phrase "Quality is everyone's responsibility" comes up repeatedly. We believe that to achieve this you need to expose people from other disciplines to the activities that testers perform. The testing katas we run are open to the whole company.
Format: at Red Gate we have deliberately kept each exercise to a 20-30 minute session followed by a 30 minute to 1 hour debrief, and we run only one kata a month. We have also run 1 hour workshops.
In today’s kata we are not going to give you any specifications. We want you to apply the consistency oracle, specifically the "consistency with a comparable product" heuristic. So we are focussing on using the behaviour of a comparable product to help evaluate the behaviour of the product under test.
There are several oracles that you will already be using daily, even if you are not calling them out as such. As Cem Kaner says, an oracle is a mechanism for determining whether a program has passed or failed a test. The oracle you use depends on several things: the testing you are doing, the environment you are in, the application you are testing, and the context of the testing. For example, it could be a specification, laws/statutes, other products, you, etc.
Specs being wrong
Consistency with ‘something’ is an oracle we often rely on.
Specifications
Laws
Previous versions
Comparable products
Marketing claims
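Kaner's pass/fail definition can be made concrete in code. The sketch below uses the "previous versions" oracle; both rounding functions are hypothetical examples invented for illustration:

```javascript
// Previous version of a rounding routine, trusted here as the oracle.
function roundV1(x) {
  return Math.round(x);
}

// New version under test: a hypothetical rewrite that rounds .5 ties down.
function roundV2(x) {
  return Math.ceil(x - 0.5);
}

// The oracle decides pass/fail: the new version should agree with the old.
function checkAgainstOracle(input) {
  const expected = roundV1(input);
  const actual = roundV2(input);
  return actual === expected ? 'pass' : `fail (got ${actual}, oracle says ${expected})`;
}

for (const x of [1.4, 1.5, 2.5, -1.5]) {
  console.log(`round(${x}): ${checkAgainstOracle(x)}`);
}
```

Here `1.4` passes but `1.5` fails, because the two versions disagree on ties. Of course, if the previous version carried the same bug, the oracle would happily agree with it, which is the fallibility the slides warn about.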
Don’t run a kata on something you haven’t tried out yourself a few times
Try to keep the examples simple
Keep a note of examples
Need a backlog
Emma and Jonathan have gained some teaching experience
Improved communication
2 new testers
Improved awareness of testing activities and methodologies
We have learnt new things
Continue to do our own deliberate practice?
Would you like to use our previous katas?
Coding katas talk