Testing is the process of evaluating a product to find errors and improve quality. There are different levels of testing including unit testing, integration testing, system testing, and acceptance testing. Usability testing involves having potential users evaluate how easy a website is to use. It is important to test early and often throughout the development process to find and address errors as early as possible.
Introductory overview of testing techniques for web application development. Explains where different testing methods fit into the software development cycle.
Unit 09: Web Application Testing
1. Unit 9: Web Application Testing
Testing is the activity conducted to evaluate the quality of a
product and to improve it by finding errors.
Testing
dsbw 2011/2012 q1 1
2. Testing Terminology
An error is “the difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition” (IEEE standard 610.12-1990).
This “true, specified, or theoretically correct value or condition” comes from:
A well-defined requirements model, if available and complete, or
An incomplete set of fuzzy and contradictory goals, concerns, and expectations of the stakeholders.
A test is a set of test cases for a specific object under test: the whole Web application, components of a Web application, a system that runs a Web application, etc.
A single test case describes a set of inputs, execution conditions, and expected results, used to test a specific aspect of the object under test.
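The slide's definition of a test case (inputs, execution conditions, expected results) can be sketched as a small data structure. Everything below is illustrative: the `TestCase` fields and the `discount` function are made up for the example, not part of the deck.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A test case bundles inputs and an expected result for one aspect
# of the object under test (field names are illustrative).
@dataclass
class TestCase:
    name: str
    inputs: dict
    expected: Any

def run_case(case: TestCase, func: Callable) -> bool:
    """Run one case; a mismatch between observed and expected value
    is an 'error' in the IEEE 610.12 sense."""
    observed = func(**case.inputs)
    return observed == case.expected

# Object under test: a trivial discount function.
def discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

cases = [
    TestCase("10% off 100", {"price": 100.0, "rate": 0.10}, 90.0),
    TestCase("no discount", {"price": 50.0, "rate": 0.0}, 50.0),
]
results = [run_case(c, discount) for c in cases]
print(results)  # True for each case whose observed value matches
```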
3. Testing [and] Quality
Testing should address compliance not only with functional
requirements but also with quality requirements, i.e., the kinds of
quality characteristics expected by stakeholders.
ISO/IEC 9126-1 [Software] Quality Model:
4. Goals of Testing
The main goal of testing is to find errors but not to prove
their absence
A test run is successful if errors are detected. Otherwise, it is
unsuccessful and “a waste of time”.
Testing should adopt a risk-based approach:
Test first and with the greatest effort those critical parts of an
application where the most dangerous errors are still
undetected
A further aim of testing is to bring risks to light, not simply to
demonstrate conformance to stated requirements.
Test as early as possible, from the beginning of a project: errors
introduced in early development phases are harder to localize
and more expensive to fix in later phases.
5. Test Levels (1/2)
Unit tests
Test the smallest testable units (classes, Web pages, etc.)
independently of one another.
Performed by the developer during implementation.
Integration tests
Evaluate the interaction between distinct and separately tested
units once they have been integrated.
Performed by a tester, a developer, or both jointly.
System tests
Test the complete, integrated system.
Typically performed by a specialized test team.
6. Test Levels (2/2)
Acceptance tests
Evaluate the system with the client in a “realistic”
environment, i.e. with real conditions and real data.
Beta tests
Let friendly users work with early versions of a product to get
early feedback.
Beta tests are unsystematic tests which rely on the number and
“malevolence” of potential users.
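As an illustration of the unit level described above, here is a minimal sketch using Python's standard unittest module; the function under test is invented for the example.

```python
import unittest

# Object under test: a single, isolated unit (made up for this example).
def normalize_email(raw: str) -> str:
    """Trim surrounding whitespace and lowercase an e-mail address."""
    return raw.strip().lower()

class NormalizeEmailTest(unittest.TestCase):
    # Unit tests exercise the unit independently of the rest of the
    # application, typically written by the developer during implementation.
    def test_strips_whitespace(self):
        self.assertEqual(normalize_email("  a@b.com "), "a@b.com")

    def test_lowercases(self):
        self.assertEqual(normalize_email("A@B.COM"), "a@b.com")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeEmailTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True means all assertions passed
```

Integration and system tests follow the same mechanics but exercise combined units or the whole deployed application instead of one function.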
7. Fitting Testing in the Development Process
Planning: Defines the quality goals, the general testing strategy, the test
plans for all test levels, the metrics and measuring methods, and the test
environment.
Preparing: Involves selecting the testing techniques and tools and
specifying the test cases (including the test data).
Performing: Prepares the test infrastructure, runs the test cases, and then
documents and evaluates the results.
Reporting: Summarizes the test results and produces the test reports.
8. Web Testing: A Road Map
Testing types, ordered from user-oriented to technology-oriented:
Content Testing
Interface Testing
Usability Testing
Navigation Testing
Component Testing
Configuration Testing
Performance Testing
Security Testing
9. Usability
Usability is a quality attribute that assesses how easy user
interfaces are to use. Also refers to methods for improving
ease-of-use during the design process.
Usability is defined by five quality components:
Learnability: How easy is it for users to accomplish basic tasks
the first time they encounter the design?
Efficiency: Once users have learned the design, how quickly
can they perform tasks?
Memorability: When users return to the design after a period
of not using it, how easily can they reestablish proficiency?
Errors: How many errors do users make, how severe are these
errors, and how easily can they recover from the errors?
Satisfaction: How pleasant is it to use the design?
10. Why Usability matters*
62% of web shoppers gave up looking for an item. (Zona
study)
50% of web sales are lost because visitors can’t easily find
content. (Gartner Group)
40% of repeat visitors do not return due to a negative
experience. (Zona study)
85% of visitors abandon a new site due to poor design.
(cPulse)
Only 51% of sites complied with simple web usability
principles. (Forrester study of 20 major sites)
(*) data from www.usabilitynet.org/management/c_cost.htm
11. Why people fail
Search
Findability (IA, category names, navigation, links)
Page design (readability, layout, graphics, amateur, scrolling)
Information (content, product info, corporate info, prices)
Task support (workflow, privacy, forms, comparison, inflexible)
Fancy design (multimedia, back button, PDF/printing, new window, sound)
Other (bugs, presence on Web, ads, new site, metaphors)
Usability problems weighted by how frequently they caused users to fail a task [NL06]
12. Top Ten (Usability) Mistakes in Web Design
1. Bad search
2. PDF files for online reading
3. Not changing the color of visited links
4. Non-scannable text
5. Fixed font size
6. Page titles with low search engine visibility
7. Anything that looks like an advertisement
8. Violating design conventions
9. Opening new browser windows
10. Not answering users' questions
13. Assessing Usability
Two major types of assessing methods:
Usability evaluations:
Evaluators and no users
Techniques: surveys/questionnaires, observational
evaluations, guideline based reviews, cognitive
walkthroughs, expert reviews, heuristic evaluations
Usability tests: focus on users working with the product
Usability testing is the only way to know if the Web site
actually has problems that keep people from having a
successful and satisfying experience.
14. Usability Testing
Usability testing is a methodology that employs potential
users to evaluate the degree to which a website/software
meets predefined usability criteria.
Basic Process:
1. Watch Customers
2. They Perform Tasks
3. Note Their Problems
4. Make Recommendations
5. Iterate
15. Measures of Usability
Effectiveness (Ability to successfully accomplish tasks)
Percentage of goals/tasks achieved (success rate)
Number of errors
Efficiency (Ability to accomplish tasks with speed and ease)
Time to complete a task
Frequency of requests for help
Number of times facilitator provides assistance
Number of times user gives up
16. Measures of Usability
Satisfaction (Pleasing to users)
Positive and negative ratings on a satisfaction scale
Ratio of favorable to unfavorable comments
Number of good vs. bad features recalled after test
Number of users who would use the system again
Number of times users express dissatisfaction or frustration
Learnability (Ability to learn how to use site and remember it)
Ratio of successes to failures
Number of features that can be recalled after the test
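A hedged sketch of how the effectiveness and efficiency measures above might be computed from raw session logs; the data and field names are invented for illustration.

```python
from statistics import mean

# One row per participant/task observation (illustrative data, not
# from a real study): success flag, completion time, assists given.
sessions = [
    {"success": True,  "time_s": 42.0, "assists": 0},
    {"success": True,  "time_s": 55.0, "assists": 1},
    {"success": False, "time_s": 90.0, "assists": 2},
    {"success": True,  "time_s": 38.0, "assists": 0},
]

# Effectiveness: percentage of tasks achieved (success rate).
success_rate = sum(s["success"] for s in sessions) / len(sessions)

# Efficiency: mean time to complete (successful tasks only) and how
# often the facilitator had to assist.
mean_time = mean(s["time_s"] for s in sessions if s["success"])
total_assists = sum(s["assists"] for s in sessions)

print(f"success rate: {success_rate:.0%}")          # 75%
print(f"mean time (successes): {mean_time:.1f}s")   # 45.0s
print(f"facilitator assists: {total_assists}")      # 3
```

Satisfaction and learnability measures would come from questionnaire scores and post-test recall rather than from the session log.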
17. Usability Testing Roles
Facilitator:
Oversees the entire test process
Plan, test, and report.
Participant:
Actual or potential customer.
Internal stand-ins for users (e.g. marketing, designers) are avoided.
Observer (optional):
Records events as they occur.
Limits interaction with the customer.
Contributes to the report.
18. Usability Testing Process
Step 1: Planning The Usability Test
Define what to test
Define which customers should be tested
Define what tasks should be tested
Write usability scenarios and tasks
Select participants
Step 2: Conducting The Usability Test
Conduct a test
Collect data
Step 3: Analyzing and Reporting The Usability Test
Compile results
Make recommendations
19. People – Context – Activities
Step 1: Planning The Usability Test
Define what to test
→ Activities (Use Cases)
Define which customers (user profiles) should be tested
→ People (Actors)
Provide a background for the activities to test
→ Context
20. Usability Scenarios and Tasks
Provide the participant with motivation and context to make
the situation more realistic
Include several tasks:
Make the first task simple
Give a goal, without describing steps
Set some success criteria, examples:
N% of test participants will be able to complete x% of tasks in
the time allotted.
Participants will be able to complete x% of tasks with no more
than one error per task.
N% of test participants will rate the system as highly usable on
a scale of x to x.
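Success criteria of this kind are mechanical to check once the session data is in. A minimal sketch, in which the participant data, the time allotted, and the 80% threshold are all illustrative assumptions:

```python
# Sketch: checking a success criterion of the kind listed above,
# e.g. "80% of participants complete all tasks in the time allotted".
# Data and thresholds are illustrative assumptions.

results = {  # participant -> list of (completed, seconds) per task
    "P1": [(True, 95), (True, 130)],
    "P2": [(True, 110), (False, 300)],
    "P3": [(True, 80), (True, 150)],
}

TIME_ALLOTTED = 200      # seconds allowed per task (assumed)
REQUIRED_SHARE = 0.80    # "N%" of test participants

def passed(tasks):
    """True if every task was completed within the time allotted."""
    return all(done and seconds <= TIME_ALLOTTED for done, seconds in tasks)

share = sum(passed(t) for t in results.values()) / len(results)
verdict = "PASS" if share >= REQUIRED_SHARE else "FAIL"
print(f"{share:.0%} of participants met the criterion ({verdict})")
```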
21. Example of Scenario with Tasks
Context:
You want to book a sailing on Royal Caribbean International for
next June with your church group. The group is called “Saint
Francis Summer 2010”. The group is selling out fast, so you
want to book a cabin, which is close to an elevator because
your leg hurts from a recent injury.
Tasks to perform:
1. Open your browser
2. Click the link labeled “Royal Caribbean”
3. Tell me the available cabins in the “Saint Francis Summer
2010” group
4. Tell me a cabin number closest to an elevator
5. Book the cabin that best suits your needs
22. Selecting Participants
Recruit participants
In-house
recruitment firms, databases, conferences
Match participants with user profiles
Decide the numbers of participants and floaters (backups)
Schedule test sessions
Incentives:
Gift checks ($100 per session)
Food or gift cards
23. How Many Test Participants Are Required?
The number of usability problems found in a usability test
with n participants is:
N (1 - (1 - L)^n)
N : total number of usability problems in the design
L : the proportion of usability problems discovered while testing
a single participant.
For L = 31%
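The formula above is easy to evaluate for the L = 31% case, which is what motivates the "5 users find most problems" argument on the next slide:

```python
# Sketch of the formula above: expected share of the N total usability
# problems found with n participants, using L = 0.31 as on this slide.

def problems_found(n: int, L: float = 0.31) -> float:
    """Fraction of all usability problems discovered by n participants."""
    return 1 - (1 - L) ** n

for n in (1, 5, 15):
    print(f"{n:2d} participants -> {problems_found(n):.1%} of problems")
```

With L = 0.31, five participants already uncover roughly 84% of the problems, and fifteen reach about 99.6%.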
24. How Many Test Participants Are Required?
It seems that you need to test with at least 15 participants to
discover all the usability problems.
However, it is better to perform three tests with 5 participants
each than one test with 15 participants:
After the first test with 5 participants has found 85% of the
usability problems, you will want to fix them in a redesign.
After creating the new design, you need to test again.
The second test with 5 users will discover most of the
remaining 15% of the original usability problems that were not
found in the first test (and some new ones).
The new test will be able to uncover structural usability
problems that were obscured in initial studies as users were
stumped by surface-level usability problems.
Fix the new problems, and test …
25. Usability Labs … Not Necessary
The testing room contains office furniture, video tape
equipment, a microphone, and a computer with appropriate
software. The observer side contains a powerful computer to
collect and analyze the usability data. A one-way mirror
separates the rooms.
27. Conducting Tests: Facilitator’s Role
Start with an easy task to build confidence
Sit beside the person not behind the glass
Use “think-out-loud” protocol
Give participants time to think it through
Offer appropriate encouragement
Guide participants rather than answering their questions (be an enabler)
Don’t act knowledgeable (treat them as the experts)
Don’t get too involved in data collection
Don’t jump to conclusions
Don’t solve their problems immediately
28. Collecting Data
Performance
Objective (what actually happened)
Usually Quantitative
Time to complete a task
Time to recover from an error
Number of errors
Percentage of tasks completed successfully
Number of clicks
Pathway information
Preference
Subjective (what participants say/thought)
Usually Qualitative
Preference of versions
Suggestions and comments
Ratings or rankings (can be quantitative)
29. Report findings and recommendations
Make report usable for your users
Include quantitative data (success rates, times, etc.)
Avoid words like “few, many, several”. Include counts
Use quotes
Use screenshots
Mention positive findings
Do not use participant names, use P1, P2, P3, etc.
Include recommendations
Make it short
30. Component Testing
Focuses on a set of tests that attempt to uncover errors in
WebApp functions
Conventional black-box and white-box test case design
methods can be used at each architectural layer
(presentation, domain, data access)
Form data can be exercised systematically to find errors:
Missing/incomplete data
Type conversion problems
Value boundary violations
Fake data
Etc.
Database testing is often an integral part of the component-
testing regime
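The form-data error classes listed above map naturally onto black-box test cases. A minimal sketch with `unittest`; the component under test (`validate_age`) and its 1..120 range are hypothetical stand-ins for a real WebApp form validator:

```python
# Sketch: systematic black-box tests over form data, covering the
# error classes listed above. `validate_age` is a hypothetical
# component under test, stubbed here for illustration.
import unittest

def validate_age(raw):
    """Hypothetical form-field validator: age must be an int in 1..120."""
    if raw is None or raw == "":
        raise ValueError("missing data")
    value = int(raw)              # type-conversion errors surface here
    if not 1 <= value <= 120:
        raise ValueError("value out of range")
    return value

class FormDataTests(unittest.TestCase):
    def test_missing_data(self):
        self.assertRaises(ValueError, validate_age, "")

    def test_type_conversion(self):
        self.assertRaises(ValueError, validate_age, "forty")

    def test_value_boundaries(self):
        self.assertEqual(validate_age("1"), 1)       # lower bound
        self.assertEqual(validate_age("120"), 120)   # upper bound
        self.assertRaises(ValueError, validate_age, "121")

if __name__ == "__main__":
    unittest.main(exit=False)
```

The same pattern extends to every architectural layer: the test cases stay the same, only the component under test changes.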
31. Configuration Testing: Server-Side Issues
Is the WebApp fully compatible with the server OS?
Are system files, directories, and related system data created correctly
when the WebApp is operational?
Do system security measures (e.g., firewalls or encryption) allow the
WebApp to execute and service users without interference or
performance degradation?
Has the WebApp been tested with the distributed server configuration (if
one exists) that has been chosen?
Is the WebApp properly integrated with database software? Is the
WebApp sensitive to different versions of database software?
Do server-side WebApp scripts execute properly?
Have system administrator errors been examined for their effect on
WebApp operations?
If proxy servers are used, have differences in their configuration been
addressed with on-site testing?
32. Configuration Testing: Client-Side Issues
Hardware—CPU, memory, storage and printing devices
Operating systems—Linux, Macintosh OS, Microsoft
Windows, a mobile-based OS
Browser software—Internet Explorer, Mozilla/Netscape,
Opera, Safari, and others
User interface components—Active X, Java applets and
others
Plug-ins—QuickTime, RealPlayer, and many others
Connectivity—cable, DSL, regular modem, T1
33. Security Testing
Designed to probe vulnerabilities
of the client-side environment,
the network communications that occur as data are passed
from client to server and back again,
and the server-side environment
On the client-side, vulnerabilities can often be traced to pre-
existing bugs in browsers, e-mail programs, or
communication software.
On the network infrastructure
On the server-side:
At host level
At WebApp level
→ Review the DSBW Unit on WebApp Security
34. Performance Testing: Main Questions
Does the server response time degrade to a point where it is
noticeable and unacceptable?
At what point (in terms of users, transactions or data loading) does
performance become unacceptable?
What system components are responsible for performance
degradation?
What is the average response time for users under a variety of
loading conditions?
Does performance degradation have an impact on system security?
Is WebApp reliability or accuracy affected as the load on the
system grows?
What happens when loads that are greater than maximum server
capacity are applied?
35. Performance Testing: Load Tests
A load test verifies whether or not the system meets the
required response times and the required throughput.
Steps:
1. Determine load profiles (what access types, how many visits per day,
at what peak times, how many visits per session, how many
transactions per session, etc.) and the transaction mix (which
functions shall be executed with which percentage).
2. Determine the target values for response times and throughput (in
normal operation and at peak times, for simple or complex accesses,
with minimum, maximum, and average values).
3. Run the tests, generating the workload with the transaction mix
defined in the load profile, and measure the response times and the
throughput.
4. The results are evaluated, and potential bottlenecks are identified.
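Step 3 can be sketched with nothing more than a thread pool and a timer. A minimal load-generation sketch; the target URL, the load profile (users, requests per user), and the helper names are illustrative assumptions, and real load tests would use a dedicated tool:

```python
# Sketch: generate a small concurrent workload and measure response
# times and throughput (step 3 above). Load-profile values are
# illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(request_fn, users=10, requests_per_user=20):
    """Simulate `users` concurrent users, each calling request_fn
    requests_per_user times; return (mean_response_s, throughput_rps)."""
    def one_user(_):
        times = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()                       # one simulated access
            times.append(time.perf_counter() - start)
        return times

    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        samples = [t for user_times in pool.map(one_user, range(users))
                   for t in user_times]
    elapsed = time.perf_counter() - wall_start
    return sum(samples) / len(samples), len(samples) / elapsed

# Example against an assumed local test server:
#   from urllib.request import urlopen
#   mean_s, rps = run_load_test(
#       lambda: urlopen("http://localhost:8080/").read())
#   print(f"mean: {mean_s * 1000:.1f} ms, throughput: {rps:.1f} req/s")
```

The measured mean response time and throughput are then compared against the target values fixed in step 2.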
36. Performance Testing: Stress Tests
A stress test verifies whether or not the system reacts in a
controlled way in “stress situations”, which are simulated by
applying extreme conditions, such as unrealistic overload, or
heavily fluctuating load.
The test is aimed at answering the questions:
Does the server degrade ‘gently’ or does it shut down as
capacity is exceeded?
Does server software generate “server not available”
messages? More generally, are users aware that they cannot
reach the server?
Are transactions lost as capacity is exceeded?
Is data integrity affected as capacity is exceeded?
37. Performance Testing: Stress Tests (cont.)
Under what load conditions does the server environment fail? How
does failure manifest itself? Are automated notifications sent to
technical support staff at the server site?
If the system does fail, how long will it take to come back on-
line?
Are certain WebApp functions (e.g., compute intensive
functionality, data streaming capabilities) discontinued as
capacity reaches the 80 or 90% level?
38. Performance Testing: Interpreting Graphics
Load: the number of requests that arrive at the system per time unit.
Throughput: the number of requests served per time unit.
SLA: Service Level Agreement.
39. Test Automation
Automation can significantly increase the efficiency of testing and
enables new types of tests that also increase the scope (e.g.
different test objects and quality characteristics) and depth of
testing (e.g. large amounts and combinations of input data).
Test automation brings the following benefits:
Running automated regression tests on new versions of a
WebApp makes it possible to detect defects caused by side
effects on unchanged functionality.
Various test methods and techniques would be difficult or
impossible to perform manually; for example, load and stress
testing require simulating a large number of concurrent users.
Automation makes it possible to run more tests in less time
and, thus, to run them more often, leading to greater
confidence in the system under test.
Web Site Test Tools: http://www.softwareqatest.com/qatweb1.html
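The regression idea above can be sketched as a tiny harness: record golden cases from a known-good version, then re-run them against each new version. The function under test (`discount`) and the recorded cases are hypothetical examples:

```python
# Sketch: a minimal regression harness of the kind described above.
# Golden cases recorded from a known-good version are re-run on each
# new version to catch side effects in unchanged functionality.
# `discount` is a hypothetical function under test.

def discount(price, percent):
    """Hypothetical WebApp function: price after a percentage discount."""
    return round(price * (1 - percent / 100), 2)

GOLDEN_CASES = [   # (args, expected) recorded from a known-good version
    ((100.0, 10), 90.0),
    ((19.99, 0), 19.99),
    ((50.0, 50), 25.0),
]

def run_regression(fn, cases):
    """Return the list of (args, expected, actual) for failing cases."""
    return [(args, expected, fn(*args))
            for args, expected in cases if fn(*args) != expected]

failures = run_regression(discount, GOLDEN_CASES)
print("PASS" if not failures else f"FAIL: {failures}")
```

An empty failure list means the new version still behaves like the recorded one; any entry pinpoints a regression.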
40. References
R. S. Pressman, D. Lowe: Web Engineering: A Practitioner's
Approach. McGraw-Hill, 2008. Chapter 15.
G. Kappel et al.: Web Engineering. John Wiley & Sons,
2006. Chapter 7.
J. Nielsen, H. Loranger: Prioritizing Web Usability.
New Riders Publishing, 2006.
www.useit.com (Jakob Nielsen)
www.usability.gov