How do UX design and research fit into Agile development, and how can you ensure designs are heading in the right direction? How do you measure the success of your lean designs? How often do you test? How do the numbers direct or shape the design and development process? And finally, how do the numbers and measurements impact business outcomes for clients I’ve worked for?
Discover:
- The benefits of measuring through UX activities
- Approaches for incorporating UX measurements and learning outcomes into Agile projects
- Tools and techniques to help provide frequent and repeatable UX insights and applicability within Agile projects
- How clients have used the findings and measurements to drive their business goals and positively impact organisation outcomes
ASTM A335, ASTM A335 P5, ASTM A335 P5 alloy pipe, seamless steel pipes, steel pipes
Each length of pipe shall be subjected to the hydrostatic test. Each pipe shall also be examined by a non-destructive examination method in accordance with the required practices.
20 Ways to Shaft your Split Testing: Conversion Conference, by Craig Sullivan
This talk is the latest deck showing common problems that will easily break or skew your A/B and multivariate testing results. Avoid these problems by following the simple advice in this deck!
Detecting Good Abandonment in Mobile Search, by Julia Kiseleva
Web search queries for which there are no clicks are referred to as abandoned queries and are usually considered as leading to user dissatisfaction. However, there are many cases where a user may not click on any search result page (SERP) but still be satisfied. This scenario is referred to as good abandonment and presents a challenge for most approaches measuring search satisfaction, which are usually based on clicks and dwell time. The problem is exacerbated further on mobile devices, where search providers try to increase the likelihood of users being satisfied directly by the SERP. This paper proposes a solution to this problem using gesture interactions, such as reading times and touch actions, as signals for differentiating between good and bad abandonment. These signals go beyond clicks and characterize user behavior in cases where clicks are not needed to achieve satisfaction. We study different good abandonment scenarios and investigate the different elements on a SERP that may lead to good abandonment. We also present an analysis of the correlation between user gesture features and satisfaction. Finally, we use this analysis to build models to automatically identify good abandonment in mobile search, achieving an accuracy of 75%, which is significantly better than considering query and session signals alone. Our findings have implications for the study and application of user satisfaction in search systems.
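As a rough sketch of the kind of model the abstract describes, the snippet below trains a classifier on gesture-style features. The feature names, labelling rule and data are invented for illustration; they are not from the paper.

```python
# Toy sketch of classifying good vs. bad abandonment from gesture signals.
# All features, data and the labelling rule are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-query gesture signals:
# [SERP reading time (s), touch/scroll count, time to first touch (s)]
X = rng.random((n, 3)) * [30.0, 10.0, 5.0]
# Toy label: long reading with little touching ~ satisfied (good) abandonment
y = ((X[:, 0] > 10) & (X[:, 1] < 5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```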
It's time to research our designs better. Here's how. UIUX Conference 2018 - ..., by Sophie Freiermuth
Slides of the talk I delivered at http://2018.uiuxconf.com on 3rd September 2018 in Shanghai, China.
The audience was a mix of Mandarin and English speakers, and was supported by live translation.
Myths and Illusions of Cross Device Testing - Elite Camp June 2015, by Craig Sullivan
A compendium of the most common mistakes and problems people encounter when trying to optimise or split test cross device experiences (mobile, tablet, desktop, app, tv etc.)
My slides from GOTO Berlin. The talk covered my experiences of designing the right product, some of my influences, and how I've used a Lean UX approach to shorten the feedback loop and make sure that the product you are designing is what your customers want or need.
Sourcing The Right Participants For Your UX Research & Testing, by UserZoom
Join us in this webinar as we delve into the myriad methods of finding the right participants for your UX Research & Testing. Learn from UX experts about the different methods of sourcing participants, as well as their takeaways on how you can ensure your recruiting efforts are successful. Grab a seat and discover:
- The various participant sourcing methods
- Tips and tricks to ensure your recruiting efforts go smoothly
- How UserZoom’s UX experts find the right participants
- An exclusive first look into exciting participant sourcing developments at UserZoom
Presentation for WeWork Labs where you will learn how to use User Research to validate your Value Proposition, MVP or Hypothesis. Gabriel, Product Strategist from the UX agency Osynlig, will walk you through practical advice and tools to ensure that you understand what your customers want and need from your product or service.
Brighton CRO Meetup #1 - Oh Boy These AB tests Sure Look Like Bullshit to Me, by Craig Sullivan
An updated deck of a short talk (30m) given at the first Brighton CRO meetup. Contains useful A/B testing tools as well as full speaker notes for most of the slides.
Similar to Agile Australia 2014 | UX: How to measure more than a gut feel by Amir Ansari (20)
Data. It keeps coming up time and time again: on our social media feeds, in our client conversations, and of course as the driver behind never-before-seen tools like ChatGPT.
But how can you do more with the data your organisation has and produces? What is data engineering and big data, and how can you enable data-driven decision-making within your organisation?
Hear from Nabi Rezvani—Lead Data Engineer—and Gaurav Thadani—Lead Software Engineer at DiUS on the latest trends, use cases and real-life examples of how our clients are using data and analytics to improve their decision making, customer experiences and business operations.
Also joining us are Jonathan Gomez—Head of Data Platforms at Wesfarmers OneDigital OnePass—and John Sullivan—CEO at ChargeFox—on their own [big and small] data journeys, along with the lessons they’ve learned along the way.
Watch the presentation on YouTube: https://youtu.be/ccghOfcdGN8
Learn how to establish a greater sense of confidence in your release cycle, along with the practices and processes to create a high-performing engineering culture within your team.
Serverless microservices: Test smarter, not harder, by DiUS
From the YOW! Night presentations in Melbourne and Sydney.
Modern distributed architectures are increasingly composed of large numbers of decoupled, asynchronous components. In AWS, these components are plumbed together via services like SQS, Kinesis and S3, often integrated via many small microservices or lambdas. But how do you test these architectures if they are cloud native?
It’s 2019, and we can do better than deploying the entire stack and running a battery of E2E tests against them.
In his talk, Matt will demonstrate how you can scale development of large-scale systems across teams, technology and process, and unlock the agility of your cloud-native architecture.
See https://www.eventbrite.com.au/e/yow-night-2019-melbourne-modern-testing-jul-23-tickets-63937866881#
Deploy distributed systems faster and safer with contract tests
It’s 2018 and we still rely on integrated environments and large end-to-end test suites to release complex ecosystems, also called “software”. In this talk, Matt breaks down the arguments for such nonsense and provides a better, faster, safer alternative.
From https://www.meetup.com/sfjava/events/255379906/
Trends and development practices in Serverless architectures, by DiUS
AWS ISV Event - Unlocking Business Agility with the AWS Serverless Application Model
Matt Fellows, Principal Consultant at DiUS, will talk about the evolution to serverless architectures, and discuss key development and testing practices for these modern distributed systems.
Deploying large-scale, serverless and asynchronous systems - without integrat..., by DiUS
Modern distributed architectures are increasingly composed of large numbers of decoupled, asynchronous components. In AWS, these components are plumbed together via services like SQS, SNS, Kinesis and S3, often integrated via many small microservices or lambdas. But how do you test these architectures if they are cloud native?
It’s 2018, and we can do better than deploying the entire stack and running a battery of E2E tests against them.
In his talk, Matt will demonstrate how you can scale development of large-scale systems across teams, technology and process, and unlock the agility of your cloud-native architecture.
WARNING: there will be code (https://github.com/mefellows/serverless-testing-example)
GameDay - Achieving resilience through Chaos Engineering, by DiUS
http://dius.com.au/resources/game-day/
Agility has brought us iterative software development, independent feature teams, nimble architectures and distributed, scalable infrastructure. But how do you maintain confidence in these systems in the face of this emergent complexity and fast-paced change? The answer is to anticipate and practice failure!
In this session we explore GameDays, a collaborative exercise where teams safely introduce chaos into their systems, in order to make them better.
Taken from the talk given at tconf.io 2016.
It’s 2016 and we still rely on integrated environments and complex E2E test suites to release complex ecosystems, also called “software”. In this talk, Matt breaks down the arguments for such nonsense and provides a better, faster, safer alternative.
Accompanying notes: http://www.onegeek.com.au/wp-content/uploads/2016/11/tconf-consumer-driven-contracts_summary-notes.pdf
Deploy faster and safer using Pact (http://pact.io)
Slides from Golang Melbourne Meetup July 2016 (http://www.meetup.com/golang-mel/events/229251263/).
Agile has redefined how we interact. There’s no developer team or testing team anymore, it’s all about the team; a collaborative effort of individuals that share the same vision to build a product or deliver new features. But building a product or implementing new features can be tough due to a variety of internal and external factors.
In their talk at 1st Conference on February 15, Adam Cough and Tarcio Saraiva explored techniques that can assist you in building quality software. They looked at the evolution of quality in software development and how it’s applied to a fast-paced industry that is constantly reshaping itself.
The Lean Startup approach has led to a proliferation of analytics tools over the past few years, promising faster feedback cycles and better products for our customers. However, many of the tools and approaches apply to the back-end, are slow and inappropriate or cost you an arm and a leg. And then there is Adobe Analytics...
In this talk, Matt will discuss why we need metrics, some approaches to bring the party to the front-end, and some cheap/low-fi solutions to get you dreaming up metrics to your heart's content.
Antifragility and testing for distributed systems failure, by DiUS
Failure is inevitable. In our modern world filled with continuously delivered and increasingly complex distributed architectures (looking at you micro-services), it is important to be able to test and improve our systems under a range of failure conditions.
In this talk, Matt discusses these complexities and the forces they exert on development teams, presenting some simple strategies and practical advice to deal with them.
The Diversity Dilemma: Attracting and Retaining Talented Women in Technology-..., by DiUS
DiUS' Business Development and Partnerships Principal, Paula Ngov presented alongside John Sullivan from MYOB at Agile Australia 2015 on why diversity matters. Their talk discussed the challenges presented by gender imbalance, and provided ways of addressing these issues in the workplace to overcome the diversity dilemma.
Rise of the machines: Continuous Delivery at SEEK - YOW! Night Summary Slides, by DiUS
The virtues of continuous delivery are widely understood and accepted by organisations which value fast feedback cycles, reduced risk through incremental delivery of smaller changes, and the ability to respond quickly to external factors. Furthermore, if microservices are part of your architecture, the ability to rapidly deploy multiple components of a system becomes increasingly important.
The foundations of scripting, automation and more recently containers made *nix-based systems the first target for automated deployments and subsequently continuous delivery. With the advent of some new tooling and a bit of courage these principles can now be applied to more heterogeneous environments including those from Redmond.
Using their backgrounds in automating large-scale ruby and java-based deployments, Warner and Matt embarked on a journey with SEEK to increase their agility by enabling continuous delivery – typically multiple times per day. This is their story.
Climate Impact of Software Testing at Nordic Testing Days, by Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimise the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimise or minimise the number of tests. Test automation can be used to speed up testing.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -..., by DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf, by Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share foundational concepts to build on.
Pushing the limits of ePRTC: 100ns holdover for 100 days, by Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
DevOps and Testing slides at DASA Connect, by Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We finished with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Removing Uninteresting Bytes in Software Fuzzing, by Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
These are slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
Generative AI Deep Dive: Advancing from Proof of Concept to Production, by Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Communications Mining Series - Zero to Hero - Session 1, by DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo..., by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks, has caused gaps in continuous security as an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
PHP Frameworks: I want to break free (IPC Berlin 2024), by Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
Transcript: Selling digital books in 2024: Insights from industry leaders - T..., by BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Epistemic Interaction - tuning interfaces to provide information for AI support, by Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Elevating Tactical DDD Patterns Through Object Calisthenics, by Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect personal devices and information.
21. Qualitative
Why?
• Data is rich and deep
• Explore and describe
• Process-oriented
• Small sample sizes
Techniques:
• Focus groups
• Cognitive walkthroughs
• Moderated usability testing
• Card sorting
26. “The sooner we can find which features are worth investing in, the sooner we can focus our limited resources on the best solution to our business problems.” (Jeff Gothelf, Lean UX)
47. Lean experiment report template
Title: Experiment 2: XXX | Author: | Created:
1. Background
• Previous experiment taught us …
• We want to find out …
• We want to test our solutions …
2. Falsifiable Hypotheses
• That the majority of users think X
• That the majority of users perform Y
• That the majority of users need Z
• Etc.
3. Details
• What feature or assumption to test
• What questions to ask
• How well does X perform
• Etc.
4. Results
• A grid of hypotheses against participants (Hypothesis 1, Hypothesis 2, etc. across User 1, User 2, User 3, etc.)
5. Validated Learning
• We can proceed with X
• We need to tweak Y
• We need to pivot and get rid of Z
6. Next Actions
• What to test next
• What hypothesis do we want to validate
48. Lean experiment report: the same template, filled in for OTI
1. Background: From focus groups, it was clear that one barrier with other online providers has been the lack of trainer support.
2. Falsifiable Hypothesis: That providing access to trainers in the classroom, and telling students upfront about trainer availability, will eliminate barriers and increase conversion.
3. Details: Show users the marketing pages that talk about trainer availability; show users the classroom with the trainer being online.
4./5. Results and Validated Learning: Users indeed look for trainer availability and are comforted by knowing OTI provides access to them.
6. Next Actions: Test something else! ; )
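To make the template concrete in code, here is a minimal sketch, in Python, of how an experiment report and its hypothesis-by-participant results grid could be captured and tallied. The class and field names are invented for illustration; this is not tooling from the talk.

```python
# Illustrative sketch of the lean experiment report structure above.
# Class and field names are invented; not from the original deck.
from dataclasses import dataclass, field

@dataclass
class ExperimentReport:
    title: str
    background: list[str]
    hypotheses: list[str]  # falsifiable hypotheses under test
    # results[hypothesis][user] = True if that session supported the hypothesis
    results: dict[str, dict[str, bool]] = field(default_factory=dict)

    def support(self, hypothesis: str) -> float:
        """Share of participants whose sessions supported a hypothesis."""
        obs = self.results.get(hypothesis, {})
        return sum(obs.values()) / len(obs) if obs else 0.0

report = ExperimentReport(
    title="Experiment 2: Trainer availability",
    background=["Focus groups showed lack of trainer support is a barrier"],
    hypotheses=["Users look for trainer availability"],
)
report.results["Users look for trainer availability"] = {
    "User 1": True, "User 2": True, "User 3": False,
}
print(report.support("Users look for trainer availability"))  # ~0.67
```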
58. If doing this internally…
One-on-one testing: 5 participants × $160 = $800, plus 2 FTEs × $375* = $750; total $1,550.
Focus groups: 16 participants × $200 = $3,200, plus 3 FTEs × $375* = $1,125; total $4,325.
* Daily gross equivalent based on a gross annual salary of $90k.
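As a quick sanity check on the arithmetic above, the same figures in a few lines of Python (rates and counts taken straight from the slide):

```python
# Reproduce the slide's internal-cost arithmetic.
def session_cost(participants, incentive, ftes, day_rate=375):
    # day_rate: daily gross equivalent of a ~$90k annual salary (per slide)
    return participants * incentive + ftes * day_rate

print(session_cost(5, 160, 2))    # one-on-one testing -> 1550
print(session_cost(16, 200, 3))   # focus groups       -> 4325
```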
73. Thank you
“Qualitative data can in fact be converted into quantitative measures even if it doesn't come from an experiment or from a large sample size.” (Jeff Sauro, Measuring Usability)
Editor's Notes
15 years in the usability and UX space, most of it at the management level of some sort. Last stint before DiUS was 8 years at Stamford (Aus’s largest specialised UX agency). I’ve done my 10000+ hours.
Who knows DiUS?
Privately owned. 100 staff in both Melb & Syd. In May this year (2014) we turned 10 years old. SW engineers at heart, BA, hardware engineers, UX. Agile and nimble. People drive our culture to innovate. We believe in a strong and vibrant community. We sponsor conferences and meetups. Our employees give their time and skills to solve community problems.
Product and technology strategy: what product should be built, how to get it to market. We do this for our own products too. Web applications: QANTAS-Jetstar accommodation booking, Vodafone self-service. Application-specific devices: a fridge magnet for displaying in-home energy usage. Mobile applications: the iPad app for Australia Post Digital Mailbox.
Any UXers out there? Hands up. Who can confidently define what UX means?
Firstly, most people still don’t know the breadth of user experience skillset (uxisnotui).
Sometimes confusion between qualitative and quantitative.
Two approaches to measuring UX.
As different as the two approaches are, we both need each other – so let’s all be friends and hug.
My talk - So why the title?
I'm obsessed with measuring, learning and iterating when it comes to UX.
UX has traditionally been waterfall. With Agile and Lean, Lean UX has gained traction. Build, measure, learn is very appropriate to the UX discipline.
I’m going to talk about some of the harder measurements – some of the qualitative techniques – and prove how they can provide value through some example case studies. I'm a bit tired of having to sell the importance of UX... Not so much that it’s important, but more that it’s not fluffy, design only, or that it’s not meant for measuring and driving/impacting decisions. It’s not ALL about SAMPLE SIZE. It’s qualitative – about the why, not the what! Most recently I've been trying to bring the Research and Insights team of one of my clients on board. I still get the good old argument of 'why such a small sample size?' and that it's not significant. It's not about sample size, and you can still measure UX and give great value back to the project.
UX metrics are relatively easy and black and white, so I won’t be covering them today. You can google and find out all about them.
In this presentation, I’m going to cover the qualitative aspects.
So you’re probably thinking to yourself: what does Qualitative measurement look like in your agile project?
When you don’t have access to users. I used this when working for an enterprise startup – OTI.
Enterprise startup using Lean Startup approach.
Next best thing to users. Managed to capture and count the number of times the following themes occurred: Motivations and expectations to study; Concerns/fears/barriers/pain points; Ideal experience and channel. Outcome: Personas; Feature sets; Hypotheses.
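A tally like the one this note describes takes only a few lines of Python; the theme labels below follow the note's categories, and the observation list is invented:

```python
# Count how often interview themes occur (categories from the note above;
# the observations themselves are made up for illustration).
from collections import Counter

observations = [
    "motivations/expectations", "concerns/barriers", "ideal experience",
    "concerns/barriers", "motivations/expectations", "concerns/barriers",
]
print(Counter(observations).most_common())
# [('concerns/barriers', 3), ('motivations/expectations', 2), ('ideal experience', 1)]
```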
What worked well.
Watch out for anecdotes. Plural of anecdote is anecdotes, not data!
Qualitative, small size but to go deep and understand sentiments, reasons for behaviours and perceptions etc. As we had planned to run experiments throughout the project, keeping it lean and light was important.
Things to look out for when running group-based sessions: poor facilitation, group think. In quant, each data is seen as being independent. In a focus group, the entire session = 1 data point, due to Group Think effect. We split users up and do activities to increase our data points.
Outcomes.
Stakeholders got total buy-in – no barriers to convince re findings.
What didn’t work so well.
Moderated usability testing.
As we had planned to run experiments throughout the project, keeping it lean and light was important.
How many know about the Nielsen Norman Group? Godfathers of usability and UX, especially when it comes to research and field studies.
Lean experiment report.
Lean experiment report.
Results annotated, tallied up, prioritised, recommendations made and fed back at the end of that very day.
Online classroom was changed to Online study centre. Seeing two modes side by side preferred over tab. Seeing full course price was still important.
What worked well.
What didn’t work so well.
Hands up if you’re a product owner, project manager, or responsible for the project’s budget?
So how much do these activities cost? UX REDUCES COST, INCREASES PRODUCTIVITY, SALES, BRAND LOYALTY AND ADVOCACY, just to name a few. Forrester Research suggested that fixing a problem in code costs 10 times as much as fixing it during upfront design activities, and up to 100 times as much once the product has launched.
We have trusted partners / market research recruiters that find our users form a large database.
PM happy.
If too expensive, go for Guerrilla Usability Testing. Shorter, often 15–30 mins. Few features tested. Often on the road or at the participant’s location, and with some training developers can do it. Steve Krug’s book Rocket Surgery Made Easy provides tips on how anybody can conduct usability testing.
You can convert a usability problem into a frequency, using confidence intervals: categorise and count issues. N ≥ 30 is a good enough sample to use normal-distribution estimates; otherwise stick with the t-distribution.
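As an illustration of converting a usability problem into a frequency with a confidence interval, here is a small sketch using the Wilson score interval, a common choice for the small sample sizes discussed here; the counts are invented:

```python
# Wilson score interval for the share of users who hit a usability problem.
# Illustrative numbers: 3 of 5 participants hit the problem.
import math

def wilson_interval(hits, n, z=1.96):  # z=1.96 ~ 95% confidence
    p = hits / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_interval(3, 5)
print(f"problem frequency: 60% (95% CI {lo:.0%} to {hi:.0%})")
```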
As the product matures, you can introduce more UX metrics and quantitative measurements.
Picking the right technique is very important. And these questions can help you choose and narrow down on the technique you need.
For the OTI project, I spent one hour very early on defining the research/experiment approach, costed it and put it to the PM. It was included in the budget and signed off. Now, some of you may not have access to the person with the wallet, or the budget may have already been allocated. In this case, you may have to take a more guerrilla approach to your experiments, OR find an advocate on your team who can push for a minimum number of user sessions.
Keep it lean, quick and fast: 5–6 users per experiment. Prioritise issues. Feed back quickly. You don’t want to be seen as being the bottleneck – pragmatism is important.
Get them to observe. If not, make sure you run through your issues. Agree on pivots and recommendations.
Use consistent approaches to test and report. Consider using templates. Benchmark as design matures for future measurability.
Use consistent approaches to test and report. Consider using templates. Benchmark as design matures for future measurability.
Measuring the User Experience is more statistical, and assumes you have an existing product or service.