(From UPA 2011-Atlanta) Usability practitioners have a variety of methods and techniques to inform interaction design and identify usability problems. However, these tools are not as effective at evaluating the visceral and emotional response generated by visual design and aesthetics. This presentation will discuss why studying visual design is important, review considerations for preference and desirability testing, and present two alternative approaches to user studies of visual designs in the form of case studies.
Desirability Testing: Analyzing Emotional Response to a Design - Megan Grocki
In the design process we follow, once we have defined the conceptual direction and content strategy for a given design and refined our approach through user research and iterative usability testing, we start applying visual design. Generally, we take a key screen whose structure and functionality we have finalized—for example, a layout for a home page or a dashboard page—and explore three alternatives for visual style. These three alternative visual designs, or comps, include the same content, but reflect different choices for color palette and imagery. The idea is to present business owners and stakeholders with different visual design options from which they can choose. Sometimes there is a clear favorite among stakeholders or an option that makes the most sense from a brand perspective. However, there can often be disagreements among the members of a project team on which direction to choose. If we’ve done our job right, there are rationales for our various design decisions in the different comps, but even so, there may be disagreement about which rationale is most appropriate for the situation.
As practitioners of user-centered design, it is natural for us to turn to user research to help inform and guide the process of choosing a visual design. But traditional usability testing and related methods don’t seem particularly well suited for assessing visual design for two reasons:
1. When we reach out to users for feedback on visual design options, stakeholders are generally looking for large sample sizes—larger than are typical for a qualitative usability study.
2. The response we are looking for from users is more emotional—that is, less about users’ ability to accomplish tasks and more about their affective response to a given design.
With this in mind, we were very interested in articles we saw on desirability testing. In one article, the author posits desirability testing as a mix of quantitative and qualitative methods that allow you to assess users’ attitudes toward aesthetics and visual appeal. Inspired by his overview, we researched desirability studies a bit further and tried a modified version of the techniques on one of our projects. This presentation reviews the variants of desirability testing that we considered and the lessons we learned from a desirability study on visual design options for one of our projects. Interestingly, we found that while desirability testing did help us better understand participants’ self-reported emotional response to a visual design, it also helped us identify other key areas of the experience that could be improved.
AI and Design: When, Why and How? - Morgenbooster - 1508 A/S
This year, A and I probably became the most-used letters in the alphabet. Time to reflect on the role we play as designers in an increasingly AI-driven landscape.
Product Design and UX / UI Design Process in Digital Product Development - Volodymyr Melnyk
A presentation about product design and its role in digital product development, the UI/UX design process and methodologies, and examples of their applications.
This presentation aims to teach others how to use the user-centered design methodology known as personas.
Personas are archetypes (models) that represent groups of real users who have similar behaviors, attitudes, and goals. A persona describes an archetypal user of the software in the domain you are designing for, acting as a lens to highlight the relevant attitudes and the specific context associated with that area of work.
Beyond Just Usability: Desirability and Usefulness Testing - Susan Mercer
Much of our work in UX research focuses on usability – evaluating products and interfaces to ensure they are easy to use. However, in today’s digital world, ease of use alone is no longer enough. Consumers have also come to expect entertaining and engaging experiences. Web and mobile applications need to be usable, useful, and engaging.
So, how do we evaluate web interfaces to determine how useful and engaging they are? Desirability has been evaluated in recent years by the use of the Product Reaction Card technique, originated by folks at Microsoft. However, there are many other techniques used in market and industrial design research that we can borrow to complement this technique. Likewise, we can use standard usability testing techniques with lines of questioning with a slightly different focus to evaluate the relative usefulness of different solutions for a particular user group.
In this talk, I discuss several techniques that I have used in recent months to evaluate the usefulness and desirability of interfaces. The best techniques I have found for evaluating usefulness involve open-ended interview questions about current processes and pain points, followed by a usability evaluation of the interface, and then a reflective interview discussing the benefits and drawbacks of that solution for the participant's personal situation. To evaluate desirability, I will discuss the product reaction card technique and variations that use more defined vocabularies for emotional responses and product personalities. In addition, I will show results from techniques borrowed from psychology and marketing research: sentence completion, collaging, and the use of dyad rating scales. These techniques offer a variety of both qualitative and quantitative data that can be used to compare different interface options.
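As a rough illustration of the kind of quantitative output a product reaction card study yields, here is a minimal sketch that tallies how often each word was chosen per design. The card words and participant selections below are invented for illustration, not data from any study described above:

```javascript
// Minimal sketch: tallying product reaction card selections.
// Each inner array is one participant's chosen cards for a design comp.
function tallyCards(selections) {
  const counts = {};
  for (const picks of selections) {
    for (const word of picks) {
      counts[word] = (counts[word] || 0) + 1;
    }
  }
  // Sort words by how often they were chosen, most frequent first.
  return Object.entries(counts).sort((a, b) => b[1] - a[1]);
}

// Invented example data for one design comp.
const designA = [
  ["clean", "trustworthy", "dated"],
  ["clean", "busy"],
  ["trustworthy", "clean"],
];

console.log(tallyCards(designA));
// [["clean", 3], ["trustworthy", 2], ["dated", 1], ["busy", 1]]
```

In practice the same tally would be run per design option, and the top words compared across options alongside the qualitative follow-up interviews.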
Template for the improved Value Proposition Canvas. This version focuses on customer wants, needs and fears and on features, benefits and user experiences.
A talk Marc gave at the UI20 conference in Boston, November 3, 2015.
Smaply: www.smaply.com
ExperienceFellow: www.experiencefellow.com
This is Service Design Thinking: www.thisisservicedesignthinking.com
This is Service Design Doing: www.thisisservicedesigndoing.com
Content:
1. The typology of journey maps
2. Customer experience research
3. Prototyping services
4. Service design and start-ups
Product Managers are the visionaries who both identify solutions and innovate for the next big thing. But how does one jump from “I have an idea” to “go live”? There’s a lot in between.
This deck was created for a Hearst-wide division workshop that put teams in real-world scenarios and helped them break their ideas down into actionable next steps by borrowing agile methodologies.
Epoca presented at Service Design Drinks Milan #3 on how to use the customer journey map tool in B2B projects, showcasing a case study they have been working on in recent years.
Welcome to our YouTube video on "Figma Variables & Design Tokens: Elevate Your Designs and Collaboration!"
In this comprehensive guide, we'll take you on a journey through the powerful world of Figma variables and design tokens. Whether you're a seasoned designer or just starting your design journey, understanding and harnessing these tools will revolutionize the way you create and collaborate on design projects.
Say goodbye to repetitive design tasks and hello to efficiency! Figma variables allow you to create reusable components and styles, ensuring consistent and pixel-perfect designs throughout your projects. We'll show you how to set up and use variables effectively, so you can focus on what you do best – designing stunning user interfaces.
Content of this video:
1. What is a design token?
2. The importance of tokens
3. Types of tokens
4. How tokens work
5. What can be a token?
6. How to write a design token
7. Token Studio
8. How Token Studio works
9. Figma variables
10. How Figma variables work
11. Styles vs. variables
12. Is a variable the same as a token?
13. Token Studio vs. Figma variables
14. Should I use variables in my design?
15. Some tips and cautions
16. Plugins
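Design tokens are commonly expressed as structured name–value data, with aliases so semantic tokens can point at base tokens. As a minimal sketch (the token names and values here are invented; real projects often follow the W3C Design Tokens draft format or Token Studio's JSON layout), alias resolution might look like:

```javascript
// Minimal sketch of design tokens with alias resolution.
// Token names and values are invented for illustration.
const tokens = {
  "color.blue.500": "#1a73e8",
  "color.action.primary": "{color.blue.500}", // alias to a base token
  "space.md": "16px",
};

// Resolve "{...}" aliases down to their underlying literal values.
function resolve(name) {
  let value = tokens[name];
  while (typeof value === "string" && value.startsWith("{")) {
    value = tokens[value.slice(1, -1)];
  }
  return value;
}

console.log(resolve("color.action.primary")); // "#1a73e8"
```

The point of the indirection is the one the video makes about consistency: redefining `color.blue.500` in one place updates every semantic token that aliases it, which is essentially what Figma variables and Token Studio manage for you.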
As organizations continue to establish and mature their in-house design teams, it turns out there’s very little common wisdom on what makes for a successful design organization. Books and presentations tend to focus on process, methods, tools, and outcomes, leaving a gap of knowledge when it comes to organizational and operational matters.
In this talk, Kristin Skinner discusses how to coordinate efforts and structure teams within large organizations. She covers:
- Realizing the Potential of Design
- Organizational Models / The Centralized Partnership
- The 5 Stages of Design Organizations
- The 12 Qualities of Effective Design Organizations
She also stresses the impact that design can have on business and highlights the importance of design managers in coordinating in-house efforts, advocating for quality, and enabling culture.
More information can be found in Kristin's book with Peter Merholz, Org Design for Design Orgs: Building and Managing In-House Design Teams, published by O'Reilly in August 2016.
http://orgdesignfordesignorgs.com/
Product Management 101: #1 How To Create Products Customers Love - Jean-Yves SIMON
An introduction to Product Management, for people involved in technology or software companies. Mainly aimed at evangelizing the role and responsibilities across an organization.
This is the #1 presentation out of a series of 10 sessions.
Special thanks to Marty Cagan @ SVPG for the title :)
A design system can vastly improve your team's productivity, but most of all, it leads to better products! The challenge lies in creating a mature system and leading its adoption across the company successfully. Let's talk about how we learned to meet the needs of different designers and developers on different products, on different tech stacks, on different platforms. Attendees will go home with tips they can use to improve design systems of any stage.
Building a Design System: A Practitioner's Case Study - uxpin
- How to build a design system from scratch
- How to audit your product for design consistency
- How to structure and communicate a design system to an Agile team
User Experience and Product Management: Two Peas in the Same Pod? - Jeff Lash
What is the difference between User Experience and Product Management? Where do you draw the line between the two? How can UXers work better with Product Managers? How can a UXer transition into product management? All these questions and more, answered in this presentation by Jeff Lash for the 2011 St. Louis User Experience conference on Feb 25, 2011.
A fast-forward tour about Design Thinking by webkeyz.
How does design thinking differ from scientific thinking? Why use it? When should you use it? And how can design thinking impact your life?
Product Management 101: Techniques for Success - Matterport
This is a snapshot from a living document. To see the current document, please go to https://goo.gl/yFFrml.
Topics covered include:
- Resources
- General Overview
- The Role of Product Management
- Characteristics of Great Project and Product Managers
- Problem Space and Solution Space
- Customer Personas
- User Stories
- Product Documentation
- Agile Product Development
- Succeeding with Agile from The Lean Playbook
- Analytics, Customer Engagement, & Monetization
- Pricing Strategies
- Overall Leadership and Organizational Development
- Final Guidelines and Recommendations
A presentation from my keynote at the Idean UX Summit 11 in San Francisco. It shares IBM's journey to drive delightful experiences at scale across its products and offerings, detailing IBM's investment in design thinking and user experience (UX) in terms of talent, design studios, and best practices, and it includes a preview of the IBM Design Language.
Artificial Intelligence in Brand & Illustration - Martin Hansen
A brief, practical look at our early exploration of AI.
A presentation about how we have explored using AI in our visual design work. The slides present overall thoughts about AI and highlight two cases from our early experiences using AI in design and brand work.
The slides were made by Martin Hansen (linkedin.com/in/designbymash) and Jakob Kahlen (https://www.linkedin.com/in/kahlen/).
Jeff Belden MD and Janey Barnes PhD co-presented at the HIMSS Virtual Conference in June 2010. The audio recording is available online to HIMSS members.
How to Perform Experiments: Basic Concepts (CSCI) .docx - drennanmicah
How to Perform Experiments: Basic Concepts
CSCI 783: Empirical Software Engineering
Empirical Software Engineering: How to use empirical research in software engineering?
Replication of empirical studies is necessary!
Definition
Planning and Design
Execution
Analysis
Packaging
Definition: Determine study goal(s) and research hypothesis(es).
Planning and Design: Select the type of empirical study to be employed; operationalize study goals and hypotheses; make a study plan (what needs to be done, by whom, and when); prepare the material required to conduct the study.
Execution: Run the study according to plan and collect the required data.
Analysis: Analyze the collected data to answer the operationalized study goals and hypotheses.
Packaging: Report your studies.
Empiricism in Software Engineering
- Confirmation of more-or-less accepted hypotheses. For example: object-orientation is good for reuse.
- Evaluation of methods. For example: whether Java produces higher-quality code than C++.
- Identification of relationships. For example: find a relationship between fault-prone components and design concepts.
- Validation of models and measures. For example: validate a specific cost estimation model.
- Understanding of methods, techniques, and models. For example: to understand the relationship between inspections and testing.
- Guidance / Control to help in management. For example: as input to assigning personnel to software inspections.
- Change / Improve to support decision-making with respect to changes. For example: whether or not to introduce a new development tool.
Experimentation in software engineering
Experiment Objective
(Diagram: the experiment principle.) At the theory level, a cause construct is linked to an effect construct through a cause-effect construct. At the level of experiment operation, a treatment is linked to an outcome through a treatment-outcome construct, established by observation. The treatment corresponds to the independent variable; the outcome corresponds to the dependent variable.
What is Empirical Software Engineering Research?
What kinds of questions are "interesting"?
What kinds of results help to answer these questions, and what research methods can produce these results?
What kinds of evidence can demonstrate the validity of a result, and how to distinguish good results from bad ones?
Types of Research Questions
What kinds of questions are "interesting"?
- Method or means of development: How can we do/create (or automate doing) X? What is a better way to do/create X?
- Method for analysis: How can I evaluate the quality/correctness of X? How do I choose between X and Y?
- Design, evaluation, or analysis of a particular instance: What is a (better) design or implementation for application X? What is property X of artifact/method Y? How does X compare to Y? What is the current state of X / practice of Y?
- Generalization or characterization: Given X, what will Y (necessarily) be? What, exactly, do we mean by X? What are the important characteristics of X? What is a good formal/empirical model for X? What are the varieties of X, and how are they related?
- Feasibility: Is it possible to accomplish X at all?
Introduction to Prototyping - Scottish UPA - June 2011 - Neil Allison
Presented to the Scottish Usability Professionals Association, Edinburgh, 22 June 2011.
Covering the basics, the benefits, some tools, some tips and a case study.
Every study starts with a question. This session at CSUN 2014 started by examining the questions that usability testing can answer. Short case studies illustrate how the right technique will help us know not only what is happening but also why it’s happening. It's an overview of usability testing as a research method, and what you can (and can’t) learn from working with real people as they try to use a web site or other product.
Design thinking is a cooperative problem-solving framework based on observations and experiments. This is a short introduction to the methods and processes of design thinking, with practical examples of how it's used to create products, services, and on-screen experiences.
Ch 6 only: 1. Distinguish between a purpose statement, research p... - MaximaSheffield592
Ch 6 only
1. Distinguish between a purpose statement, research problem, and research questions.
2. What are major ideas that should be included in a qualitative purpose statement?
3. What are the major components of a quantitative purpose statement?
4. What are the major components of a mixed methods purpose statement?
Requirements Engineering (20 points)
In Chapter 4 of Software Engineering (Sommerville, Pearson, 2016, 10th edition), Sommerville discusses ethnography as a method for eliciting requirements.
1. Discuss two advantages and two disadvantages of an ethnographic approach. (5 points)
2. Suggest two contexts where ethnography might be a challenging method of requirements engineering. For each context, how would you recommend that your team elicit requirements? (15 points)
Design (20 points)
Design patterns (5 points)
Which of the following statements is (are) true? Explain.
1. StudentsDatabase is the model, StudentsManager is the controller, and WebApplication is the view.
2. StudentsDatabase is the model, StudentsManager is the view, and WebApplication is the controller.
3. StudentsManager is the model, StudentsDatabase is the view, and StudentsManager is the controller.
4. This is not MVC, because StudentsManager must use a listener to be notified when the database changes.
(Credit: EPFL)
Design task (15 points)
Suppose you are asked to design a time management and notetaking system to support (1) scheduling meetings; and (2) tracking the documents associated with those meetings (e.g. agendas, presentations, meeting minutes). (Footnote: Such a feature seems like an inevitable development in any messaging platform…) The system should accommodate
Make reasonable assumptions as needed.
1. Create a use case for “Schedule meeting”. You might follow the style in Sommerville Figure 7.3. (5 points)
2. Identify the objects in your system. Represent them using a structural diagram showing the associations between objects (“Class diagram” – cf. Sommerville Figure 5.9). (5 points)
3. Draw a sequence diagram showing the interactions between objects when a group of people are arranging a meeting (cf. Sommerville Figure 5.15). (5 points)
1. Implementation (20 points)
Consider the software package is-positive (https://www.npmjs.com/package/is-positive). Examine its source code (see index.js) and its test suite (see test.js), then complete these questions.
1. Describe the API surface of this package. (2 points)
2. Describe how you would test this package. Describe how and why your approach would change if you maintained a similar package in a different programming language of your choice. (2 points)
3. According to npmjs.com, this package receives over 16,000 downloads each month.
a. Why might an engineer choose to use this package? (4 points)
b. Why might an engineer choose not to use this package? (You may find insights from the chapter ab ...
Denver Startup Week 2019: Choosing a Direction: Learning How to Test Ideas and... - BrittanyRubinstein
As part of Denver's 2019 Startup Week, Crownpeak's Director of UX, Ari Weissman and Lys Maitland, Experience Research Manager at a national healthcare organization, presented a joint session on "Choosing a direction: Learning how to test ideas and designs."
Can AI do good? at 'offtheCanvas' India HCI prelude - Alan Dix
Invited talk at 'offtheCanvas' IndiaHCI prelude, 29th June 2024.
https://www.alandix.com/academic/talks/offtheCanvas-IndiaHCI2024/
The world is being changed fundamentally by AI and we are constantly faced with newspaper headlines about its harmful effects. However, there is also the potential to both ameliorate theses harms and use the new abilities of AI to transform society for the good. Can you make the difference?
Transforming Brand Perception and Boosting Profitabilityaaryangarg12
In today's digital era, the dynamics of brand perception, consumer behavior, and profitability have been profoundly reshaped by the synergy of branding, social media, and website design. This research paper investigates the transformative power of these elements in influencing how individuals perceive brands and products and how this transformation can be harnessed to drive sales and profitability for businesses.
Through an exploration of brand psychology and consumer behavior, this study sheds light on the intricate ways in which effective branding strategies, strategic social media engagement, and user-centric website design contribute to altering consumers' perceptions. We delve into the principles that underlie successful brand transformations, examining how visual identity, messaging, and storytelling can captivate and resonate with target audiences.
Methodologically, this research employs a comprehensive approach, combining qualitative and quantitative analyses. Real-world case studies illustrate the impact of branding, social media campaigns, and website redesigns on consumer perception, sales figures, and profitability. We assess the various metrics, including brand awareness, customer engagement, conversion rates, and revenue growth, to measure the effectiveness of these strategies.
The results underscore the pivotal role of cohesive branding, social media influence, and website usability in shaping positive brand perceptions, influencing consumer decisions, and ultimately bolstering sales and profitability. This paper provides actionable insights and strategic recommendations for businesses seeking to leverage branding, social media, and website design as potent tools to enhance their market position and financial success.
Dive into the innovative world of smart garages with our insightful presentation, "Exploring the Future of Smart Garages." This comprehensive guide covers the latest advancements in garage technology, including automated systems, smart security features, energy efficiency solutions, and seamless integration with smart home ecosystems. Learn how these technologies are transforming traditional garages into high-tech, efficient spaces that enhance convenience, safety, and sustainability.
Ideal for homeowners, tech enthusiasts, and industry professionals, this presentation provides valuable insights into the trends, benefits, and future developments in smart garage technology. Stay ahead of the curve with our expert analysis and practical tips on implementing smart garage solutions.
Hello everyone! I am thrilled to present my latest portfolio on LinkedIn, marking the culmination of my architectural journey thus far. Over the span of five years, I've been fortunate to acquire a wealth of knowledge under the guidance of esteemed professors and industry mentors. From rigorous academic pursuits to practical engagements, each experience has contributed to my growth and refinement as an architecture student. This portfolio not only showcases my projects but also underscores my attention to detail and to innovative architecture as a profession.
Book Formatting: Quality Control Checks for DesignersConfidence Ago
This presentation was made to help designers who work in publishing houses or format books for printing ensure quality.
Quality control is vital to every industry. This is why every department in a company need create a method they use in ensuring quality. This, perhaps, will not only improve the quality of products and bring errors to the barest minimum, but take it to a near perfect finish.
It is beyond a moot point that a good book will somewhat be judged by its cover, but the content of the book remains king. No matter how beautiful the cover, if the quality of writing or presentation is off, that will be a reason for readers not to come back to the book or recommend it.
So, this presentation points designers to some important things that may be missed by an editor that they could eventually discover and call the attention of the editor.
Between Filth and Fortune- Urban Cattle Foraging Realities by Devi S Nair, An...Mansi Shah
This study examines cattle rearing in urban and rural settings, focusing on milk production and consumption. By exploring a case in Ahmedabad, it highlights the challenges and processes in dairy farming across different environments, emphasising the need for sustainable practices and the essential role of milk in daily consumption.
Preference and Desirability Testing: Measuring Emotional Response to Guide Design
1. Preference and Desirability Testing: Measuring Emotional Response to Guide Design Michael Hawley Chief Design Officer, Mad*Pow @hawleymichael Paul Doncaster Senior User Experience Designer, Thomson Reuters
2. Agenda Why we should care · Why it’s not always as simple as asking “Which option do you prefer?” · Methods to consider · Case Study: Greenwich Hospital · Case Study: WestlawNext · Summary/Comparison
12. Poor articulation “It reminds me of…” “It’s nice and clean.” “There’s just something about it . . .” “I ordinarily don’t like red, but for some reason it works here . . .” “It’s better than the other ones.”
13. What Stakeholders Should Care About “We should go with design C over A and B, because I feel it evokes the right kind of emotional response in our audience that is closer to our most important brand attributes.”
15. Triading Present three different concepts or ideas to participants, and ask them to identify how two of them are different from the third and why.
29. Product Reaction Cards: Before You Begin Determine intended brand attributes (and their opposites). Leverage existing marketing/brand materials; alternatively, hold a stakeholder brainstorm to identify key brand attributes/descriptors, using the full list of product reaction cards as a start. Tip: “If the brand was a person, how would it speak to your customers?”
30. Product Reaction Cards: Conducting Methodology: Include a 60/40 split of positive and negative words. Target 60 words, optimized to test the brand. Ask a simple question: “Which of the following words do you feel best describe the site/design/product (please select 5)?” Use one comp per participant, or multiple comps per participant (no more than 3). Participants: Qualitative – paired with usability testing. Quantitative – target a minimum of 30 per option if possible.
31. Process – Analyzing Calculate the percentage of positive and negative attributes per design. Visualize the overall sentiment of feedback using “word clouds” (see wordle.net). Example result: 68% positive, 32% negative.
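The tally step described on this slide can be sketched in a few lines of Python. This is a minimal illustration, not the deck’s actual tooling: the word lists and participant selections below are invented, and a real study would use the full product reaction card deck.

```python
# Sketch of the reaction-card analysis: each participant picks 5 words;
# we count word frequencies and compute the positive/negative split.
from collections import Counter

# Invented example word lists (a real study uses the full card deck).
POSITIVE = {"clean", "professional", "trustworthy", "friendly", "modern", "appealing"}
NEGATIVE = {"busy", "dated", "confusing", "drab", "impersonal", "cluttered"}

def score_selections(selections):
    """selections: list of per-participant word lists (5 words each).
    Returns (pct_positive, pct_negative, word_counts)."""
    counts = Counter(w for picks in selections for w in picks)
    pos = sum(n for w, n in counts.items() if w in POSITIVE)
    neg = sum(n for w, n in counts.items() if w in NEGATIVE)
    total = pos + neg
    return round(100 * pos / total), round(100 * neg / total), counts

# Two invented participants, 5 picks each.
participants = [
    ["clean", "modern", "trustworthy", "busy", "friendly"],
    ["clean", "drab", "professional", "confusing", "appealing"],
]
pos_pct, neg_pct, counts = score_selections(participants)
print(pos_pct, neg_pct)  # 70 30 for this toy sample
```

The `counts` Counter is exactly the input a word-cloud tool needs: word frequencies across all participants.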
38. Additional feedback obtained via participant interviews (qualitative). Survey questions: Hello, I am requesting feedback on a website I am working on. Your answers let me know if the site is conveying the right feel. 1. What are your initial reactions to the web site? 2. Which of the following words do you feel best describe the site (select 5)?
54. Phase 1: Logistics & Execution Sessions were held in 4 cities over 5 days: Seattle, Denver, Memphis, and Minneapolis-St. Paul. 4 sessions were held per day, with a maximum of 25 participants per session. 1.5 hours were allotted per study; most participants finished in less than 1 hour. 319 participants successfully completed their sessions.
55. Phase 1: Logistics & Execution Participants completed the study at individual workstations at their own pace. All workstations included a 20” monitor at 1024x768 resolution. (Memphis, TN, May 2009)
56. Positive/negative product descriptors Brief review of Westlaw critical screens; positive/negative word selection to describe Westlaw.
57. Homepage: Design Elements Each set of Element variations was viewed in full screen. The participant selects a “top choice” by dragging a thumbnail image to a drop area.
59. Homepage: Design Elements (1) All options viewed in full screen. The participant selects a “top choice” by dragging a thumbnail image to a drop area.
62. Homepage: Design Elements Visual Weight (6 options) · Use of Imagery (8 options) · Components (4 options) · Search Area (4 options) · Palette (10 options)
63. Homepage: Design Gallery All 19 HP designs viewed in full screen (randomized). All 19 options are then presented again, and the participant assigns a rating using a 10-point slider. Top 5 and Bottom 2 choices are positioned in order of rating values on one long, scrollable page. Next to each design displayed, the participant rates key aspects of the design on a 5-point scale.
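The aggregation behind the Design Gallery step above can be sketched simply: collect each participant’s 10-point slider ratings per design, sort by mean rating, and take the top 5 and bottom 2. A minimal sketch, with invented design IDs and ratings:

```python
# Sketch of the gallery-rating aggregation: {design_id: [1-10 scores]}
# sorted by mean rating to find the Top 5 and Bottom 2 designs.
from statistics import mean

def rank_designs(ratings):
    """Return design IDs sorted by mean rating, highest first."""
    return sorted(ratings, key=lambda d: mean(ratings[d]), reverse=True)

# Invented ratings for a handful of the 19 homepage designs.
ratings = {
    "HP15": [9, 8, 9],
    "HP16": [9, 9, 10],
    "HP03": [4, 5, 3],
    "HP07": [7, 6, 8],
    "HP11": [2, 3, 2],
}
order = rank_designs(ratings)
top5, bottom2 = order[:5], order[-2:]
print(top5, bottom2)
```

With per-design rating lists in hand, the same structure also supports the 5-point key-aspect ratings: just keep one dictionary per aspect.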
101. Phase 1: High-level Results Home Page (19): HP16 & HP15 designs consistently placed in the Top 5 across all filters. Results List (14): RL4 consistently placed in the Top 3 across all sample filters and was the #1 choice for 80% of all participants. Document Display (9): DD3 placed in the Top 5 across all sample filters and was the #1 choice for 77% of all participants.
102. Phase 1: Word Selection Results Note: participants were asked to describe the current Westlaw before being shown the new designs.
107. Phase 1: High-level Results “No Big Fonts, Please” The study narrowed the list of potential designs, and we better understood which design elements Westlaw users liked and disliked.
108. Phase 2: September 2009 (Kansas City, MO, Sept 2009)
111. Get closure on other design options for online and printed content
113. Phase 2: September 2009 Method: View, rate, and pick a top choice, with a “Why?”, for Homepage (3 options), Result List (2 options), and Document Display (2 options). Simple preference selection for two unresolved UI design issues: Citing References – grid display or list display? Out-of-Plan Indication design (6 options). Type formatting preferences for 3 different content types: font face, font size, margin width.
114. Phase 2: September 2009 Logistics: 3 cities (Philadelphia, Kansas City, Los Angeles), 1 day, 226 participants. Analysis: Filters (8 categories) were used to score the designs for each visual preference. Results: Clear choices for top designs in each of the categories; “why” feedback shed new light on designs under consideration and helped focus “homestretch” design activities.
115. Phase 2: High-level Results Home Page (3): HP3 ranked #1 in 94% of filter groups (54% of total participants). Results List (2): RL5 ranked #1 in 97% of filter groups (58% of total participants). Document Display (2): DD7 ranked #1 in 94% of filter groups (61% of total participants).
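The “ranked #1 in N% of filter groups” figures above come from finding the winning design within each filter group, then reporting the share of groups each design won. A hedged sketch of that scoring step, assuming simple per-group vote counts (the group names and numbers below are invented, not the study’s data):

```python
# Sketch of filter-group scoring: within each filter group, the design
# with the most votes wins; report the share of groups a design won.

def winners_by_group(votes_by_group):
    """votes_by_group: {group: {design: vote_count}} -> {group: winning design}"""
    return {g: max(v, key=v.get) for g, v in votes_by_group.items()}

def share_of_groups_won(votes_by_group, design):
    """Percentage of filter groups in which `design` ranked #1."""
    wins = winners_by_group(votes_by_group)
    return 100 * sum(1 for w in wins.values() if w == design) / len(wins)

# Invented filter groups and vote counts.
votes = {
    "litigators": {"HP3": 40, "HP1": 12},
    "associates": {"HP3": 31, "HP1": 30},
    "librarians": {"HP1": 25, "HP3": 20},
    "solo":       {"HP3": 18, "HP1": 5},
}
print(share_of_groups_won(votes, "HP3"))  # 75.0 for this toy sample
```

Note how the two denominators differ: a design can win a large share of filter groups while holding a much smaller share of total participants, which is why the slide reports both.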
116. Participant Comments: Homepage The main concerns regarding Homepage Design HP3: Search Box – too small; how do I do a Terms-and-Connectors search? Browse Section – how do I specify multiple or specific search content? Poor organization; poor label; need access to “often-used” content; need better access to help.
117. Phase 3: December 2009 Goals: Get feedback on branding options from decision makers and those who influence purchase of the product; get closure on final outstanding design issues. Tool: Same as in Rounds 1 & 2, with some minor revisions to accommodate specialized input.
118. Phase 3: December 2009 Method: Wordmark/Branding – view wordmark color combinations and design elements against different backgrounds, pick a top choice, and provide comments; then make a final “Top Choice” from all selections. Simple preference selection for outstanding UI design issues: Header Space – tile or no tile? Notes Design – location: inline or column? State: open or closed? Headnote Icon design (4 variations).
119. What color combination do you prefer? Please rank the 4 combinations below according to your preferences. To rank, click and drag an item from the left to a box on the right, from 1 (your most liked) to 4 (your least liked).
120. Phase 3: December 2009 Logistics: 3 cities (Seattle, Denver, Boston), 1 day, 214 participants. Analysis: Simple preference, no advanced filters. Results: Decision-makers confirmed that critical brand elements should be retained.
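The deck doesn’t say how the 1-to-4 rankings from the color-combination question were scored. One common way to aggregate such rankings is a Borda count, where rank 1 earns 3 points, rank 2 earns 2, and so on. This is an assumed method for illustration only, with invented color options:

```python
# Borda-count sketch (assumed scoring method, not stated in the deck):
# each participant's ranking awards (n_options - 1 - rank) points.
from collections import defaultdict

def borda(rankings, n_options=4):
    """rankings: list of option lists, most-liked first. Returns {option: score}."""
    scores = defaultdict(int)
    for ranking in rankings:
        for rank, option in enumerate(ranking):
            scores[option] += (n_options - 1) - rank
    return dict(scores)

# Three invented participants ranking four invented combinations.
rankings = [
    ["blue", "orange", "gray", "white"],
    ["blue", "gray", "orange", "white"],
    ["orange", "blue", "gray", "white"],
]
print(borda(rankings))  # blue scores highest in this toy sample
```

Simple first-place counting (as the “Simple preference, no advanced filters” analysis suggests) is an even lighter alternative; a Borda count matters only when you want second and third choices to influence the outcome.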
140. Summary/Comparison Both groups valued support in design decision making. Align methodology with the needs of the project. Research-inspired, not research-decided.
142. Additional Reading
Benedek, Joey, and Trish Miner. “Measuring Desirability: New Methods for Evaluating Desirability in a Usability Lab Setting.” Proceedings of the UPA 2002 Conference, Orlando, FL, July 8–12, 2002. http://www.microsoft.com/usability/uepostings/desirabilitytoolkit.doc
Lindgaard, Gitte, Gary Fernandes, Cathy Dudek, and J. Brown. “Attention Web Designers: You Have 50 Milliseconds to Make a Good First Impression!” Behaviour and Information Technology, 2006. http://www.imagescape.com/library/whitepapers/first-impression.pdf
Rohrer, Christian. “Desirability Studies: Measuring Aesthetic Response to Visual Designs.” xdStrategy.com, October 28, 2008. Retrieved February 10, 2010. http://www.xdstrategy.com/2008/10/28/desirability_studies
143. Additional Reading
User Focus. “Measuring Satisfaction: Beyond the Usability Questionnaire.” Retrieved February 10, 2010. http://www.userfocus.co.uk/articles/satisfaction.html
UserEffect. “Guide to Low-Cost Usability Tools.” Retrieved May 12, 2010. http://www.usereffect.com/topic/guide-to-low-cost-usability-tools
Tullis, Thomas, and Jacqueline Stetson. “A Comparison of Questionnaires for Assessing Website Usability.” Usability Professionals’ Association Conference, 2004. home.comcast.net/~tomtullis/publications/UPA2004TullisStetson.pdf
Westerman, S. J., E. Sutherland, L. Robinson, H. Powell, and G. Tuck. “A Multi-method Approach to the Assessment of Web Page Designs.” Proceedings of the 2nd International Conference on Affective Computing and Intelligent Interaction, 2007. http://portal.acm.org/citation.cfm?id=1422200
Impacts a product’s or application’s perceived: utility, usability, and credibility.
If users have a positive impression of the design aesthetics, they are more likely to overlook or forgive poor usability or limited functionality. With a negative impression, users are more likely to find fault with an interaction, even if a product’s overall usability is good and the product offers real value.
High desirability feeds into the motivational factors that help trigger target behavior.
The simplicity of the question doesn’t work well with larger numbers of design options, especially if some are highly similar
People can have a difficult time articulating what it is about a design they like or dislike
The whys are important for stakeholder acceptance (branding guidelines). Business sponsors and stakeholders often want substantial customer feedback to assure them a given direction is correct.
Triading: a qualitative interview technique that reveals constructs and elicits attributes that are important to users, in their own vocabulary. The researcher asks the participant to identify how two of three examples are different from the third.
In typical user research interviews, a researcher asks participants about their thoughts on a defined list of topics. The disadvantage of this approach is that the researcher may be inquiring about topics that are of little value or significance to the experience of the participants. Generally, participants will dutifully answer questions about any topics we ask them about, without thinking more broadly, going beyond the limits our questions impose, or interrupting us to tell us about dimensions that may be more relevant to them. Participants assume researchers are interested in studying the particular topics they’ve included in their interview scripts and don’t raise other issues that might be more pertinent to their overall experience with a product or potential design.
Triading is a method that allows a researcher to uncover dimensions of a design space that are pertinent to its target audience. In triading, researchers present three different concepts or ideas to participants and ask them to identify how two of them are different from the third. Participants describe, in their own terms, the dimensions or attributes that differentiate the concepts. Participants follow this process iteratively, identifying additional attributes they feel distinguish two of the concepts from the third, until they can’t think of any other distinguishing factors. By repeating this process across multiple participants, researchers can see trends that define audience segments or personas.
The benefit of this process is that it uncovers dimensions of a particular domain that are important to the target audience rather than the researcher or designer.
In addition, the dimensions participants identify are generally emotional aspects that are important for experience designers to consider. For example, participants may describe differences between groups as “warm” versus “cold” or “business-like” versus “fun.” Designers can then use the most relevant or common dimensions as inspiration for further design and exploration.
Benefits: straightforward and easy to administer on a large scale. Drawbacks: if you want not just a clear winner but an understanding of the emotional connections and reactions to each design, this method won’t lend itself to that.
Obvious examples are consumer electronics or other retail products. Also appropriate for applications in healthcare, insurance, financial services, travel, etc.
Sensors track participants’ physiological responses to particular designs; changes in these measurements suggest a particular emotional response. Paired with attitudinal and self-reported survey measurements, they give a multifaceted view of emotional reactions to a design.
Respondents are asked: “To what extent do the feelings expressed by the characters correspond with your own feelings towards the stimulus?” Building on the responses of many people allows you to abstract valuable data pertaining to the emotional performance of your website, product, or service.
“My initial reaction to this web site is that it seems kind of plain. There is not much going on in the page, and the colors seem kind of drab.” “This is a nice looking website. It is well designed, well laid out, and is appealing to look at. It makes me want to continue to navigate the site to learn more.”
“Men don’t really go with children… where there’s a baby, there must be a mother.” “My initial reaction to the website is that it seems very clean and modern. I like the layout; it looks like it’s easy to find information.”
“I felt love. I saw a mother holding a child… that’s pretty touchy. The site looks good, and it makes the hospital trustworthy.” “My initial reaction was that the hospital is represented by a caring, warm and friendly website.”
As you’re about to see, my story is just a LITTLE bit different than Mike’s . . . I joined the legal business unit of TR in 2007, just as it was about to embark on a 3-year, $90 million journey to produce a next-generation subscription-based legal research tool. Please note the catchphrase in the center graphic – Legal Research Goes Human. This was at the core of what the executive leadership was trying to achieve. Which was no easy endeavor – first, because legal research is not a pleasurable activity under any circumstances, and second, because the legacy Westlaw product, which had long dominated the market, looked like this . . .
This is one tab out of more than 100 that the user could have access to. Some were worse than this. Most of the selling points of the new system were going to be feature-based – a completely new and proprietary search algorithm, robust filters, the ability to create and share folders, collaborative tools, etc. However, the top exec, to his great credit, said he wanted to create something users would love to use – he used to say “I want them to snuggle up in bed with it at night.” And he committed to significant preference testing of designs toward dual purposes of equal significance: guiding the design itself, and having rock-solid justification for the senior executives for what we were doing. To Mike’s earlier point, that meant quantitative data with lots of users.
Initial goals were . . . among them, to measure how strong the brand was and whether people cared if it was messed with. Critical – this affected our approach, as you’ll see.
In May of 2009, in parallel with the feature build-outs . . .
This was done for security reasons (phones put away, constant monitoring). I told you our approach was different . . . Try to stay with me.
This was to establish that users liked and were loyal to the product, but hated the design.
Presented page designs with instructions to focus on a specific element in isolation – like the use of a photo/image in the product header, or the size of the search area – to see if there were any trends in those areas. Here’s how they did it:
Look at all pages in full screen (randomized)
From those they reviewed, select a “top choice.”
Either change the one you selected, or move on to the next Design Element.
Looked at, and selected from, 32 screens
In the Design Gallery phase, users were asked to register their preferences for optimized page designs in their totality.
Instructions: you’re going to see them again and give an overall rating on a 10-point scale.
Top 5 choices are presented, with a way to rate key aspects to get granular on what they like
As well as their bottom 2 – what they don’t like
We had them do the same for Results Lists (smaller numbers of design elements and design gallery, but the same way to rate their top 5 and bottom 2)
And document display as well
Revisit the descriptors to establish that the perception of a new design for the product was positive, and they were done (most in less than 1 hour). However, we did solicit some who could stay longer to participate in a brief interview, which was taped (extra $$ to those who did).
Here’s one example
Out of all this, we got 3 core buckets of emotional response information that the VP could report up . . . 1. A baseline for design refinement was established based on these clear winners.
2. That we were getting the desired perceptual responses on the differences between the products
Post-session discussions elicited the qualitative feedback we needed to provide that color that Mike talked about earlier. Please take particular note of the 4th one there – users did not want the brand messed with. We’re going to come back to that later. OK, the design team takes the summer, refines, refines, refines, gets to a certain point, and the VP says . . .
Let’s do it again
Because of the refinements, we were able to piggyback some outstanding UI issues onto this test. Security was important now more than ever – this can’t get out!!
Type formatting was the dynamic manipulation of case and statute documents on the screen – the topic of my submission to UPA 2010; happy to provide that info if you’re interested.
** DON’T VERBALIZE RESULTS ** (next slides)
Clear choices (in the 90% range) for each of our primary page types
Specific areas of concern about all three (significantly, the T&C search). In the home stretch, 3 months away from launch, we’re doing massive final validation testing for all of the core features I mentioned at the top, when I get called into my director’s office. Everyone at the top was thrilled at what was coming, BUT – “we accept that we cannot mess with the legacy blue – however, we STRONGLY advocate that the branding of the product be aligned with the corporate template (the orange, gray, and white that you saw at the top), and if you resist you better have a damn good reason . . .” SO
Focus on those who decide to buy, or have influence on whether to buy
Results confirmed what we knew to be true from the outset
Achieved our primary goal (759 participants).
Measured, measured, measured (quant & qual) – 759 participants total, about 325k. Gut-level preferences, which are a by-product of emotional response. Guided the design via explicit trends visible to all stakeholders.
Evolution – changes are nuanced, but still critical
We sort of did in phase 1, but we didn’t ask “why” so much as just get quotes of enthusiasm for the fact that the product was being updated
As I was reviewing, I thought, “why would anyone ever consider doing anything like this?” – but if you do . . .
In order of ascending practicality
A lot of that comes from the top, but we as UX pros can help make the case. Every project is different, and as you saw in my case, you can take a little bit from a lot of different methods and come up with something that works in your specific circumstances. To re-emphasize: this should be used to guide and inspire the evolution of a design, then confirm decisions (if you’ve done it right).