P.GOWSIKRAJA M.E., (Ph.D.,)
Assistant Professor
Department of Computer Science and Design
UNIT V - Extended Reality (XR) Development
KONGU ENGINEERING COLLEGE (AUTONOMOUS)
DEPARTMENT OF COMPUTER SCIENCE AND DESIGN
20CDH01-HONOR DEGREE-IMMERSIVE DESIGN THEORY
UNIT V - Extended Reality (XR) Development
1. Augmented Typography:
Legibility and readability
Creating visual contrast
Take control
Design with purpose
2. Color for XR:
Color appearance models
Light interactions
Dynamic adaptation
Reflection
3. Sound Design:
Hearing what you see
Spatial sound
Augmented audio
Voice experiences
Power of sound.
Augmented Typography
This section explores how to optimize your type in augmented experiences.
Though many of the topics discussed also apply to virtual reality, the
emphasis is on best practices for AR, because AR provides less control
over the environment.
UNDERSTAND LEGIBILITY AND READABILITY
CREATE VISUAL CONTRAST
TAKE CONTROL
Legibility and readability
Evolution
Back to the basics
Legibility
KEEP IT SIMPLE
THINK BIG
CONSIDER X-HEIGHT
STAY SANS DETAIL
Type made for XR
ARone
Type is meant to be read
Readability
GIVE IT SPACE
ARone Halo
SAY MORE WITH LESS
MAKE A CASE
LIMIT LINE LENGTH
WEIGH IN
KEEP IT FLAT
Displays covered in type are all around us, from gas station signage to
airport information boards to mobile applications in the palm of our hands.
With each new screen and each new context, there are adjustments that
designers need to notice and make.
Type on displays. Presenting text that will appear on a variety of screens creates
design challenges.
Evolution: Throughout the evolution of screens, from CRT displays onward,
typefaces have been designed to improve the user experience.
Screens have grown bigger, brighter, and more lightweight.
Back to the basics:
Designing typography for ease of reading within XR involves similar
considerations to designing for a screen.
Typographical marks include:
● Letters
● Numbers
● Punctuation
● Dingbats/symbols
Legibility builds on a lesson learned from designing for screens: simple is
better. Typefaces made from simple shapes translate better onto
lower-resolution displays.
Legibility: How easily distinguishable one letter is from another within a
typeface.
KEEP IT SIMPLE. Typefaces created from simple shapes work better than
overly styled type.
Geometric Type. Look for letterforms
that are created with basic geometric
shapes, right angles, and horizontal
finishing strokes.
GEOMETRY
THINK BIG. In print you can have body
copy ranging from 8 to 12 points (in
print design we measure the size or
height of type using points). That is too
small for pixel-based type; 14 to 16
pixels or larger is the optimal size.
CONSIDER X-HEIGHT. The height of
the lowercase letters is called the x-
height. Not all typefaces have the same
x-height.
STAY SANS DETAIL. Many typefaces have been created specifically for
screen type, so these are good places to start.
Helvetica, Verdana, and Georgia are some classics,
but the list continues to grow thanks to the availability
of the Web Open Font Format and fonts being designed
for both print and web formats.
Type made for XR
Type is meant to be read:
Selecting a typeface that is legible to users means that they can easily
distinguish the characters from one another.
Readability: The spacing and arrangement of characters and words so that
the content flows together to aid reading.
Many of these remain connected to the foundations of typography, but just
need some optimization for XR. Keep these guidelines in mind:
GIVE IT SPACE:-
Increasing your overall tracking, the space between two or more characters,
will help with readability.
With many of these displays you often see a bit of a halo effect around the
text, so by tracking out your type you can avoid the halos overlapping the
letters themselves.
The ARone typeface demonstrates how unusual letterform shapes produce
better rendering results on AR headsets.
SAY MORE WITH LESS. Reducing the amount of copy, especially
paragraphs of type, is a better practice.
You can complement this with tool tips, explainer type, closed
captioning, and an audio track.
MAKE A CASE. Select a case that works for the content.
Uppercase is hard to read in large amounts, but can add hierarchy for
headers or shorter phrases you want to stand out.
LIMIT LINE LENGTH. To reduce eye strain, keep your lines of type to 50 to
60 characters per line. If a line is too long, we lose our place when our
eyes have to jump back to the beginning of the next line (one way to
enforce this is sketched below).
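As a rough illustration of the 50-to-60 character guideline, here is a minimal TypeScript sketch that wraps copy before it is rendered into an AR label; the wrapCopy name and 55-character default are hypothetical choices, not part of the source material.

```ts
// Sketch: wrap copy at roughly 55 characters per line, inside the
// 50-to-60 character range suggested above.
function wrapCopy(text: string, maxChars: number = 55): string[] {
  const lines: string[] = [];
  let line = "";
  for (const word of text.split(/\s+/)) {
    const candidate = line ? `${line} ${word}` : word;
    if (candidate.length > maxChars && line) {
      lines.push(line); // current line is full; start a new one
      line = word;
    } else {
      line = candidate;
    }
  }
  if (line) lines.push(line);
  return lines;
}
```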
WEIGH IN. Varying your weights of type is a great way to add hierarchy to
your designs and can help guide the user's eye through the page.
Just watch for the extreme weights:
Light and Extra Bold weights are much less legible than Regular, Medium,
or Bold weights.
KEEP IT FLAT. 2D type is easier to read than 3D type. Type that is
extruded and volumetric becomes much harder to read.
It makes sense if you consider we aren’t as used to reading type in 3D; most
of our reading is two dimensional.
Logotypes are an exception as they can work as a 3D element in an
experience.
Creating visual contrast
Viewing Distance: display type and text type
Spatial Zones
UI zone
Focal zone
Environmental zone
Different spatial zones
IMMERSIVE TYPE
UI TYPE
ANCHORED TYPE
RESPONSIVE TYPE
Creating visual contrast:-
Designers are used to considering
reading distance when designing.
A poster or billboard is expected to
be viewed from a further distance
than a brochure or postcard.
There are different design
considerations as a result of the
distance between the user and the
design element.
Viewing Distance. The viewing distance from text to our eyes changes
based on the medium.
Within an XR experience there may be type that is:
● Placed within the 3D space
● Static (such as any type that is part of the UI)
● Anchored within the environment
● Responsive
In print media, when choosing from the wide range of type options, you can
start by selecting from two main text types: display and text type.
(There may be other places where type is used, such as for a URL or caption
information, which will also be relatively small.)
Display type Type found in large
headings and titles; typically 16+ points.
Text type Type found in paragraphs and
meant for longer reading; typically 8 to 12
points and sometimes called body text.
Spatial Zones. Showing the three main
spatial zones in relationship to the user’s
display.
● UI zone
● Focal zone
● Environmental zone
UI ZONE The closest text to the user is within this space.
This type is anchored to the camera position on a mobile device or HMD,
making this information constant in placement and view.
FOCAL ZONE. The next zone, moving farther away from the user, is the focal
zone. This is the optimal placement for the main parts of the experience,
and the ideal reading distance for any essential type. This space is within
3 to 16 feet of the user.
ENVIRONMENTAL ZONE The space that reaches farther beyond this scope is
the environmental zone. It can be used for positioning, landmarks, and to add
any additional environmental context within the experience.
Because this is farther away from the user, it is intended to provide directional
cues for the user, showing them places that they can explore within the
experience, or to provide helpful context to what they are experiencing up close.
Keep your type in the center zone of what you are designing to avoid the pixels
blurring at the edge of your peripheral sight. Here are the optimal degrees
to remember:
Field of view: 94°
Head turn limit: 154°
Maximum viewing at one time: 204°
With a 3D experience, there are important type design considerations for each
kind of type relative to the different spatial zones.
Immersive type
UI type
Anchored type
Responsive type
IMMERSIVE TYPE This type needs to act like a 3D object, but will most
likely be a flat 2D element (for readability).
● This type is integrated into the 3D environment. As such, it should
match the perspective of the planes where it is placed.
● If you want the type to feel integrated into a space, then it needs to look
believable by following the same perspective.
● This dynamic type will rely on spatial computing to map out the space
in advance of the experience, or on the user selecting a vertical or
horizontal plane where the type will be placed.
UI TYPE This type remains static in the experience.
This should be 2D and remain in one place on the screen, such as the
navigation bar or on the top and bottom of the screen.
This text is critical for the user experience and often provides identifying
information, such as the name of the app or experience.
The type can serve as a menu allowing the user to see what other options
are available at any given point.
UI type must be easy to find, easy to see, and easy to use, because it plays
an essential role in the approachability of the experience for a user.
ANCHORED TYPE This type is connected to a
specific plane or object within the environment.
As the user moves around the environment, the type
will remain in the same spot as the object to which it
is anchored.
Anchored type stays pinned to one specific location
or object to identify it, like the business labels in
this AR navigation app prototype.
For example, in a navigation experience, tags
pinned to the surrounding businesses and
landmarks help the user identify them.
These visual tags are anchored to the physical
location.
So, as the user explores, they will always see the
correct name for each building.
RESPONSIVE TYPE. Just as websites have to create responsive layouts and
size ratios for desktop displays, tablets, and mobile devices, the same
concept applies in XR environments.
Currently, type in HMDs uses pixel or bitmap type instead of vector or
outline type, which would allow it to be scalable.
With the dynamic needs of the content and type used in an augmented
environment, the design can be seen from far away and also super close,
even inside it and all around it.
This means that the type needs to be crisp and clear in both near and far
viewing distances.
Just as in CSS we use the em unit of measurement to scale the type in
relation to the width of the screen, there is a benefit for a similar system
within AR.
Based on user movement and the viewing angle, this approach allows type to
automatically adjust for optimal readability.
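As a minimal sketch of how such an em-like system could work, assuming a three.js scene (the scaleForDistance name and sizeAtOneMeter tuning value are invented for illustration):

```ts
import * as THREE from "three";

// Sketch: scale a text label with its distance from the user so its
// apparent size stays readable near and far, much like an em-based unit.
const labelPos = new THREE.Vector3();
const cameraPos = new THREE.Vector3();

function scaleForDistance(
  label: THREE.Object3D,
  camera: THREE.Camera,
  sizeAtOneMeter = 0.05 // hypothetical tuning value
): void {
  label.getWorldPosition(labelPos);
  camera.getWorldPosition(cameraPos);
  const distance = labelPos.distanceTo(cameraPos);
  label.scale.setScalar(sizeAtOneMeter * distance); // grow linearly with distance
}

// Call once per frame in the render loop:
// scaleForDistance(nameTag, camera);
```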
Black-on-white text is not as effective across all devices because you cannot
reproduce pure black in a transparent or see-through display, which is
used for many AR/MR experiences.
Without the pure black there may not be enough contrast between the type
and the background for readability.
Take control
Where type is used? Tags
How type is used
How to view type? perspective distortion.
Customization
Minimize
Design with purpose
DESIGN CHALLENGE
MAKE AN AUGMENTED EYE CHART
Take control
You are probably aware by this point that there are a lot of uncontrollable
components to working within augmented and mixed realities.
One of the most exciting aspects about these technologies is that you can
use them in varying environments and scenarios.
Where type is used
One way to achieve more control is to consistently place your type in the same
location within the experience, from the entry point until the end.
After the user sees type repeatedly show up in the same place multiple
times, they will start to look to that place for the information when they
need it.
This can apply to the UI type that helps users
figure out how to navigate through an
experience, but it can also relate to the
immersive type that is part of the 3D space.
For example, in tagAR the name tags always
appear above people’s heads.
After you see this happen two or three times,
you understand that is where the digital
augmented object appears, and then you will
look for it in that same location each time after
that.
Tags. An augmented name tag from the mobile application tagAR.
These tags always appear directly above each person’s head, making it easier
to see their name and make eye contact at the same time.
In a different example, cars with projected GPS directions appearing in the
road in front of them use this approach to take advantage of constants
within the driving experience.
How type is used
To allow the users to start associating a specific style with a specific
function, give a role to each of the type styles within the experience.
You can use headers to identify important information, for instance, and
body copy to provide tool tips or instructions within the experience.
It does take time to initially set up the styling for each of the needed styles,
such as:
Main header (h1)
Secondary header (h2)
Additional headers (max 6)
Body type (p)
Adding these categories of type to your experience will make it easier to
navigate and find content.
How to view type
Because people can move through and around an AR experience, a world of
possibilities opens for how they can view any given element, including type.
Unlike a 3D object, however, type needs to be viewed from the correct angle
and perspective for it to be readable.
The way to control this viewing angle of text in 3D space is to have it always
face the user.
The positioning and orientation are relative to the user and their gaze. This
added control ensures that people will view the type without any
perspective distortion.
When users view text in 3D space from extreme angles, the type can get bent
and misshapen.
Perspective Distortion. As type gets
warped to fit into a 3D scene, the use of
extreme perspectives makes the type
more distorted, reducing the readability
of the message.
Perspective distortion A warping of the
appearance of an object or image often
caused by viewing it from an extreme
angle or how it is placed into a 3D scene.
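A common way to implement the "always face the user" control described above is billboarding. A minimal sketch, assuming three.js and a label that has no rotated parent:

```ts
import * as THREE from "three";

// Sketch: billboarding. Copy the camera's orientation onto the label each
// frame so the type stays parallel to the view plane, avoiding
// perspective distortion.
const viewQuaternion = new THREE.Quaternion();

function faceUser(label: THREE.Object3D, camera: THREE.Camera): void {
  camera.getWorldQuaternion(viewQuaternion);
  label.quaternion.copy(viewQuaternion);
}

// In the render loop, before rendering:
// faceUser(nameTag, camera);
```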
Customization
Knowing that people will each have a different experience based on their
physical location and environment, you can design for this.
Using your user research to identify the most common places people interact
with the experience, you can create different experiences for each.
When a user first launches the experience, they would have to provide
information about their physical environment.
Reduce the effort for users (and yourself) by providing a list they can choose
from; not only does this make providing the information easier for them, it
also is easier to design for.
Their answers, which could be as simple as selecting indoors or outdoors,
would activate different features or designs based on their choice.
Lighting is typically brighter outside than inside, for example, so you could
alter the design of your type, and other elements, according to their
selection.
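As a hypothetical sketch of this indoors/outdoors customization (all names and style values here are invented for illustration):

```ts
// Sketch: let the user pick an environment preset at launch and swap
// type styling to suit the expected lighting conditions.
type Environment = "indoors" | "outdoors";

interface TypeStyle {
  color: string;          // text color
  backingOpacity: number; // opacity of the shape placed behind the text
}

const typeStyles: Record<Environment, TypeStyle> = {
  indoors: { color: "#ffffff", backingOpacity: 0.4 },   // dimmer light, lighter backing
  outdoors: { color: "#ffffff", backingOpacity: 0.75 }, // bright light, stronger backing
};

function applyEnvironment(env: Environment): TypeStyle {
  return typeStyles[env]; // apply to the experience's text elements
}
```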
Minimize
What information is essential to be included in the copy?
It is important to look through all the wording you are including in an
experience and be as efficient as possible.
Reading large amounts of type in XR is not optimal, so you want to narrow
in on just what is needed and avoid anything that is not needed for the
experience itself.
You can also explore whether there are other ways to express the information
instead of in type form; type may not always be the best way to communicate
an idea or action quickly.
Using simple icons, arrows, illustrations, photographs, videos, or even a
combination of these like a data visualization or an infographic could help
eliminate the amount of type needed by communicating the same
information in a visual way.
For this to work well, you have to put in some work to narrow down what
type is needed and how best to use each word effectively.
When working with mobile AR especially, screen space is premium real
estate.
You want to reserve as much space as you can for people to see and interact
with the AR experience. Make use of user interactions or UI elements to
reveal more information.
Design with purpose
The key takeaway from this chapter is efficiency.
There are many challenges in displaying type in AR—everything from
working with lower resolution screens to choosing the best typeface to be
viewed up close and far away and everywhere in between.
These challenges reveal the need for efficiency of all the type you include
within an experience.
As you go through your full user journey, check to make sure the type holds
purpose everywhere you add it.
DESIGN CHALLENGE
MAKE AN AUGMENTED EYE CHART
The goal of this challenge is to help test your typographic design choices at
varying reading distances in augmented reality.
1. Based on some of the suggestions in this chapter, select three
typefaces and font weights that you think will be legible in AR.
2. Using Adobe Illustrator or Photoshop, design an eye chart with
different letters in each row. Use these letters in order and add line
breaks as shown in the figure.
EFPTOZLPEDPECFDEDFCZP
3. As you go down each line, reduce the point size as shown.
4. Save this file as a JPG.
5. Launch Adobe Dimension. From the basic shape library select a plane.
6. Using the widget tool on the plane, rotate the
plane on the z-axis (blue) to lift the plane up
vertically, as you would expect to see a traditional
eye chart. Then position the plane on the x-axis
(magenta) to lift it up off the ground.
7. Now you need to add your eye chart to the plane.
To do this, move to the right side of the screen and
select your Plane layer in the Scene panel. Click the
arrow on this layer to view your customization
properties. Find the Properties panel, and double-
click the base color. Toggle from selecting a color to
selecting an image. Here you can upload the JPG
you saved earlier.
8. Adjust the positioning of your plane as needed to make sure you can view
the letters correctly.
9. Now, the fun really begins. You are going to share this to Adobe Aero.
While still in Adobe Dimension, choose File > Export > Selected for Aero.
Choose Export from the pop-up window, and then save the file in your
Creative Cloud Files folder. This should be the default folder that comes up,
but if you don’t see it, you can find it in your user files.
10. Using a mobile device or iPad, launch the Adobe Aero application. When
it prompts you to choose an image, choose your eye chart from your Creative
Cloud files. Place this on a plane so you can start testing.
11. Make sure you are clear to
move around the image. View the
type up close, and step back away
from it. How is the readability
affected? Take notes.
12. Based on your findings, choose
a different typeface and repeat the
process to help identify which
typefaces to try out in your next
AR project.
2. Color for XR: Color appearance models
Color space
Additive
Subtractive
Linear versus gamma color space
Usability
●Legibility and readability
●Contrast
●Vibrancy
●Comfort
●Transparency
2. Color for XR: Color appearance models
● Color is a personal and dynamic relationship.
● Color creates an emotional impact, as we attach cultural meanings to
the hues around us.
● “Not all reds are the same. Some are more intense,
some more passionate, some more full of life, and
some more cautionary”.
● The term color space is used to describe the capabilities of a display or
printer to reproduce color information.
For example, you will want to make sure that you match
the color space used with the medium (print or
digital).
In a closely related concept, software often allows you
to set the color mode.
Color space A specific organization of colors that
determines the color profile that is used to support
the reproduction of accurate color information on a
device. RGB and CMYK are two common examples.
In traditional print design, ensuring the accurate
creation of color is so important to brand identities
and marketing that the Pantone Matching System (PMS)
was created.
Additive color/RGB:
● Red, Green, and Blue each take a color value between 0 and 255,
creating over 16 million color combinations.
● The 8-bit sRGB color format is the preferred input for images on many
XR devices.
● Each color has a specific value in the HSB or HSL format, which provides
a numeric value for the hue, saturation, and brightness (or lightness)
of the color.
Subtractive color/CMYK:
● The common color profile for print is called CMYK (cyan, magenta,
yellow, and key black), the mixing base for print.
● The term key is a direct reference to the key plate used in the
four-color printing process.
● These four colors can produce over 16,000 different color combinations.
● In XR, print color matters when you create elements such as image
targets: a camera scans a printed image and then applies augmented
content to it.
Linear and gamma color space: the difference between increasing the shading
incrementally in the linear color space versus using gamma correction,
which is nonlinear.
Gamma color space:
● When creating digital images, there is a need for more accuracy and
variety in the dark tones.
● To accommodate this sensitivity in the way the brain perceives shades,
gamma correction, also referred to as tone mapping, was created.
● Once an image or graphic has been gamma corrected, it should, in theory,
be displayed "correctly" for the human eye.
Linear color space:
● To replicate light in a way that is mathematically correct, the linear
color space was created to match our physical space.
Linear color space: Numeric color intensity values that are
mathematically proportionate.
Gamma correction: A process that increases the contrast of an image in
a nonlinear way to adjust for the human eye's perception and the way
displays function.
Many XR and game designers prefer to use the linear color space to give
their work a realistic feel. This has also become a standard within
software focused on immersive experiences, such as Unity and Unreal
Engine.
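For reference, the standard sRGB transfer functions behind this gamma/linear distinction fit in a few lines. This sketch converts a single channel (0..1) between gamma-encoded sRGB and linear light:

```ts
// Sketch: standard sRGB transfer functions for one channel in 0..1.
// Gamma-encoded values devote more precision to dark tones; linear values
// are what physically based lighting math expects.
function srgbToLinear(c: number): number {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function linearToSrgb(c: number): number {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// Example: a mid-gray of 0.5 in sRGB is only about 0.214 in linear light,
// which is why mixing colors in the wrong space looks off.
```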
Tint The increased lightness of color by the addition of white.
Shade The increased darkness of a color by the addition of black.
Some HMDs will support linear only, while others support gamma only.
Some will allow a combination: linear color with some gamma
corrections.
Usability:-
Selecting colors that will make the experience usable.
● Legibility and readability
● Contrast
● Vibrancy
● Comfort
● Transparency
Legibility and readability
● Legibility and readability refer not only to the color of type (text), but
also to color of the elements surrounding the text.
● To ensure that type is easily read, use a shape as a color
background that helps separate the letters from the environmental
background.
● White is the most common color for text and icons in XR.
● Red text on a black background is hard to read because they are both
dark.
● Select colors that have varying shades, so you don’t have a dark color on
a dark color; instead, you want light on dark or dark on light.
Contrast
● When you have two colors that are close in shade or even saturation,
they will start to vibrate off one another.
● To avoid this effect, select colors that have visual contrast.
● Visual contrast means opposite qualities, such as light and dark or
saturated and desaturated.
Color Vibration. Colors that are close in tonal range start to vibrate when
placed in close proximity.
● Contrast is essential for keeping your experience accessible.
● Making sure your color choices have solid contrast will make the
experience usable for a greater number of users (a contrast-ratio
sketch follows this list).
● This approach is more likely to suit a user's unique needs, even if those
needs change based on their environment.
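One established way to check "solid contrast" numerically is the WCAG contrast ratio. A minimal TypeScript sketch, using the standard relative-luminance formula:

```ts
// Sketch: WCAG-style contrast ratio between two sRGB colors (channels 0..1),
// a quick way to check that a color pairing has solid contrast.
function relativeLuminance(rgb: [number, number, number]): number {
  const lin = (c: number) =>
    c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  const [r, g, b] = rgb;
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(
  a: [number, number, number],
  b: [number, number, number]
): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05); // WCAG suggests at least 4.5:1 for body text
}

// contrastRatio([1, 1, 1], [0, 0, 0]) === 21 (white on black, the maximum)
```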
Vibrancy
● A color at its purest form is called chroma: the color fully saturated,
without the addition of gray. These pure colors are bright and vibrant.
● Vibrancy increases the brightness of the desaturated tones.
● Vibrancy: The energy of a color, caused by increasing or decreasing the
saturation of the least saturated tones.
● Vibrancy can also change the energy of the color and, as a result, the overall
experience.
● Bright oranges and reds will grab your attention over desaturated greens or grays.
Comfort
To create a positive user experience, you want the user to be comfortable.
If the colors you select are too intense or create too much strain, then this
will cause discomfort.
If a user is met with too much discomfort, they will likely leave the experience
to find a different one that is more comfortable.
Larger areas of color in XR, especially vibrant and fully saturated colors, will
be hard on the eyes. So, use these brighter colors sparingly to attract
attention, but don’t use them in large quantities.
Comfort: Have users test the experience, and even test with different color
combinations to see what works best for most people.
Because the colors will change in appearance between the computer you create
them on and the actual device that plays the XR experience, it is important
to test your designs.
View the colors in context, and then make adjustments to improve the ease
of use.
Transparency:- Color will be displayed differently based on the kind of
display you use.
An optical see-through (OST) display, such as the Microsoft HoloLens 2,
AR glasses, or smart glasses will show all elements as more transparent,
due to the nature of the technology.
Video see-through (VST) displays, such as mobile AR experiences that use
the camera to view the physical world, have different considerations.
Because any graphics or objects will be applied directly on top of the
camera view in a VST-based experience, they can be displayed fully opaque.
Whatever the amount of transparency in a 3D model or
object, you can reserve opaque colors for UI
elements so that they stand out on the display.
Making the UI easy to see and interact with is a
high priority.
The perception of color is directly connected to
the light in the scene.
To ensure that users see the colors that you
select for the design, you need to design the
lighting as well.
Light interactions:
●Type of light
● POINT LIGHT
● SPOT LIGHT
● AREA LIGHT
● DIRECTIONAL OR PARALLEL LIGHT
● AMBIENT LIGHT
●Color of light- Light Temperatures
●Lighting setup
● Soft lighting
● One-point lighting
● Three-point lighting
● Sunlight
● Backlight
● Environmental
●Direction and distance of light:- Falloff, Feathering
●Intensity of light:- 100% (the highest brightness)
●Shadows
Adjusting light in a scene or on an object does not just mean brightening
or darkening; believable immersion relies on the use of light and its
accompanying shadow.
With the exception of some stylistic deviations, you will want your
lighting to mimic the real world.
It makes sense, then, to be inspired by light from your physical space.
Type of light: Think about lighting design as you would think about
determining the colors of a composition: Identify the key areas that you
would like to have the most attention.
The brightest and most vibrant colors will attract attention first.
POINT LIGHT A point light will emit light in
all directions from a single point.
This light has a specific location and
shines light equally in all directions,
regardless of orientation or rotation.
Examples are lightbulbs and candles.
SPOT LIGHT A spot light works just like a
spotlight used in stage design.
It emits light in a single direction, and you
can move the direction of the light as needed.
Example is a stage spot light for a soloist.
AREA LIGHT This light source is confined within
a single object, often in a geometric shape such
as a rectangle or sphere shape. Examples are a
rectangular florescent light and a softbox
light.
DIRECTIONAL OR PARALLEL LIGHT. Parallel
rays that mimic the sun; these lights are
treated as infinitely far away, just as
sunlight is. This means that the position of
these lights doesn't matter, only their
direction and brightness. An obvious
example is sunlight.
AMBIENT LIGHT Ambient light applies to the full scene. You cannot
choose a specific location for this light, and it will change the overall
brightness of the scene. Example: natural, indirect light from a window.
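To make the catalog concrete, here is a minimal sketch of these five light types, assuming three.js; the colors, intensities, and positions are illustrative values, not prescriptions:

```ts
import * as THREE from "three";

// Sketch: the five light types above, using three.js classes.
const scene = new THREE.Scene();

const point = new THREE.PointLight(0xfff2cc, 1.0, 10); // lightbulb: all directions, fades by 10 units
const spot = new THREE.SpotLight(0xffffff, 1.5);       // stage spotlight: one direction
spot.target.position.set(0, 0, -2);                    // aim the cone at a target
const area = new THREE.RectAreaLight(0xffffff, 2, 1.2, 0.6); // softbox: rectangular emitter
const sun = new THREE.DirectionalLight(0xfff8e7, 1.0); // parallel rays: only direction matters
sun.position.set(5, 10, 5);                            // position sets the direction toward the origin
const ambient = new THREE.AmbientLight(0x404040, 0.5); // whole-scene light with no position

scene.add(point, spot, spot.target, area, sun, ambient);
// Note: RectAreaLight only affects MeshStandardMaterial/MeshPhysicalMaterial.
```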
Color of light
If you have ever gone lightbulb shopping or bought Christmas lights, then
you’ve seen how many different colors of light there are.
Even if you want just "plain white" light, you are greeted with a multitude
of options. The reason is that no light is pure white.
Light is made up of three colors: red, green, and blue. Mixing these colors
in different proportions alters the color of the light we see, thanks to the
additive property we discussed earlier.
Light has a color temperature;
it can be warm or cool depending on the proportional mix of colors.
2700K is a warmer, yellower white;
7000K is a cooler, bluer white; daylight is 6400K.
Light Temperatures. The temperatures of various kinds of lights using the
Kelvin scale for measurement.
Lighting setup: It is quite common to use more than one light in your scene,
just as you would in the real world. You can have window light and a table
lamp in the same space.
As you add additional lights to the scene, you need to control the
relationship of the lights.
Soft lighting: Soft lighting is the best choice if you need to add evenly
distributed lighting to your scene.
The name actually refers to the soft quality of shadows in the scene, making
the overall contrast feel balanced and calm.
This kind of lighting is frequently used for
portrait photography.
Soft Light. One soft light provides
equal light across the 3D sphere.
One-point lighting: The one-point
lighting technique uses a single light
and, as a result, will create a dynamic
mood.
It also creates harsher shadows where
the light is not illuminating the
object.
One-point light hits the 3D sphere
making the light and shadows more
dramatic.
Three-point lighting
The three-point lighting technique uses three lights (key, rim, and fill),
each of which has a specific role in the overall lighting setup, as in the
sketch following the list below.
Three-Point Light: Three lights are set up around the 3D sphere to
demonstrate the positions of the rim light (backlight), key light, and
fill light.
● Key light illuminates the focal point of the scene or object and is the
primary light in the scene.
● Rim light illuminates the back of your subject, separating it from the
background and adding depth.
● Fill light fills in more light in the scene to reduce or eliminate harsh
shadows and even out the overall lighting.
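A minimal sketch of the rig, assuming three.js; the positions and intensities here are illustrative starting points, not rules:

```ts
import * as THREE from "three";

// Sketch: a three-point rig around a subject at the origin.
const key = new THREE.DirectionalLight(0xffffff, 1.0);  // key: primary light on the focal point
key.position.set(2, 2, 2);
const fill = new THREE.DirectionalLight(0xffffff, 0.4); // fill: softens the key's harsh shadows
fill.position.set(-2, 1, 2);
const rim = new THREE.DirectionalLight(0xffffff, 0.8);  // rim: backlight separating subject from background
rim.position.set(0, 2, -3);

// Assuming an existing scene with the subject at the origin:
// scene.add(key, fill, rim);
```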
Sunlight
In the sunlight approach there is a single
light source: the sun.
If you are looking to replicate an outdoor
scene, then you should use direct sunlight as
your lighting.
Unlike in the real world, however, you easily
can move the direction of the sun in a 3D
scene to mimic the type of sunlight you
prefer: sunrise, high noon, sunset, or
something in between.
Backlight
A primary light source behind your
object is a backlight.
This technique is not as commonly
used, but it can add mystery and
drama to the scene as needed.
This lighting also can cause harsh
shadows and a lot of contrast
between the light and the object,
often creating a silhouette and
reducing the number of details seen.
Environmental
The environmental lighting approach pulls lighting from an image that is
imported into the program.
This works best when using high-dynamic-range imagery (HDRI) for which
the luminosity data of the image, specifically the darkest and lightest
tones, are captured at a larger range.
This basically means that more lighting data is stored within the image file (it
is a 32-bit image, versus the standard 8-bit).
These images can be used to replicate the lighting in the image in the 3D
scene.
Using environmental lighting is a fast way to generate a custom and
believable lighting setup.
Environmental. The light was created to mimic
the lighting from the background image and
replicated on the 3D sphere.
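As a sketch of this approach, assuming three.js and its RGBELoader add-on (the file name is a placeholder, and the import path varies by three.js version):

```ts
import * as THREE from "three";
import { RGBELoader } from "three/examples/jsm/loaders/RGBELoader.js";

declare const scene: THREE.Scene; // assuming an existing scene

// Sketch: environmental lighting from a 32-bit HDR image.
new RGBELoader().load("studio.hdr", (hdr) => {
  hdr.mapping = THREE.EquirectangularReflectionMapping;
  scene.environment = hdr; // lights and reflects onto PBR materials
  scene.background = hdr;  // optional: also show the image behind the scene
});
```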
Direction and distance of light
The relationship between the light and the shadow provides a lot of
information, and you can control the look and feel of that transition.
As the light weakens, so too will the shadow. This weakening of a light
along its outer edge is called falloff.
The falloff has a radius and a distance, and you can control both.
Lights with a smooth falloff have a high radius and a large distance,
which shows a gradient blur that slowly goes from light to dark.
Falloff :
● The visual relationship of shadow and light as illumination decreases
while becoming more distant from the light source.
● The edge of the light can be controlled through edge or cone feathering
to soften the line between the light and the shadow.
● This is how you can edit and control the edge itself.
● This option is often available for any lighting that is a cone shape, such
as a spot light.
Feathering :-
The smoothing, softening, or blurring of an edge in computer
graphics.
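In three.js terms, falloff and feathering map onto a spot light's distance, decay, and penumbra properties; a minimal sketch with illustrative values:

```ts
import * as THREE from "three";

// Sketch: falloff and feathering controls on a three.js spot light.
const spot = new THREE.SpotLight(0xffffff, 2.0);
spot.distance = 8;        // falloff distance: light fades to zero by 8 units
spot.decay = 2;           // how quickly the light weakens along that distance
spot.angle = Math.PI / 6; // cone half-angle (30 degrees)
spot.penumbra = 0.5;      // feathering: 0 = hard edge, 1 = fully softened cone edge
```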
Intensity of light
Once you have the kinds of lights, their position, their roles in the
scene, and their color properties identified, the next step is to
determine how bright the light should be. This is the intensity.
The default is 100% (the highest brightness), but this amount can be
edited to make the light dimmer. The strength of the light can also be
called energy.
Shadows
Wherever there is a light, there must be an accompanying shadow, where
there is light falloff or light is blocked by another object.
Without a shadow, the light will not be perceived as real
and won’t be believable.
Shadows also play a big part in our ability to perceive
where an object is in space. Seeing a shadow far away
from an object tells us that the object is suspended in the
air or not near the plane.
A shadow that connects to the bottom of the object
tells us that the object is sitting directly on the plane.
For example, natural sunlight casts stronger shadows than
artificial light.
The terms soft light and hard light actually
reference the characteristic of the shadows the
types of light create.
Soft lighting provides a more even light across
all of the subject and, in turn, creates soft
shadows with a fuzzy edge.
Hard lighting provides more dramatic lighting on
an object, creating sharp edges on shadows.
Shadows. 3D rendering highlighting where the main light source is (the
setting sun) and how the light falls off into increasing shadow inside the
cave. The farther away from the sunlight, the darker the shadows become.
Dynamic adaptation
Lighting estimation
● Brightness
● Light color
● Color correction values
● Main light direction
● Ambient intensity
● Ambient occlusion
Environmental reflections
●Diffusion
●Roughness
●Metalness
Dynamic adaptation:
Consider the idea of the copycat: you learn and adapt to new interactions
by imitating what someone else is doing, learning as you go along. This
simple concept can be applied at a larger scale when we look at imitation
in AR.
With dynamic backgrounds and environments, the light and the properties
of the light will constantly change. Just as a child sees a hand movement
and repeats the action on their own, so too can software.
AR frameworks such as Google's ARCore and Apple's ARKit evaluate
environmental light and repeat it as digital light. The basic method used
is called lighting estimation.
Using sensors, cameras, and algorithms, the computer creates a
picture of the lighting found within a user’s physical space and then
generates similar lighting and shadows for digital objects added to the
space.
To be effective and realistic, this analysis should be continual
throughout the experience so it can adapt to changes in the lighting and
within the environment.
This is a key attribute in the ARCore and ARKit frameworks.
Lighting estimation A process that uses sensors, cameras, machine
learning, and mathematics to provide data dynamically on lighting
properties within a scene.
Lighting estimation
When using this lighting estimation method, the computer and AR
development framework work together to analyze the following (a minimal
WebXR sketch follows this list):
● Brightness
● Light color
● Color correction values
● Main light direction
● Ambient intensity
● Ambient occlusion
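One concrete (and still experimental) form of this on the web is the WebXR Lighting Estimation API. The sketch below outlines how the analyzed properties above can be read per frame; names, types, and availability are browser-dependent, so treat this as an outline rather than a reference:

```ts
// Sketch of the experimental WebXR Lighting Estimation API.
async function startAR(): Promise<void> {
  const xr = (navigator as any).xr;
  const session = await xr.requestSession("immersive-ar", {
    optionalFeatures: ["light-estimation"],
  });
  const probe = await session.requestLightProbe();

  const onFrame = (_time: number, frame: any) => {
    const estimate = frame.getLightEstimate(probe);
    if (estimate) {
      const dir = estimate.primaryLightDirection;         // main light direction
      const intensity = estimate.primaryLightIntensity;   // main light color/intensity (r, g, b)
      const sh = estimate.sphericalHarmonicsCoefficients; // ambient lighting data
      // ...apply these to your digital lights and shadows...
    }
    session.requestAnimationFrame(onFrame);
  };
  session.requestAnimationFrame(onFrame);
}
```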
Brightness:
For each pixel on the display, the average lighting intensity can be
calculated and then applied to all digital objects. This is called pixel
intensity, and it adjusts the overall brightness based on the average
available light in the environment.
Light color and color correction
The white balance can be detected and checked dynamically to allow for color
correction of any digital objects within the scene to react to the color of the
light.
Enhancing the color balance allows changes to occur smoothly and naturally
instead of as abrupt adjustments, preserving the illusion of realism.
When luminance properties are applied to your 3D model, it will maintain
those color properties while also receiving the color correction from the
light estimation scan.
Main light direction:By identifying the
main directional light, the software ensures
that digital objects added to the scene will
have shadows cast in the same direction
as other objects around them.
It also enables specular highlights and
reflections to be correctly positioned on the
object to match the environment.
You want to make sure that all the
shadows and highlights follow
consistently from the single directional
light.
Having this consistent direction of light may seem minor, but it is
something that the brain sees and perceives without us even realizing.
Consider the intensity of the light and also the falloff of its shadows.
You don't want the intensity of the light to feel too bright to match the
scene, or the reverse, where the light feels too dark to match the scene.
Light Direction:
3D rendering showing a prominent main light source that can be seen
as it enters through the window opening.
The position of this light source leaves the interior of the scene in
shadow.
Ambient intensity: Recall how multiple lights
can work together to create a full lighting
setup. As an important part of the light
estimation scan, ARCore can re-create
what Google calls "ambient probes," which
add an ambient light to the full scene
coming from a broad direction to create a
softer overall tone.
It works with the directional light to help
the digital objects blend more seamlessly
into the scene.
Again, it is about replicating or
imitating the real-world scene.
Ambient occlusion
Every time you add a computer-generated light,
it will produce a generated shadow. Those
shadows need to fall into the physical space
to make them believable. To do so,
two things need to happen.
● When you add an ambient light, it should
both cast a shadow on the object and
have the shadows occlude all around
it.
● When the light hits the object itself, such
as on a piece of fabric, each wrinkle
should show a shadow.
Something like a brick wall should have shadows created inside every
groove. Ambient light will hit multiple surfaces, and each one will create
its own shadow. This shadow casting is called ambient occlusion.
Ambient occlusion Simulation of shadows both on an object itself and
also on the other objects around it created by the addition of an ambient
light source.
Environmental reflections
Take a look at the reflections to see environmental reflections, or places
where pieces of the space are reflected.
Depending on the material of the objects, the relative reflectiveness
will change.
When you add a digital object to a scene, especially an object that has a
metallic or glass surface, it should respond to the light around it in the
form of a reflection.
For these virtual objects, the reflections have to happen in real time and
adjust according to the space to lend realism and believability to the
objects.
Reflection. A metallic sphere reflects images from the environment
surrounding it.When creating your 3D objects, you can adjust several
properties to affect how reflective an object is.
● Diffusion
● Roughness
● Metalness
Diffusion Even distribution of light across an object’s surface.
Each material you apply to your 3D object has a base color or texture.
Adjusting an object’s diffusion property affects the amount and color of
light that is reflected at each point of an object.
Diffusion: The diffusion stays consistent as you look around the object.
It is a property that is applied equally along the material’s surface.
Because this is an even distribution of light, it will result in a
nonreflective surface.
In 3D software, the default diffusion color is white unless you change it.
Roughness:
● If the surface is smooth and shiny like a car’s chrome bumper, it will be highly
reflective. But if the surface has tiny bumps and cracks along the surface like the
surface of a rock or brick, then it will be less reflective.
● This roughness property can change how matte or shiny an object can become.
Increasing the roughness and using brighter colors will diffuse the light across the
surface more, making it appear matte or rough.
● Reducing the amount of roughness, in addition to using darker colors, will cause the
material to appear smooth and shiny.
● Materials that are shiny will also create specular highlights.
● These are the small shiny areas on the edges of an object’s surface that reflect a light.
● These specular highlights should change relative to the position of a viewer in a scene,
because they are created by the position of the light.
Metalness: For the physical surface of an object, you can set multiple
properties to determine how metallic or nonmetallic it is (a material
sketch follows this list).
● The refraction index controls the ability for light to travel through the
material. Light that cannot travel through an object will reflect back,
and more metallic surfaces will produce sharper reflections.
● The grazing angle makes the surface appear more or less mirror-like.
● If the surface reflects the light sharply and has a mirror-like quality, it
will appear more metallic.
● These properties can be adjusted to lower or increase the metalness to
change the appearance of an object’s surface.
● If the surface is made more metallic and mirror-like, this will increase the
need for environmental reflections on the object’s surface.
● Reflective surfaces also pick up colors and reflect images. So, a metallic
object placed in a green room will also have a green tone.
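These three properties map directly onto the parameters of a physically based material. A minimal sketch, assuming three.js and its MeshStandardMaterial; the values are illustrative:

```ts
import * as THREE from "three";

declare const environmentTexture: THREE.Texture; // e.g., the HDR loaded earlier

// Sketch: diffusion, roughness, and metalness as material parameters.
const brushedMetal = new THREE.MeshStandardMaterial({
  color: 0x8899aa,            // diffusion: base color of the reflected light
  roughness: 0.35,            // 0 = mirror-smooth and shiny, 1 = fully matte
  metalness: 0.9,             // 0 = nonmetal (rock, plastic), 1 = metal
  envMap: environmentTexture, // environmental reflections picked up by the surface
});
const sphere = new THREE.Mesh(new THREE.SphereGeometry(0.5, 64, 32), brushedMetal);
// A green environment map here would tint the sphere green, as described above.
```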
Reflection: Light and color work together to create a sense of depth and
realism.
● When you create and design digital objects, they should be reflective
of the environment around them.
● This process starts with selecting the appropriate color appearance mode
for your experience, works through adding and adjusting any custom
lighting options, and comes to life by adapting to the physical
spaces that the object augments.
DESIGN CHALLENGE
LIGHTING DESIGN
The goal of this challenge is to practice creating some lighting setups.
To get started, you need to add a sphere to your scene. Do not apply any
materials to the sphere, so you can see the way the lights change the
surface. Using the lighting setups discussed, create the following:
● One-point light (soft and hard)
● Three-point light (add a key, fill, and rim light)
● Sunlight
● Backlight
● Your own custom setup
For each lighting setup you create, go to your Render options and save a
PNG file for each. Name each lighting setup accordingly. Save these images
in a folder, and use them for reference as you work on more complex 3D
models. This will create a lighting reference library for you.
Sound Design: Hearing what you see
This section explores how sound plays an essential role in creating an immersive experience.
From how sound is created to how we can re-create it in a digital space, there
are a lot of exciting things happening within 3D sound experiences.
HEARING WHAT YOU SEE It is important first to understand how we hear so
that we can then look at the best ways to re-create that sound to create realism in
a soundscape.
SPATIAL SOUND Just as in physical spaces, sound has direction and distance.
There is different technology that will help create a sense of 3D sound.
AUGMENTED AUDIO Just as we can add a layer of visuals into a user’s view,
we can also add a layer of ambient audio to what they hear.
VOICE EXPERIENCES With XR devices becoming more hands free, voice is
becoming an intriguing way to interact with a computer.
HEARING WHAT YOU SEE
Listening
Sound localization
How do we hear sound?
Loudness, pitch
Raw audio to be captured and edited
Music and voice audio
Transferable and sharable audio
How is sound useful?
How do we use sound in XR?
● Ambient sound
● Feedback sound
● Spatial sound
HEARING WHAT YOU SEE
Find a place that you can sit comfortably for about five
minutes, and bring a notebook and something to write
with.
It can be inside or outside—it really can be anywhere.
1. Close your eyes, and be still. Bring your awareness to listening. Try to
avoid moving your head as you do this. Don’t turn your neck toward a
sound; try to keep your neck at a similar orientation.
2. Listen for what you hear. See if you can identify what sounds you are
hearing.
3. Then go one step further and try to identify where those sounds are
coming from.
Are they close? Far?
Which direction are they coming from?
Keeping yourself as the central axis, do you hear them in front of you?
Behind you?
To the left or right of you?
Up high or down low?
4. When five minutes are up, draw out what you heard by placing a circle in
the middle of the page to represent you, and then map out all the sounds
that you heard around you in the locations you heard them from. If they felt
close, write them closer to you, and in the same way, if they felt far away,
then write them farther from you.
Sound localization: Start to pay attention to where you place each sound in
relation to yourself. Also consider how you determine the source of each
sound. This is called sound localization: the ability of a listener to
identify the origin of a sound based on distance and direction.
It is impressive how well we can understand spatial and distance relationships just
from sound.
How do we hear sound?
Sound is created through the vibration of an object. This causes particles to
constantly bump into one another, sending vibrations as sound waves to our ears and,
more specifically, to our eardrums.
When a sound wave reaches the eardrum, it too will vibrate at the same rate. Then the
cochlea, inside the ear, processes the sound into a format that can be read by the
brain.
To do this, the sound has to travel from the ear to the brain along the
auditory nerve.
Sound requires an element or medium to travel through, such as air, water,
or even metal.
You may already understand this process, but as we look to design for
sound, there are some key properties that are essential to understand,
including loudness and pitch.
Loudness: The intensity of a sound, measured in relation to the space that
the sound travels.
Because we can detect a wide range of sound, we need a way to measure the
intensity of a sound. This is called loudness, which uses the unit of
decibels (dB) to measure how loud or soft a sound is.
To help you add some perspective to dB measurements:
● A whisper is between 20 and 30 dB.
● Normal speech is around 50 dB.
● A vacuum cleaner is about 70 dB.
● A lawn mower is about 90 dB.
● A car horn is about 110 dB.
Pitch:-
Sound changes depending on how fast the object is vibrating. The faster the
vibration, the higher the sound. This pitch is measured using frequency, or
how many times the object vibrates per second.
Pitch The perceived highness or lowness of a sound based on the frequency of vibration.
Frequency is measured in hertz (Hz).
Human hearing ranges from 20 to 20,000 Hz. However, our hearing is most
sensitive to sounds ranging in frequency between 2000 and 5000 Hz. Those
who experience hearing loss will often have the upper pitches affected first.
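A quick way to hear pitch-as-frequency for yourself is the Web Audio API; this sketch plays a 440 Hz tone (the pitch A4) for half a second:

```ts
// Sketch: pitch as vibration frequency, using the Web Audio API.
// An oscillator vibrating 440 times per second produces the pitch A4.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.frequency.value = 440;       // Hz: raise this value and the pitch rises
osc.connect(ctx.destination);
osc.start();
osc.stop(ctx.currentTime + 0.5); // sound for half a second
```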
How do you choose which format to use? The answer depends on what you’re working with.
● Raw audio to be captured and edited: Uncompressed formats allow you to work
with the highest quality file, and then you can compress the files to be smaller
afterward.
● Music and voice audio: Lossless audio compression formats maintain the
audio quality but also retain larger file sizes.
● Transferable and sharable audio: Lossy audio compression formats produce smaller
files sizes, which facilitates sharing.
How is sound useful?
● Sound is much like a ripple in water; it starts in a central spot, and
then it slowly extends out gradually getting smaller and smaller (or
quieter and quieter) as it moves away from the center.
● Even if you hear a sound from far away, you can still detect where the
sound is coming from or at least an approximate direction.
● You can tell the difference between footsteps walking behind you or
down another hallway.
● You can tell the difference between a crowded restaurant and an empty
one from the lobby, all because of the sound cues of chatter.
● The more chatter you hear, the more people must be inside.
● Sound adds an additional layer of information that will help the user
further grow their understanding of what is going on around them.
● Just as light can be used to understand space and depth, sound can
be used to calculate distance and depth.
● Through the use of SONAR (sound navigation and ranging), you can
measure the time it takes for a sound to reflect back its echo.
● This idea is used by boats and submarines to navigate at sea and to
learn about the depth of the ocean as well.
How do we use sound in XR?
There are many ways that sounds play a role in our understanding of space.
Within XR there are three main ways sound is used.
● Ambient sound
● Feedback sound
● Spatial sound
Ambience for reality
In order to really create a sense of “being there,” sound adds another layer
of realness.
When you see a train approaching, that comes with the expectation of
hearing the wheels on the tracks, the chugging sound of the engine, steam
blowing, and the whistle, horn, or bell.
These sounds add to your perception of the train approaching.
Notice the ambient sounds that allow the user to feel truly immersed.
Listen for sounds that you can mimic to re-create the scene. Sounds that are
noise intensive and have consistent looping, such as fans, wind, or waves,
do not work as well in this medium, however, so you want to avoid them.
Sounds that have a start and stop to them will be more effective and less
intrusive.
When designing for AR and MR, you can rely more on the natural ambient
noise that will be in the user’s physical space.
Providing feedback
● For the user experience, sound can be a great way to provide feedback
about how the user is interacting within space.
● Hearing a sound when you select an interactive element will reinforce
that you have successfully activated it (a simple click sketch follows
this list).
● These sounds can be very quiet, such as a click, or louder, such as a
chime.
● Just be sure to use these sounds in a consistent way, so that the user
will start to associate the sounds with their actions.
● Sound cues can guide interactions.
● You can also use sound to direct the user to look or move to another
location, to make sure they see an object that may not be in their
gaze.
● It can also be used in VR to alert the user when they are close to the
edge of their space boundaries.
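A minimal Web Audio sketch of such a feedback sound; the frequency, level, and duration here are illustrative choices:

```ts
// Sketch: a short, quiet click confirming a selection, via the Web Audio API.
// Reuse one AudioContext and call this from a user-interaction handler.
function playSelectClick(ctx: AudioContext): void {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = 1200; // a brief high tone reads as a "click"
  gain.gain.setValueAtTime(0.2, ctx.currentTime); // keep it quiet
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.08); // fast fade-out
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.08);
}
```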
Creating depth
Because our understanding of sound is 3D, it makes sense that you would
also re-create sound to reflect depth.
It also provides information to the user, such as how close or far away an
object is.
This topic is such an essential part of XR sound design that we are going
to dive into how to make your sound have depth next.
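As a preview, one common way to give a sound position and depth on the web is the Web Audio API's PannerNode; a minimal sketch, with the audio source left as a placeholder:

```ts
// Sketch: positioning a sound in 3D with a PannerNode.
declare const source: AudioNode; // stands in for any loaded audio source node

const ctx = new AudioContext();
const panner = ctx.createPanner();
panner.panningModel = "HRTF";     // binaural-style rendering over headphones
panner.distanceModel = "inverse"; // volume falls off as the source moves away
panner.positionX.value = 2;       // two meters to the user's right...
panner.positionZ.value = -3;      // ...and three meters ahead

source.connect(panner).connect(ctx.destination);
```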
Spatial sound
Single-point audio capture:
Binaural
Ambisonic
Paradise case study
Virtual Design Environment
Behind the Scenes
To re-create sound in a spatial environment, look at two components.
● How the sound is recorded
● How the sound is played back through speakers or headphones
The traditional types of audio recordings are mono and stereo.
Mono sound is recorded from a single microphone; stereo is recorded with
two microphones spaced apart.
Stereo is an attempt to create a sense of depth by having different sounds
heard on the right and left sides of a recording.
It is intended to create a sense of 3D audio.
The concept of 360-degree sound has been experimented with for years,
looking at how surround sound can allow sound to come from different
speakers all around the room, creating a full 3D audio experience.
This is used most commonly in the cinema and must be designed around
people sitting in one fixed location.
Single-point audio capture:
To make stereo recordings sound even more natural, one option is the
binaural audio recording format.
To record binaurally, you record from two opposite sides and place each
microphone inside a cavity to replicate the position and chamber of an ear.
This concept is used to re-create sound as closely as possible to the way we
hear it ourselves.
Headphones are needed to accurately listen to binaural sound.
Binaural A method of recording two-channel sound that
mimics the human ears by placing two microphones within
a replicated ear chamber positioned in opposite locations to
create a 3D sound.
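Binaural capture happens at recording time, but playback engines can approximate the same ear-based localization with head-related transfer functions (HRTFs). A minimal sketch using the Web Audio API's built-in HRTF panning model, which is a tooling assumption on my part rather than something the source prescribes:

```ts
// Minimal sketch: headphone playback approximating binaural hearing
// by rendering a source through an HRTF panner.
const ctx = new AudioContext();

function spatialize(source: AudioBufferSourceNode, x: number, y: number, z: number) {
  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",      // head-related transfer function rendering
    distanceModel: "inverse",  // volume falls off with distance
    positionX: x,
    positionY: y,
    positionZ: z,
  });
  source.connect(panner).connect(ctx.destination);
}
```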
Ambisonic audio uses four channels (W, X, Y, and Z) of sound
versus the standard two channels.
An ambisonic microphone is almost like four microphones in
one. You can think of this as two-channel (stereo) versus four-channel
(ambisonic) sound.
Ambisonic microphones have four pickups, each pointed and
oriented in a different direction making a tetrahedral
arrangement. Sound from each direction is recorded to its own
channel to create a sphere of sound.
Ambisonic Microphone. Able to
capture audio from four directions at
once, this Sennheiser ambisonic
microphone is creating a spatial audio
recording from nature.
Ambisonic A method of recording
four-channel sound that captures a
sphere of sound from a single point to
reproduce 360° sound.
It was developed in the 1970s under the British National Research
Development Corporation, most notably by engineer Michael Gerzon.
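For reference, the four B-format channels of first-order ambisonics are simple trigonometric weightings of the source signal by its direction of arrival. A worked sketch of the standard encoding (the function name is my own):

```ts
// Minimal sketch: first-order ambisonic (B-format) encoding of a mono
// sample s arriving from a given azimuth and elevation.
function encodeBFormat(s: number, azimuthRad: number, elevationRad: number) {
  return {
    w: s * Math.SQRT1_2,                                   // omnidirectional
    x: s * Math.cos(azimuthRad) * Math.cos(elevationRad),  // front-back
    y: s * Math.sin(azimuthRad) * Math.cos(elevationRad),  // left-right
    z: s * Math.sin(elevationRad),                         // up-down
  };
}
```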
Paradise case study
Many XR experiences rely on an individual user experience, where each
person will have their own set of headphones on or be inside their own
virtual space.
Shared audio experiences are a feature for which there is more and more demand.
In a social situation, the sound may not track with the user.
However, it could be designed to be static within a space, allowing the sound
to change as the user moves through space (as in real life).
Paradise is an interactive sound installation and gestural instrument for
16 to more than 24 loudspeakers.
For this collaborative project, Douglas Quin and Lorne Covington joined
their backgrounds in interaction design and sound design to create a fully
immersive sound experience that they optimized for four to eight people.
The installation allows users to “compose a collage of virtual acoustic
spaces drawn from the ‘natural’ world.” As users move through the space
and change their arm positioning,
sensors activate different soundscapes from wilderness and nature to create
a musical improvisation.
This composition is unique each time as it relies on how each user moves
and interacts within the space.
Motions can change the density of the sounds, their volume, their motion
or placement in the space, and the overall mix of the sounds together.
Paradise Experience:- Visitors
react to the interactive
soundscape environment of
Paradise. Venice International
Performance Art Week, 2016.
Photograph used by permission
of Douglas Quin
This experience was reimagined for both interior and exterior spaces.
Changing the location “creates a different spatial image,” Quin explained
when I spoke with him and Covington about the challenges of the project.
As they re-created the experience, they had to adjust for the location.
The exterior exhibit required fewer ambient sounds, as they were provided
naturally.
The interior exhibit required more planning based on how sound would be
reflected and reverberated by the architecture of the space.
Behind the Scenes. This behind-the-scenes screen capture shows the
installation environment for Paradise.
Numbers indicate loudspeakers. The green rectangular blocks are visitors.
The large red circles are unseen zones of sounds that slowly rotate.
Sounds are activated when a visitor breaks the edge of a circle.
The triangles with colored balls are sound sources for any given sound (with
volume indicated by the size of each ball).
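The circle-crossing behavior in the diagram can be reduced to a distance test. The sketch below is a hypothetical reconstruction of that idea, not the installation's actual code; the zone and visitor types are assumptions.

```ts
// Minimal sketch: activate a sound zone when a visitor breaks the edge
// of an invisible circle, and release it when they leave.
interface Point { x: number; y: number; }
interface SoundZone {
  center: Point;
  radius: number;
  active: boolean;
  onEnter(): void; // e.g., fade a soundscape in
  onExit(): void;  // e.g., fade it back out
}

function updateZones(visitor: Point, zones: SoundZone[]) {
  for (const zone of zones) {
    const inside =
      Math.hypot(visitor.x - zone.center.x, visitor.y - zone.center.y) <= zone.radius;
    if (inside && !zone.active) { zone.active = true; zone.onEnter(); }
    else if (!inside && zone.active) { zone.active = false; zone.onExit(); }
  }
}
```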
Augmented audio
AR Sound
How does it work?
Speaker Closeup
More than a speaker
How can you get your own?
Imagine going for a bike ride while listening to your favorite playlist,
receiving voice-driven instructions, and still being able to hear car engines,
sirens, and horns honking all around you.
This is the power of augmented audio.
Augmented audio The layering of digital sound on top of, but not blocking
out, the ambient sounds of an environment.
Augmented audio, also referred to as open-ear audio, allows you to hear all
the ambient sounds around you while adding another layer of audio on top of
it.
This allows you to receive navigational directions, participate on a phone call,
listen to an audiobook, or listen to your favorite music—all while still being
connected to the world around you.
Smartglasses, also known as audio glasses, come in different shapes from a
number of different manufacturers.
Many of these come in a sunglasses option, as they are most likely to be
used outside. However, many come with customizable lens options to
personalize your experience.
Bose was the first to the market and started off demonstrating the technology
using 3D printed prototypes at South by Southwest (SXSW) Conference and
Festival in 2018.
I remember walking by the Bose house in Austin, Texas, where they had taken
over part of a local restaurant to showcase their AR glasses.
I was intrigued.
I wanted to know how Bose, known for their high-quality speakers and
headphones, was entering the world of AR.
Well, I quickly found out how important audio is to an immersive experience while
wearing their AR glasses on a walking tour of Austin.
The experience started by connecting the sunglasses to my phone through
Bluetooth.
By taking advantage of the processing power of a smartphone, Bose could
keep the glasses lightweight and cool.
One person in the group spotted a famous
actor stepping out of their vehicle for a film
premiere at the festival and was able to tell
everyone else as we continued listening to our
guided tour.
AR Sound. 3D printed prototypes of the
original Bose AR glasses at SXSW 2018.
To be clear, these glasses and similar pairs
from other developers don’t show the user any
visuals. They exist only to provide audio.
They allow for voice interactions without needing to take out a phone.
They allow the user to interact hands-free and ears-free.
They are essentially replacements for headphones or earbuds that allow the
user to still hear everything around them at the same time.
How does it work?
Using what is called open-ear technology, a speaker is built into each arm
of the audio glasses.
What helps make them augmented, while also staying private, is the position
and direction of the speakers.
One speaker is placed on each arm of the glasses near the temple so that the
sound is close, but still allows other sounds to enter the ear cavity.
The speakers point backward from
the face, so they are angled right
toward the ears.
This angle reduces how much of
the sound can be heard by others
around the wearer.
Even in a 3D printed prototype
there was not much sound
escaping from the glasses, and very
little could be heard even by those
standing on either side.
Speaker Closeup. The speakers on the Bose
AR glass prototypes are near the ear.
In addition to the speakers themselves, there is
also a head-motion sensor built in that can
send information from the multi-axis points to
your smartphone.
This allows the app to know both the wearer’s
location as well as what direction they are
looking.
This information can help customize
directions—knowing the wearer’s right from left
for example—as well as making sure they see
key parts of an experience along the way.
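As a sketch of how that might work, the function below turns the wearer's heading and a target's bearing into a left/right phrase; the degree-based inputs and the 20° threshold are assumptions about the data the glasses report.

```ts
// Minimal sketch: phrase a direction relative to where the wearer is
// looking, given a compass heading and a target bearing in degrees.
function relativeDirection(headingDeg: number, targetBearingDeg: number): string {
  // Normalize the difference into (-180, 180].
  let delta = (targetBearingDeg - headingDeg) % 360;
  if (delta > 180) delta -= 360;
  if (delta <= -180) delta += 360;
  if (Math.abs(delta) < 20) return "straight ahead";
  return delta > 0 ? "to your right" : "to your left";
}

// Usage: wearer faces 90° (east), landmark sits at bearing 180° (south).
relativeDirection(90, 180); // "to your right"
```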
More than a speaker
Listening is only half of the conversation. To allow for user feedback,
these glasses also include a microphone.
This allows the glasses to connect with the user’s voice assistant (more on
this in the next section).
Once again, this function helps maintain the hands-free functionality.
It also allows the glasses to be used for phone calls and voice memos for
those who want to communicate on the go.
Many models have the option to turn the microphone feature on and off for
privacy when needed. This is an important consideration.
If you do purchase a pair, make sure that you can control when the device is
listening and when it is not.
To further customize the experience, one arm of the glasses is equipped with
a multi-function button that you can tap, touch, or swipe.
Other than microphone control, this is the only other button you will find on
the glasses.
This allows you to change your volume, change tracks, make a selection,
and navigate within an experience—without having to access your
phone directly.
How can you get your own?
Although Bose has recently announced they would stop manufacturing their
audio sunglasses line, they are still currently available for purchase as of this
writing.
They were the first to the market, but decided to not continue manufacturing
the glasses as they didn’t make as much profit as the company had hoped.
When interviewed about this, a Bose spokesperson said, “Bose AR didn’t
become what we envisioned. It’s not the first time our technology couldn’t be
commercialized the way we planned, but components of it will be used to
help Bose owners in a different way.
We’re good with that. Because our research is for them, not us.”
Roettgers, J. (2020, June 16). Another company is giving up on AR. This time,
it’s Bose. Protocol. www.protocol.com/bose-gives-up-on-augmented-reality.
Since Bose’s first launch, others have stepped up production of their own
version of Bluetooth shades.
Leading the way is Amazon with their Echo Frames, which bring their well-
known Alexa assistant into a pair of sunglasses.
Everything many have learned to love about having a voice-powered home
assistant is now available on the go.
Other options to check out include audio glasses from GELETE, Lucyd,
Scishion, AOHOGOD, Inventiv, and OhO.
If you are looking to use your glasses for more than just audio
communication, some of the shades on the market also include cameras,
allowing for some action-packed capture. Leading the way in this market is
Snapchat with their Spectacles Bluetooth Video Sunglasses.
Audio glasses might remain as stand-alone audio devices. Augmented audio
may also be incorporated into full visual and auditory glasses. But in either
case, the focus on exceptional sound quality will pave the way.
Voice experiences & Power of sound
Voice experiences
● VUI for voice user interface
● NLP for natural language processing
● Not a replacement - Virtual Keyboard
● Context
● Scripts
Power of sound
Voice experiences
Voice is now an interface. Voice interfaces are found in cars, mobile devices,
smartwatches, and speakers.
They have become popular because of how they can be customized to the
user’s environment, the time of day, and the uniqueness of each situation.
Alexa, Siri, and Cortana have become household names, thanks to their
help as virtual assistants.
We are accustomed to using our voice to communicate with other people—
not computers.
It makes sense that companies like Amazon, Apple, and Microsoft try to
humanize their voice devices by giving them names.
It is important to make these interfaces feel conversational to match the
expectations that humans have for any kind of voice interaction.
As stated in Amazon’s developer resources for Alexa, “Talk with them,
not at them.”
This concept has also been supported by Stanford researchers Clifford
Nass and Scott Brave, authors of the book Wired for Speech.
Their work affirms how users relate to voice interfaces in the same way that
they relate to other people.
This makes sense, because up until this point that has been the most
prominent way we have engaged in conversation.
Voice user interface The use of human speech recognition in order to
communicate with a computer interface.
Alexa is one example of a voice interface that allows a user to interact
conversationally.
The challenge of this, of course, is that when we speak to a person, we rely
on context to help them make sense of what we are saying.
Natural language processing, or understanding the context of speech, is
the task that an NLP software engine performs for virtual-assistant devices.
The process starts with a script provided by a VUI designer.
Just as you’d begin learning a foreign language by understanding important
key words, a device like Alexa must do something similar.
This script is used to train the assistant for an experience, or skill, as
it is called in VUI design.
Natural language processing
The use of artificial intelligence to translate human language to be
understood by a computer.
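To make the hand-off from script to NLP concrete, here is a deliberately tiny keyword-matching sketch of mapping a transcribed utterance to a designed intent. Real NLP engines use machine learning rather than keyword lists; the intents and keywords are illustrative.

```ts
// Minimal sketch: map a transcribed utterance to an intent defined in
// the VUI designer's script. Keyword matching stands in for real NLP.
type Intent = "play_music" | "get_directions" | "unknown";

const script: Record<string, string[]> = {
  play_music: ["play", "music", "song"],
  get_directions: ["directions", "navigate", "take me"],
};

function matchIntent(utterance: string): Intent {
  const text = utterance.toLowerCase();
  for (const [intent, keywords] of Object.entries(script)) {
    if (keywords.some(k => text.includes(k))) return intent as Intent;
  }
  return "unknown"; // fall back to a clarifying question
}

matchIntent("Could you play my favorite song?"); // "play_music"
```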
With many XR experiences linking to a smartphone or even promoting
hands-free as an added benefit, that opens up the potential for other ways for
users to interact within an experience.
If you are relying on tapping into smartphone technology, then you first
need to understand how to design for it.
VUIs are not reliant on visuals, unlike graphical user interfaces (GUIs).
The first thing to understand is that voice interactions should not be
viewed as a replacement for a visual interface.
Not a replacement
It is important not to get into a mindset that a voice interaction can serve
as a replacement for a visual interaction.
For example, adding a voice component is not an exact replacement for
providing a keyboard.
You also need to be aware that the design approach must be different.
If you show a keyboard to a user, they will likely understand what action
to complete thanks to their past experiences with keyboards.
A keyboard, in and of itself, will communicate to the user that they need
to enter each letter, number, or symbol.
If they are able to do this with both hands, like on a computer, it may be
an easy enough task.
But if they have to enter a long search term or password using an
interface where they have to move a cursor to each letter individually, this
task may be greeted with intense resentment.
One way to overcome this daunting task is to provide a voice input option
instead.
It is often much easier to say a word than type it all out.
However, the process of inputting data this way is much different than
with a traditional QWERTY keyboard or even an alphabet button
selection, and it is not as familiar.
When a user sees the letters
of the alphabet or a standard
QWERTY keyboard, they
connect their past experiences
with it, so they can easily start
to navigate to the letter they
want to choose.
But when you are relying on
voice, the user will connect to
their past communication with
other people as how to interact.
Virtual Keyboard. Concept communicating
with friend via screen hologram with full
QWERTY keyboard.
There needs to be something in the UI that
communicates to the user that they can
use their voice to interact.
This is often shown through the use of a
microphone icon.
However, it can also come in the form of
speech.
One way to let someone know that they can
speak is by starting the conversation with a
question such as “How can I help you?”
Depending on the device, and where it will be used, design this experience to
match what will work best to communicate to the user that they can use their
voice and to start the conversation.
What do you do first in a conversation?
Before you speak, you might make sure the person you are speaking to is
listening. But if you are speaking to a computer, you don’t have body cues or eye
contact to rely on. So, this active listening state needs to be designed.
Though most of the conversation experience will not use visuals, this is one area
where a visual provides great benefit.
Using a visual to let the user know that the device is listening can help substitute
for the eye contact they are used to when talking with another person.
This can be a visual change, such as a light turning on or a colorful animation, so
they know that what they are saying is being heard.
Tip
Provide visual cues to provide feedback to the user. A great place for this is to
communicate that a device is ready and listening.
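A minimal sketch of that tip, assuming a browser that exposes the SpeechRecognition interface; the indicator element and CSS class are hypothetical:

```ts
// Minimal sketch: tie a visual "listening" cue to the recognizer's
// lifecycle so the user knows when the device is actively listening.
const indicator = document.getElementById("listening-indicator")!;
const SR = (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognizer = new SR();

recognizer.onstart = () => indicator.classList.add("listening");  // light on
recognizer.onend = () => indicator.classList.remove("listening"); // light off
recognizer.start();
```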
Companies like Apple have created their own custom circular animations
that they use across all their devices;
when a user sees Apple’s colorful circle of purples, blues, and white, they
connect it with a voice interaction.
Seeing this animation communicates that the device is ready and listening
for a voice command.
All of this happens instead of a keyboard appearing. So, it isn’t a
replacement, but rather a totally different way of communicating, and
therefore in need of a totally different interface and design.
Context
When people communicate, we use our knowledge of the context to create a
shared understanding. Once the user knows the device is listening, they may
know to start talking, but how does the user know what to say or even what
is okay to say without any prompts?
With a voice interface there is no visual to show what the options are.
It is best practice to have some options voiced, such as “you can ask...”
followed by a few options.
You may be familiar with the options provided by an automated phone menu:
“Press 1 to talk to HR, press 2 to talk to the front desk.”
Those are often very frustrating, because they are one-sided and not
conversational. The goal here is to start by understanding the user’s goal.
This is where everything we have been talking about is coming together.
Once you have identified the why in your project, planned out roughly how it
might look, done user research, and created a user flow, you can start to
predict some options that a user may be looking for.
You can start off by letting the user know what their options are based on
this research.
This can be laid out in a set of questions or by asking an open-ended
question.
When you record an interview with someone, it is best practice to ask
open-ended questions or compound questions.
The reason is that you want the person to answer with context.
If you ask two questions in one, a compound question, it is a natural
tendency for them to clarify the answer as they respond.
Perhaps you ask “What is your favorite way to brew coffee, and what do you
put in it?” Instead of answering “French press and with cream,” it is likely
that they will specify which of the questions they are answering within the
answer itself.
We’re discussing question and answer methods here because such
exchanges point out an important way that humans communicate.
We like to make sure that our answers have the correct context to them. This
is especially true when there is more than one question asked.
So, a likely response would be, “My favorite way to brew coffee is using a
French press, and I like just a little cream in it.”
Traditional media interviews don’t include the questions—so it’s important to
get context in the answer.
The need for context is important to understand as it relates to voice
interfaces:
Humans may not provide needed context in their voice commands.
Having the computer ask questions that are open-ended or have multiple
options will trigger the user to provide more context in their answer, which
will help the device more successfully understand what the user is asking.
Using the power of machine learning and natural language processing,
the device creates a system that recognizes specific voice commands.
These commands must be written out as scripts.
Scripts
Think about all the different ways within just the English language someone
can say no:
nope, nah, no way, not now, no thanks, not this time...
And this is just to name a few. You also need to consider what questions the
user may ask and the answers they may give.
This requires anticipating what the user will say and then linking that
response to activate the next step.
With a traditional computer screen, the user has a limited number of options
based on the buttons and links you provide.
With the input of a click or a tap, the computer knows to load the connected
screen based on that action. With voice, the interaction is reliant only on
spoken language.
As part of the voice interface design, an
important step of the process is to create a
script.
This script should embrace the dynamic
qualities of conversation.
A successful script should go through
multiple levels of user testing to identify
questions that users answer—and also all
the different ways they answer.
When the user isn’t given a set number of
options to choose from, the script helps
translate the human response into
something actionable by the computer.
Script Sample. Sample answers collected to show possible answers to the
question “How have you been?”
All likely answers need to be collected to help a voice assistant understand
how to respond based on each possible answer.
While it is easy to tell a computer what to do if someone says “yes” or “no,” it
is less likely that the user will stick to these words.
Computers may easily be able to understand yes and no commands, but
what happens when a user, who is speaking as they always do to other
people, says “I’m going to call it a day.” Or “I’m going to hit the sack.”
These idioms are not going to be understood by a computer unless it is
taught them. Without the cultural context, the computer could interpret
“hit the sack” as physically hitting a sack, instead of going to sleep.
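One common way a script absorbs this variety is a normalization table mapping many surface phrases, idioms included, to one canonical response. A minimal sketch, with sample phrase lists standing in for data gathered through user testing:

```ts
// Minimal sketch: normalize varied user phrasings into one actionable
// response. Phrase lists are sample script data, not a full vocabulary.
const responses: Record<string, string[]> = {
  no: ["no", "nope", "nah", "no way", "not now", "no thanks", "not this time"],
  sleep: ["going to sleep", "call it a day", "hit the sack", "turn in"],
};

function normalize(utterance: string): string | null {
  const text = utterance.toLowerCase();
  for (const [canonical, phrases] of Object.entries(responses)) {
    if (phrases.some(p => text.includes(p))) return canonical;
  }
  return null; // unrecognized: ask a follow-up question instead of guessing
}

normalize("I'm going to hit the sack"); // "sleep"
```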
How do you anticipate responses of users to build into a script? You ask
them, and you listen to them.
Use a script and thorough user testing to collect anticipated responses to
help keep the experience complete.
Multiple rounds of quality assurance testing must be completed throughout
the whole process.
Think of how color, image choice, and copy set the tone for a design in print
or on the web. In the same way, sound quality, mood, and content of the
responses of the voice will set the tone for the voice experience.
As you can imagine, this is no small task.
To do this well requires a team of people dedicated to designing these VUI
skills. However, scripts that are created have components that can be used
across the different skills.
The conversational experiences we have will start to build a relationship with
the device and will also help establish a level of trust.
These experiences have a way of making an experience personal. Think about
how nice it is when someone knows and uses your name.
What if they could also learn and know your favorite settings and words?
What if they could mimic these preferences as they guide you, provide clear
instructions, and help reduce your anxiety by saying the right words?
This is what voice experiences can do. They are rooted in our experiences of
conversations with friends and colleagues, so it is no surprise that we start to
trust them like one, too.
Power of sound
Sound design should not be an afterthought; it makes or breaks the
experience.
Once you start to notice how sound plays a role in your physical world, you
can start to design ways for sound to create more immersion in your XR
experiences.
Using audio can help an experience feel more real, enhance your physical
space, or even help you interact with a computer interface, hands-free.
SOUND LOCALIZATION DESIGN
In the beginning of this chapter, you played the role of the listener. You
directed your awareness to the sounds that happened around you.
This time, you can take what you learned from that experience, and what you
have learned in this chapter, to design your own soundscape.
To do this, draw a chart similar to the sound localization diagram you
created from your listening experience.
However, this time you are going to design what sounds will be happening,
and where.
● Think about where the experience will be happening, and if it is for VR
or AR, as that will determine how much ambient sound you will need
to plan for.
● Think about the distance and intensity of the sound from the user’s
perspective.
If you want the extra challenge, you can then record the sounds and bring
them into a sound editor of your choice, such as Apple Logic Pro or Adobe
Audition, to start editing them.
To create a fully immersive experience, you will need to bring the edited
sounds into a program, such as Unity Pro or Unreal Engine, that will allow
you to spatially map out the location of the sounds.
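If you want to try the spatial mapping step outside of a game engine, the same idea can be sketched with the Web Audio API; the positions in meters and the listener-movement hook are assumptions for illustration.

```ts
// Minimal sketch: place designed sounds at 3D positions and move the
// listener as the user walks through the soundscape.
const ctx = new AudioContext();

function placeSound(buffer: AudioBuffer, x: number, y: number, z: number) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.loop = false; // prefer sounds with a clear start and stop over loops
  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",
    positionX: x, positionY: y, positionZ: z,
  });
  src.connect(panner).connect(ctx.destination);
  src.start();
}

// Update the listener's position (newer browsers expose these AudioParams;
// older ones use listener.setPosition instead).
function moveListener(x: number, y: number, z: number) {
  ctx.listener.positionX.value = x;
  ctx.listener.positionY.value = y;
  ctx.listener.positionZ.value = z;
}
```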
Extended Reality(XR) Development in immersive design

More Related Content

Similar to Extended Reality(XR) Development in immersive design

Multimedia chapter 2
Multimedia chapter 2Multimedia chapter 2
Multimedia chapter 2
PrathimaBaliga
 
Multimedia chapter 2
Multimedia chapter 2Multimedia chapter 2
Multimedia chapter 2
PrathimaBaliga
 
AR / UX: Building Augmented Reality Experiences
AR / UX: Building Augmented Reality ExperiencesAR / UX: Building Augmented Reality Experiences
AR / UX: Building Augmented Reality Experiences
Joey deVilla
 
Accounting For Every Camper
Accounting For Every CamperAccounting For Every Camper
Accounting For Every Camper
Ashley Dzick
 
Adaptive UI for Android and iOS using Material and Cupertino.pptx
Adaptive UI for Android and iOS using Material and Cupertino.pptxAdaptive UI for Android and iOS using Material and Cupertino.pptx
Adaptive UI for Android and iOS using Material and Cupertino.pptx
Flutter Agency
 
The Good, The Bad, The Voiceover - ios Accessibility
The Good, The Bad, The Voiceover - ios AccessibilityThe Good, The Bad, The Voiceover - ios Accessibility
The Good, The Bad, The Voiceover - ios Accessibility
Aimee Maree Forsstrom
 
iPad Pro vs Surface Pro 4 Infographic
iPad Pro vs Surface Pro 4 InfographiciPad Pro vs Surface Pro 4 Infographic
iPad Pro vs Surface Pro 4 Infographic
OurITDepartment
 
Mobile UI / UX Trends
Mobile UI / UX TrendsMobile UI / UX Trends
Mobile UI / UX Trends
Evgeny Tsarkov
 
Accessible by design
Accessible by designAccessible by design
Accessible by design
Marc Harrod
 
Intro to Responsive
Intro to ResponsiveIntro to Responsive
Intro to Responsive
Tom Elliott
 
Excellence in the Android User Experience
Excellence in the Android User ExperienceExcellence in the Android User Experience
Excellence in the Android User Experience
mobilegui
 
Effective sign
Effective signEffective sign
Effective sign
Rayn HOWAYEK
 
Introduction to Adobe Aero 2023
Introduction to Adobe Aero 2023Introduction to Adobe Aero 2023
Introduction to Adobe Aero 2023
Shalin Hai-Jew
 
Web Site Design Principles
Web Site Design PrinciplesWeb Site Design Principles
Web Site Design Principles
Mukesh Tekwani
 
Usability and Accessiblity
Usability and AccessiblityUsability and Accessiblity
Usability and Accessiblity
Darren Jackson
 
Impact of Fonts: in Web and Apps Design
Impact of Fonts:  in Web and Apps DesignImpact of Fonts:  in Web and Apps Design
Impact of Fonts: in Web and Apps Design
contactproperweb2014
 
Android UX-UI Design for Fun and Profit
Android UX-UI Design for Fun and ProfitAndroid UX-UI Design for Fun and Profit
Android UX-UI Design for Fun and Profit
penanochizzo
 
Android UX-UI Design for fun and profit | Fernando Cejas | Tuenti
 Android UX-UI Design for fun and profit | Fernando Cejas | Tuenti   Android UX-UI Design for fun and profit | Fernando Cejas | Tuenti
Android UX-UI Design for fun and profit | Fernando Cejas | Tuenti
Smash Tech
 
Android UX-UI Design for Fun and Profit
Android UX-UI Design for Fun and ProfitAndroid UX-UI Design for Fun and Profit
Android UX-UI Design for Fun and Profit
Fernando Cejas
 
Chap12
Chap12Chap12
Chap12
meltem7798
 

Similar to Extended Reality(XR) Development in immersive design (20)

Multimedia chapter 2
Multimedia chapter 2Multimedia chapter 2
Multimedia chapter 2
 
Multimedia chapter 2
Multimedia chapter 2Multimedia chapter 2
Multimedia chapter 2
 
AR / UX: Building Augmented Reality Experiences
AR / UX: Building Augmented Reality ExperiencesAR / UX: Building Augmented Reality Experiences
AR / UX: Building Augmented Reality Experiences
 
Accounting For Every Camper
Accounting For Every CamperAccounting For Every Camper
Accounting For Every Camper
 
Adaptive UI for Android and iOS using Material and Cupertino.pptx
Adaptive UI for Android and iOS using Material and Cupertino.pptxAdaptive UI for Android and iOS using Material and Cupertino.pptx
Adaptive UI for Android and iOS using Material and Cupertino.pptx
 
The Good, The Bad, The Voiceover - ios Accessibility
The Good, The Bad, The Voiceover - ios AccessibilityThe Good, The Bad, The Voiceover - ios Accessibility
The Good, The Bad, The Voiceover - ios Accessibility
 
iPad Pro vs Surface Pro 4 Infographic
iPad Pro vs Surface Pro 4 InfographiciPad Pro vs Surface Pro 4 Infographic
iPad Pro vs Surface Pro 4 Infographic
 
Mobile UI / UX Trends
Mobile UI / UX TrendsMobile UI / UX Trends
Mobile UI / UX Trends
 
Accessible by design
Accessible by designAccessible by design
Accessible by design
 
Intro to Responsive
Intro to ResponsiveIntro to Responsive
Intro to Responsive
 
Excellence in the Android User Experience
Excellence in the Android User ExperienceExcellence in the Android User Experience
Excellence in the Android User Experience
 
Effective sign
Effective signEffective sign
Effective sign
 
Introduction to Adobe Aero 2023
Introduction to Adobe Aero 2023Introduction to Adobe Aero 2023
Introduction to Adobe Aero 2023
 
Web Site Design Principles
Web Site Design PrinciplesWeb Site Design Principles
Web Site Design Principles
 
Usability and Accessiblity
Usability and AccessiblityUsability and Accessiblity
Usability and Accessiblity
 
Impact of Fonts: in Web and Apps Design
Impact of Fonts:  in Web and Apps DesignImpact of Fonts:  in Web and Apps Design
Impact of Fonts: in Web and Apps Design
 
Android UX-UI Design for Fun and Profit
Android UX-UI Design for Fun and ProfitAndroid UX-UI Design for Fun and Profit
Android UX-UI Design for Fun and Profit
 
Android UX-UI Design for fun and profit | Fernando Cejas | Tuenti
 Android UX-UI Design for fun and profit | Fernando Cejas | Tuenti   Android UX-UI Design for fun and profit | Fernando Cejas | Tuenti
Android UX-UI Design for fun and profit | Fernando Cejas | Tuenti
 
Android UX-UI Design for Fun and Profit
Android UX-UI Design for Fun and ProfitAndroid UX-UI Design for Fun and Profit
Android UX-UI Design for Fun and Profit
 
Chap12
Chap12Chap12
Chap12
 

More from GOWSIKRAJA PALANISAMY

UNIT V ACTIONS AND COMMANDS, FORMS AND CONTROLS.pptx
UNIT V ACTIONS AND COMMANDS, FORMS AND CONTROLS.pptxUNIT V ACTIONS AND COMMANDS, FORMS AND CONTROLS.pptx
UNIT V ACTIONS AND COMMANDS, FORMS AND CONTROLS.pptx
GOWSIKRAJA PALANISAMY
 
UNIT V EVOLVE (22CDT21-DESIGN THINKING) COURSE
UNIT V EVOLVE (22CDT21-DESIGN THINKING) COURSEUNIT V EVOLVE (22CDT21-DESIGN THINKING) COURSE
UNIT V EVOLVE (22CDT21-DESIGN THINKING) COURSE
GOWSIKRAJA PALANISAMY
 
UNIT_IVHuman Factors and Background of Immersive Design.pptx
UNIT_IVHuman Factors and Background of Immersive Design.pptxUNIT_IVHuman Factors and Background of Immersive Design.pptx
UNIT_IVHuman Factors and Background of Immersive Design.pptx
GOWSIKRAJA PALANISAMY
 
UNIT III NAVIGATION AND LAYOUT- 22CDT42-USER INTERFACE DESIGN
UNIT III NAVIGATION AND LAYOUT- 22CDT42-USER INTERFACE DESIGNUNIT III NAVIGATION AND LAYOUT- 22CDT42-USER INTERFACE DESIGN
UNIT III NAVIGATION AND LAYOUT- 22CDT42-USER INTERFACE DESIGN
GOWSIKRAJA PALANISAMY
 
UNIT IV ENGAGE: DESIGN THINKING 22CDT21
UNIT IV ENGAGE:  DESIGN THINKING 22CDT21UNIT IV ENGAGE:  DESIGN THINKING 22CDT21
UNIT IV ENGAGE: DESIGN THINKING 22CDT21
GOWSIKRAJA PALANISAMY
 
UNIT III EXPERIMENT -DESIGN THINKING 22CDT21
UNIT III EXPERIMENT -DESIGN THINKING 22CDT21UNIT III EXPERIMENT -DESIGN THINKING 22CDT21
UNIT III EXPERIMENT -DESIGN THINKING 22CDT21
GOWSIKRAJA PALANISAMY
 
UNIT II EMPATHIZE IN DESIGN THINKING AND ANALYSIS
UNIT II  EMPATHIZE IN DESIGN THINKING AND ANALYSISUNIT II  EMPATHIZE IN DESIGN THINKING AND ANALYSIS
UNIT II EMPATHIZE IN DESIGN THINKING AND ANALYSIS
GOWSIKRAJA PALANISAMY
 
UNIT II ADVANCED DESIGN COMPONENTS USING FIGMA
UNIT II ADVANCED DESIGN COMPONENTS USING FIGMAUNIT II ADVANCED DESIGN COMPONENTS USING FIGMA
UNIT II ADVANCED DESIGN COMPONENTS USING FIGMA
GOWSIKRAJA PALANISAMY
 
UNIT 1 BASIC DESIGN COMPONENTS USING FIGMA
UNIT 1  BASIC DESIGN COMPONENTS USING FIGMAUNIT 1  BASIC DESIGN COMPONENTS USING FIGMA
UNIT 1 BASIC DESIGN COMPONENTS USING FIGMA
GOWSIKRAJA PALANISAMY
 
UNIT I Design Thinking and Explore.pptx
UNIT I  Design Thinking and Explore.pptxUNIT I  Design Thinking and Explore.pptx
UNIT I Design Thinking and Explore.pptx
GOWSIKRAJA PALANISAMY
 
UNIT III-UX-UI.pptx
UNIT III-UX-UI.pptxUNIT III-UX-UI.pptx
UNIT III-UX-UI.pptx
GOWSIKRAJA PALANISAMY
 
UNIT-II Immersive Design for 3D.pptx
UNIT-II Immersive Design for 3D.pptxUNIT-II Immersive Design for 3D.pptx
UNIT-II Immersive Design for 3D.pptx
GOWSIKRAJA PALANISAMY
 
UNIT-I INTRODUCITON TO EXTENDED REALITY.pptx
UNIT-I INTRODUCITON TO EXTENDED REALITY.pptxUNIT-I INTRODUCITON TO EXTENDED REALITY.pptx
UNIT-I INTRODUCITON TO EXTENDED REALITY.pptx
GOWSIKRAJA PALANISAMY
 

More from GOWSIKRAJA PALANISAMY (13)

UNIT V ACTIONS AND COMMANDS, FORMS AND CONTROLS.pptx
UNIT V ACTIONS AND COMMANDS, FORMS AND CONTROLS.pptxUNIT V ACTIONS AND COMMANDS, FORMS AND CONTROLS.pptx
UNIT V ACTIONS AND COMMANDS, FORMS AND CONTROLS.pptx
 
UNIT V EVOLVE (22CDT21-DESIGN THINKING) COURSE
UNIT V EVOLVE (22CDT21-DESIGN THINKING) COURSEUNIT V EVOLVE (22CDT21-DESIGN THINKING) COURSE
UNIT V EVOLVE (22CDT21-DESIGN THINKING) COURSE
 
UNIT_IVHuman Factors and Background of Immersive Design.pptx
UNIT_IVHuman Factors and Background of Immersive Design.pptxUNIT_IVHuman Factors and Background of Immersive Design.pptx
UNIT_IVHuman Factors and Background of Immersive Design.pptx
 
UNIT III NAVIGATION AND LAYOUT- 22CDT42-USER INTERFACE DESIGN
UNIT III NAVIGATION AND LAYOUT- 22CDT42-USER INTERFACE DESIGNUNIT III NAVIGATION AND LAYOUT- 22CDT42-USER INTERFACE DESIGN
UNIT III NAVIGATION AND LAYOUT- 22CDT42-USER INTERFACE DESIGN
 
UNIT IV ENGAGE: DESIGN THINKING 22CDT21
UNIT IV ENGAGE:  DESIGN THINKING 22CDT21UNIT IV ENGAGE:  DESIGN THINKING 22CDT21
UNIT IV ENGAGE: DESIGN THINKING 22CDT21
 
UNIT III EXPERIMENT -DESIGN THINKING 22CDT21
UNIT III EXPERIMENT -DESIGN THINKING 22CDT21UNIT III EXPERIMENT -DESIGN THINKING 22CDT21
UNIT III EXPERIMENT -DESIGN THINKING 22CDT21
 
UNIT II EMPATHIZE IN DESIGN THINKING AND ANALYSIS
UNIT II  EMPATHIZE IN DESIGN THINKING AND ANALYSISUNIT II  EMPATHIZE IN DESIGN THINKING AND ANALYSIS
UNIT II EMPATHIZE IN DESIGN THINKING AND ANALYSIS
 
UNIT II ADVANCED DESIGN COMPONENTS USING FIGMA
UNIT II ADVANCED DESIGN COMPONENTS USING FIGMAUNIT II ADVANCED DESIGN COMPONENTS USING FIGMA
UNIT II ADVANCED DESIGN COMPONENTS USING FIGMA
 
UNIT 1 BASIC DESIGN COMPONENTS USING FIGMA
UNIT 1  BASIC DESIGN COMPONENTS USING FIGMAUNIT 1  BASIC DESIGN COMPONENTS USING FIGMA
UNIT 1 BASIC DESIGN COMPONENTS USING FIGMA
 
UNIT I Design Thinking and Explore.pptx
UNIT I  Design Thinking and Explore.pptxUNIT I  Design Thinking and Explore.pptx
UNIT I Design Thinking and Explore.pptx
 
UNIT III-UX-UI.pptx
UNIT III-UX-UI.pptxUNIT III-UX-UI.pptx
UNIT III-UX-UI.pptx
 
UNIT-II Immersive Design for 3D.pptx
UNIT-II Immersive Design for 3D.pptxUNIT-II Immersive Design for 3D.pptx
UNIT-II Immersive Design for 3D.pptx
 
UNIT-I INTRODUCITON TO EXTENDED REALITY.pptx
UNIT-I INTRODUCITON TO EXTENDED REALITY.pptxUNIT-I INTRODUCITON TO EXTENDED REALITY.pptx
UNIT-I INTRODUCITON TO EXTENDED REALITY.pptx
 

Recently uploaded

NHL Stenden University of Applied Sciences Diploma Degree Transcript
NHL Stenden University of Applied Sciences Diploma Degree TranscriptNHL Stenden University of Applied Sciences Diploma Degree Transcript
NHL Stenden University of Applied Sciences Diploma Degree Transcript
lhtvqoag
 
一比一原版(LaTrobe毕业证书)拉筹伯大学毕业证如何办理
一比一原版(LaTrobe毕业证书)拉筹伯大学毕业证如何办理一比一原版(LaTrobe毕业证书)拉筹伯大学毕业证如何办理
一比一原版(LaTrobe毕业证书)拉筹伯大学毕业证如何办理
67n7f53
 
Introduction to User experience design for beginner
Introduction to User experience design for beginnerIntroduction to User experience design for beginner
Introduction to User experience design for beginner
ellemjani
 
CocaCola_Brand_equity_package_2012__.pdf
CocaCola_Brand_equity_package_2012__.pdfCocaCola_Brand_equity_package_2012__.pdf
CocaCola_Brand_equity_package_2012__.pdf
PabloMartelLpez
 
一比一原版布兰登大学毕业证(BU毕业证书)如何办理
一比一原版布兰登大学毕业证(BU毕业证书)如何办理一比一原版布兰登大学毕业证(BU毕业证书)如何办理
一比一原版布兰登大学毕业证(BU毕业证书)如何办理
wkip62b
 
一比一原版美国哥伦比亚大学毕业证Columbia成绩单一模一样
一比一原版美国哥伦比亚大学毕业证Columbia成绩单一模一样一比一原版美国哥伦比亚大学毕业证Columbia成绩单一模一样
一比一原版美国哥伦比亚大学毕业证Columbia成绩单一模一样
881evgn0
 
LGBTQIA Pride Month presentation Template
LGBTQIA Pride Month presentation TemplateLGBTQIA Pride Month presentation Template
LGBTQIA Pride Month presentation Template
DakshGudwani
 
International Upcycling Research Network advisory board meeting 4
International Upcycling Research Network advisory board meeting 4International Upcycling Research Network advisory board meeting 4
International Upcycling Research Network advisory board meeting 4
Kyungeun Sung
 
Manual ISH (International Society of Hypertension)
Manual ISH (International Society of Hypertension)Manual ISH (International Society of Hypertension)
Manual ISH (International Society of Hypertension)
bagmai
 
一比一原版马里兰大学毕业证(UMD毕业证书)如何办理
一比一原版马里兰大学毕业证(UMD毕业证书)如何办理一比一原版马里兰大学毕业证(UMD毕业证书)如何办理
一比一原版马里兰大学毕业证(UMD毕业证书)如何办理
9lq7ultg
 
一比一原版南安普顿索伦特大学毕业证Southampton成绩单一模一样
一比一原版南安普顿索伦特大学毕业证Southampton成绩单一模一样一比一原版南安普顿索伦特大学毕业证Southampton成绩单一模一样
一比一原版南安普顿索伦特大学毕业证Southampton成绩单一模一样
3vgr39kx
 
原版制作(MDIS毕业证书)新加坡管理发展学院毕业证学位证一模一样
原版制作(MDIS毕业证书)新加坡管理发展学院毕业证学位证一模一样原版制作(MDIS毕业证书)新加坡管理发展学院毕业证学位证一模一样
原版制作(MDIS毕业证书)新加坡管理发展学院毕业证学位证一模一样
hw2xf1m
 
一比一原版(UoN毕业证书)纽卡斯尔大学毕业证如何办理
一比一原版(UoN毕业证书)纽卡斯尔大学毕业证如何办理一比一原版(UoN毕业证书)纽卡斯尔大学毕业证如何办理
一比一原版(UoN毕业证书)纽卡斯尔大学毕业证如何办理
f22b6g9c
 
ADESGN3S_Case-Study-Municipal-Health-Center.pdf
ADESGN3S_Case-Study-Municipal-Health-Center.pdfADESGN3S_Case-Study-Municipal-Health-Center.pdf
ADESGN3S_Case-Study-Municipal-Health-Center.pdf
GregMichaelTapawan
 
modular-kitchen home plan civil engineering.pdf
modular-kitchen home plan civil engineering.pdfmodular-kitchen home plan civil engineering.pdf
modular-kitchen home plan civil engineering.pdf
RashmitaSwain3
 
一比一原版(CSU毕业证书)查尔斯特大学毕业证如何办理
一比一原版(CSU毕业证书)查尔斯特大学毕业证如何办理一比一原版(CSU毕业证书)查尔斯特大学毕业证如何办理
一比一原版(CSU毕业证书)查尔斯特大学毕业证如何办理
67n7f53
 
一比一原版阿肯色大学毕业证(UCSF毕业证书)如何办理
一比一原版阿肯色大学毕业证(UCSF毕业证书)如何办理一比一原版阿肯色大学毕业证(UCSF毕业证书)如何办理
一比一原版阿肯色大学毕业证(UCSF毕业证书)如何办理
bo44ban1
 
一比一原版(soton毕业证书)英国南安普顿大学毕业证在读证明如何办理
一比一原版(soton毕业证书)英国南安普顿大学毕业证在读证明如何办理一比一原版(soton毕业证书)英国南安普顿大学毕业证在读证明如何办理
一比一原版(soton毕业证书)英国南安普顿大学毕业证在读证明如何办理
yufen5
 
AHMED TALAAT ARCHITECTURE PORTFOLIO .pdf
AHMED TALAAT ARCHITECTURE PORTFOLIO .pdfAHMED TALAAT ARCHITECTURE PORTFOLIO .pdf
AHMED TALAAT ARCHITECTURE PORTFOLIO .pdf
talaatahm
 
Heuristics Evaluation - How to Guide.pdf
Heuristics Evaluation - How to Guide.pdfHeuristics Evaluation - How to Guide.pdf
Heuristics Evaluation - How to Guide.pdf
Jaime Brown
 

Recently uploaded (20)

NHL Stenden University of Applied Sciences Diploma Degree Transcript
NHL Stenden University of Applied Sciences Diploma Degree TranscriptNHL Stenden University of Applied Sciences Diploma Degree Transcript
NHL Stenden University of Applied Sciences Diploma Degree Transcript
 
一比一原版(LaTrobe毕业证书)拉筹伯大学毕业证如何办理
一比一原版(LaTrobe毕业证书)拉筹伯大学毕业证如何办理一比一原版(LaTrobe毕业证书)拉筹伯大学毕业证如何办理
一比一原版(LaTrobe毕业证书)拉筹伯大学毕业证如何办理
 
Introduction to User experience design for beginner
Introduction to User experience design for beginnerIntroduction to User experience design for beginner
Introduction to User experience design for beginner
 
CocaCola_Brand_equity_package_2012__.pdf
CocaCola_Brand_equity_package_2012__.pdfCocaCola_Brand_equity_package_2012__.pdf
CocaCola_Brand_equity_package_2012__.pdf
 
一比一原版布兰登大学毕业证(BU毕业证书)如何办理
一比一原版布兰登大学毕业证(BU毕业证书)如何办理一比一原版布兰登大学毕业证(BU毕业证书)如何办理
一比一原版布兰登大学毕业证(BU毕业证书)如何办理
 
一比一原版美国哥伦比亚大学毕业证Columbia成绩单一模一样
一比一原版美国哥伦比亚大学毕业证Columbia成绩单一模一样一比一原版美国哥伦比亚大学毕业证Columbia成绩单一模一样
一比一原版美国哥伦比亚大学毕业证Columbia成绩单一模一样
 
LGBTQIA Pride Month presentation Template
LGBTQIA Pride Month presentation TemplateLGBTQIA Pride Month presentation Template
LGBTQIA Pride Month presentation Template
 
International Upcycling Research Network advisory board meeting 4
International Upcycling Research Network advisory board meeting 4International Upcycling Research Network advisory board meeting 4
International Upcycling Research Network advisory board meeting 4
 
Manual ISH (International Society of Hypertension)
Manual ISH (International Society of Hypertension)Manual ISH (International Society of Hypertension)
Manual ISH (International Society of Hypertension)
 
一比一原版马里兰大学毕业证(UMD毕业证书)如何办理
一比一原版马里兰大学毕业证(UMD毕业证书)如何办理一比一原版马里兰大学毕业证(UMD毕业证书)如何办理
一比一原版马里兰大学毕业证(UMD毕业证书)如何办理
 
一比一原版南安普顿索伦特大学毕业证Southampton成绩单一模一样
一比一原版南安普顿索伦特大学毕业证Southampton成绩单一模一样一比一原版南安普顿索伦特大学毕业证Southampton成绩单一模一样
一比一原版南安普顿索伦特大学毕业证Southampton成绩单一模一样
 
原版制作(MDIS毕业证书)新加坡管理发展学院毕业证学位证一模一样
原版制作(MDIS毕业证书)新加坡管理发展学院毕业证学位证一模一样原版制作(MDIS毕业证书)新加坡管理发展学院毕业证学位证一模一样
原版制作(MDIS毕业证书)新加坡管理发展学院毕业证学位证一模一样
 
一比一原版(UoN毕业证书)纽卡斯尔大学毕业证如何办理
一比一原版(UoN毕业证书)纽卡斯尔大学毕业证如何办理一比一原版(UoN毕业证书)纽卡斯尔大学毕业证如何办理
一比一原版(UoN毕业证书)纽卡斯尔大学毕业证如何办理
 
ADESGN3S_Case-Study-Municipal-Health-Center.pdf
ADESGN3S_Case-Study-Municipal-Health-Center.pdfADESGN3S_Case-Study-Municipal-Health-Center.pdf
ADESGN3S_Case-Study-Municipal-Health-Center.pdf
 
modular-kitchen home plan civil engineering.pdf
modular-kitchen home plan civil engineering.pdfmodular-kitchen home plan civil engineering.pdf
modular-kitchen home plan civil engineering.pdf
 
一比一原版(CSU毕业证书)查尔斯特大学毕业证如何办理
一比一原版(CSU毕业证书)查尔斯特大学毕业证如何办理一比一原版(CSU毕业证书)查尔斯特大学毕业证如何办理
一比一原版(CSU毕业证书)查尔斯特大学毕业证如何办理
 
一比一原版阿肯色大学毕业证(UCSF毕业证书)如何办理
一比一原版阿肯色大学毕业证(UCSF毕业证书)如何办理一比一原版阿肯色大学毕业证(UCSF毕业证书)如何办理
一比一原版阿肯色大学毕业证(UCSF毕业证书)如何办理
 
一比一原版(soton毕业证书)英国南安普顿大学毕业证在读证明如何办理
一比一原版(soton毕业证书)英国南安普顿大学毕业证在读证明如何办理一比一原版(soton毕业证书)英国南安普顿大学毕业证在读证明如何办理
一比一原版(soton毕业证书)英国南安普顿大学毕业证在读证明如何办理
 
AHMED TALAAT ARCHITECTURE PORTFOLIO .pdf
AHMED TALAAT ARCHITECTURE PORTFOLIO .pdfAHMED TALAAT ARCHITECTURE PORTFOLIO .pdf
AHMED TALAAT ARCHITECTURE PORTFOLIO .pdf
 
Heuristics Evaluation - How to Guide.pdf
Heuristics Evaluation - How to Guide.pdfHeuristics Evaluation - How to Guide.pdf
Heuristics Evaluation - How to Guide.pdf
 

Extended Reality(XR) Development in immersive design

  • 1. P.GOWSIKRAJA M.E., (Ph.D.,) Assistant Professor Department of Computer Science and Design UNIT V- Extended Reality(XR) Development KONGU ENGINEERING COLLEGE (AUTONOMOUS) DEPARTMENT OF COMPUTER SCIENCE AND DESIGN 20CDH01-HONOR DEGREE-IMMERSIVE DESIGN THEORY
  • 2. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 3. Augmented Typography:- To exploring how to optimize your type in augmented experiences. Though many of the topics discussed also apply to virtual reality, the emphasis best practices for AR, because that provides less control of the environment. UNDERSTAND LEGIBILITY AND READABILITY CREATE VISUAL CONTRAST TAKE CONTROL
  • 4.
  • 5. Legibility and readability Evolution Back to the basics Legibility KEEP IT SIMPLE THINK BIG CONSIDER X-HEIGHT STAY SANS DETAIL Type made for XR ARone Type is meant to be read Readability GIVE IT SPACE ARone Halo SAY MORE WITH LESS MAKE A CASE LIMIT LINE LENGTH WEIGH IN KEEP IT FLAT
  • 6. Displays covered in type are around us everywhere—from gas station signage to airport information boards to mobile applications in the palm of our hands. Creation of each new screen, each new context, there will be some adjustment that designers need to notice and adjust. Type on displays. Presenting text that will appear on a variety of screens creates design challenges. Evolution: Throughout the evolution of screens, typefaces have been designed to improve the user experience.(CRT) Screens have grown bigger, brighter & more light weight.
  • 7.
  • 8.
  • 9.
  • 10. Back to the basics:- Designing typography for ease of reading within XR involves similar considerations as designing for a screen. typographical marks include: ● Letters ● Numbers ● Punctuation ● Dingbats/symbols
  • 11.
  • 12. Legibility is based on learned from designing for screens, simple is better. Typefaces that are made from simple shapes translate better into lower resolution displays, such as screens. Legibility How easily distinguishable one letter is from another within a typeface. KEEP IT SIMPLE. Typefaces created from simple shapes work better than overly styled type. Geometric Type. Look for letterforms that are created with basic geometric shapes, right angles, and horizontal finishing strokes. GEMOMETRY
  • 13. THINK BIG. In print you can have body copy ranging from 8 to 12 points (in print design we measure the size or height of type using points). That is too small for pixel-based type. 14 to 16 pixels or larger is optimal size. CONSIDER X-HEIGHT. The height of the lowercase letters is called the x- height. Not all typefaces have the same x-height.
  • 14. STAY SANS DETAIL:- create typefaces specifically for screen type, so these are good places to start. Helvetica, Verdana, and Georgia are some classics, but this list continues to grow thanks to the availability of the Web Open Font Format and fonts being designed for both print and web formats ARone Halo SERIF
  • 16.
  • 17. Type is meant to be read: Selecting a typeface that is legible to users means that they can easily distinguish the characters from one another. Readability The spacing and arrangement of characters and words in order to make the content flow together to aid reading it. Many of these remain connected to the foundations of typography, but just need some optimization for XR. Keep these guidelines in mind: GIVE IT SPACE:- Increasing your overall tracking, the space between two or more characters, will help with readability. With many of the displays you often see a bit of a halo effect around the text, so by tracking out your type you can avoid the overlay of the halos and the letters themselves.
  • 18. An ARone typeface that demonstrates how unusual shapes in letter forms produce better results in the rendering from AR headsets. SAY MORE WITH LESS. Reducing the amount of copy, especially paragraphs of type, is a better practice. You can complement this with tool tips, explainer type, closed captioning, and an audio track. MAKE A CASE. Select a case that works for the content. Uppercase is hard to read in large amounts, but can add hierarchy for headers or shorter phrases you want to stand out.
  • 19. LIMIT LINE LENGTH To reduce eye strain, keep the length of your lines of type to 50 to 60 characters per line. We lose our place when our eyes have to jump back to the beginning of the next line if it’s too long. WEIGH IN. Varying your weights of type is a great way to add hierarchy to your designs and can help guide the user’s eye though the page. Just watch for the extreme weights. Light and Extra Bold weights are much less legible than regular, medium, or bold weights.
  • 20. KEEP IT FLAT. 2D type is easier to read than 3D type. Type that is extruded and volumetric becomes much harder to read. It makes sense if you consider we aren’t as used to reading type in 3D; most of our reading is two dimensional. Logotypes are an exception as they can work as a 3D element in an experience.
  • 21.
  • 22. Legibility and readability Evolution Back to the basics Legibility KEEP IT SIMPLE THINK BIG CONSIDER X-HEIGHT STAY SANS DETAIL Type made for XR ARone Type is meant to be read Readability GIVE IT SPACE ARone Halo SAY MORE WITH LESS MAKE A CASE LIMIT LINE LENGTH WEIGH IN KEEP IT FLAT
  • 23. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 24. Creating visual contrast Viewing Distance- display and text type Spatial Zones UI zone Focal zone Environmental zone Different spatial zones IMMERSIVE TYPE UI TYPE ANCHORED TYPE RESPONSIVE TYPE
  • 25. Creating visual contrast:- Designers are used to considering reading distance when designing. A poster or billboard is expected to be viewed from a further distance than a brochure or postcard. There are different design considerations as a result of the distance between the user and the design element.
  • 26. Viewing Distance. The viewing distance from text to our eyes changes based on the medium. Within an XR experience there may be type that is: ● Placed within the 3D space ● Static (such as any type that is part of the UI) ● Anchored within the environment ● Responsive In print media, when choosing from the wide range of type options, you can start by selecting from two main text types: display and text type. (There may be other places where type is used, such as for a URL or caption information, which will also be relatively small.)
  • 27. Display type Type found in large headings and titles; typically 16+ points. Text type Type found in paragraphs and meant for longer reading; typically 8 to 12 points and sometimes called body text. Spatial Zones. Showing the three main spatial zones in relationship to the user’s display. ● UI zone ● Focal zone ● Environmental zone
  • 28. UI ZONE The closest text to the user is within this space. This type is anchored to the camera position on a mobile device or HMD making this information constant in placement and view. FOCAL ZONE The next zone moving farther away from the user is the focal zone. This is an optimal placement for some of the main part of the experience, including any essential type. This is the ideal reading distance for essential type within the experience. This space is within 3 to 16 feet from the user. ENVIRONMENTAL ZONE The space that reaches farther beyond this scope is the environmental zone. It can be used for positioning, landmarks, and to add any additional environmental context within the experience. Because this is farther away from the user, it is intended to provide directional cues for the user, showing them places that they can explore within the experience, or to provide helpful context to what they are experiencing up close.
  • 29.
  • 30. Keep your type in the center zone of what you are designing to avoid the pixels from blurring on the edge of your peripheral sight. Here are the optimal degrees to remember: Field of view: 94° Head turn limit: 154° Maximum viewing at one time: 204 With a 3D experience, there are important type design considerations for each kind of type relative to the different spatial zones. Immersive type UI type Anchored type Responsive type
  • 31. IMMERSIVE TYPE This type needs to act like a 3D object, but will most likely be a flat 2D element (for readability). ● This type is integrated into the 3D environment. As such, it should match the perspective of the planes where it is placed. ● If you want the type to feel integrated into a space, then it needs to look believable by following the same perspective. ● This dynamic type will rely on spatial computing to map out the space in advance of the experience or having the user select a vertical or horizontal plane where the type will be placed.
  • 32. UI TYPE This type remains static in the experience. This should be 2D and remain in one place on the screen, such as the navigation bar or on the top and bottom of the screen. This text is critical for the user experience and often provides identifying information, such as the name of the app or experience. The type can serve as a menu allowing the user to see what other options are available at any given point. UI type must be easy to find, easy to see, and easy to use, because it plays an essential role in the approachability of the experience for a user.
  • 33. ANCHORED TYPE This type is connected to a specific plane or object within the environment. As the user moves around the environment, the type will remain in the same spot as the object to which it is anchored. Anchored type stays pinned to one specific location or object to identify it, like the business labels in this AR navigation app prototype. Example, in a navigation experience, the tags pinned to the surrounding businesses and landmarks around help the user identify them. These visual tags are anchored to the physical location. So, as the user explores, they will always see the correct name to each building
  • 34. RESPONSIVE TYPE Just as websites have to create responsive layouts and size ratios for the desktop displays, tablets, and mobile devices, that same concept applies in XR environments. Currently, type in HMDs uses pixel or bitmap type, instead of vector or outline type which would allow it to be scalable. With the dynamic needs of the content and type used in an augmented environment, the design can be seen from far away and also super close, even inside it and all around it. This means that the type needs to be crisp and clear in both near and far viewing distances.
  • 35. Just as in CSS we use the em unit of measurement to scale the type in relation to the width of the screen, there is a benefit for a similar system within AR. Based on user movement and the viewing angle, this approach allows type to automatically adjust for optimal readability. Black-on-white text is not as effective across all devices because you cannot reproduce pure black in a transparent or see-through display, which is used for many AR/MR experiences. Without the pure black there may not be enough contrast between the type and the background for readability.
  • 36. Creating visual contrast Viewing Distance- display and text type Spatial Zones UI zone Focal zone Environmental zone Different spatial zones IMMERSIVE TYPE UI TYPE ANCHORED TYPE RESPONSIVE TYPE
  • 37. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 38. Take control Where type is used? Tags How type is used How to view type? perspective distortion. Customization Minimize Design with purpose DESIGN CHALLENGE MAKE AN AUGMENTED EYE CHART
  • 39. Take control You are probably aware by this point that there are a lot of uncontrollable components to working within augmented and mixed realities. One of the most exciting aspects about these technologies is that you can use them in varying environments and scenarios. Where type is used To achieve more control is to consistently place your type in the same location within the experience, from the entry point until the end. After the user sees type repeatedly show up in the same place multiple times, they will start to look to that place for the information when they need it.
  • 40. This can apply to the UI type that helps users figure out how to navigate through an experience, but it can also relate to the immersive type that is part of the 3D space. For example, in tagAR the name tags always appear above people’s heads. After you see this happen two or three times, you understand that is where the digital augmented object appears, and then you will look for it in that same location each time after that.
  • 41. Tags. An augmented name tag from the mobile application tagAR. These tags always appear directly above each person’s head, making it easier to see their name and make eye contact at the same time. In a different example, cars with projected GPS directions appearing in the road in front of them use this approach to take advantage of constants within the driving experience. How type is used To allow the users to start associating a specific style with a specific function, give a role to each of the type styles within the experience. You can use headers to identify important information, for instance, and body copy to provide tool tips or provide instructions within the experience.
  • 42. It does take time to initially set up the styling for each of the needed type styles, such as: Main header (h1), Secondary header (h2), Additional headers (up to h6), Body type (p). Adding these categories of type to your experience will make it easier to navigate and find content.
  • 43. How to view type Because people can move through and around an AR experience, a world of possibilities opens for how they can view any given element, including type. Unlike a 3D object, however, type needs to be viewed from the correct angle and perspective for it to be readable. The way to control this viewing angle of text in 3D space is to have it always face the user. The positioning and orientation are relative to the user and their gaze. This added control ensures that people will view the type without any perspective distortion. When users view text in 3D space from extreme angles, the type can get bent and misshapen.
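The "always face the user" behavior described above is commonly known as billboarding. Here is a minimal Python sketch of the underlying math, assuming a simple coordinate system with y up and positions as (x, y, z) tuples; the function name is illustrative, and real engines usually provide an equivalent look-at or billboard component.

import math

def billboard_yaw(text_pos, camera_pos):
    """Yaw angle (radians, about the vertical axis) that rotates a text
    quad so it faces the camera."""
    dx = camera_pos[0] - text_pos[0]
    dz = camera_pos[2] - text_pos[2]
    return math.atan2(dx, dz)  # zero when the camera is straight ahead on +z

# Re-evaluated each frame, this keeps the text readable head-on from
# wherever the user stands:
print(math.degrees(billboard_yaw((0, 1.5, 0), (2, 1.6, 2))))  # 45.0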
  • 44. Perspective Distortion. As type gets warped to fit into a 3D scene, the use of extreme perspectives makes the type more distorted, reducing the readability of the message. Perspective distortion: A warping of the appearance of an object or image, often caused by viewing it from an extreme angle or by how it is placed into a 3D scene.
  • 45. Customization Knowing that people will each have a different experience based on their physical location and environment, you can design for this. Using your user research to identify the most common places people interact with the experience, you can create different experiences for each. When a user first launches the experience, they would have to provide information about their physical environment. Reduce the effort for users (and yourself) by providing a list they can choose from; not only does this make providing the information easier for them, it also is easier to design for.
  • 46. Their answers, which could be as simple as selecting indoors or outdoors, would activate different features or designs based on their choice. Lighting is typically brighter outside than inside, for example, so you could alter the design of your type, and other elements, according to their selection.
  • 47. Minimize What information is essential to include in the copy? It is important to look through all the wording you are including in an experience and be as efficient as possible. Reading large amounts of type in XR is not optimal, so you want to narrow in on just what is needed and avoid anything that is not needed for the experience itself. You can also explore whether there are other ways to express the information instead of in type form; type may not always be the best solution to communicate an idea or action quickly.
  • 48. Using simple icons, arrows, illustrations, photographs, videos, or even a combination of these like a data visualization or an infographic could help eliminate the amount of type needed by communicating the same information in a visual way. For this to work well, you have to put in some work to narrow down what type is needed and how best to use each word effectively. When working with mobile AR especially, screen space is premium real estate. You want to reserve as much space as you can for people to see and interact with the AR experience. Make use of user interactions or UI elements to reveal more information.
  • 49. Design with purpose The key takeaway from this chapter is efficiency. There are many challenges in displaying type in AR, everything from working with lower resolution screens to choosing the best typeface to be viewed up close, far away, and everywhere in between. These challenges reveal the need for efficiency in all the type you include within an experience. As you go through your full user journey, check to make sure the type holds purpose everywhere you add it.
  • 50. DESIGN CHALLENGE MAKE AN AUGMENTED EYE CHART The goal of this challenge is to help test your typographic design choices at varying reading distances in augmented reality. 1. Based on some of the suggestions in this chapter, select three typefaces and font weights that you think will be legible in AR. 2. Using Adobe Illustrator or Photoshop, design an eye chart with different letters in each row. Use these letters in order and add line breaks as shown in the figure. EFPTOZLPEDPECFDEDFCZP 3. As you go down each line, reduce the point size as shown. 4. Save this file as a JPG. 5. Launch Adobe Dimension. From the basic shape library select a plane.
  • 51. 6. Using the widget tool on the plane, rotate the plane on the z-axis (blue) to lift the plane up vertically, as you would expect to see a traditional eye chart. Then position the plane on the x-axis (magenta) to lift it up off the ground. 7. Now you need to add your eye chart to the plane. To do this, move to the right side of the screen and select your Plane layer in the Scene panel. Click the arrow on this layer to view your customization properties. Find the Properties panel, and double-click the base color. Toggle from selecting a color to selecting an image. Here you can upload the JPG you saved earlier.
  • 52. 8. Adjust the positioning of your plane as needed to make sure you can view the letters correctly. 9. Now, the fun really begins. You are going to share this to Adobe Aero. While still in Adobe Dimension, choose File > Export > Selected for Aero. Choose Export from the pop-up window, and then save the file in your Creative Cloud Files folder. This should be the default folder that comes up, but if you don’t see it, you can find it in your user files. 10. Using a mobile device or iPad, launch the Adobe Aero application. When it prompts you to choose an image, choose your eye chart from your Creative Cloud files. Place this on a plane so you can start testing.
  • 53. 11. Make sure you are clear to move around the image. View the type up close, and step back away from it. How is the readability affected? Take notes. 12. Based on your findings, choose a different typeface and repeat the process to help identify which typefaces to try out in your next AR project.
  • 54. Take control Where type is used? Tags How type is used How to view type? perspective distortion. Customization Minimize Design with purpose DESIGN CHALLENGE MAKE AN AUGMENTED EYE CHART
  • 55. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 56. 2. Color for XR: Color appearance models Color space Additive Subtractive Linear versus gamma color space Usability ●Legibility and readability ●Contrast ●Vibrancy ●Comfort ●Transparency
  • 57. 2. Color for XR: Color appearance models ● Our relationship with color is personal and dynamic. ● Color creates an emotional impact, as we attach cultural meanings to the hues surrounding our society. ● “Not all reds are the same. Some are more intense, some more passionate, some more full of life, and some more cautionary”. ● The term color space is used to describe the capabilities of a display or printer to reproduce color information.
  • 58. For example, you will want to make sure that you match the color space used with the medium (print or digital). In a closely related concept, software often allows you to set the color mode. Color space: A specific organization of colors that determines the color profile used to support the reproduction of accurate color information on a device. RGB and CMYK are two common examples. In traditional print design, ensuring the accurate creation of color is so important to brand identities and marketing that the Pantone Matching System was developed to standardize color reproduction across designers and printers.
  • 60. Additive color/RGB: Red, Green, and Blue each take a color value between 0 and 255, creating over 16 million color combinations. Each color also has a specific value in the HSB or HSL format, which provides a numeric value for the hue, saturation, and brightness (or lightness) of the color. The 8-bit sRGB color format is the preferred input for images on many XR devices. Subtractive color/CMYK: The common color profile for subtractive (mixing-based) color is called CMYK (cyan, magenta, yellow, and key black); the term key is a direct reference to the key plate used in the four-color printing process. These four colors can produce over 16,000 different color combinations. Print still matters in XR when you create elements such as image targets, where a camera scans a printed image and then applies augmented content to it.
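To make the additive/subtractive split concrete, here is a minimal Python sketch of the textbook naive RGB-to-CMYK conversion. It ignores the ICC color profiles that production print workflows rely on, so treat it as an illustration of the arithmetic, not a print-accurate converter.

def rgb_to_cmyk(r: int, g: int, b: int):
    """Convert additive RGB (0-255 per channel) to subtractive CMYK (0-1)."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0  # pure black uses only the key plate
    r_, g_, b_ = r / 255, g / 255, b / 255
    k = 1 - max(r_, g_, b_)        # key (black) removes shared darkness
    c = (1 - r_ - k) / (1 - k)
    m = (1 - g_ - k) / (1 - k)
    y = (1 - b_ - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))  # (0.0, 1.0, 1.0, 0.0): red = magenta + yellow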
  • 61. Linear and Gamma color space: The difference between increasing the shading incrementally in the linear color space versus using the gamma correction, which is nonlinear.
  • 62. Linear Color Space: To replicate light in a way that is mathematically correct, the linear color space was created to match our physical space. Gamma Color Space: When creating digital images, there is a need for more accuracy and variety in the dark tones. To accommodate this sensitivity in the way the brain perceives shades, gamma correction, also referred to as tone mapping, was created. Once an image or graphic has been gamma corrected, it should, in theory, be displayed “correctly” for the human eye.
  • 63. Linear color space: Numeric color intensity values that are mathematically proportionate. Gamma correction: A process that increases the contrast of an image in a nonlinear way to adjust for the human eye’s perception and the way displays function. Many XR and game designers prefer to use the linear color space to give their work a realistic feel. This has also become a standard within software focused on immersive experiences, such as Unity and Unreal Engine.
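One widely used concrete form of gamma correction is the sRGB transfer function. Here is a minimal Python sketch using the published sRGB constants; this is the standard math, though individual engines may approximate it with a plain 2.2 power curve.

def linear_to_srgb(v: float) -> float:
    """Encode a linear-light value (0-1) with the sRGB gamma curve."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * (v ** (1 / 2.4)) - 0.055

def srgb_to_linear(v: float) -> float:
    """Decode an sRGB-encoded value (0-1) back to linear light."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# Mid-gray in linear light (0.5) is stored as roughly 0.735 once encoded,
# reserving more code values for the dark tones the eye is sensitive to.
print(round(linear_to_srgb(0.5), 3))  # 0.735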
  • 64. Tint: The increased lightness of a color by the addition of white. Shade: The increased darkness of a color by the addition of black. Some HMDs will support linear only, while others support gamma only. Some will allow a combination: linear color with some gamma corrections. Usability: Selecting colors that will make the experience usable. ● Legibility and readability ● Contrast ● Vibrancy ● Comfort ● Transparency
  • 65. Legibility and readability ● Legibility and readability refer not only to the color of type (text), but also to the color of the elements surrounding the text. ● To ensure that type is easily read, use a shape as a color background that helps separate the letters from the environmental background. ● White is the most common color for text and icons in XR. ● Red text on a black background is hard to read because they are both dark. ● Select colors that have varying shades, so you don’t have a dark color on a dark color; instead, you want light on dark or dark on light.
  • 67. Contrast ● When you have two colors that are close in shade or even saturation, they will start to vibrate off one another. ● To avoid this effect, select colors that have visual contrast. ● This means opposite qualities, such as light & dark or saturated & desaturated. Color Vibration: Colors that are close in tonal range start to vibrate when placed in close proximity. ● Contrast is essential for keeping your experience accessible. ● Making sure your color choices have solid contrast will make the experience usable for a greater number of users. ● This approach is more likely to suit a user’s unique needs, even if those needs change based on their environment.
  • 69. Vibrancy ● A color at its purest form is called chroma: the color fully saturated, without the addition of gray. These pure colors are bright and vibrant. ● Vibrancy increases the brightness of the desaturated tones. ● Vibrancy: The energy of a color caused by increasing or decreasing the saturation of the least saturated tones. ● Vibrancy can also change the energy of the color and, as a result, the overall experience. ● Bright oranges and reds will grab your attention over desaturated greens or grays.
  • 70. Comfort To create a positive user experience, you want the user to be comfortable. If the colors you select are too intense or create too much strain, then this will cause discomfort. If a user is met with too much discomfort, they will likely leave the experience to find a different one that is more comfortable. Larger areas of color in XR, especially vibrant and fully saturated colors, will be hard on the eyes. So, use these brighter colors sparingly to attract attention, but don’t use them in large quantities.
  • 72. Comfort: Have users test the experience, and even test different color combinations to see what works best for most people. Because colors will change in appearance between the computer you create them on and the actual device that plays the XR experience, it is important to test your designs. View the colors in context, and then make adjustments to improve the ease of use.
  • 73. Transparency:- Color will be displayed differently based on the kind of display you use. An optical see-through (OST) display, such as the Microsoft HoloLens 2, AR glasses, or smart glasses will show all elements as more transparent, due to the nature of the technology. Video see-through (VST) displays, such as mobile AR experiences that use the camera to view the physical world, have different considerations. Because any graphics or objects will be applied directly on top of the camera view in a VST-based experience, they can be displayed fully opaque.
  • 74. If the amount of transparency in a 3D model or object is outside your control, you can reserve opaque colors for UI elements so that they stand out on the display. Making the UI easy to see and interact with is a high priority. The perception of color is directly connected to the light in the scene. To ensure that users see the colors that you select for the design, you need to design the lighting as well.
  • 75. 2. Color for XR: Color appearance models Color space Additive Subtractive Linear versus gamma color space Usability ●Legibility and readability ●Contrast ●Vibrancy ●Comfort ●Transparency
  • 76. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 77. Light interactions: ●Type of light ● POINT LIGHT ● SPOT LIGHT ● AREA LIGHT ● DIRECTIONAL OR PARALLEL LIGHT ● AMBIENT LIGHT ●Color of light- Light Temperatures ●Lighting setup ● Soft lighting ● One-point lighting ● Three-point lighting ● Sunlight ● Backlight ● Environmental ●Direction and distance of light:- Falloff, Feathering ●Intensity of light:- 100% (the highest brightness) ●Shadows
  • 78. Adjusting light in a scene or onto an object does not just mean that you are simply brightening or darkening; believable immersion relies on the use of light and its accompanying shadow. With the exception of some stylistic deviations, you will want your lighting to mimic the real world. It makes sense, then, to be inspired by light from your physical space. Type of light: Think about lighting design as you would think about determining the colors of a composition: identify the key areas that you would like to have the most attention. The brightest and most vibrant colors will attract attention first.
  • 79. POINT LIGHT A point light will emit light in all directions from a single point. This light has a specific location and shines light equally in all directions, regardless of orientation or rotation. Examples are lightbulbs and candles. SPOT LIGHT A spot light works just like a spotlight used in stage design. It emits light in a single direction, and you can move the direction of the light as needed. Example is a stage spot light for a soloist.
  • 80. AREA LIGHT This light source is confined within a single object, often a geometric shape such as a rectangle or sphere. Examples are a rectangular fluorescent light and a softbox light. DIRECTIONAL OR PARALLEL LIGHT Parallel rays that mimic the sun; these lights are treated as infinitely far away, just as the sun effectively is. This means that the position of these lights doesn’t matter, only their direction and brightness. An obvious example is sunlight.
  • 81. AMBIENT LIGHT Ambient light applies to the full scene. You cannot choose a specific location for this light, and it will change the overall brightness of the scene. Example: natural, indirect light from a window. Color of light If you have ever gone lightbulb shopping or bought Christmas lights, then you’ve seen how many different colors of light there are. Even if you want just “plain white” light, you are greeted with a multitude of options. The reason is that no light is pure white. Light is made up of three colors: red, green, and blue. Mixing these colors in different proportions alters the color of the light we see, thanks to the additive property we discussed earlier. Light has a color temperature;
  • 82. it can be warm or cool depending on the proportional mix of colors. 2700K is a warmer, yellower white; 7000K is a cooler, bluer white; daylight is 6400K. Light Temperatures. The temperatures of various kinds of lights using the Kelvin scale for measurement.
  • 83. Lighting setup: It is quite common to use more than one light in your scene, just as you would in the real world. You can have window light and a table lamp in the same space. When you add additional lights to the scene, you need to control the relationship between the lights. Soft lighting: Soft lighting is the best choice if you need to add evenly distributed lighting to your scene. The name actually refers to the soft quality of shadows in the scene, making the overall contrast feel balanced and calm. This kind of lighting is frequently used for portrait photography.
  • 84. Soft Light. One soft light provides equal light across the 3D sphere. One-point lighting: The one-point lighting technique uses a single light and, as a result, creates a dynamic mood. It also creates harsher shadows where the light is not illuminating the object. A one-point light hits the 3D sphere, making the light and shadows more dramatic.
  • 85. Three-point lighting The three-point lighting technique uses three lights—key, rim, and fill—each of which has a specific role in the overall lighting setup. Three lights are set up around the 3D sphere to demonstrate the positions of the rim light (backlight), key light, and fill light.
  • 86. Three-Point Light: Three lights are set up around the 3D sphere to demonstrate the positions of the rim light (backlight), key light, and fill light. ● Key light illuminates the focal point of the scene or object and is the primary light in the scene. ● Rim light illuminates the back of your subject, separating it from the background and adding depth. ● Fill light fills in more light in the scene to reduce or eliminate harsh shadows and even out the overall lighting.
  • 87. Sunlight In the sunlight approach there is a single light source: the sun. If you are looking to replicate an outdoor scene, then you should use direct sunlight as your lighting. Unlike in the real world, however, you can easily move the direction of the sun in a 3D scene to mimic the type of sunlight you prefer: sunrise, high noon, sunset, or something in between.
  • 88. Backlight A primary light source behind your object is a backlight. This technique is not as commonly used, but it can add some mystery and drama to the scene as needed. This lighting also can cause harsh shadows and a lot of contrast between the light and the object, often creating a silhouette and reducing the number of details seen.
  • 89. Environmental The environmental lighting approach pulls lighting from an image that is imported into the program. This works best when using high-dynamic-range imagery (HDRI) for which the luminosity data of the image, specifically the darkest and lightest tones, are captured at a larger range. This basically means that more lighting data is stored within the image file (it is a 32-bit image, versus the standard 8-bit). These images can be used to replicate the lighting in the image in the 3D scene. Using environmental lighting is a fast way to generate a custom and believable lighting setup.
  • 90. Environmental. The light was created to mimic the lighting from the background image and replicated on the 3D sphere. Direction and distance of light The relationship between the light and the shadow provides a lot of information, and you can control the look and feel of that transition. As the light weakens, so too will the shadow. This weakening of a light along its outer edge is called falloff. The falloff has a radius and a distance, and you can control both. Lights with a smooth falloff have a high radius and a large distance, which shows a gradient blur that slowly goes from light to dark.
  • 91. Falloff : ● The visual relationship of shadow and light as illumination decreases while becoming more distant from the light source. ● The edge of the light can be controlled through edge or cone feathering to soften the line between the light and the shadow. ● This is how you can edit and control the edge itself. ● This option is often available for any lighting that is a cone shape, such as a spot light. Feathering :- The smoothing, softening, or blurring of an edge in computer graphics.
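A minimal Python sketch of these two controls. The inverse-square distance falloff is standard physics; the linear feathering ramp at the cone's edge is a simplifying assumption (real tools often use smoother curves), and all names are illustrative.

def spot_intensity(distance_m: float, angle_deg: float,
                   cone_deg: float = 30.0, feather_deg: float = 10.0) -> float:
    """Relative intensity of a spot light at a given distance and angle
    from the cone's axis."""
    falloff = 1.0 / max(distance_m, 0.01) ** 2   # inverse-square falloff
    if angle_deg <= cone_deg:
        edge = 1.0                               # fully inside the cone
    elif angle_deg >= cone_deg + feather_deg:
        edge = 0.0                               # fully outside the cone
    else:
        # feathered edge: soften the line between light and shadow
        edge = 1.0 - (angle_deg - cone_deg) / feather_deg
    return falloff * edge

print(spot_intensity(2.0, 35.0))  # half-feathered edge at 2 m: 0.125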
  • 92. Intensity of light Once you have the kinds of lights, their position, their roles in the scene, and their color properties identified, the next step is to determine how bright the light should be. This is the intensity. The default is 100% (the highest brightness), but this amount can be edited to make the light dimmer. The strength of the light can also be called energy. Shadows Wherever there is a light, there must be an accompanying shadow, where there is light falloff or light is blocked by another object.
  • 93. Without a shadow, the light will not be perceived as real and won’t be believable. Shadows also play a big part in our ability to perceive where an object is in space. Seeing a shadow far away from an object tells us that the object is suspended in the air or not near the plane. A shadow that connects to the bottom of the object tells us that the object is sitting directly on the plane. For example, natural sunlight casts stronger shadows than artificial light.
  • 94. The terms soft light and hard light actually reference the characteristic of the shadows the types of light create. Soft lighting provides a more even light across all of the subject and, in turn, creates soft shadows with a fuzzy edge. Hard lighting provides more dramatic lighting on an object, creating sharp edges on shadows. Shadows. 3D rendering highlighting where the main light source is (the setting sun) and how the light falls off into increasing shadow inside the cave. The farther away from the sunlight, the darker the shadows become.
  • 95. Light interactions: ●Type of light ● POINT LIGHT ● SPOT LIGHT ● AREA LIGHT ● DIRECTIONAL OR PARALLEL LIGHT ● AMBIENT LIGHT ●Color of light- Light Temperatures ●Lighting setup ● Soft lighting ● One-point lighting ● Three-point lighting ● Sunlight ● Backlight ● Environmental ●Direction and distance of light:- Falloff, Feathering ●Intensity of light:- 100% (the highest brightness) ●Shadows
  • 96. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 97. Dynamic adaptation Lighting estimation ● Brightness ● Light color ● Color correction values ● Main light direction ● Ambient intensity ● Ambient occlusion Environmental reflections ●Diffusion ●Roughness ●Metalness
  • 98. Dynamic adaptation: Consider the idea of the copycat: you learn and adapt to new interactions by imitating what someone else is doing, learning as you go along. This simple concept can be applied at a larger scale as we look at imitation in AR. With dynamic backgrounds and environments, the light and the properties of the light will constantly change. Just as a child sees a hand movement and repeats the action on their own, so too can software. Frameworks such as Google’s ARCore and Apple’s ARKit evaluate environmental light and repeat it as digital light. The basic method used is called lighting estimation.
  • 99. Using sensors, cameras, and algorithms, the computer creates a picture of the lighting found within a user’s physical space and then generates similar lighting and shadows for digital objects added to the space. To be effective and realistic, this analysis should be continual throughout the experience so it can adapt to changes in the lighting and within the environment. This is a key attribute in the ARCore and ARKit frameworks. Lighting estimation A process that uses sensors, cameras, machine learning, and mathematics to provide data dynamically on lighting properties within a scene.
  • 100. Lighting estimation When using this lighting estimation method, the computer and AR development framework work together to analyze the: ● Brightness ● Light color ● Color correction values ● Main light direction ● Ambient intensity ● Ambient occlusion
  • 101. Brightness: For each pixel on the display, the average lighting intensity can be calculated and then applied to all digital objects. This is called pixel intensity, and it adjusts the overall brightness based on the average available light in the environment. Light color and color correction: The white balance can be detected and checked dynamically to allow for color correction of any digital objects within the scene, so they react to the color of the light. Enhancing the color balance allows changes to occur smoothly and naturally instead of as abrupt adjustments, preserving the illusion of realism. If you have luminance properties applied to your 3D model, it will maintain those color properties, but it will also receive the color correction from the light estimation scan.
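A minimal Python sketch of the pixel-intensity idea described above, assuming the camera frame is available as plain (r, g, b) tuples. The Rec. 709 luminance weights are a standard choice for perceptual brightness; everything else here (names, the toy frame) is illustrative and not the actual ARCore or ARKit implementation.

def average_brightness(pixels) -> float:
    """Average perceptual luminance of a frame, normalized to 0-1."""
    total, count = 0.0, 0
    for r, g, b in pixels:
        total += 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights
        count += 1
    return (total / count) / 255 if count else 0.0

frame = [(200, 180, 160), (40, 40, 60), (120, 130, 110)]
print(round(average_brightness(frame), 2))  # ~0.46, applied as light level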
  • 102. Main light direction: By identifying the main directional light, the software ensures that digital objects added to the scene will have shadows cast in the same direction as other objects around them. It also enables specular highlights and reflections to be correctly positioned on the object to match the environment. This ensures that all the shadows and highlights follow consistently from the single directional light.
  • 103. Having this consistent direction of light may seem minor, but it is something that the brain sees and perceives without us even realizing it. The same applies to the intensity of the light and the falloff of those shadows. You don’t want the intensity of the light to feel too bright to match the scene, or the reverse, where the light feels too dark to match the scene. Light Direction: 3D rendering showing a prominent main light source that can be seen as it enters through the window opening. The position of this light source leaves the interior of the scene in shadow.
  • 104. Ambient intensity: We saw earlier how multiple lights can work together to create a full lighting setup. As an important part of the light estimation scan, ARCore can re-create what Google calls “ambient probes,” which add an ambient light to the full scene coming from a broad direction to create a softer overall tone. It works with the directional light to help the digital objects blend more seamlessly into the scene. Again, it is about replicating or imitating the real-world scene.
  • 105. Ambient occlusion Every time you add a computer-generated light, it will produce a generated shadow. Those shadows need to fall into the physical space to make them believable. To do so, two things need to happen. ● When you add an ambient light, it should both cast a shadow on the object and have the shadows occlude all around it. ● When the light hits the object itself, such as on a piece of fabric, each wrinkle should show a shadow.
  • 106. Something like a brick wall should have shadows created inside of every groove. Ambient light will hit multiple surfaces, and each one will create its own shadow. This shadow casting is called ambient occlusion. Ambient occlusion: Simulation of shadows both on an object itself and on the other objects around it, created by the addition of an ambient light source.
  • 107. Environmental reflections Look at reflective surfaces in a space and you will see environmental reflections: places where pieces of the space are reflected. Depending on the material of the objects, the relative reflectiveness will change. When you add a digital object to a scene, especially an object that has a metallic or glass surface, it should respond to the light around it in the form of a reflection. For these virtual objects, the reflections have to happen in real time and adjust according to the space to lend realism and believability to the objects.
  • 108. Reflection. A metallic sphere reflects images from the environment surrounding it. When creating your 3D objects, you can adjust several properties to affect how reflective an object is. ● Diffusion ● Roughness ● Metalness Diffusion: Even distribution of light across an object’s surface. Each material you apply to your 3D object has a base color or texture. Adjusting an object’s diffusion property affects the amount and color of light that is reflected at each point of an object.
  • 109. Diffusion: The diffusion stays consistent as you look around the object. It is a property that is applied equally along the material’s surface. Because this is an even distribution of light, it will result in a nonreflective surface. In 3D software, the default diffusion color is white, unless you change it otherwise.
  • 110. Roughness: ● If the surface is smooth and shiny like a car’s chrome bumper, it will be highly reflective. But if the surface has tiny bumps and cracks along the surface like the surface of a rock or brick, then it will be less reflective. ● This roughness property can change how matte or shiny an object can become. Increasing the roughness and using brighter colors will diffuse the light across the surface more, making it appear matte or rough. ● Reducing the amount of roughness, in addition to using darker colors, will cause the material to appear smooth and shiny. ● Materials that are shiny will also create specular highlights. ● These are the small shiny areas on the edges of an object’s surface that reflect a light. ● These specular highlights should change relative to the position of a viewer in a scene, because they are created by the position of the light.
  • 111. Metalness:- For the physical surface of an object, you can set multiple properties to determine how metallic or nonmetallic it is. ● The refraction index controls the ability for light to travel through the material. Light that cannot travel through an object will reflect back, and more metallic surfaces will produce sharper reflections. ● The grazing angle makes the surface appear more or less mirror-like. ● If the surface reflects the light sharply and has a mirror-like quality, it will appear more metallic. ● These properties can be adjusted to lower or increase the metalness to change the appearance of an object’s surface.
  • 112. ● If the surface is made more metallic and mirror-like, this will increase the need for environmental reflections on the object’s surface. ● Reflective surfaces also pick up colors and reflect images. So, a metallic object placed in a green room will also have a green tone. Reflection: Light and color work together to create a sense of depth and realism. ● When you create and design digital objects, they should reflect the environment around them. ● This process starts with selecting the appropriate color appearance mode for your experience, works through adding and adjusting any custom lighting options, and comes to life by adapting to the physical spaces that the object augments.
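As a concrete illustration of the metalness property, here is a minimal Python sketch of a convention common in metallic-roughness PBR workflows: reflectance at normal incidence (often called F0) is blended from roughly 4 percent for nonmetals toward the base color as metalness rises. This is a simplified sketch, not any particular engine's shading code.

def specular_f0(base_color, metalness: float):
    """Blend specular reflectance from a dielectric constant (~4%) toward
    the base color as metalness goes from 0 to 1. Colors are RGB in 0-1."""
    dielectric = (0.04, 0.04, 0.04)
    return tuple(d + (c - d) * metalness
                 for d, c in zip(dielectric, base_color))

gold_like = (1.0, 0.78, 0.34)
print(specular_f0(gold_like, 0.0))  # nonmetal: (0.04, 0.04, 0.04)
print(specular_f0(gold_like, 1.0))  # full metal: reflects its own base color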
  • 113. LIGHTING DESIGN Practice creating some lighting setups. To get started, add a sphere to your scene. Do not apply any materials to the sphere, so you can see the way the lights change the surface. Using the lighting setups covered, create the following: one-point light (soft and hard), three-point light (add a key, fill, and rim light), sunlight, backlight, and your own custom setup. For each lighting setup you create, go to your Render options and save a PNG file. Name each lighting setup accordingly. Save these images in a folder, and use them for reference as you work on more complex 3D models. This will create a lighting reference library for you.
  • 114. Dynamic adaptation Lighting estimation ● Brightness ● Light color ● Color correction values ● Main light direction ● Ambient intensity ● Ambient occlusion Environmental reflections ●Diffusion ●Roughness ●Metalness
  • 115. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 116. Sound Design: Hearing what you see Exploring how sound plays an essential role in creating an immersive experience. From how sound is created to how we can re-create it in a digital space, there are a lot of exciting things happening within 3D sound experiences. HEARING WHAT YOU SEE It is important first to understand how we hear so that we can then look at the best ways to re-create that sound to create realism in a soundscape. SPATIAL SOUND Just as in physical spaces, sound has direction and distance. There is different technology that will help create a sense of 3D sound. AUGMENTED AUDIO Just as we can add a layer of visuals into a user’s view, we can also add a layer of ambient audio to what they hear. VOICE EXPERIENCES With XR devices becoming more hands free, voice is becoming an intriguing way to interact with a computer.
  • 118. HEARING WHAT YOU SEE Listening Sound localization How do we hear sound? Loudness, pitch Raw audio to be captured and edited Music and voice audio Transferable and sharable audio How is sound useful? How do we use sound in XR? ● Ambient sound ● Feedback sound ● Spatial sound
  • 119. HEARING WHAT YOU SEE Find a place that you can sit comfortably for about five minutes, and bring a notebook and something to write with. It can be inside or outside—it really can be anywhere. 1. Close your eyes, and be still. Bring your awareness to listening. Try to avoid moving your head as you do this. Don’t turn your neck toward a sound; try to keep your neck at a similar orientation. 2. Listen for what you hear. See if you can identify what sounds you are hearing.
  • 120. 3. Then go one step further and try to identify where those sounds are coming from. Are they close? Far? Which direction are they coming from? Keeping yourself as the central axis, do you hear them in front of you? Behind you? To the left or right of you? Up high or down low? 4. When five minutes are up, draw out what you heard by placing a circle in the middle of the page to represent you, and then map out all the sounds that you heard around you in the locations you heard them from. If they felt close, write them closer to you, and in the same way, if they felt far away, then write them farther from you.
  • 121. Sound localization: Start to pay attention to where you place a sound in relation to yourself. Also consider how you determine the source of the sound. This is called sound localization: the ability of a listener to identify the origin of a sound based on distance and direction. It is impressive how well we can understand spatial and distance relationships just from sound. How do we hear sound? Sound is created through the vibration of an object. This causes particles to constantly bump into one another, sending vibrations as sound waves to our ears and, more specifically, to our eardrums. When a sound wave reaches the eardrum, it too will vibrate at the same rate. Then the cochlea, inside the ear, processes the sound into a format that can be read by the brain.
  • 122. To do this, the sound has to travel from the ear to the brain along the auditory nerve. Sound requires an element or medium to travel through, such as air, water, or even metal. You may already understand this process, but as we look to design for sound, there are some key properties that are essential to understand, including loudness and pitch. Loudness: The intensity of a sound, measured in relation to the space that the sound travels. Because we can detect a wide range of sound, we need a way to measure the intensity of the sound. This is called loudness, which uses the unit of decibels (dB) to measure how loud or soft a sound is.
  • 123. To help you add some perspective to dB measurements: ● A whisper is between 20 and 30 dB. ● Normal speech is around 50 dB. ● A vacuum cleaner is about 70 dB. ● A lawn mower is about 90 dB. ● A car horn is about 110 dB. Pitch:- Sound changes depending on how fast the object is vibrating. The faster the vibration, the higher the sound. This pitch is measured using frequency, or how many times the object vibrates per second.
  • 124. Pitch: The perceived highness or lowness of a sound based on the frequency of vibration. Frequency is measured in hertz (Hz). Human hearing ranges from 20 to 20,000 Hz; however, our hearing is most sensitive to sounds ranging in frequency between 2000 and 5000 Hz. Those who experience hearing loss will often have the upper pitches affected first. How do you choose which audio format to use? The answer depends on what you’re working with. ● Raw audio to be captured and edited: Uncompressed formats allow you to work with the highest quality file, and then you can compress the files to be smaller afterward. ● Music and voice audio: Lossless audio compression files maintain the audio quality but also have larger file sizes. ● Transferable and sharable audio: Lossy audio compression formats produce smaller file sizes, which facilitates sharing.
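To put numbers behind the loudness scale above, here is a minimal Python sketch of the standard sound pressure level formula, SPL = 20 * log10(p / p0), using the conventional 20 micropascal threshold-of-hearing reference. The sample pressure value is illustrative.

import math

def amplitude_to_db(pressure_pa: float, reference_pa: float = 20e-6) -> float:
    """Sound pressure level in decibels relative to the hearing threshold."""
    return 20 * math.log10(pressure_pa / reference_pa)

# A pressure of 0.02 Pa is 1000x the reference, i.e. 60 dB; note that
# every doubling of sound pressure adds about 6 dB.
print(round(amplitude_to_db(0.02), 1))  # 60.0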
  • 125. How is sound useful? ● Sound is much like a ripple in water; it starts in a central spot, and then it slowly extends out gradually getting smaller and smaller (or quieter and quieter) as it moves away from the center. ● Even if you hear a sound from far away, you can still detect where the sound is coming from or at least an approximate direction. ● You can tell the difference between footsteps walking behind you or down another hallway. ● You can tell the difference between a crowded restaurant and an empty one from the lobby, all because of the sound cues of chatter. ● The more chatter you hear, the more people must be inside. ● Sound adds an additional layer of information that will help the user further grow their understanding of what is going on around them.
  • 126. ● Just as light can be used to understand space and depth, sound can also be used to calculate distance and depth. ● Through the use of SONAR (sound navigation and ranging), you can measure the time it takes for a sound to reflect back its echo. ● This idea is used by boats and submarines to navigate at sea and to learn about the depth of the ocean as well. How do we use sound in XR? There are many ways that sounds play a role in our understanding of space. Within XR there are three main ways sound is used: ● Ambient sound ● Feedback sound ● Spatial sound
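The SONAR bullet above reduces to simple arithmetic: distance is the speed of sound multiplied by the echo delay, halved because the sound travels out and back. A minimal Python sketch; 1482 m/s is a typical speed of sound in sea water, though the exact value varies with temperature, depth, and salinity.

def echo_distance_m(echo_delay_s: float,
                    speed_of_sound_ms: float = 1482.0) -> float:
    """Distance to a reflecting object from a sonar ping's round trip."""
    return speed_of_sound_ms * echo_delay_s / 2  # halve the round trip

print(echo_distance_m(0.5))  # an echo heard 0.5 s later: 370.5 m away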
  • 127. Ambience for reality In order to really create a sense of “being there,” sound adds another layer of realness. When you see a train approaching, that comes with the expectation of hearing the wheels on the tracks, the chugging sound of the engine, steam blowing, and the whistle, horn, or bell. These sounds add to your perception of the train approaching. Notice the ambient sounds that allow the user to feel truly immersed.
  • 128. Listen for sounds that you can mimic to re-create the scene. Sounds that are noise intensive and have consistent looping, such as fans, wind, or waves, do not work as well in this medium, however, so you want to avoid them. Sounds that have a start and stop to them will be more effective and less intrusive. When designing for AR and MR, you can rely more on the natural ambient noise that will be in the user’s physical space.
  • 129. Providing feedback ● For the user experience, sound can be a great way to provide feedback about how the user is interacting within space. ● Hearing a sound when you select an interactive element will reinforce that you have successfully activated it. ● These sounds can be very quiet, such as a click, or louder, such as a chime. ● Just be sure to use these sounds in a consistent way, so that the user will start to associate the sounds with their actions. ● Sound cues can guide interactions.
  • 130. ● You can also use sound to direct the user to look or move to another location, to make sure they see an object that may not be in their gaze. ● It can also be used in VR to alert the user when they are close to the edge of their space boundaries. Creating depth Because our understanding of sound is 3D, it makes sense that you would also re-create sound to reflect depth. It also provides information to the user, such as how close or far away an object is. This topic is such an essential part of XR sound design that we are going to dive into how to make your sound have depth next.
  • 131. HEARING WHAT YOU SEE Listening Sound localization How do we hear sound? Loudness, pitch Raw audio to be captured and edited Music and voice audio Transferable and sharable audio How is sound useful? How do we use sound in XR? ● Ambient sound ● Feedback sound ● Spatial sound
  • 132. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 133. Spatial sound Single-point audio capture: Binaural Ambisonic Paradise case study Virtual Design Environment Behind the Scenes
  • 134. To re-create sound in a spatial environment, look at two components: ● How the sound is recorded ● How the sound is played back through speakers or headphones. The traditional types of audio recordings are mono and stereo. Mono sound is recorded from a single microphone. Stereo is recorded with two microphones spaced apart. Stereo is an attempt to create a sense of depth by having different sounds heard on the right and left sides of a recording. It is intended to create a sense of 3D audio.
  • 135. The concept of 360-degree sound has been experimented with for years, looking at how surround sound can allow sound to come from different speakers all around the room, creating a full 3D audio experience. This is used most commonly for the cinema and must be designed around people sitting in one fixed location. Single-point audio capture: To make stereo recordings sound even more natural, one option is the binaural audio recording format. To record binaurally, you record from two opposite sides and place each microphone inside a cavity that replicates the position and chamber of an ear. This concept is used to re-create sound as closely as possible to the way we hear it ourselves. Headphones are needed to accurately listen to binaural sound.
  • 136. Binaural A method of recording two-channel sound that mimics the human ears by placing two microphones within a replicated ear chamber positioned in opposite locations to create a 3D sound. Ambisonic audio uses four channels (W, X, Y, and Z) of sound versus the standard two channels. An ambisonic microphone is almost like four microphones in one. You can think of this as 2D (stereo) versus 4D (ambisonic) sound. Ambisonic microphones have four pickups, each pointed and oriented in a different direction making a tetrahedral arrangement. Sound from each direction is recorded to its own channel to create a sphere of sound.
  • 137. Ambisonic Microphone. Able to capture audio from four directions at once, this Sennheiser ambisonic microphone is creating a spatial audio recording from nature. Ambisonic A method of recording four-channel sound that captures a sphere of sound from a single point to reproduce 360° sound. It was developed by the British National Research Development Council in the 1970s, more specifically, by engineer Michael Gerzon.
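A minimal Python sketch of first-order ambisonic (traditional B-format) panning, which places a mono sample into the four channels (W, X, Y, Z) described above. The encoding equations and the -3 dB W channel are the classic B-format convention; the function name is illustrative.

import math

def encode_b_format(sample: float, azimuth_rad: float, elevation_rad: float):
    """Encode one mono sample into first-order B-format (W, X, Y, Z)."""
    w = sample / math.sqrt(2)                                     # omni, -3 dB
    x = sample * math.cos(azimuth_rad) * math.cos(elevation_rad)  # front-back
    y = sample * math.sin(azimuth_rad) * math.cos(elevation_rad)  # left-right
    z = sample * math.sin(elevation_rad)                          # up-down
    return w, x, y, z

# A source at 90 degrees azimuth (the listener's left) lands mostly in Y:
print(encode_b_format(0.8, math.radians(90), 0.0))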
  • 138. Paradise case study Many XR experiences rely on an individual user experience, where each person has their own set of headphones on or is inside their own virtual space. This is a feature for which there is more and more demand. In a social situation, the sound may not track with the user. However, it could be designed to be static within a space, allowing the sound to change as the user moves through the space (as in real life). Paradise is an interactive sound installation and gestural instrument for 16 to more than 24 loudspeakers. For this collaborative project, Douglas Quin and Lorne Covington joined their backgrounds in interaction design and sound design to create a fully immersive sound experience that they optimized for four to eight people.
  • 139. Paradise case study The installation allows users to “compose a collage of virtual acoustic spaces drawn from the ‘natural’ world.” As users move through the space and change their arm positioning, sensors activate different soundscapes from wilderness and nature to create a musical improvisation. This composition is unique each time as it relies on how each user moves and interacts within the space. Motions can change the density of sounds, the volume of them, the motion or placement of the sound in the space, and the overall mix of the sounds together.
  • 140. Paradise Experience:- Visitors react to the interactive soundscape environment of Paradise. Venice International Performance Art Week, 2016. Photograph used by permission of Douglas Quin
  • 141. This experience was reimagined for both interior and exterior spaces. Changing the location “creates a different spatial image,” Quin explained when I spoke with him and Covington about the challenges of the project. As they re-created the experience, they had to adjust for the location. The exterior exhibit required fewer ambient sounds, as they were provided naturally.
  • 142. The interior exhibit required more planning based on how sound would be reflected and reverberated by the architecture of the space.
  • 143. Behind the Scenes. This behind-the-scenes screen capture shows the installation environment for Paradise. Numbers indicate loudspeakers. The green rectangular blocks are visitors. The large red circles are unseen zones of sounds that slowly rotate. Sounds are activated when a visitor breaks the edge of a circle. The triangles with colored balls are sound sources for any given sound (with volume indicated by the size of each ball).
  • 144. Spatial sound Single-point audio capture: Binaural Ambisonic Paradise case study Virtual Design Environment Behind the Scenes
  • 145. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 146. Augmented audio AR Sound How does it work? Speaker Closeup More than a speaker How can you get your own? Imagine going for a bike ride while listening to your favorite playlist, receiving voice-driven instructions, and still being able to hear car engines, sirens, and horns honking all around you. This is the power of augmented audio.
  • 147. Augmented audio The layering of digital sound on top of, but not blocking out, the ambient sounds of an environment. Augmented audio, also referred to as open-ear audio, allows you to hear all the ambient sounds around you while adding another layer of audio on top of it. This allows you to receive navigational directions, participate on a phone call, listen to an audiobook, or listen to your favorite music—all while still being connected to the world around you. Smartglasses, also known as audio glasses, come in different shapes from a number of different manufacturers. Many of these come in a sunglasses option, as they are most likely to be used outside. However, many come with customizable lens options to personalize your experience.
  • 148. Bose was the first to the market and started off demonstrating the technology using 3D printed prototypes at the South by Southwest (SXSW) Conference and Festival in 2018. I remember walking by the Bose house in Austin, Texas, where they had taken over part of a local restaurant to showcase their AR glasses. I was intrigued. I wanted to know how Bose, known for their high-quality speakers and headphones, was entering the world of AR. Well, I quickly found out how important audio is to an immersive experience while wearing their AR glasses on a walking tour of Austin. The experience started by connecting the sunglasses to my phone through Bluetooth. By taking advantage of the processing power of a smartphone, Bose could keep the glasses lightweight and cool.
  • 149. One person in the group spotted a famous actor stepping out of their vehicle for a film premiere at the festival and was able to tell everyone else as we continued listening to our guided tour. AR Sound. 3D printed prototypes of the original Bose AR glasses at SXSW 2018. To be clear, these glasses and similar pairs from other developers don’t show the user any visuals; they provide audio only.
  • 150. They allow for voice interactions without needing to take out a phone. They allow the user to interact hands-free and ears-free. They are essentially replacements for headphones or earbuds that allow the user to still hear everything around them at the same time. How does it work? Using what is called open-ear technology, a speaker is built into each arm of the audio glasses. What helps make them augmented, while also staying private, is the position and direction of the speakers. One speaker is placed on each arm of the glasses near the temple so that the sound is close, but still allows other sounds to enter the ear cavity.
  • 151. The speakers point backward from the face, so they are angled right toward the ears. This angle reduces how much of the sound can be heard by others around the wearer. Even in a 3D printed prototype there was not much sound escaping from the glasses, and very little could be heard even by those standing on either side.
  • 152. Speaker Closeup. The speakers on the Bose AR glass prototypes are near the ear. In addition to the speakers themselves, there is also a head-motion sensor built in that can send information from the multi-axis points to your smartphone. This allows the app to know both the wearer’s location as well as what direction they are looking. This information can help customize directions—knowing the wearer’s right from left for example—as well as making sure they see key parts of an experience along the way.
  • 153. More than a speaker Listening is only half of the conversation. To allow for user feedback, these glasses also include a microphone. This allows the glasses to connect with the user’s voice assistant (more on this in the next section). Once again, this function helps maintain the hands-free functionality. It also allows the glasses to be used for phone calls and voice memos for those who want to communicate on the go. Many models have the option to turn the microphone feature on and off for privacy when needed. This is an important consideration. If you do purchase a pair, make sure that you can control when the device is listening and when it is not.
  • 154. To further customize the experience, one arm of the glasses is equipped with a multi-function button that you can tap, touch, or swipe. Other than microphone control, this is the only button you will find on the glasses. It allows you to change your volume, change tracks, make a selection, and navigate within an experience, without having to access your phone directly.
  • 155. How can you get your own? Although Bose has recently announced they would stop manufacturing their audio sunglasses line, they are still currently available for purchase as of this writing. They were the first to the market but decided not to continue manufacturing the glasses, as they didn’t make as much profit as the company had hoped. When interviewed about this, a Bose spokesperson said, “Bose AR didn’t become what we envisioned. It’s not the first time our technology couldn’t be commercialized the way we planned, but components of it will be used to help Bose owners in a different way. We’re good with that. Because our research is for them, not us.” Roettgers, J. (2020, June 16). Another company is giving up on AR. This time, it’s Bose. Protocol. www.protocol.com/bose-gives-up-on-augmented-reality.
  • 156. Since Bose’s first launch, others have stepped up production of their own versions of Bluetooth shades. Leading the way is Amazon with their Echo Frames, which bring their well-known Alexa assistant into a pair of sunglasses. Everything many have learned to love about having a voice-powered home assistant is now available on the go. Other options to check out include audio glasses from GELETE, Lucyd, Scishion, AOHOGOD, Inventiv, and OhO.
  • 157. If you are looking to use your glasses for more than just audio communication, some of the shades on the market also include cameras, allowing for some action-packed capture. Leading the way in this market is Snapchat with their Spectacles Bluetooth Video Sunglasses. Audio glasses might remain as stand-alone audio devices. Augmented audio may also be incorporated into full visual and auditory glasses. But in either case, the focus on exceptional sound quality will pave the way.
  • 158. Augmented audio AR Sound How does it work? Speaker Closeup More than a speaker How can you get your own?
  • 159. UNIT V- Extended Reality(XR) Development 1. Augmented Typography: Legibility and readability Creating visual contrast Take control Design with purpose 2. Color for XR: Color appearance models Light interactions Dynamic adaptation Reflection 3. Sound Design: Hearing what you see Spatial sound Augmented audio Voice experiences Power of sound.
  • 160. Voice experiences & Power of sound Voice experiences ● VUI for voice user interface ● NLP for natural language processing ●Not a replacement- Virtual Keyboard ●Context ●Scripts Power of sound
  • 161. Voice experiences Voice is now an interface. Voice interfaces are found in cars, mobile devices, smartwatches, and speakers. They have become popular because of how they can be customized to the user’s environment, the time of day, and the uniqueness of each situation. Alexa, Siri, and Cortana have become household names, thanks to their help as virtual assistants. We are accustomed to using our voice to communicate with other people, not computers. It makes sense that companies like Amazon, Apple, and Microsoft try to humanize their voice devices by giving them names.
  • 162. It is important to make these interfaces feel conversational to match the expectations that humans have for any kind of voice interaction. As stated in Amazon’s developer resources for Alexa, “Talk with them, not at them.” This concept has also been supported by Stanford researchers Clifford Nass and Scott Brave, authors of the book Wired for Speech. Their work affirms how users relate to voice interfaces in the same way that they relate to other people. This makes sense, because that is the most prominent way we engage in conversation, up until this point.
  • 163. Voice user interface: The use of human speech recognition in order to communicate with a computer interface. Alexa is one example of a voice interface that allows a user to interact conversationally. The challenge of this, of course, is that when we speak to a person, we rely on context to help them make sense of what we are saying. Natural language processing, or understanding the context of speech, is the task that an NLP software engine performs for virtual-assistant devices. The process starts with a script provided by a VUI designer. Just as you’d begin learning a foreign language by understanding important key words, a device like Alexa must do something similar. This script allows the designer to train an assistant for an experience, or skill, as it is called in VUI design.
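As a toy illustration of what such a script can look like, here is a minimal Python sketch that maps sample utterances to intents. All intent names and phrases are hypothetical, and the keyword matcher is a deliberately naive stand-in for the statistical models a real NLP engine uses.

SKILL_SCRIPT = {
    "PlayTourIntent": ["start the tour", "begin the walking tour", "play tour"],
    "PauseIntent": ["pause", "hold on", "stop for a second"],
    "RepeatIntent": ["say that again", "repeat", "what was that"],
}

def match_intent(utterance: str) -> str:
    """Naive matcher: return the first intent whose sample phrase appears
    in the utterance; fall back when nothing matches."""
    text = utterance.lower()
    for intent, samples in SKILL_SCRIPT.items():
        if any(sample in text for sample in samples):
            return intent
    return "FallbackIntent"

print(match_intent("Could you say that again, please?"))  # RepeatIntent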
  • 164. Natural language processing The use of artificial intelligence to translate human language into something a computer can understand. With many XR experiences linking to a smartphone, or even promoting hands-free use as an added benefit, there is potential for other ways for users to interact within an experience. If you are relying on tapping into smartphone technology, then you first need to understand how to design for it. Unlike graphical user interfaces (GUIs), VUIs are not reliant on visuals. The first thing to understand is that voice interactions should not be viewed as a replacement for a visual interface.
  • 165. Not a replacement It is important not to get into the mindset that a voice interaction can serve as a replacement for a visual interaction. For example, adding a voice component is not an exact replacement for providing a keyboard. You also need to be aware that the design approach must be different. If you show a keyboard to a user, they will likely understand what action to complete thanks to their past experiences with keyboards. A keyboard, in and of itself, communicates to the user that they need to enter each letter, number, or symbol. If they are able to do this with both hands, as on a computer, it may be an easy enough task.
  • 166. But if they have to enter a long search term or password using an interface where they have to move a cursor to each letter individually, this task may be greeted with intense resentment. One way to overcome this daunting task is to provide a voice input option instead. It is often much easier to say a word than to type it all out. However, the process of inputting data this way is much different than with a traditional QWERTY keyboard or even an alphabet button selection, and it is not as familiar.
  • 167. When a user sees the letters of the alphabet or a standard QWERTY keyboard, they connect it with their past experiences, so they can easily start to navigate to the letter they want to choose. But when you are relying on voice, the user will draw on their past communication with other people to figure out how to interact.
  • 168. Virtual Keyboard. Concept of communicating with a friend via a screen hologram with a full QWERTY keyboard. There needs to be something in the UI that communicates to the user that they can use their voice to interact. This is often shown through the use of a microphone icon. However, it can also come in the form of speech. One way to let someone know that they can speak is by starting the conversation with a question such as “How can I help you?”
  • 169. Depending on the device and where it will be used, design this experience to best communicate to the user that they can use their voice, and to start the conversation. What do you do first in a conversation? Before you speak, you might make sure the person you are speaking to is listening. But if you are speaking to a computer, you don’t have body cues or eye contact to rely on. So, this active listening state needs to be designed. Though most of the conversation experience will not use visuals, this is one area where a visual provides great benefit. Using a visual to let the user know that the device is listening can substitute for the eye contact they are used to when talking with another person. This can be a visual change, such as a light turning on or a colorful animation, so they know that what they are saying is being heard.
  • 170. Tip Provide visual cues to provide feedback to the user. A great place for this is to communicate that a device is ready and listening. Companies like Apple have created their own custom circular animations that they use across all their devices; when a user sees Apple’s colorful circle of purples, blues, and white, they connect it with a voice interaction. Seeing this animation communicates that the device is ready and listening for a voice command. All of this happens instead of a keyboard appearing. So, it isn’t a replacement, but rather a totally different way of communicating, and therefore in need of a totally different interface and design.
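As a thought experiment, the listening state can be modeled as a tiny state machine that drives the visual cue. This is a minimal sketch with hypothetical state and cue names; real devices wire this into their wake-word detector and display hardware.

    from enum import Enum, auto

    class DeviceState(Enum):
        IDLE = auto()        # not listening; indicator off
        LISTENING = auto()   # wake word heard; show the cue
        PROCESSING = auto()  # utterance captured; NLP engine at work

    def visual_cue(state):
        # Stands in for the light or animation that substitutes
        # for eye contact in a human conversation.
        return {
            DeviceState.IDLE: "indicator off",
            DeviceState.LISTENING: "pulsing ring animation",
            DeviceState.PROCESSING: "spinner",
        }[state]

    for state in DeviceState:
        print(state.name, "->", visual_cue(state))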
  • 171. Context When people communicate, we use our knowledge of the context to create a shared understanding. Once the user knows the device is listening, they may know to start talking, but how does the user know what to say, or even what is okay to say, without any prompts? With a voice interface there is no visual to show what the options are. It is best practice to have some options voiced, such as “you can ask...” followed by a few options. You may be familiar with the options provided by an automated phone menu: “Press 1 to talk to HR, press 2 to talk to the front desk.” Those are often very frustrating, because they are one-sided and not conversational. The goal here is to start by understanding the user’s goal.
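The “you can ask...” pattern can be generated directly from the skill’s known options. A minimal sketch, assuming a hypothetical list of at least two options:

    def options_prompt(options):
        # Turn a list of things the skill can do into one spoken prompt.
        # Assumes at least two options.
        quoted = [f'"{o}"' for o in options]
        return "You can ask me to " + ", ".join(quoted[:-1]) + f", or {quoted[-1]}."

    print(options_prompt(["check the weather", "play music", "set a timer"]))
    # -> You can ask me to "check the weather", "play music", or "set a timer".

Unlike a phone menu’s numbered choices, voiced options invite the user to respond conversationally.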
  • 172. This is where everything we have been talking about comes together. Once you have identified the why in your project, planned out roughly how it might look, done user research, and created a user flow, you can start to predict some options that a user may be looking for. You can start off by letting the user know what their options are based on this research. This can be laid out in a set of questions or by asking an open-ended question. When you record an interview with someone, it is best practice to ask open-ended questions or compound questions. The reason is that you want the person to answer with context.
  • 173. If you ask two questions in one, a compound question, it is a natural tendency for them to clarify the answer as they respond. Perhaps you ask “What is your favorite way to brew coffee, and what do you put in it?” Instead of answering “French press and with cream,” it is likely that they will specify which of the questions they are answering within the answer itself.
  • 174. We’re discussing question and answer methods here because such exchanges point out an important way that humans communicate. We like to make sure that our answers have the correct context to them. This is especially true when there is more than one question asked. So, a likely response would be, “My favorite way to brew coffee is using a French press, and I like just a little cream in it.” When traditional media interviews are edited, the questions are often cut, so it is important that the answer carries its own context.
  • 175. The need for context is important to understand as it relates to voice interfaces: Humans may not provide needed context in their voice commands. Having the computer ask questions that are open-ended or have multiple options will trigger the user to provide more context in their answer, which will help the device more successfully understand what the user is asking. Using the power of machine learning and natural language processing, the device creates a system that recognizes specific voice commands. These commands must be written out as scripts.
  • 176. Scripts Think about all the different ways within just the English language someone can say no: nope, nah, no way, not now, no thanks, not this time... And this is just to name a few. You also need to consider what questions the user may ask and the answers they may give. This requires anticipating what the user will say and then linking that response to activate the next step. With a traditional computer screen, the user has a limited number of options based on the buttons and links you provide. With the input of a click or a tap, the computer knows to load the connected screen based on that action. With voice, the interaction is reliant only on spoken language.
  • 177. As part of the voice interface design, an important step of the process is to create a script. This script should embrace the dynamic qualities of conversation. A successful script should go through multiple levels of user testing to identify questions that users answer—and also all the different ways they answer. When the user isn’t given a set number of options to choose from, the script helps translate the human response into something actionable by the computer.
  • 178. Script Sample. Sample answers collected to show possible responses to the question “How have you been?” All likely answers need to be collected to help a voice assistant understand how to respond to each possible answer. While it is easy to tell a computer what to do if someone says “yes” or “no,” it is less likely that the user will stick to these words. Computers may easily be able to understand yes and no commands, but what happens when a user, speaking as they always do to other people, says “I’m going to call it a day” or “I’m going to hit the sack”? These idioms are not going to be understood by a computer unless it is taught them. Without the cultural context, the computer could conclude that you are going to physically hit a sack, instead of going to sleep.
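Here is a minimal sketch of the normalization a script makes possible: the collected answers, synonyms and idioms alike, are mapped to canonical intents the computer can act on. The phrase lists are illustrative, not an exhaustive script.

    # Collected during user testing; a real script would hold many more.
    NEGATIVES = {"no", "nope", "nah", "no way", "not now",
                 "no thanks", "not this time"}
    SLEEP_IDIOMS = {"i'm going to call it a day",
                    "i'm going to hit the sack"}

    def canonicalize(utterance):
        # Normalize casing and trailing punctuation, then look the phrase up.
        u = utterance.lower().strip().rstrip(".!?")
        if u in NEGATIVES:
            return "NO"
        if u in SLEEP_IDIOMS:
            return "GOING_TO_SLEEP"
        return "UNKNOWN"  # a real skill would ask a clarifying question

    print(canonicalize("I'm going to hit the sack."))  # -> GOING_TO_SLEEP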
  • 179. How do you anticipate the responses users will give, so you can build them into a script? You ask them, and you listen to them. Use a script and thorough user testing to collect anticipated responses and help keep the experience complete. Multiple rounds of quality assurance testing must be completed throughout the whole process. Think of how color, image choice, and copy set the tone for a design in print or on the web. In the same way, the sound quality, mood, and content of the voice’s responses set the tone for the voice experience. As you can imagine, this is no small task.
  • 180. To do this well requires a team of people dedicated to designing these VUI skills. However, the scripts that are created have components that can be reused across different skills. The conversational experiences we have will start to build a relationship with the device and will also help establish a level of trust. These interactions have a way of making an experience personal. Think about how nice it is when someone knows and uses your name. What if they could also learn your favorite settings and words? What if they could mimic these preferences as they guide you, provide clear instructions, and help reduce your anxiety by saying the right words? This is what voice experiences can do. They are rooted in our experiences of conversations with friends and colleagues, so it is no surprise that we start to trust them like one, too.
  • 181. Power of sound Sound design should not be an afterthought; it makes or breaks the experience. Once you start to notice how sound plays a role in your physical world, you can start to design ways for sound to create more immersion in your XR experiences. Using audio can help an experience feel more real, enhance your physical space, or even help you interact with a computer interface, hands-free.
  • 182. SOUND LOCALIZATION DESIGN In the beginning of this chapter, you played the role of the listener. You directed your awareness to the sounds that happened around you. This time, you can take what you learned from that experience, and what you have learned in this chapter, to design your own soundscape. To do this, draw a chart similar to the sound localization diagram you created from your listening experience. However, this time you are going to design what sounds will be happening, and where.
  • 183. ● Think about where the experience will be happening, and if it is for VR or AR, as that will determine how much ambient sound you will need to plan for. ● Think about the distance and intensity of the sound from the user’s perspective. If you want the extra challenge, you can then record the sounds and bring them into a sound editor of your choice, such as Apple Logic Pro or Adobe Audition, to start editing them.
  • 184. To create a fully immersive experience, you will need to bring the edited sounds into a program, such as Unity or Unreal Engine, that will allow you to spatially map out the locations of the sounds.
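To preview the idea before opening an engine, here is a minimal sketch of the math behind a sound localization chart: each planned sound gets a position, its loudness falls off with distance, and its angle from the listener becomes a left/right pan. The inverse-distance model and the positions are illustrative assumptions, not the algorithm of any particular engine.

    import math

    def localize(source_xy, listener_xy=(0.0, 0.0)):
        # Listener at the origin, facing the +y direction.
        dx = source_xy[0] - listener_xy[0]
        dy = source_xy[1] - listener_xy[1]
        distance = math.hypot(dx, dy)
        gain = 1.0 / max(distance, 1.0)      # quieter as distance grows
        pan = math.sin(math.atan2(dx, dy))   # -1 = full left, +1 = full right
        return gain, pan

    # A hypothetical soundscape chart: name -> (x, y) position in meters.
    soundscape = {"birds": (-3.0, 4.0), "fountain": (0.0, 2.0), "traffic": (8.0, 1.0)}
    for name, pos in soundscape.items():
        gain, pan = localize(pos)
        print(f"{name}: gain={gain:.2f}, pan={pan:+.2f}")

Engines such as Unity and Unreal handle this (and much more, like head-related transfer functions) for you; the value of the sketch is in thinking through distance and direction for every sound you plan.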