2. The Universal Windows Platform
App structure, page layout, and navigation are the foundation of an app's user experience. The Universal Windows Platform (UWP) makes it extremely easy to create a user interface that harmonizes across a variety of devices with different display sizes, as well as foundational navigation paradigms that make sense across devices.
16. Delivery methods
Delivery method | Use with | Description
Local | Tile, Badge, Toast | A set of API calls that send notifications while your app is running, directly updating the tile or badge, or sending a toast notification.
Scheduled | Tile, Toast | A set of API calls that schedule a notification in advance, to update at the time you specify.
Periodic | Tile, Badge | Notifications that update tiles and badges regularly at a fixed time interval by polling a cloud service for new content.
Push | Tile, Badge, Toast, Raw | Notifications sent from a cloud server, even if your app isn't running.
28. Cortana Logic Flow
- Checks to see if App exists by App Name
- Checks to see if App exists by Command Prefix
- Checks to see if App Command Set contains verbiage
- Instantiates App
- Instantiates App Voice Command Service
- Processes App Service logic
- Processes OOTB Commands
- Instantiates App
30. Personal Assistant Actions
Cortana interaction can trigger user actions, which can then trigger background tasks to send information from Cortana to your app in the foreground, creating a “full circle” and loosely connected personal assistant experience.
Editor's Notes
As a development platform, the Universal Windows Platform stands alone in its ability to provide a rich set of tools, controls, and universal APIs that work the same, and easily adapt and harmonize, across devices with varied display sizes and navigation paradigms.
A primary goal of the Universal Windows Platform is to make it simple and straightforward for developers to create a user experience once that scales, adapts, and adjusts as needed across devices, including desktops, tablets, phones, gaming systems (such as an Xbox), or even a Surface Hub or HoloLens. These concepts are collectively referred to as “Adaptive UI”.
Similar to legacy concepts found in HTML and CSS, the Universal Windows Platform (UWP) bases controls and control layout on “fluid” concepts, meaning controls and panels can typically automatically reposition themselves and “reflow” based on adjustments to parent controls.
All layouts and controls in UWP have the notion of Margin, Padding, and the ability to use “fixed” or “automatic” sizing.
In the Universal Windows Platform, developers can take advantage of the concept of Adaptive Layouts by using Adaptive “Triggers” that serve as a threshold or “breakpoint” for triggering UI changes.
For example, an Adaptive Trigger might “kick in” when a window (or screen) width increases beyond 900 pixels, at which point the user interface would move to a new Visual State.
Visual States and Visual State Groups are a convenient mechanism when working with Adaptive Triggers. When the threshold of an Adaptive Trigger is crossed, code can be written (either in XAML or code-behind) to simply “move” to a new Visual State, for example a “WideState” or “NarrowState”.
In general, Adaptive Triggers are XAML-based, and embedded directly in the page or view itself, making it easy to reference controls directly and avoid extra “strongly connected” code-behind logic.
NOTE: Visual States and Adaptive Triggers are designed to be placed immediately within the parent control being referenced for the page. For example, this will typically be a RelativePanel, ScrollViewer, or Grid. Nesting Adaptive Triggers at a lower child level will result in the inability to resolve setters.
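The notes above can be sketched in XAML roughly as follows. This is a minimal illustration, not production markup: the control names (HeaderText), setter targets, and the 900-pixel breakpoint are illustrative, and the Visual States are placed immediately inside the root Grid per the note above.

```xml
<Grid>
  <VisualStateManager.VisualStateGroups>
    <VisualStateGroup>
      <VisualState x:Name="NarrowState">
        <VisualState.StateTriggers>
          <!-- Applies from 0 pixels up, until a wider trigger takes over -->
          <AdaptiveTrigger MinWindowWidth="0" />
        </VisualState.StateTriggers>
        <VisualState.Setters>
          <Setter Target="HeaderText.FontSize" Value="20" />
        </VisualState.Setters>
      </VisualState>
      <VisualState x:Name="WideState">
        <VisualState.StateTriggers>
          <!-- "Kicks in" once the window width crosses 900 pixels -->
          <AdaptiveTrigger MinWindowWidth="900" />
        </VisualState.StateTriggers>
        <VisualState.Setters>
          <Setter Target="HeaderText.FontSize" Value="32" />
        </VisualState.Setters>
      </VisualState>
    </VisualStateGroup>
  </VisualStateManager.VisualStateGroups>

  <TextBlock x:Name="HeaderText" Text="Hello" />
</Grid>
```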
Most robust apps have multiple pages that are often designed to be used in some type of a sequential “flow”. The accumulation of active page navigation and history is commonly referred to as the “navigation stack”, invoking the symbolic concept of a “stack” of pages.
The Universal Windows Platform provides a broad range of mechanisms not only to navigate from page to page (or frame) easily, but also to handle concepts such as back and forward across devices, including devices where hardware buttons are part of the paradigm.
The Universal Windows Platform also supports “injecting” controls into app elements, such as a title bar, via concepts such as the SystemNavigationManager, which lets you respond when users press the system-provided back button (such as a hardware button) or use gestures and voice commands that raise the same event.
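As a sketch of the above (assuming a standard Frame-based page; the page name and handler body are illustrative), wiring up the SystemNavigationManager might look like this in C#:

```csharp
// Namespaces assumed: Windows.UI.Core, Windows.UI.Xaml.Controls
public MainPage()
{
    InitializeComponent();

    var navManager = SystemNavigationManager.GetForCurrentView();

    // Show the system-provided back button in the title bar
    navManager.AppViewBackButtonVisibility = AppViewBackButtonVisibility.Visible;

    // Raised by hardware back buttons, gestures, and voice alike
    navManager.BackRequested += (sender, e) =>
    {
        if (Frame.CanGoBack)
        {
            Frame.GoBack();
            e.Handled = true; // stop the system from also handling the press
        }
    };
}
```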
All apps (with more than one page or view) follow a “peer-based” navigation system, a “hierarchy-based” navigation system, or (more commonly) a combination of both.
Understanding the differences and similarity of page navigation paradigms is critical for good app design, as users are accustomed to working with content in a certain way.
For example, in a News Reader app, a user would not expect to navigate THROUGH a Technology category to get to an Entertainment category, but WOULD expect to navigate through a Technology category to get a detail of a news article in the Technology category.
Although not every development platform uses the term “controls”, all platforms have the “concept” of controls. Controls are the elements that get added to a page or view that (typically, but not always) provide the mechanism for user interaction.
The Universal Windows Platform provides a large variety of out-of-the-box controls for almost every common scenario, including buttons, labels (TextBlock), text entry (TextBox), date and time pickers, and other layout-related controls.
Simply put, a control “pattern” is a recipe for combining several controls into a single control, in a supported way, to create a new control. Most user interface elements are ultimately the result of standardized control patterns.
For example, a simple navigation system may be the following recipe: Border + Grid + TextBlock + ListView + ListView Items
Beyond the more common, standard controls like buttons and labels, the Universal Windows Platform provides one of the most powerful sets of “advanced” controls available to developers today, including support for deep image manipulation, multimedia, mapping, speech recognition, and even digital ink and inking canvases.
The term “binding” is used in a general sense, not just to mean “data” binding, but binding the value of a property or object to a resource or model property.
Essentially, every property of a control, page, or other UI element can be “bound” to a resource, for example a Style, Converter, or Command.
Binding is a key aspect of the Model-View-ViewModel (MVVM) concept.
In the Universal Windows Platform, binding is typically performed against a value that exists in an App Resource, View Model, or Model. For example, a TextBox's Text property can be bound to a DisplayName property of an Employee Model, or the background brush/color of a Page can be bound to a Theme Resource that exists globally in the App.xaml file.
For most scenarios, a Binding Mode can be set to determine how the value is affected by changes to the source, such as Two-way, which communicates and reflects value updates TO and FROM a source.
Controls that inherit from an ItemsControl can easily be bound to collections.
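The binding notes above can be sketched in XAML as follows. The ViewModel, SelectedEmployee, and Employees names are hypothetical, assuming the page exposes a view model property:

```xml
<StackPanel>
  <!-- Two-way binding: edits in the TextBox flow back to the Employee model -->
  <TextBox Text="{x:Bind ViewModel.SelectedEmployee.DisplayName, Mode=TwoWay}" />

  <!-- ListView inherits from ItemsControl, so it binds directly to a collection -->
  <ListView ItemsSource="{x:Bind ViewModel.Employees}" />
</StackPanel>
```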
One of the more powerful features of the Universal Windows Platform is the ability to use “loose” templates for creating “Adaptive” Toasts (or alerts) based on simple, and mostly optional, XML schema elements.
Adaptive Toasts in the Universal Windows Platform can be interactive as well, providing common control elements, such as buttons and textboxes for user interactivity, without ever having to open the app associated with the Toast, resulting in a true “mini” app experience.
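A minimal interactive adaptive toast payload might look like the following sketch. The text, ids, and arguments are illustrative; the ToastGeneric template and the actions element carry the interactive pieces:

```xml
<toast>
  <visual>
    <binding template="ToastGeneric">
      <text>New message</text>
      <text>Reply without opening the app.</text>
    </binding>
  </visual>
  <actions>
    <!-- Inline text entry shown directly in the toast -->
    <input id="replyBox" type="text" placeHolderContent="Type a reply" />
    <!-- Background activation handles the reply without opening the app -->
    <action content="Send" arguments="action=reply" activationType="background" />
  </actions>
</toast>
```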
By relying purely on in-app code, meaning toasts that are sent by the app to itself locally, toasts can be delivered via the Local, Scheduled, and Periodic methods.
When combined with the power of a Notification Hub, such as through Azure Notification Hub services, your app can also respond to real-time push notifications or display Toasts initiated by a back-end service and sent as tile templates, using either legacy toast XML or adaptive payloads.
The concept of interactive toasts goes well beyond anything available on any other platform. Essentially, interactive toasts can be “mini” apps with the ability to respond to simple control events and then take actions such as sending information and opening an app, or even running a background task to update data, call a service, or perform app-specific actions.
Just like Adaptive Toasts, the Universal Windows Platform supports a rich set of tile services based on “adaptive” tile concepts. Adaptive tile payloads can contain XML markup for any or all supported devices and form factors, making it easy to send a single tile update that works (and adapts) perfectly across all devices, form factors, and screen resolutions.
To create a more noticeable, engaging experience, the Universal Windows Platform supports a wide variety of Adaptive Tile template enhancements, such as a “peek” animation for viewing content that slowly “reveals” as it displays content. Other enhancement template animations exist; however, the peek animation is easily the most common.
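An adaptive tile payload with a peek image might look like the sketch below. The image path and text are illustrative; the peek image displays first, then slides up to reveal the text content beneath it:

```xml
<tile>
  <visual>
    <binding template="TileMedium" branding="name">
      <!-- The peek image is shown first, then animates away to reveal the text -->
      <image placement="peek" src="Assets/peek.png" />
      <text hint-style="caption">Latest headline</text>
      <text hint-style="captionSubtle" hint-wrap="true">Details appear beneath the peek image.</text>
    </binding>
  </visual>
</tile>
```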
Unlike other platforms, the Universal Windows Platform supports true “out-of-process” background tasks, as well as the more simplified “in-process” background task. Out-of-process background tasks perform much like a legacy “Windows Service”, where they can execute and operate on their own schedules, totally disconnected from the app experience. This creates stronger “resiliency”, as misbehaving background tasks will not affect the app experience in any way.
To fully integrate an out-of-process background task into an app flow there are a number of working parts. The starting point is always a Windows Runtime Component, which is a “specialized” class library capable of working within the limitations of shared contracts and sandboxing environments. Best practice dictates the creation of a “common” class library to easily share code between an app’s user experience and background task logic.
Background tasks get run or “instantiated” based on various “triggers” available for use on the device. These triggers can be as simple as a timer, for example running a background task every 15 minutes, or based on actual system events, such as a time zone change, or battery level or power state. Background tasks can also be run based on push notification events, for a real-time background update process or notification system.
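The timer-trigger example above might be registered roughly as follows in C#. The task name and entry point are hypothetical; the entry point class would live in the Windows Runtime Component described earlier:

```csharp
// Namespace assumed: Windows.ApplicationModel.Background
// Note: apps typically call BackgroundExecutionManager.RequestAccessAsync()
// once before registering any background tasks.
var builder = new BackgroundTaskBuilder
{
    Name = "MyPollingTask",
    TaskEntryPoint = "MyApp.BackgroundTasks.PollingTask" // class in a Windows Runtime Component
};

// Run every 15 minutes (the minimum interval for a TimeTrigger); not one-shot
builder.SetTrigger(new TimeTrigger(15, false));

BackgroundTaskRegistration registration = builder.Register();
```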
The combination of speech synthesis, speech recognition, and semantic interpretation creates an extremely robust voice and speech offering in the Universal Windows Platform. In general, especially when leveraging the power of Cortana services, ANY task can be performed via voice commanding, and any textual response can be magically turned into audio.
Although the possibilities for voice and speech integration are endless, some of the more common scenarios are to simply get app-centric information, kick off a process, search and return document or file results, add reminders, alerts or calendar events, and provide a guided experience to make decisions.
The Speech Recognizer is a critical piece of the Universal Windows Platform API Speech puzzle, providing ready-to-use grammars for dictation and web search, a default system UI that helps users discover and use speech recognition features, as well as text-to-speech (TTS), natural language understanding, and commanding.
The Universal Windows Platform provides support for initializing and configuring a speech synthesis engine to convert text strings to audio streams, also known as text-to-speech (TTS). Voice characteristics (like gender), pronunciation, volume, pitch, rate or speed, and emphasis can also be controlled via Speech Synthesis Markup Language (SSML).
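An SSML fragment controlling the characteristics mentioned above might look like this sketch (the prosody values are illustrative):

```xml
<speak version="1.0"
       xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  This is <emphasis>important</emphasis>.
  <!-- prosody controls rate, pitch, and volume for the enclosed text -->
  <prosody rate="slow" pitch="low" volume="80">
    And this part is spoken more slowly, in a lower voice.
  </prosody>
</speak>
```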
Why integrate Cortana services into an app? Because Cortana provides a comprehensive framework for seamless voice integration from your app or service into the Cortana experience via simple mechanisms that can be leveraged to handle even the most complex logic flows. Virtually anything users can do in your app, they can do via Cortana.
Although simple and logical, the Cortana logic flow tends to be misunderstood by new developers. A clear understanding of what decisions Cortana makes, and why, is invaluable when working with Cortana services.
Once in place, Cortana integration adds some of the coolest and most powerful enhancements available in an app experience, as well as helpful accessibility features. However, there are a number of steps and dependencies that must be put in place for Cortana services to be correctly configured, including package manifest updates, the creation of a voice command definition (VCD) file, the creation of an App Service (a specialized background task), and programmatic installation of voice command sets.
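A minimal voice command definition (VCD) file might look like the sketch below. The command set name, prefix, and phrases are all illustrative; the CommandPrefix is what Cortana matches when checking whether a spoken request belongs to the app:

```xml
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="MyAppCommands_en-us">
    <CommandPrefix>My App</CommandPrefix>
    <Example>My App, show my orders</Example>

    <Command Name="showOrders">
      <Example>show my orders</Example>
      <ListenFor>show my orders</ListenFor>
      <Feedback>Showing your orders</Feedback>
      <!-- Navigate opens the app in the foreground; use VoiceCommandService for background handling -->
      <Navigate />
    </Command>
  </CommandSet>
</VoiceCommands>
```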
Personal Assistant actions are a special, less-documented feature of Cortana integration, providing developers with the ability to have a “full circle” Cortana integrated experience where queries and guided conversations can occur with Cortana, decisions can be made, and Cortana can then navigate directly into the user experience of an app from the Cortana pane.