The term Rich Internet Application was first coined by the Macromedia product management team to describe the direction they saw their technology platform (Flash plus what was then called Remoting) enabling within web-based applications. It is important to understand that RIA as a concept is no longer owned by Macromedia (now part of Adobe); it is a term that captures the general spirit of what the Flash platform was trying to achieve, and many technologies can now enable similar behavior, often even better than Macromedia could. The defining traits:
Connected: it lives on the network.
Distributed: both the data and the GUI are delivered remotely (i.e., over the network).
Local: access to local computation outside of the browser itself is required: file system, multimedia drivers, saved data.
Intelligent: the client shares the logic of the application with the server.
Moving: animation is used for GUI and cinematic effects.
Drag & Drop: the ability to select an object with your mouse pointer, hold it with a mouse-down action, and move it with a gesture of the mouse. Usually the end goal is to release the held object over a target; releasing it triggers a new action such as a location change, or even a whole macro of actions.
Menus & Toolbars: containers that help designers create a system of discoverability when a list of actions becomes too long to display all at once within a screen view. Toolbars specifically use icons instead of text as trigger buttons for actions, usually enacted on selected objects.
Windows & Wizards: windows are moveable, resizable containers where GUI elements can be placed, usually for hierarchical or linearly progressing reasons. A wizard can take place in a window, but it doesn't have to. It is characterized by controlling the flow of tasks through a linear progression whose end completes a more complex set of tasks that would be too difficult to do all at once. Wizards are also used to build interfaces where a first step determines the elements shown during the subsequent steps.
Panels: like windows, but instead of hovering over the pre-existing content on the screen, they divide an existing screen into structured action and view areas.
Trees: a type of hierarchical navigation that expands and collapses nodes to hide the complexity of the total structure, allowing users to browse their way to discovery.
Form validation: this form of client-side intelligence allows the system to check for various criteria, so that the user does not have to wait for the slower server-side validation to tell them they have made a preventable error.
Non-HTML controls: there are a host of controls.
These are just examples of controls that are common in desktop applications but unavailable in the standard set within XHTML. Keyboard actions include hotkeys (Ctrl-P = print), using Shift and the arrow keys to make selections, or holding Shift with the mouse to create a different type of selection than a plain click.
Context menus: these menus are opened with the right mouse button or by holding down the single mouse button (PC vs. Mac).
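The client-side form validation described above can be sketched in a few lines. This is a minimal sketch, not a production validator; the function and field names are illustrative. The key idea is that pure predicate functions run locally, so the user gets immediate feedback instead of waiting for a server round trip (the server still re-validates).

```javascript
// Minimal client-side validation sketch (names are illustrative).
function isNonEmpty(value) {
  return value.replace(/^\s+|\s+$/g, '').length > 0;
}

function isEmail(value) {
  // Deliberately loose check; authoritative validation stays server-side.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

function validateSignup(fields) {
  var errors = [];
  if (!isNonEmpty(fields.name)) errors.push('name is required');
  if (!isEmail(fields.email)) errors.push('email looks invalid');
  return errors; // an empty array means the form may be submitted
}

// In a browser you would wire this to the form's submit event, e.g.:
// form.onsubmit = function () {
//   return validateSignup(readFields(form)).length === 0; // block bad submits
// };
```

The DOM wiring is left as a comment because the interesting part is the pattern: validate before the round trip, and only fall back to the server for checks the client cannot do.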
Movement: moving pixels on a screen has been shown to get people's attention. Yup, that bouncing monkey does drive people to want to shoot it. But more importantly, movement can be guided to aid users with some basic understandings of the GUI they are using. Because objects are being manipulated in a metaphorical sense and not in reality, animation helps us do the following:
Understand that the computer is processing (we've all seen progress bars or animated mouse cursors).
Understand where objects go when we are done with them, or where they came from.
Notice the state change of an object.
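A state-change animation of this kind can be sketched as interpolation plus a timer. This is an assumed DHTML-era approach (the names fadeSteps and el are illustrative): the step calculation is kept as a pure function, and a setInterval loop would feed the values to a style property.

```javascript
// Compute the intermediate opacity values for a fade (pure and testable).
function fadeSteps(from, to, count) {
  var steps = [];
  for (var i = 1; i <= count; i++) {
    // Round to two decimals to keep style values tidy.
    steps.push(Math.round((from + (to - from) * (i / count)) * 100) / 100);
  }
  return steps;
}

// Browser wiring (illustrative): fade an element out over ~500 ms to
// signal that it is going away, rather than having it vanish abruptly.
// var steps = fadeSteps(1, 0, 10), i = 0;
// var timer = setInterval(function () {
//   el.style.opacity = steps[i++];
//   if (i === steps.length) clearInterval(timer);
// }, 50);
```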
What did we mean by "page" when the metaphor first came into use? We literally meant a page. We were creating documents, so an HTML file was a finite unit of content, even if it spanned multiple pages when printed. Today that same file can contain many documents, or its purpose may not be documentation at all. The concept of the page as we understood it is not as valuable as it once was. However, the concept of a "get" is still understood by both designers and developers as that moment when we relinquish the browser's hold on the screen and go fetch a whole new collection of GUI elements. There are many reasons to still do this, most of which have little to do with the user. Further, it is not the only reason we make dramatic changes to screen elements; sometimes the task has changed so drastically that the original working context is no longer viable. The decision a designer now needs to make is twofold:
How connected is a collection of tasks to the context of the user's current goals?
What limits in my technical architecture might force a change?
Both questions are equally important, and they play off each other. No technology is without its constraints, so don't dismiss the second question as someone else's problem. It is your problem as the designer of a complete system; ignoring it will come back to haunt you later when your engineers tell you they can't build the original design.
Web 2.0 is really only marketing speak. What is more important is to talk individually about the particulars that make it up:
RIAs (our topic today)
How to architect user-generated content
Creating social networks
Do you want an open system that is iterative, or a closed one with longer lifecycles?
"Limited designer role" is a tricky one since, well, we are designers. The question here is where you are going to concentrate on aesthetics and where you are going to concentrate on system and behavioral design. The latter is not forfeitable at all.
Sometime circa 2005, fewer and fewer sites continued supporting Netscape 4.x. As the legacy technology that kept AJAX from happening, dropping it was a bold move that could only have happened through the popularity of a cross-platform successor to the rebel browser community: Firefox. Since the technology behind AJAX was introduced in Internet Explorer, it was the relinquishing of Netscape 4.x that was required to move on.
This effect existed first with Flash-based technologies inside a ubiquitous browser component, but being able to achieve it in straight (no plug-in) HTML turned the tables. Add DHTML's pretty good animation features to the puzzle and you can do nearly anything in open-standards HTML that you can do with a plug-in like Flash.
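The plug-in-free effect rests on the XMLHttpRequest object: fetch data in the background and rewrite one region of the page instead of requesting a whole new document. A minimal sketch, with illustrative function and parameter names (updateRegion and the element passed in are assumptions, not a real API):

```javascript
// Turn a params object into a query string (pure, testable helper).
function encodeParams(params) {
  var pairs = [];
  for (var key in params) {
    pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(params[key]));
  }
  return pairs.join('&');
}

// Fetch a URL asynchronously and replace one page region in place.
function updateRegion(url, params, el) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url + '?' + encodeParams(params), true); // true = async
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      el.innerHTML = xhr.responseText; // only this region changes
    }
  };
  xhr.send(null);
}
```

This is the whole trick: no page-level "get", so the surrounding context (scroll position, other widgets, user focus) survives the update.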
Being a drawing tool, animation tool, development environment, and player all in one, Flash has some real strengths for bringing RIAs to life. There are particular advantages around cinematic effects and other uses of animation.
Extensions or add-ons take browsers to the next level. They add functionality that seems native to the browser in useful and powerful ways. Some are so simple but add so much value.
Scott … can you help this out?
If link doesn’t work open ford.ppt
Deployment issues are those that affect both the building and the distributing of your application. Before you distribute, you need to know if you can build it. Can your engineers master something new within the time requirements and other resource constraints of your system? Is such training worth the added cost? Afterwards, there are support questions around the user, the user's environment, and your environment. If your business needs smaller, rapid iterations to get to market faster, can your user base support the distribution model that will be required? Lastly, there are the ethical questions people are raising about technology, and the legacy-technology issues.
These questions focus on the product itself. Since most web applications don’t require access to the local system, a plug-in based architecture is probably not required.
I keep to the blogosphere for the most part. There are a host of RIA-related blogs out there with great examples of what others are doing. BarCamps and other un-conferences are a great in-person way to engage those doing this work in a more intimate fashion.
Empressr – a PPT-style presentation solution built on Flash.
Netflix – probably one of the earliest examples of RIAs with their star ratings. They have slowly added more and more to their system.
Meebo – one of the best AJAX/RIA examples because it allows for two-way communication through a web browser using a variant of AJAX known as COMET (get it?).
Flickr – the photo-sharing site recently removed all of its Flash elements and is now HTML/AJAX only. Nice stuff. It does have a desktop application that is required for bulk upload operations. If time allows we can look at how BubbleShare did the bulk upload using hidden Flash elements.
Zimbra – an email/calendar/contacts groupware GUI with lots of nice AJAX being used.
Designing is not the same as conceiving of an idea. Sometimes all anyone needs to do is conceive a great idea. But design happens through exploration, experimentation, divergent thinking, followed by reflection, critique and review by others (aka testing and studio). A studio environment is one where your ideas are shared with peers and reviewed (sometimes cruelly) against their expectations of what good design is, and their understanding of the goals (sometimes as you communicated them).
This is a napkin drawing: the quintessential sketch. What makes it a sketch in design terms is that it was quick, and through designer intervention it can communicate the basic ideas of the designer's thought process. Sketches usually take advantage of assumptions about many aspects of a design, so that all the details aren't necessary; in this case the designer is assuming that people understand a web site. The sketch is rapid and rough. It is rapid so that you don't waste time on it. It is rough for similar reasons, but also to allow a lot of interpretation by those the designer is trying to communicate with. Why? Because a sketch is used not only to convey an existing idea but also to help a group of people generate MORE ideas. This leads us to multiplicity: a sketch of one design is nice, but by being quick we can sketch many ideas, and if we are quick enough, without interrupting the flow of ideas in a larger group. The focus of a sketch is to communicate concepts, not specifics. Specifics by their very nature require details that will slow down the sketching and thus the idea-generation process.
A framing can exist at many levels of detail, just as language can. E.g. "Dick runs home" vs. "My doctor, Richard Hartwell III, M.D., drives his '77 IROC Camaro on I-93, through the Pit, at 87 MPH, to get to my home, because I have a fever, am nauseous, and am developing a rash." One level of the frame allows for the first type of language, and over time we add more detail, also known as refinement. Frameworks tend to be blocky, but they can enrich their language through refining. Review and critique play a much larger role in framework development than in sketching.
Well both Frameworks and Refinement work hand in hand to get you to a final detailed design. You could say they are one long phase, but I like to think of them as distinct units because it helps me focus on different sets of questions. When I’m doing a framework I can skip questions about color, type, brand, etc. and concentrate more on language, behavior, structure, and navigation.
At the core of IxD is the design of behaviors. When we think about RIAs, we are really thinking about increasing the types and availability of behaviors that weren't present at the same density in previous systems. A behavior, simply put, is the way an object or system reacts to human input. A behavior can be about selection, triggering an action on that selection, re-displaying information anew, or changing the structure of existing information or other display properties. In our case these are done through the input devices a PC affords us thus far: mouse, keyboard, microphone, camera, touch-screen. In turn the system can deliver output to the human in response through a monitor, speakers, or a haptic device (some mice can vibrate on computer command). One thing to keep in mind when designing behaviors is that communicating them well requires interactivity in your prototypes. Behaviors are experiential; merely talking about them doesn't give anyone the proper ability to reflect on and review the meaning behind them, let alone test them.
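A behavior like selection can be expressed very compactly as state plus a rule for how input changes it. This is a hedged sketch, not a real toolkit API: applyClick and the id values are illustrative, and state is a plain array so the rule stays testable outside a browser.

```javascript
// How a selection reacts to a click, with and without the Shift modifier.
function applyClick(selection, id, shiftKey) {
  if (!shiftKey) {
    return [id]; // plain click: select only the clicked object
  }
  // Shift-click toggles membership, growing or shrinking the selection.
  var next = [];
  var found = false;
  for (var i = 0; i < selection.length; i++) {
    if (selection[i] === id) { found = true; } else { next.push(selection[i]); }
  }
  if (!found) next.push(id);
  return next;
}
```

In a prototype, a mousedown handler would call applyClick with event.shiftKey and re-render the highlight, which is exactly the kind of experiential detail that only shows up when the prototype is interactive.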
The key elements I like to concentrate on when designing are speed, organization, framing, cataloging, and prototyping.
The introduction of Ajax has exposed the weakness of wireframing. Wireframing was not invented during the desktop interaction revolution; instead it was created to capture the page-to-page interaction inherent in the web.
In order to understand the reason that wireframes are “creaking” under the change, you really need to understand the change. Here is a spectrum that illustrates the difference as you move from page based web sites to single page RIAs. There is a wide range of transitional applications that provide elements from both sides. But the deeper the interaction, the harder it is to capture in a traditional wireframe.
The basic assumption is that events are coarse-grained. Wireframes best capture the layout, priority, content, and some of the behavior of the page.
The impact of these micro changes means that we need to either adapt the wireframe technique or come up with new techniques to capture an interface.
Notice the preponderance of nuances. You have to capture all of these "blessed moments": invitation, activation, de-activation, affordances, constraints, timing, delays, rate of feedback, etc.
When I (Bill) first joined Yahoo! I was part of several meetings to discuss the best way to handle drag and drop interaction across our various sites. The first challenge was just the sheer number of discrete moments (interesting moments) of interaction that needed to be considered. Each of these event states had a multitude of interaction and visual options available.
Add to this the number of user interface elements (actors) that get involved in a drag and drop interaction and you have a large set of permutations. How do you construct user testing, prototypes, design drag and drop libraries if you can’t zero in on a subset of candidate interaction techniques? And all of this just for drag and drop!
The result was that I created a simple matrix to plot the interesting moments vs. the actors. The intersections of these form the visual and interaction possibilities. This chart illustrates the interesting moments grid for the Drag & Drop Modules pattern.
One of the applications of this was the addition of drag and drop to the my.yahoo.com site. We used a large printout of the (blank) matrix and penciled in the interactions. This drove our prototyping and user-research efforts and finally helped us settle on a design. One good thing is that for each blank cell you have to decide whether you want interaction or not, which is different from simply never thinking about that micro-interaction. I see this as a common problem in teams (even within Yahoo!) when they don't use a tool like this to document microstates. The fully rendered matrix above is an after-the-fact rendering of what was decided; its purpose was to communicate to other teams what we had learned. This is a good example of the difference between conceptual design vs. communicating a design vs. recording a design.
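The grid itself is simple enough to capture as data, which makes the "blank cells are pending decisions" discipline mechanical. The actors, moments, and cell treatments below are illustrative stand-ins, not the actual Yahoo! matrix:

```javascript
// An interesting-moments grid as data: moments x actors, with each filled
// cell recording a decided treatment. Missing cells = decisions not yet made.
var grid = {
  moments: ['invitation', 'drag-start', 'drag-over-target', 'drop'],
  actors: ['dragged module', 'target zone', 'cursor'],
  cells: {
    'dragged module|drag-start': 'lift with drop shadow',
    'target zone|drag-over-target': 'highlight insertion point',
    'cursor|drop': 'revert to default pointer'
  }
};

// Enumerate the cells that still need an explicit decision.
function undecidedCells(g) {
  var missing = [];
  for (var i = 0; i < g.actors.length; i++) {
    for (var j = 0; j < g.moments.length; j++) {
      var key = g.actors[i] + '|' + g.moments[j];
      if (!(key in g.cells)) missing.push(key);
    }
  }
  return missing;
}
```

Printing undecidedCells(grid) gives the team the same to-do list the penciled-in printout did: every actor/moment pair that no one has yet ruled in or out.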
Adaptive Path calls these microstates and uses a concept they call “key frames”. This terminology comes from the animation business (and is used by Macromedia in their Flash projects.) The idea is to have a callout from the wireframe that shows the static steps or key frames of the micro interaction.
Here is an example of the same thing from Yahoo! in an early design of podcast.yahoo.com. Notice it describes an animation fade. One could imagine other techniques like having legend callouts that reference patterns. The patterns would document the exact timing, style, etc.
At a previous company I (Bill) created a wireframing toolkit with Visio. One of the techniques was a way to animate wireframes in Visio by manipulating layers and associating them with steps in an interaction flow (storyboard steps). The complete technique and software is described at boxesandarrows.com: http://www.boxesandarrows.com/view/storyboarding_rich_internet_applications_with_visio
Another technique is to simulate animation with Photoshop. You can use layered comps + the animation palette to create animated demos in a fairly quick manner. See the article: http://looksgoodworkswell.blogspot.com/2005/11/animating-interactions-with-photoshop.html
Designing Powerful Web Applications with AJAX & Other Rich Internet Applications David (Heller) Malouf & Bill Scott UI 11 Cambridge, MA October 9, 2006
Macromedia (today Adobe) coined the term "Rich Internet Application" to describe the growing trend of adding media richness (more motion within a single page view) enabled by applications built with their product Flash MX.