

Responsive Design Tested: What a recent experiment reveals about the potential ROI of mobile design

Jan 09, 2014



To most web designers and forward-looking experts in the field, responsive mobile design is not an optional feature. Mobile users have to be able to view your website on their devices without having to scroll everywhere to read it.

But is responsive design really worth the investment? Has anyone actually pitted a responsive design against a regular desktop design?

These were the questions in the minds of the marketers at a large media company faced with the mobile dilemma.

To answer these questions, the marketers designed a test.






Upload Details

Uploaded via SlideShare as Adobe PDF

Usage Rights

© All Rights Reserved


  • Jon Powell: Hey Mike,

    Thanks again for taking time to share your perspective with me. I wanted to reply-all here so everyone could get the benefit of some of the additional points of clarity we discussed and perhaps join in on the discussion.

    NOTE: Though I am not allowed to share certain pieces of data (per the partner), I will share below as much as I can per my conversations with the data sciences and page development teams.

    -The content on both the control and treatment versions was exactly the same EXCEPT for the control’s desktop background image (faux pop-up) and the control’s desktop exit-page link image (“x” versus button). These desktop variables were not completely eliminated in order to minimize unnecessary development time and cost (running an 'in-between test'); their effect could be completely mitigated by advanced test tracking and segmentation (which was used)

    -The test was actually tracked as 3 separate tests (mobile vs mobile, tablet vs tablet, desktop vs desktop) to increase the transferability of the learnings and to isolate the background image variable only present in the desktop control experience

    -The results of all 3 separate tests were aggregated (as shown in the initial slides) to give executives an additional layer of insight into the risk of a wide implementation of the responsive code (the aggregate favored responsive design, since mobile and tablet didn’t hurt) as opposed to producing and implementing dedicated device pages (which would significantly increase cost when executed across multiple areas of the site)

    -The only code change in the treatment was the set of CSS changes necessary to enable responsive display of content according to the screen size of the device requesting the page.

    -Page weight was closely compared, and page load time was tested on multiple devices and connections by a dedicated QA team; no significant difference was discovered or noted (thus limiting the impact of that additional, important variable). I would need to conduct a detailed interview with the QA team to be entirely certain.

    -Sample distribution for each device category was consistent with historical audience data and behavior gleaned from a detailed data/metrics analysis, and the sample size in each segment was sufficient to provide a result that could be taken seriously from a scientific perspective (can’t share absolute numbers, though).

    -Since the content of the page (other than the background and link image) did not change, the form elements and the keyboard types called up for each device category did not change either, so keyboard type wasn't a variable in this test. I would need to investigate further to determine which options were used (to see if a follow-up test on this variable would make sense)

    -Dedicated tablet and phone pages (separate pages and code sets) were not tested in this protocol, specifically so we could understand the implications of using responsive design (one code set) as opposed to multiple code sets and pages (which has an impact on tech spend at scale). There may be a follow-up in the queue, but it is not high priority given the cost/benefit implications compared to other queued tests that would produce a higher yield (potential gains vs. scaled implementation cost)

    -On user research: Real-time audience behavior and long-term trends from a data/metrics analysis were used to plan and anticipate the traffic for the test. Data from the test indicated a consistent audience, mitigating a potential selection effect (i.e. the audience profile that makes up the site was consistent with the audience that participated in the test).

    -On additional UX-style user studies and focus groups: I would have to do additional research to see if this was a factor. I do know that when it has been available to partnerships I have overseen, it has always been utilized to form the most informed hypothesis possible.
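    As an illustration of how a single experiment can be tracked as three separate per-device tests, here is a minimal sketch (hypothetical code, not from the experiment itself) of deterministic 50/50 assignment within each device segment:

```python
import hashlib

def assign_variant(visitor_id: str, device: str) -> str:
    """Deterministically split visitors 50/50 within a device segment.

    Hashing the device category together with the visitor id gives each
    segment (mobile, tablet, desktop) its own independent, repeatable
    split, so results can be read per segment or aggregated afterwards.
    """
    digest = hashlib.sha256(f"{device}:{visitor_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

# A returning visitor always lands in the same variant for a given device
print(assign_variant("visitor-123", "mobile"))
```

    Because assignment is a pure function of visitor id and device category, each device category can be analyzed as its own test before the results are aggregated.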

    Hopefully this helps. All great observations and questions from Mike!


    Jon Powell
    Sr. Manager, Research and Strategy
    3 months ago
  • Mike Donahue, UX Architect & Designer at Citrix: I was on the webinar and listened to the whole thing, and unfortunately almost every aspect of this presentation, its assumptions, the test itself, and the interpretation of the findings is flawed. About the only thing correct was the description that responsive design changes the layout of displayed content based on screen size.

    The minute the responsive design didn’t exactly match the non-responsive design for desktop, you undermined confidence in the findings. Beyond that, the test suggests that responsiveness itself plays a role in conversion, and it really doesn’t. Well-designed websites with great content and user-friendly interactions do, regardless of responsiveness.

    During the talk the speaker mentioned the change in the design was made because the existing design couldn’t be exactly replicated responsively, which is simply not the case. Once the decision was made to change the design, a new baseline for a non-responsive version of the new design needed to be established to make the responsive findings of any value.

    There are also other factors not discussed that would dramatically impact the results.

    1. Were both versions, responsive and non-responsive, of equal page weight? In other words, was the code downloaded by the devices the same size in KB for both designs? Download speeds have a big impact on conversions, and a poorly executed responsive site can easily weigh in at more than a non-responsive page.

    2. Were the numbers based on a similar sample size? Were there basically the same number of phone, tablet, and desktop visitors in both tests? Can you share the analytics?

    3. What was the overall sample size used for this experiment? Small samples can yield seemingly huge shifts that are actually statistically irrelevant. There are so many missing data points that could have given confidence to the findings.

    4. Were best practices followed when setting up the form elements? Did the responsive design assign the proper type to each form field to enable the most appropriate keyboard for different information, like phone numbers or emails? Data entry is more difficult and time-consuming on touch devices, and enabling the right keyboard facilitates completion. To be fair, I can’t really see what the forms are asking for, so this might not be relevant here.

    5. You don’t mention testing a dedicated phone- or tablet-sized design. It could simply be that this particular audience doesn’t like to fill in forms on touch devices. Or they may have had an entirely different goal while browsing from their phones. Was any direct user research done to find out if this audience prefers to use their desktop for form entry?
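    To make point 3 concrete, here is a sketch (hypothetical numbers, not from this experiment) of a standard two-proportion z-test, showing how the same relative lift can be significant on a large sample yet meaningless on a small one:

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return z, p_value

# The same 10% relative lift (5.0% -> 5.5% conversion) at two sample sizes:
_, p_large = two_proportion_z(1000, 20000, 1100, 20000)   # 20,000 visitors/arm
_, p_small = two_proportion_z(10, 200, 11, 200)           # 200 visitors/arm
print(p_large < 0.05, p_small < 0.05)                     # prints: True False
```

    With an identical relative lift, only the larger sample clears the conventional 5% significance threshold, which is exactly the risk this point raises about small samples.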

    In general users can’t tell a responsive site from a non-responsive site per se. What they can tell is if a site is slow loading, difficult to navigate, has poor content, looks untrustworthy, looks designed to fit their screen and many other things, but not if they are looking at a responsive site.

    A better test would pit a poorly executed responsive site against a well-executed one.

    I’d caution everyone who heard this talk or is looking at the results shown to take them with a grain of salt. The sad thing is that I’ve been looking for some serious data on the matter, either for or against responsive design, but all I keep finding is stuff like this.
    3 months ago

Presentation Transcript