Second Life - Highly Commended Paper AMSRS Conference 2009



Introduction

In the past year I have heard the words ‘respondent engagement’ with increasing frequency. Take questionnaires, for example. Many of us have come to the conclusion that what traditionally worked on paper does not translate online: web 1.0 style questionnaires tend to be boring and unengaging. If we give respondents such questionnaires, filled with arduous grids, should we be surprised if this has a detrimental impact on data quality? Thankfully, evolving web technology has given us the ability to correct some wrongs, and we have been able to enhance the respondent experience (and engagement in the process) ... but is there room for further improvement? We were interested in utilising web 2.0 technologies, but wanted first to understand if there were other ways of increasing engagement. Sampling was of particular interest, coupled with experimenting with traditional methodologies.

Predictive Markets

Whilst web 2.0 technology can indeed provide a more pleasurable experience for respondents, some companies (mostly in the US and Europe) have also been testing different methodologies, a side-effect of which may be increased engagement. One such example is predictive markets.

Let’s ignore predictive markets within a commercial market research environment for a moment and look at other ways they are being used. A great example of a predictive market is ‘The Hollywood Stock Exchange’. Here, you can sign up and buy shares in your favourite actors and their new movies. “Hollywood Dollars” are the local fictitious currency, and you can watch your values rise or fall based on your predictive success. Prices soar with a blockbuster opening at the box office, and plummet with a bomb no one went to see! Winnings can be traded in for low value prizes (such as pens, t-shirts etc).

The ultimate value of a moviestock is based on the film's box office takings, so stock prices act as box office predictions. For example, if a particular moviestock trades at "H$40.00", the market is predicting that the movie will gross US$40 million at the box office in the first four weekends of wide release. In 2007, players in the Hollywood Stock Exchange correctly predicted 32 of the 39 major-category Oscar nominees and 7 out of 8 top-category winners. Cantor has used HSX's moviestock prices to assist its gambling operations in the United Kingdom, in which bettors can place bets on how much money US films will gross. Clearly this methodology is worth a deeper look.

Another example is the Iowa Electronic Markets (IEM), which works in a similar way, but this time participants play with real money – so traders have the opportunity to profit from their trades, but must also bear the risk of losing money. The IEM allows traders to buy and sell contracts based on, among other things, political election results and economic indicators.

Joyce E. Berg, Forrest D. Nelson and Thomas A. Rietz argue that the political election results have been highly accurate, especially when compared with traditional polling. Together they wrote a paper, published in the International Journal of Forecasting (Volume 24, Issue 2, April–June 2008, Pages 283-298), in which they discussed prediction markets and their use in polling:

“Prediction markets are designed specifically to forecast events such as elections. Though election prediction markets have been conducted for almost twenty years, to date nearly all of the evidence on efficiency compares election eve forecasts with final pre-election polls and actual outcomes. Here, we present evidence that prediction markets outperform polls for longer horizons. We gather national polls for the 1988 through 2004 U.S. Presidential elections and ask whether either the poll or a contemporaneous Iowa Electronic Markets vote-share market prediction is closer to the eventual outcome for the two-major-party vote split.
We compare market predictions to 964 polls over the five Presidential elections since 1988. The market is closer to the eventual outcome 74% of the time. Further, the market significantly outperforms the polls in every election when forecasting more than 100 days in advance.”

Polls, however, did narrow the gap during the final five days before the election, but the IEM was still more accurate 68 percent of the time. The biggest disparity was the period 66 to 100 days before an election, when the IEM beat the polls 84 percent of the time. Researchers suggest this gap was caused by the so-called "convention bounce", when a candidate temporarily jumped in the polls because of favourable public opinion following his party's nominating convention.

This bounce, however, is not reflected in the markets, which stay largely stable during and after the conventions. Since the conventions were held during the period 66 to 100 days before the election, the polls taken during that time were influenced by the bounce but the markets weren't, giving the IEM's predictions greater accuracy.

Berg, the director of the IEM, said the market is more accurate for several reasons. First, the market uses real money. Investors on the IEM open accounts of between $5 and $500 and trade contracts based on which candidate or party they think will win a presidential election. With real money at stake, she said, investors base their decisions on who they think will win, as opposed to polls, which ask voters who they want to win.

Both of the above examples are US based, and whether real or token money is used, it would seem that predictive markets share similar characteristics with gambling. The Sun-Herald (11th Jan 2009) looked at the gambling levels of a number of western countries and concluded that (considering the relative populations) Australians were the biggest losers!

So if gambling is indeed part of the Australian psyche, where better to put predictive markets to the test – in an exciting and engaging format for respondents – and understand whether they can successfully be used for commercial market research, and how the results compare with traditional measures.

Now, we are not the first market research company to do this. Various research agencies are utilising predictive-type markets; for example, BrainJuicer's concept testing asks respondents to play a game in which they can buy or sell shares in ideas (each respondent sees a maximum of 15 ideas). They are asked to consider how well they think each idea will perform in market, rather than how likely they themselves are to buy it. Not only is the methodology very different to what we are traditionally accustomed to in market research, but so too is the sampling method. Integral to this sampling approach is the ‘Wisdom of the Crowd’ theory.

James Surowiecki published his book ‘The Wisdom of Crowds’ in 2004. He talks about the aggregation of information in groups, resulting in decisions that, he argues, are often better than could have been made by any single member of the group. The book presents numerous case studies and anecdotes to illustrate its argument, and touches on several fields, primarily economics and psychology. The opening anecdote relates Francis Galton's surprise that the crowd at a county fair accurately guessed the weight of an ox when their individual guesses were averaged (the average was closer to the ox's true butchered weight than the estimates of most crowd members, and also closer than any of the separate estimates made by cattle experts). The book relates to diverse collections of independently-deciding individuals, rather than crowd psychology as traditionally understood.
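Galton's ox anecdote is easy to reproduce in a toy simulation (the numbers below are invented for illustration, not his actual data): average many independent, noisy guesses and the crowd estimate lands closer to the truth than almost any individual guess does.

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198  # lbs; the figure usually quoted for Galton's ox

# Simulate 800 fair-goers whose guesses are independently noisy.
guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(800)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

# What fraction of individuals beat the crowd average?
individual_errors = [abs(g - TRUE_WEIGHT) for g in guesses]
beat_crowd = sum(e < crowd_error for e in individual_errors) / len(guesses)

print(f"crowd error: {crowd_error:.1f} lbs")
print(f"share of individuals more accurate than the crowd: {beat_crowd:.1%}")
```

The averaging only works because the errors are independent; if the guessers influenced each other (crowd psychology in the traditional sense), the noise would no longer cancel.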
Its central thesis – that a diverse collection of independently-deciding individuals is likely to make certain types of decisions and predictions better than individuals or even experts – draws many parallels with statistical sampling.

We too were interested in the concept of predictive markets, the crowd theory, and their application to market research in Australia. As much of our work involves consumer concept and sensory testing, we decided to conduct some research (with the agreement of our client) in which we evaluated 10 FMCG concepts. We were interested in:

Would our new sample (a crowd) lead to different conclusions than our traditional targeted sample (i.e. those that buy into a particular category regularly)?

If the concepts were not as relevant to the crowd, could they be less engaged in the research than our targeted sample? If so, would this affect the quality of the data?

How do the results of the predictive market compare with traditional questioning techniques for both our targeted sample and the crowd?

To test this we set up a parallel study with a traditional targeted cell (those that bought into the category regularly – from here on referred to as MGBs) and a ‘crowd’ (general population, no buying requirement necessary):

A target of 200 respondents in each cell.

The same screening questionnaire was used to ensure the length of the total questionnaire was the same (however, MGBs were screened out if they did not meet consumption criteria; the ‘crowd’ were not).

After the screener, each saw 10 concepts (rotated to remove positional bias). MGBs were asked purchase likelihood on a 5 point scale for each concept (traditional questioning). The crowd was asked to predict purchase likelihood on a 5 point scale.

After this, MGBs were shown pairs of concepts (15 pairs in total), and were asked which they were most likely to buy, and least likely to buy, from each pair.

The crowd followed the same exercise, except they were asked to imagine they were investors and to indicate which idea from each pair they would buy shares in, and which they would sell shares in.

The last component of this research is our version of a predictive market, and it seems there are various approaches agencies are using to conduct predictive markets. Our version is based on a trade-off technique and is analysed with Hierarchical Bayes so that individual level data is obtained. This allows us to perform ‘what-if’ simulations: the end user can include or exclude concepts as they wish and recalculate the proportion of respondents that would invest in each idea if they were the only ideas available.

The surveys were programmed using web 2.0 capabilities. Below is an example screenshot of a flash drag and drop question with Direction First’s standard background, which both samples saw:

The Twist

As we were designing the above study, we became increasingly interested in the concept of engagement and the effect it could have on the quality of the data. As we seemed to be over the line in terms of getting a crowd to evaluate our concepts, we wondered whether we could test other ‘crowds’ in a slightly different way, and whether this would have any effect on engagement and data quality.

We had two ideas. Given Australians’ apparent obsession with gambling, we wanted to recruit a panel of gamblers to see if they could predict what the target sample would choose (and financially reward them for correct answers).
In addition to this, we wanted to take the survey into an online virtual world to see if we could increase respondent engagement even further amongst participants in that world.

Gamblers’ Cell

In our third cell we screened respondents based on their interest in gambling. Those that gamble regularly were invited to participate. Other than this, they completed the same screener as the other two cells but, like the “Crowd” cell, were not screened out on consumption. The questionnaire was also the same as the “Crowd” cell, but this time the background was set up as a poker table. They were told at the beginning that, in addition to the normal survey incentive, they had the chance of winning more cash if their answers matched what buyers of these products thought (this was done via a number of prize draws). Would the chance of winning additional money engage them more, and would their answers be more considered?

Below is an example screenshot of the same flash drag and drop question which the gambling cell saw:

Second Life Cell

Our fourth cell consisted of Second Life residents. Second Life (SL) is a virtual world that launched in 2003 and to date has had approximately 15 million users. It enables its users, called Residents, to interact with each other through avatars. Residents can explore, meet other residents, participate in individual and group activities, create and trade virtual property and services with one another, or travel throughout the world, which residents refer to as the grid. Second Life is for people aged 18 and over.

Unlike the other 3 cells, which were more or less readily available from a panel supplier, a different approach was needed for Second Life. Whilst the survey almost mirrored that of the “Crowd” cell (the background was again different, to try and keep respondents in an “in-world” environment), they did not access the survey through an email invite from a panel provider. Further information on the process can be found below.

This is an example screenshot of the same flash drag and drop question which the Second Life cell saw:

Second Life in More Detail

First of all, Direction First had to set up a virtual presence in Second Life and create avatars (animated characters). We created a Direction First virtual office, right next door to the virtual Opera House! We then furnished it and set up laptops which automatically linked to the survey when activated. All we needed now was respondents!

In order to get participants we decided to hold a Direction First launch party – at our neighbouring premises, the Opera House! Friends of The Pond were invited to the party, where there was a DJ, plenty of dancing, and various prizes for best dressed etc.

Direction First was heavily branded and the activity clearly communicated our purpose in Second Life. The following week a message was sent out to SL members to invite them to take part in our survey. They then came to our office and walked to a computer, which activated a link to the survey, which they completed. The background of the survey was designed to try and keep them in a SL frame of mind (i.e. it was an in-world screen shot).

The Results

We wanted to understand whether the results for each of our four samples would lead to different recommendations for our client, as well as monitoring engagement levels. We looked at purchase intention/prediction (traditional method) and preference/product invested in (predictive market).

Traditional Measures

To recap, MGBs were shown the 10 concepts (rotated to avoid positional bias) and were asked to indicate how likely they would be to purchase each on a 5 point scale from ‘definitely would buy’ to ‘definitely would not buy’. The other 3 samples saw the same concepts, but were asked how likely they thought consumers of these types of products were to buy each (using the same scale).
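Top box and top-2-box shares, the KPIs discussed in this section, are straightforward to compute from the 5-point ratings. A minimal sketch (the ratings below are invented for illustration, not the study's data):

```python
from collections import Counter

def box_share(ratings, top_n=1):
    """Share of respondents whose rating falls in the top `top_n` boxes.

    Ratings run 1-5, where 5 = 'definitely would buy'.
    """
    counts = Counter(ratings)
    in_top = sum(counts[b] for b in range(5, 5 - top_n, -1))
    return in_top / len(ratings)

# Invented ratings for one concept from a cell of 10 respondents.
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

top_box = box_share(ratings, top_n=1)  # 'definitely would buy' only
t2b = box_share(ratings, top_n=2)      # top 2 box

print(f"top box: {top_box:.0%}, T2B: {t2b:.0%}")  # top box: 30%, T2B: 70%
```

The choice of KPI matters: as the results below show, concepts that look statistically comparable on top box alone can separate once the second box is included.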
The chart below shows the proportion of respondents saying they would definitely buy the concept, or predicting the concept would definitely be bought:

Out of the 10 concepts, there was a clear stand-out winner, evident in all four cells (Concept D). Encouragingly, the winning concept would have been recommended in each case (as the action standard is based on top box). However, it is often necessary to put forward several winners for further development. Based on top box scores, Concept H seems to be performing OK across all 4 cells, as does Concept F (although this rates quite low amongst the ‘Crowd’). The Second Life cell appears more optimistic than the others about Concept G. The reality is that there are a number of concepts within each cell that have statistically comparable scores – so we need to try and discriminate more.

One way of doing this is to also look at the results in terms of top 2 box, a KPI used by other companies:

Overall, MGBs showed more discrimination in their choices than the other three panels, awarding their lowest T2B of 35% to one concept and their highest T2B of 77% to another. The Gamblers panel was the most reflective of the MGB panel, rating all but 2 of the concepts (the ones least liked by MGBs – C and G) very similarly. The Crowd was the next closest, but discriminated between the concepts less than MGBs, resulting in a range of T2Bs from 50% to 75%. The Second Life panel was the least similar to the MGB panel, although this was largely driven by substantial differences in the rating of the last two concepts, C and G.

Despite slight nuances, each panel would have led to similar conclusions.

Predictive Market Results

MGBs were shown the concepts in pairs and were asked which from each pair they were most likely to buy. Each concept was shown at least 3 times. The other 3 samples followed a similar exercise, but instead of being asked which concept they preferred, they were asked which one they would buy shares in. A model was then built for each respondent.

The chart shows which concept MGBs would buy if all 10 concepts were available and they could only buy 1. Unlike our normal purchase intent data, where respondents could say they would ‘definitely buy’ a number of concepts, this method allows us to discriminate further.

The same analysis is used for the other 3 cells: if all 10 concepts were available, and they could only buy shares in 1, which 1 would they invest in?

Concept D once again stands out for all 4 cells (albeit second in the Crowd), and Gamblers are still the most reflective of MGBs. Interestingly, 20% of the Crowd said they would invest in Concept A – strong support for this concept considering it didn’t particularly stand out in the traditional purchase intention scores. After Concept D, Second Life tended to slightly favour Concept E and Concept B, the latter of which was not supported by the other 3 panels.

Unfortunately, there is still not much discrimination in the MGB and Gamblers panels, other than the stand-out winner. Therefore we decided to take each cell, remove the worst performer within it, and then recalculate the analysis (so for MGBs, Gamblers and Second Life we first removed Concept J, but removed Concept C first for the Crowd). Essentially, this means that if that concept were unavailable, respondents would switch to their next best investment choice. This process was repeated until just 3 concepts were left for each cell – those that survived each elimination. The results are shown below:

So, after many iterations of removing the worst performing concept, Concepts A and D were the most invested in by three of the cells – MGBs, Gamblers and the Crowd.
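The eliminate-and-recalculate loop described above can be sketched with individual-level preference scores of the kind a Hierarchical Bayes model produces. In this sketch (the utilities are invented for illustration) each respondent "invests" in their highest-utility available concept, the lowest-share concept is dropped, and shares are recomputed until the desired number of survivors remains:

```python
from collections import Counter

def first_choice_shares(utilities, available):
    """Share of respondents whose top-utility concept is each available one."""
    choices = [max(available, key=lambda c: person[c]) for person in utilities]
    counts = Counter(choices)
    return {c: counts[c] / len(utilities) for c in available}

def eliminate_down_to(utilities, concepts, keep=3):
    """Repeatedly drop the worst-performing concept until `keep` remain."""
    available = set(concepts)
    while len(available) > keep:
        shares = first_choice_shares(utilities, available)
        worst = min(shares, key=shares.get)
        available.remove(worst)  # respondents switch to their next best choice
    return first_choice_shares(utilities, available)

# Invented per-respondent utilities for four concepts (A, B, C, D).
utilities = [
    {"A": 0.9, "B": 0.2, "C": 0.1, "D": 0.8},
    {"A": 0.3, "B": 0.6, "C": 0.2, "D": 0.9},
    {"A": 0.7, "B": 0.1, "C": 0.3, "D": 0.5},
    {"A": 0.2, "B": 0.4, "C": 0.6, "D": 0.7},
]

print(eliminate_down_to(utilities, ["A", "B", "C", "D"], keep=2))
```

Because the data is held at the individual level, the same function doubles as a general ‘what-if’ simulator: pass any subset of concepts as `available` to see how investment would redistribute if only those ideas existed.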
This is particularly interesting considering Concept A's fairly mediocre performance in terms of purchase interest; in this instance, our results indicate that the two questioning techniques could lead to different conclusions. Note that Second Life was only consistent with the other panels on one concept (D).

So, who was right?

Well, we couldn’t ask our client to launch all the products to see which sold best in market – and thereby prove which of our panels was the most accurate, or see how our purchase intent data stacked up against the predictive market – but overall we found fewer differences than expected.

We did, however, put a quick sense check in at the end of the interview, on a completely different topic: “When the Reserve Bank meets next week (the month was May 2009), what do you think will happen with interest rates?”

They will increase the rate
The rate will stay the same
There will be a ¼% cut
There will be a ½% cut
There will be a ¾% cut
There will be a 1% or more cut

As it turned out, there was no movement in the rate. This was correctly predicted by 66% of the MGB panel, 62% of the Crowd and 60% of Gamblers – a similar proportion for all three. However, just 41% of the Second Life panel got the answer right, which raises particular concern about this sample.

Engagement

OK. So we saw some slight nuances between the results of the four different panels, and between traditional questioning and the predictive market. Results aside, did we manage to increase engagement with any of our samples? We had several measures of engagement. We directly asked respondents at the end of the survey how interesting they found it (on a 5 point scale from ‘extremely exciting’ to ‘not at all exciting’) and how it compared with other surveys they had previously undertaken (again a 5 point scale, from ‘much better’ to ‘much worse’). We also took an indirect measure: the completion time. Were gamblers more likely to give more considered answers (i.e. spend more time) than our other panels if they were rewarded extra for correct answers? Was our Crowd more likely to speed through the questionnaire, as the topic was less relevant to them than to MGBs?

On two of the measures (stated interest and comparison to other surveys) there was little difference. Between 64% and 72% of each cell claimed the survey was very/quite interesting, and around three-quarters of respondents (regardless of cell) claimed the survey was better than others they had undertaken – but then again, we did script all surveys with web 2.0 technologies so they would look and feel better than a web 1.0 survey. Respondents were also asked to think about the survey and say what one (spontaneous) word came to mind: easy, fun, different and interesting were the words most associated with it, by all four groups. Perhaps this was due to the questionnaire scripting, or perhaps the predictive market investment game deserves more credit.

The last measure was completion times (shown below). To remove outliers we removed the fastest 10% and the slowest 10% from each sample:

Mean Completion Time
MGBs: 9 minutes, 12 secs
Crowd: 10 minutes, 12 secs
Gamblers: 10 minutes, 30 secs
Second Life: 12 minutes, 12 secs

Despite the concepts being less relevant to the Crowd, this didn’t lead to a faster completion time than the MGBs (in fact, if anything, MGBs seemed to complete the exercise faster). Interestingly, Second Life residents took far longer to complete the interview than the other panels (on average 3 minutes longer than MGBs). Our hypothesis is that, because they took the survey in-world, they could also chat to other members whilst completing it, perhaps accounting for the longer completion time.
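The outlier handling applied to the completion times (dropping the fastest 10% and slowest 10% before averaging) is a 10% trimmed mean. A minimal sketch with invented durations in seconds, showing how a single straggler inflates the raw mean but not the trimmed one:

```python
def trimmed_mean(values, trim=0.10):
    """Mean after dropping the lowest and highest `trim` fraction of values."""
    ordered = sorted(values)
    k = int(len(ordered) * trim)  # number of observations to drop at each end
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# Invented completion times for one cell, in seconds.
times = [312, 540, 555, 560, 570, 585, 590, 600, 640, 1900]

print(f"raw mean:     {sum(times) / len(times):.0f} s")  # 685 s
print(f"trimmed mean: {trimmed_mean(times):.0f} s")      # 580 s
```

A trimmed mean is a common compromise for survey timings, where abandoned-and-resumed sessions produce long tails that a plain mean would overweight.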
If this is true, they were essentially being distracted from the task in hand (and possibly being biased by others’ opinions) – not particularly great for research.
Second Life Issues

Although this paper investigated four different samples, of particular interest to us was Second Life and the potential of virtual worlds in the future. To date, Australian researchers have had concerns about Second Life.
Brian Fine, president of the Association of Market and Social Research Organisations (AMSRO), last year said:

“Virtual worlds like Second Life are interesting, but not gaining enough traction to warrant more researchers going onto the worlds and selling research services. Recruiting people from these worlds is only relevant to the world, and not necessarily representative.”

Fair point. But given that some market researchers are now claiming that a ‘crowd’ can yield better market research results than a traditional (and more costly) ‘representative’ sample, and given that we were trialling a “Crowd” in this study, we thought it was still worth a go.

However, we did experience many issues in Second Life:

Our target of N=200 per cell was easy to achieve for our initial 3 cells – achieved in just a few days. SL, on the other hand, took weeks, and we closed at 157, 43 short of our 200 target.

Many security mechanisms are in place in online panels to stop respondents repeating a survey numerous times, and over time panels are adapting their processes to ensure consistent quality control. We found this to be a problem in SL. Members can create many avatars and repeat the survey several times. A Pond Estates representative advised us of a few ‘known’ culprits, whom we subsequently removed from the data.

Location. Whilst Big Pond Estates is a popular location for SL members, it is not exclusively for Australians. Although we asked respondents what country they lived in (and later excluded those who did not select Australia), we could not validate this. There was therefore potential for those living outside Australia to evaluate concepts when they had no idea of the current products available in Australia.

Clearly we had many issues conducting a quantitative concept test in SL, but we wonder whether SL would have been a better place to develop new product ideas instead of testing them there. Reperes, a French agency, claims to have successfully used SL as a creativity tool, and is using SL for co-creation, exploratory studies and immersive focus groups.

Indeed, Eric Klopfer, a professor at the Massachusetts Institute of Technology, believes Second Life's real potential may be as an experimentation platform. He notes the ability within Second Life to rapidly construct objects and experiences:

“It's relatively easy for residents to build objects that others can use, sit on, walk through, pick up and so on. That can mean anything from a hammer to a house to a landscape. And since it's also easy to share, replicate and tweak creations, Second Life is a world of abundance for creators.”

Other Market Research Agencies in Second Life

We did find a couple of other market research companies in SL, although none of them Australian. Some had been operating for several years, and had even recruited their own panels of respondents to survey (a few claiming in the region of 10,000 panel members). Slightly concerned by our recent experience, we contacted them, interested in theirs.

One of the agencies we spoke to was Market Truths, which had been operating in SL since 2006. They believed they were the first commercial research company to set up operations in SL (although Reperes has made the same claim).

Market Truths said that setting up in SL had generated a lot of positive publicity for them, and there were no regrets. However, they also said that they would recommend other MR companies set up shop in SL in only a very limited number of cases. They believed that operating successfully in SL required a real commitment to understanding the technology and the culture: “this takes time, and the payoff for that time is likely to be quite a long way off”.
When asked about the future of research in virtual worlds, there was a strong belief that it will grow, even though it will take some time for things to evolve to that point.

Market Truths had also previously faced the same obstacles we did, and made particular reference to the issue of developing systems to generate reasonable samples in light of the large number of alts and the anonymity of avatars.

We later spoke to Kim MacKenzie, a PhD student from Queensland University who centred her honours year thesis on Second Life and who has been published numerous times in the national press:

“I recommend all businesses familiarise themselves with Second Life (SL), as it offers fundamental insights into the nature of interacting with virtual worlds. However, how businesses ‘set up’ and leverage this evolving technology for business purposes requires a paradigm shift in their business strategy, and far greater visionary thinking than has been demonstrated thus far.

If organisations want to explore an existing virtual world such as SL, and not ‘reinvent the wheel’, they must have a fundamental understanding of the ‘nature’ of that particular world, what has and hasn’t been tried before, the demographics, needs and behaviours of existing users, who they are expecting to attract to their presence and why, and the architectural, technical, learning curve, and behavioural limitations of the existing space. All this considered, an innovative idea that will push further the frontiers of leveraging the platform is strongly encouraged, and is probably only a matter of time to evolve.

A business must provide an experience for the consumer that is enchanting, compelling, experiential, and wonderful . . . that harnesses the ability to provide visionary simulations, and yet provide a business objective; otherwise why would they bother visiting their space, when they can just have fun and enjoy themselves with a plethora of other virtual activities already on offer.”

The Future of Virtual Worlds

The number of people who have joined SL since it launched in 2003 eclipsed 15 million last year. The average number of people logged on to SL at any given time is about 70,000, and Market Truths claim that the median time spent per week in SL is 20 hours.

Second Life is still evolving. Current technology limitations result in the failure of teleports, the disappearance of inventory, and the stalling of “transaction loops”. Additional problems include slow service, frequent crashes, the requirement to continually download updated versions, and the limitation of only fifty avatars per site at one time – more than enough to put many new users off.

As a result, Second Life is still considered by many as unsuitable for real-world commercial activity, given the non business-like appearance of avatars, anti-social behaviour, high levels of X-rated activity, and the absence of in-world legal and copyright protection.

"It's hard to say what, if anything, Linden Lab can do to make Second Life appeal to a general audience," wrote Eric Krangel, a former Reuters reporter who reported under the byline 'Eric Reuters' in Second Life, for an online journal.
"The very things that most appeal to Second Life's hardcore enthusiasts are either boring or creepy for most people: spending hundreds of hours of effort to make insignificant amounts of money selling virtual clothes, experimenting with changing your gender or species, getting into random conversations with strangers from around the world, or having pseudo-anonymous sex (and let's not kid ourselves, sex is a huge draw into Second Life)."<br />Because of this, David Holloway, editor of webzine ‘The Metaverse Journal’ does not believe Second Life would become as mainstream as social-networking tools such as MySpace or Facebook. "Second Life's main potential will stay niche, but virtual worlds won't.”<br />"When governments and companies are setting up ... [the eventual success of virtual worlds] is unavoidable." David Holloway.<br />Such is the popularity of three-dimensional technologies such as Second Life, World of Warcraft (which now has 12 million users) or Sims Online that it is predicted that 80% of active Internet users will interact within online virtual worlds by the end of 2011. A key learning that did come out of this research was that despite creating what we thought was an exciting and engaging in-world survey, those in the world, didn’t think it was quite as engaging as we expected! So what we think is brilliant now, may well change in the near future. Research may need to evolve in terms of its entertainment factor if it is going to be successfully used (and deliver a practical benefit) in this arena. If 80% of us step into virtual worlds in the next 2 years, perhaps our expectations (and tolerance) of things like surveys will also move on, to a stage where simple web 2.0 no longer cuts it. Perhaps other techniques can also be explored and adapted. One suggestion is auction methods. 
Some real estate auctioneers provide real theatre, and the practical benefit is that bids lead to clear choices!<br />Conclusions<br />Sampling<br />In this paper we evaluated four different samples: MGBs, The Crowd, Gamblers and Second Life. The first three samples showed certain similarities in their results, each predicting the same two winning concepts in the predictive market exercise. SL results were slightly different, and given the poorer performance of this sample in predicting the interest rate, and the numerous issues we faced conducting research in SL, it would seem that, as it stands today, there is little to be gained from conducting research such as this in SL. Whilst we thought that creating a virtual office and keeping the survey in-world might excite respondents and increase engagement, this didn’t seem to be the case. Perhaps Second Life members demand an even higher level of technological advancement and interactivity than others, and in time, maybe the rest of us will demand the same.<br />Our interest in the Gamblers panel was to see if we could increase motivation and engagement in the research by rewarding more for correct answers. Encouragingly, results were similar to our MGBs, but as engagement was similar for all samples, there seems little point in further researching a sample that has no cost benefit over a traditional sample.<br />Our Crowd also suggested the same top two concepts as our MGBs, and was equally engaged. Considering this, the possible draw card of the Crowd is that at 90% incidence they are cheaper to recruit than MGBs, resulting in a lower cost to the client. Would we risk switching samples? Probably not. In most studies clients may want to know more information, such as fit with brand, products they would replace, etc. - which you can’t really ask of a Crowd. 
Besides, the cost saving may be substantial in countries such as the US where monadic testing is popular – but this method is less common in Australia, and consequently there really isn’t much cost saving versus a traditional targeted sample (unless the product is particularly low incidence). The Crowd did not lead us to any conclusions different from our MGBs – a predictive-type market can be run on both samples, leading to similar results. <br />Traditional vs. Predictive Markets<br />Making decisions based on purchase interest alone can be an issue when testing multiple concepts. Concept testing can often result in a few clear winners and a few clear losers, with most of the remaining concepts showing little differentiation between them. Utilising a predictive market (or a choice exercise with MGBs) gives us a good back-up measure to aid the decision process. We will continue to use this measure in all concept studies moving forward, and use the results in conjunction with other measures.<br />Virtual Worlds<br />We don’t know what the future of virtual worlds holds for market research (if anything at all), but it seems SL currently has too many drawbacks to be worth pursuing. If some of the speculation is correct, we will see more of these virtual worlds in the future, and many of us will be regular users… let’s wait and see!<br />