Hsu, J. S.-C., Huang, H.-H., & Linden, L. P. (2011). Computer-mediated counter-arguments and individual learning. Educational Technology & Society, 14(4), 111–123.

Computer-mediated Counter-Arguments and Individual Learning

Jack Shih-Chieh Hsu (1), Hsieh-Hong Huang (2*), and Lars P. Linden (3)
1 Department of Information Management, National Sun Yat-sen University, Taiwan // 2 Department of Information Science and Management Systems, National Taitung University, Taiwan // 3 Department of Information Technology, Georgia Southern University, USA // jackshsu@mis.nsysu.edu.tw // kory@nttu.edu.tw // llinden@georgiasouthern.edu
* Corresponding author

ABSTRACT
This study explores a de-bias function for decision support systems (DSS) that is designed to help a user avoid confirmation bias by increasing the user's learning opportunities. Grounded in the theory of mental models, the use of a DSS is viewed as involving a learning process whereby a user is directed to build mental models so as to reduce the tendency to consider only data that supports preexisting mental models. The results of an experiment suggest that a de-bias function, called computer-mediated counter-argument (CMCA), facilitates individual learning and improves decision satisfaction.

Keywords
Mental model, Individual learning, Computer-mediated counter-argument, De-bias, Decision support systems

Introduction

A decision support system (DSS) is a platform for learning in at least two ways. First, a DSS aids the user in extending his or her knowledge about the subject matter of the decision domain. For example, a person using a stock-selecting DSS may explore the relationships among interest rates and stock prices and, while exploring, learn about the markets. Second, a DSS may present the user with an opportunity to adjust his or her knowledge about the decision-making process itself. In this case, the system aids a person in altering the decision-making process in an attempt to improve the outcome. An example of this second way of learning is a user of a stock-selecting DSS who, having been warned of an error-producing cognitive bias known as the gambler's fallacy, reacts to this warning by performing additional steps in the decision-making process, which in this case might include reflecting upon possible market outcomes if future prices are not dependent upon historical prices. This second way of learning, an adjustment of the decision-making process, serves the decision maker by reconciling the many interlocking and incompatible beliefs about the problem domain, some of which may be based upon cognitive biases (Tversky & Kahneman, 1974). Increasingly, attention is being focused upon cognitive theories of learning and how these theories can be understood and applied in the context of DSS, a trajectory of research that is distinct from much of DSS research, which largely focuses upon the end result of decision making: decision quality (Santana, 1995). Along these lines, our focus is upon the second type of learning and how a DSS might help to adjust a user's decision-making process for the purpose of overcoming a cognitive bias known as confirmation bias.

During DSS use, cognitive biases present a perilous downside. Users, enabled by DSS, may make poor decisions and enact seemingly limited decision-making processes, often while confident that all is well. Several examples illustrate cognitive biases arising during the use of DSS.
A cognitive bias called the illusion of control occurs when a user of a DSS performs a what-if analysis and displays increased confidence, yet achieves no significant performance gain (Davis & Kottemann, 1994). The cognitive bias called confirmation bias occurs when a person who is gathering information restricts attention to only data that supports a favored hypothesis. The cognitive bias known as the illusion of knowledge occurs when a user is overconfident about having access to a greater amount of information, as is easily the case with a DSS, but makes poor decisions despite the additional information. This last cognitive bias was detected when investors who had switched from a phone-based investing method to an online trading platform became more confident, yet recorded poor performance despite having access to more information (Barber & Odean, 2002). Attempting to counter these cognitive biases is called de-biasing (Fischhoff, 1982). In general, de-biasing techniques, such as educating users about biases, are believed to beneficially impact decision quality (Arnott, 2006), and de-biasing techniques applied to DSS have been successfully demonstrated (Bhandari, Deaves, & Hassanein, 2008). To investigate the embedding of a de-biasing technique into the design of a DSS, we adopt mental-model theory.
The theory of mental models explains how people perform certain activities, for example, information processing. When a decision maker confronts a large amount of information, a mental model aids in filtering that information. The mental model fulfills a gatekeeper role for the mind by preventing unrelated and unimportant information from being consciously considered. However, along with the beneficial effects attributed to mental models, there are also error-producing effects: inadequate mental models may block important information. The ability of a decision maker to change mental models is viewable as a learning process, one that may be enabled by DSS. That a person can change existing mental models is important, given that mental models may have a negative impact during decision making (Vandenbosch & Higgins, 1994). Through mental-model theory, we seek to explain how an embedded de-bias function can induce a change in a person's mental model, conceptualized as a type of learning, and, as a result, overcome confirmation bias. Mental models provide a theoretical explanation upon which to ground a DSS function for reducing confirmation bias.

The consideration of mental models is not new to DSS research and can be recognized as a type of cognitive skill. The use of DSS is attributed to the development of cognitive skills. Intelligent tutoring tools, for example, provide the user with an opportunity to obtain cognitive skills, such as the ability to learn via self-explanation (Patel, Kinshuk, & Russell, 2000). Technology-enriched learning includes meta-learning activities, such as monitoring and sequencing one's own learning (Sinitsa, 2000). The cognitive skill considered in this research is the learning effect of mental-model reforming. We seek to understand whether the design of a de-biasing function embedded in a DSS increases the cognitive skill of the type of learning associated with mental-model reforming.

The de-bias function being explored aims to eliminate cognitive bias through the introduction of counter-arguments. The de-bias function embedded in a DSS is designed to present counter-arguments to the user so as to eliminate bias through a learning process. We seek to understand whether users, having read counter-arguments, report having built mental models and express a lower confidence level, as compared with users not given counter-arguments. Aware of the need to lessen the impact of confounding variables from the environment, we conducted an experiment to investigate this embedded de-bias function.

In the following sections, we discuss the concepts of mental-model theory, confirmation bias, and learning. We propose a research model grounded upon mental-model theory and state our hypotheses. We then discuss the research design and the procedures of the experiment. Finally, we analyze the data and discuss the results in light of their implications for DSS design and individual learning.

Theoretical background and hypotheses

Mental models and confirmation bias

While explaining the nature of thought, Craik (1943) describes the modeling capabilities of the human mind, whereby "small-scale models" of external entities are manipulated by the mind to consider past events and to anticipate future events. These mental models are psychological representations of the environment and consist of cognitive constructs and their relationships (Holyoak, 1984).
As such, mental models serve a range of purposes: when designing, mental models help a person to describe the composition of entities; when observing a system, mental models help a person to explain the current states of unseen variables; and when considering the future, mental models help a person to predict events and suitable responses to those events (Rouse & Morris, 1986). Mental models, as structured patterns, aid in classifying information and in simulating a set of outcomes under varying conditions (Cannon-Bowers, Salas, & Converse, 1993). An activity such as information processing suggests that mental models are relevant to DSS design.

One example illustrates the association of mental models with cognitive biases. In the stock market, a decision maker needs to predict a future price in order to make a purchasing decision. The decision maker possesses a mental model of the impacts of various factors upon price. This mental model helps to guide the search for information required for predicting the price and to set aside information that is not important to the decision. If abundant information is available, the mental model serves as a filter that helps the decision maker forego consideration of inconsequential information. A mental-model-based information search can reduce processing time and minimize information overload. These benefits, however, come at a cost: the same mental model may lead the decision maker to neglect, or perceive as unimportant, information critical to the analysis. Mental models thus contribute to a cognitive bias called confirmation bias.
Wason (1960) defined the phenomenon of confirmation bias as people's tendency to seek evidence in support of their assumptions instead of searching for evidence that challenges those assumptions. When a person contemplates concepts or verifies assumptions, a variety of biased tendencies have been described. A person may treat evidence supporting existing beliefs more favorably than is objectively appropriate. A person confronted with a body of information may "see" exactly the effect that the person set out to find (Russo, Medvec, & Meloy, 1996). A person may tend to pass along information that is congruent with his or her beliefs (Heath, 1996). In short, the problem is that "decision-makers seek confirmatory evidence and do not search for disconfirming information" (Arnott, 2006, p. 60). The consequence of overweighting some evidence and underweighting other evidence is reaching a wrong conclusion. Confirmation bias is observed to be an exceptionally problematic aspect of human reasoning (Nickerson, 1998). How, then, might the design of a DSS compensate for this cognitive bias?

Mental models and learning

Based on mental models and learning theory (e.g., Gagne, 1977; Norman, 1982; Piaget, 1954), Vandenbosch and Higgins (1994) argue that mental models are closely connected to learning. Mental models direct the information-gathering process and limit it. Reflexively, the information gathered has the potential to confirm, enhance, reinforce, or change mental models. Vandenbosch and Higgins (1994) state that gathered information impacts mental models through two different processes: mental-model maintenance and mental-model building. Mental-model maintenance occurs when new information fits easily into existing mental models. Mental-model building occurs when new information from newly perceived situations and environments provokes a person to change existing mental models.

The connection between mental models and learning is amplified by Vandenbosch and Higgins (1994) with their comparison of mental models to theories found in the management discipline. First among these theories is Argyris and Schön's (1978) organizational learning theory of single-loop and double-loop learning. Single-loop learning represents the learning that takes place when existing routines guide the problem-solving process. Double-loop learning represents the challenging of existing routines and the need to restructure existing norms and assumptions. These two ways of learning are similar to the learning explained by the use of mental models. Single-loop learning maps to mental-model maintenance, both of which pertain to the confirmation and disconfirmation of new information when compared to an existing, stable theory. Double-loop learning maps to mental-model building, both of which represent a potential change in the foundations of how learning is achieved.

Vandenbosch and Higgins (1994) also draw parallels between mental models and another theory, March's (1991) theory of exploitation and exploration learning. Exploitation is the act of enhancing performance by improving current practices. Exploration is the act of enhancing performance by introducing new practices.
Exploitation represents mental-model maintenance, as one attempts to reinforce the current mental model by adding new constructs or building new links between existing constructs. Exploration represents the mental-model-building process, as one attempts to reconstruct the model completely (March, 1991). These management theories, because they encompass both the learning of individuals and of collectives of individuals (i.e., the organization as a whole), may not be fully applicable to the individual learning considered in this research. However, in as much as they model individual learning, these theories are useful because they are congruent with our understanding of the maintenance and building of mental models.

Of the two processes described, mental-model maintenance is the more likely to occur (e.g., Quinn, 1980; Kiesler & Sproull, 1982; Grønhaug & Falkenberg, 1989). Acceptance of mental-model maintenance as a form of learning is high because it can be routinely confirmed through a person's past experience. Vandenbosch and Higgins (1994) write how, in contrast, mental-model building implies that decision makers face uncertainty. The theory of mental-model building has been tested through empirical evidence. People tend to make decisions or take action based on their mental models. The mental-model-building form of learning requires a stimulus, such as feedback from actions or information that contrasts with current thinking. A DSS could be designed to induce mental-model building by injecting a particular type of information. Therefore, we attempt to identify the presence of mental-model building by introducing the stimulus of computer-mediated counter-arguments (CMCA).
A counter-argument is information contrasting with what the decision maker previously hypothesized, and a computer-mediated counter-argument is defined as opposing information provided by a DSS during the decision-making process. In addition, we attempt to determine whether individuals using a system with counter-arguments report a higher level of satisfaction. The research model (Figure 1) shows the relationships among these factors.

Figure 1. Research model (CMCA is hypothesized to affect learning, i.e., mental-model building and mental-model maintenance (H1), and satisfaction toward the process and the outcome (H2))

From a cognitive perspective, existing mental models determine the construction and concept-framing process. Due to cognitive biases, people tend to read information that fits into their current mental model. The information-selection process allows them to gather evidence that supports their existing mindset. Although received information either strengthens or weakens the current mental model (Vandenbosch & Higgins, 1996), theory indicates that once a belief is formed, it is not easy to change without additional stimulus (Hoch & Deighton, 1989). Since subjects in the group without computer-mediated counter-arguments tend to maintain their current mental model rather than build a new one, we hypothesize the following:

H1a: The score of mental-model maintenance for subjects who use a DSS without computer-mediated counter-argument is higher than for those who use a DSS with computer-mediated counter-argument.

On the other hand, people question their existing mindset when they receive contradictory evidence (Vandenbosch & Higgins, 1996). Counter-arguments provided by the system supply contradictory evidence that challenges decision makers' perspectives and gives them a chance to reevaluate their current mental model. Under this setting, people are more likely to abandon their current mental model or take appropriate steps to improve it. Therefore, subjects in the group with computer-mediated counter-arguments tend to reform their mental model rather than maintain it. We hypothesize the following:

H1b: The score of mental-model building for subjects who use a DSS with computer-mediated counter-argument is higher than for those who use a DSS without computer-mediated counter-argument.

Decision satisfaction can be separated into process satisfaction and outcome satisfaction (Green & Taber, 1980). Process satisfaction refers to users' perceived efficiency and fluency of using a decision support system to facilitate decision making. Outcome satisfaction refers to users' perceived effectiveness of, and expected performance from, the decision that has been made (Sanders & Courtney, 1985). For subjects in the group with CMCA, the counter-arguments challenge their prior assumptions, and this process may lead to negative emotional outcomes. Decision makers have to spend more time making a decision, and the fluency of the decision-making process is reduced. Interruptions cause negative moods and increased fatigue (Schonpflug & Battmann, 1988; Zohar, 1999). Therefore, decision makers tend to rate satisfaction with the decision-making process as low, even if the interruption is helpful (Speier, Vessey, & Valacich, 2003). However, different results are expected for outcome satisfaction.
Although the counter-arguments interrupt the decision-making process, by leading decision makers to consider both the positive and the negative opinions suggested by the system, the decision maker is predicted to report satisfaction with the decision itself. Therefore, we hypothesize the following:

H2a: Subjects who use a DSS with computer-mediated counter-argument will feel less satisfied with the decision-making process.

H2b: Subjects who use a DSS with computer-mediated counter-argument will feel more satisfied with the decision-making outcome.

Research methods

An experiment was conducted to test the hypotheses above. Participants were randomly assigned to two groups: with and without computer-mediated counter-argument. They were asked to accomplish a stock-investment task using a DSS provided by the authors. A stock-investment task was selected because DSS are intensively adopted to support stock-investment decision making, which involves high uncertainty and large quantities of data.

In general stock-investment decision-making tasks, subjects are allowed to invest certain amounts of money in one or more selected investment targets. Real decision making is more complex, and the final investment portfolio may contain several combinations of investments. Since this study focuses on understanding learning effects only, a simplified investment environment was constructed: people were requested to invest their money in two companies belonging to different industries, rather than among all available companies in the open market.

The stock-investment task used in this paper was modified from Melone, McGuire, Hinson, and Yee (1993). Subjects were requested to put a given amount of money into two different stocks. To maximize their return, subjects allocated that money between the two options (a company in the financial holding industry and one in the high-tech industry) in whatever split they deemed appropriate. A DSS with what-if analysis was provided as a decision aid. The system provided basic market information and current market status (in news format). Basic market information included the name of the company, capital, business scope, expected ROI over the next five years, and stock price for the past five years. Current market status included positive and negative information about the two companies and the industries to which they belong. Information was summarized from articles published in investing-related journals and magazines.
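To make the what-if aid concrete, the sketch below shows the kind of return calculation such a tool supports. It is our illustration only: the budget, ROI figures, and allocation split are hypothetical, not taken from the experimental system.

```python
# Illustrative only: a what-if return calculation of the kind the DSS offered.
# The budget, expected ROIs, and allocation below are hypothetical values.

def what_if_return(weight_a: float, roi_a: float, roi_b: float,
                   budget: float = 1_000_000.0) -> float:
    """Expected gain when a fraction `weight_a` of the budget goes to stock A
    and the remainder to stock B, given each stock's expected ROI."""
    return budget * (weight_a * roi_a + (1.0 - weight_a) * roi_b)

# e.g., 60% in the financial holding stock (8% expected ROI),
# 40% in the high-tech stock (12% expected ROI):
print(what_if_return(0.6, 0.08, 0.12))  # 96000.0
```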
Although the investment information presented in this experiment was collected from real financial news, events, and reports, the content was reviewed and modified by two business-school PhD candidates. We removed the names and brand names that appeared in the original news reports to avoid any inappropriate connection between the experiment and real-world events. Further, we refined the wording of news titles and news content: the news headlines and story bodies were condensed to 10–20 words and 150–200 words, respectively.

Content validity was then assured by eight financial professionals, including professors, experienced investors, senior managers in the high-tech industry, and accountants. News with ambiguous titles or content was dropped or modified. The direction (positive or negative) and importance of each piece of news and its accompanying headline were determined based on the opinions of these eight financial professionals.

Measurement

User satisfaction includes two major dimensions: satisfaction toward the system and toward the decision making (Sanders & Courtney, 1985). Since this study focused on the new function, namely the counter-argument provided by the DSS, several questions were used to understand users' attitudes toward the DSS. Three questions adopted from Sanders and Courtney (1985) were used to measure subjects' satisfaction with the DSS, and three questions adopted from Green and Taber (1980) were used to measure satisfaction toward the decision-making process.
Learning in this study refers to participants' use of information during DSS use either to challenge their assumptions or to maintain their initial beliefs. Six questions obtained from Vandenbosch and Higgins (1996) were used to measure learning effects: three for mental-model maintenance and three for mental-model building. The measurement uses a seven-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree).

Manipulation check

Since we wanted to manipulate the presence and absence of counter-arguments, we asked subjects whether or not they were aware of the pop-up window showing counter-argument information during the experiment. We found that 90 subjects in the group with CMCA were aware and only two were not (or could not recall), and that 86 subjects in the group without CMCA were not aware (or could not recall) and only nine were. Subjects' awareness is shown in Table 1.

Table 1. Results of subjects' awareness
                                         Awareness
Computer-mediated            Yes    No    Could not recall    Total
counter-argument      w/o      9    71                  15       95
                      w       90     1                   1       92
Total                         99    72                  16      187

According to a reading-behavior study using eye-tracking machines, in a two-column condition people tend to focus first on content that appears on the left-hand side. Since the information for the two companies appeared in a two-column format, we randomized the location of the messages to avoid this possible confounding effect. As shown in Table 2, the location of the news and the presence of CMCA are independent.

Table 2. Results of subjects by information display location
                                          Computer-mediated counter-argument
Location of displayed information              w/o      w     Total
IT-related news on the left-hand side           51     45        96
IT-related news on the right-hand side          44     47        91
Total                                           95     92       187
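The two claims behind these tables, that awareness should depend on the manipulation while display location should not, can be checked directly from the reported counts. The sketch below is our analysis, not one reported in the paper; it runs standard chi-square tests of independence on both contingency tables.

```python
# Chi-square tests of independence on the counts in Tables 1 and 2.
# Our illustration; the paper reports the counts but not these statistics.
from scipy.stats import chi2_contingency

# Table 1: awareness (Yes, No, Could not recall) by condition (w/o, w)
awareness = [[9, 71, 15],   # without CMCA
             [90, 1, 1]]    # with CMCA
chi2, p, dof, _ = chi2_contingency(awareness)
print(f"awareness x condition: chi2({dof}) = {chi2:.1f}, p = {p:.3g}")

# Table 2: news location (left, right) by condition (w/o, w)
location = [[51, 44],       # without CMCA: left, right
            [45, 47]]       # with CMCA:    left, right
chi2, p, dof, _ = chi2_contingency(location)
print(f"location x condition: chi2({dof}) = {chi2:.2f}, p = {p:.2f}")
```

A large statistic for the first table and a small, non-significant one for the second is exactly what a successful manipulation and randomization require.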
Table 3. DSS functions (Hung et al., 2007)
Menu                   Function
Market summary         Detailed information on the stock market, including stock quotes, change, day's range, volume, etc.
Industry quotes        Detailed information on specific industries.
Company information    Information on the six companies used for trading in the experiment, including basic information, historical prices, news, and streaming charts.
Technical analysis     Technical indicators, such as the moving average, relative strength index, and stochastic line.
Recent news            Three categories of news: political, international, and financial.
Calculator             Investment-return calculations and more.

System

A GUI-based DSS, built with a 4GL programming language and a relational database management system, was developed for the experiment. The system supports Simon's decision processes, allowing people to perform intelligence, design, and choice activities. Design activities followed the recommendations of Hung, Ku, Liang, and Lee (2007); the system tools are detailed in Table 3. Furthermore, because a purpose of this study was to understand how users' information-reading behaviors affected their final decision, a total of 5 to 15 positive and 5 to 15 negative message titles about each company were displayed on the main page for the non-information-loading group. After users clicked on a message title, the system popped up an "always on top" window containing the detailed message content. To record the exact reading time for each message, this pop-up window could be closed only by clicking its "close" button. Finally, for the experiment group, after the "close" button was clicked, another window popped up containing a message of the reversed direction.

Figure 2. Screenshots of the experimental system: basic information, technical analysis, information listing, detailed information, computer-mediated counter-argument, and decision making.
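The interaction logic just described, timed modal reading of each message followed, for the experiment group, by an opposite-direction message, can be summarized in a short sketch. The names are ours and hypothetical, and a console prompt stands in for the modal "always on top" window; the original system was a GUI built with a 4GL and an RDBMS.

```python
# A sketch (ours, with hypothetical names) of the reading-time logging and
# counter-argument pairing described above; input() stands in for the modal
# "always on top" window that blocks until its "close" button is clicked.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    title: str
    body: str
    direction: str                           # "positive" or "negative"
    counterpart: Optional["Message"] = None  # the opposite-direction message

def show_always_on_top(body: str) -> None:
    input(body + "\n[close] ")               # blocks until "closed"

def read_message(msg: Message, cmca_enabled: bool, log: list) -> None:
    opened = time.monotonic()
    show_always_on_top(msg.body)
    log.append((msg.title, time.monotonic() - opened))  # exact reading time
    if cmca_enabled and msg.counterpart:     # experiment group only:
        show_always_on_top(msg.counterpart.body)        # pop the counter-argument
```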
Procedure

Before the experiment started, each participant read and signed a consent form describing the purpose of the research. Subjects were told that any decision they made was independent of their academic performance and that the experiment process was anonymous. All sessions were run by the same facilitator to reduce the possibility of bias.

In the first stage, the experimenter gave a five-minute introduction of the two investment options (a financial holding company and a high-tech company). After the brief introduction, subjects were asked to make a quick decision; we consider the decision at this stage their prior belief. Next, what-if analysis tools and the current status of each market were shown on the screen. Positive and negative information was represented by a brief title only; participants had to click on a title to obtain detailed information. They could use the what-if analysis tools to calculate possible returns according to their understanding of the market and the provided information. In this stage, the subjects' information-acquisition behaviors, including reading sequence and time, were recorded automatically by the system. When participants felt comfortable making their final decision, they pressed a button on the screen, and the system led them to the final decision screen. After participants made their final decision, they were asked to report their level of confidence in that decision and to answer predefined questions. Finally, they were thanked for their participation.

Pilot tests

The experiment was pilot-tested twice with master's and senior students from the MIS department. After the first pilot test, with eight volunteer students, we refined the process flow and divided the company information page into two pages: one for company information and the other for the preference test. In addition, we invited three students studying human-computer interfaces to review our experiment system and provide functional and interface-design comments. Based on these comments, we improved the system and made its operation smoother and more user-friendly. The second pilot test was conducted with fifteen students. No revisions were needed after the second pilot test, so that version was considered final.

Demographics

A total of 187 subjects were recruited from part-time MBA programs at seven universities and one academic institution. Since the predefined task was stock investment, we focused on subjects with stock or other investing experience. Five out of six subjects had experience with stock investment; the remaining one in six had no stock-investment experience but had other investment experience. The average age was 33.1 years (standard deviation 5.6); the eldest subject was 52 years old and the youngest 22.

For subjects in the with-CMCA group, one in five had never invested in the stock market, and the average age was 33.19 years (standard deviation 4.843); the eldest was 48 and the youngest 22. For subjects in the without-CMCA group, one in five had never invested in stock, and the average age was 33.01 years (standard deviation 6.325); the eldest was 52 and the youngest 22.
Table 4. Validity and reliability
Variable (Cronbach's alpha)            Item                           Factor loading
Mental-model maintenance (0.74)        Support my viewpoint                     0.86
                                       Support action taking                    0.77
                                       Reinforce belief                         0.70
Mental-model building (0.70)           Challenge my viewpoint                   0.82
                                       Criticize my cognition                   0.81
                                       Reinvestigate my assumption              0.73
Process satisfaction (0.87)            Process efficiency                       0.79
                                       Process satisfaction                     0.69
                                       Process clarity                          0.68
Decision satisfaction (0.95)           Happy with the decision                  0.83
                                       Pleased with the decision                0.82
                                       Satisfied with the decision              0.82
Note: each construct's items loaded on a separate factor (Factors 1–4).

Data analysis

Item reliability, convergent validity, and discriminant validity tests are often used to test the robustness of measurement. Individual item reliability can be examined by observing the factor loading of each item. Convergent validity should be assured when multiple indicators are used to measure one construct; it can be examined by bivariate correlation analyses and by the reliability of the questions. Discriminant validity focuses on testing whether the measures of the constructs differ from one another; it can be assessed by testing whether the correlation between each pair of constructs falls below the threshold value of 0.90. As shown in Table 4, all criteria were met, so the validity and reliability of our measurement are assured.
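For reference, the reliability and discriminant-validity figures reported here can be computed from raw item scores as sketched below. This is a generic sketch of the standard formulas, not the authors' analysis code.

```python
# Generic sketch: Cronbach's alpha per construct and the 0.90
# discriminant-validity check; not the authors' analysis code.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_subjects, k) matrix of Likert scores for one construct."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

def discriminant_ok(scores: np.ndarray, threshold: float = 0.90) -> bool:
    """scores: (n_subjects, n_constructs) matrix of averaged construct scores.
    Passes when every between-construct correlation stays below the threshold."""
    r = np.corrcoef(scores, rowvar=False)
    off_diagonal = r[~np.eye(r.shape[0], dtype=bool)]
    return bool(np.all(np.abs(off_diagonal) < threshold))
```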
Hypotheses testing

Four hypotheses were tested, corresponding to four decision outcomes: mental-model maintenance, mental-model building, decision-process satisfaction, and outcome satisfaction. MANOVA was selected because the correlation matrix showed moderate to high correlations among the dependent variables and because it controls the experiment-wide error rate that results from conducting multiple ANOVAs (Hair, Black, Babin, Anderson, & Tatham, 2006). Table 5 provides descriptive statistics and the correlations among variables for validation of the assumptions: "mean" is the averaged response for each construct, and skewness and kurtosis indicate the extent to which the sample fits the normality assumption. The MANOVA results in Table 6 show that, except for process satisfaction, CMCA has an effect on each dependent variable when confirmation bias is controlled.

Table 5. Descriptive analysis and correlation matrix
                               Descriptive analysis                 Correlation matrix
                             Mean   Std dev.  Skewness  Kurtosis    MMM    MMB    SATP   SATD
Mental-model maintenance     4.87   0.78      −0.03     1.00        1.00
Mental-model building        5.16   0.87      −0.09     −0.16      −0.09   1.00
Satisfaction toward process  4.97   0.93      −0.71     1.51        0.31   0.45   1.00
Satisfaction toward outcome  5.02   0.97      −0.33     0.51        0.32   0.46   0.78   1.00

Table 6. Effect of CMCA on learning and satisfaction
Source      Dependent variable      F          Sig.
Intercept   MM building             6109.007   .000
            MM maintenance          7370.924   .000
            Process satisfaction    5246.856   .000
            Outcome satisfaction    4992.449   .000
CMCA        MM building             4.711      .031
            MM maintenance          7.912      .005
            Process satisfaction    2.388      .124
            Outcome satisfaction    5.868      .016
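In outline, the Table 6 analysis corresponds to a one-factor MANOVA over the four construct scores. The sketch below shows the shape of such an analysis in statsmodels; the column names and the random placeholder scores are ours, since the raw data are not published.

```python
# Outline of the Table 6 MANOVA in statsmodels; column names and the
# random placeholder scores are ours (the raw data are not published).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 187                                  # sample size reported in the paper
df = pd.DataFrame({
    "cmca": rng.integers(0, 2, n),       # 1 = with CMCA, 0 = without
    "mm_building": rng.uniform(1, 7, n),      # placeholder 7-point scores
    "mm_maintenance": rng.uniform(1, 7, n),
    "sat_process": rng.uniform(1, 7, n),
    "sat_outcome": rng.uniform(1, 7, n),
})

manova = MANOVA.from_formula(
    "mm_building + mm_maintenance + sat_process + sat_outcome ~ cmca", data=df)
print(manova.mv_test())                  # intercept and CMCA multivariate tests
```

With the real scores in place of the placeholders, the per-variable F values in Table 6 follow from the univariate tests of the same model.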
The hypothesis (H1a) that the mental-model maintenance score for subjects who use a DSS without CMCA is higher than for those who use a DSS with CMCA is supported. As Table 7 indicates, the average mental-model maintenance rating was 5.03 (out of 7) in the group without CMCA and 4.71 in the group with CMCA. This significant result indicates that CMCA can effectively reduce mental-model maintenance.

The hypothesis (H1b) that the mental-model building score for subjects who use a DSS with CMCA is higher than for those who use a DSS without CMCA is also supported. As Table 7 shows, the average mental-model building rating was 4.96 in the group without CMCA and 5.36 in the group with CMCA.

The MANOVA results show that counter-argument has an effect on outcome satisfaction but not on process satisfaction. The hypothesis that subjects who use a DSS with CMCA will feel less satisfied with the decision-making process (H2a) is therefore not supported, whereas the significant difference between the two groups in satisfaction toward the decision-making outcome supports H2b. The average scores and standard deviations for process and outcome satisfaction in each group are provided in Table 7.

Table 7. Mean scores (standard deviations) by group
                             Computer-mediated counter-argument
                             w/o            w
Mental-model maintenance     5.03 (0.88)    4.71 (0.64)
Mental-model building        4.96 (0.81)    5.36 (0.87)
Process satisfaction         4.88 (0.90)    5.07 (0.96)
Outcome satisfaction         4.86 (0.89)    5.19 (1.02)

Discussion

One of our hypotheses (H2a) was not supported. Contrary to our expectation that subjects provided with CMCA would be interrupted by the pop-up counter-arguments during the decision-making process and would tend to feel less satisfied, subjects in the group with counter-arguments felt, if anything, more satisfied than the group without. A couple of reasons may explain this. First, decision-process satisfaction was measured by a subject's perceived efficiency, satisfaction, and clarity regarding the decision-making process. In general, decision makers do not read all the information provided by the system, and, accompanied by the experimenter's introduction, the decision-making process was easy to understand and follow. The measurement may therefore not capture the actual problem with counter-arguments, which would explain the absence of a significant difference between the two groups. Another instrument may be needed to understand subjects' satisfaction with the decision-making process.

Second, we measured perceived process satisfaction and perceived outcome satisfaction together, after the decision had been made. Although the questionnaire focused on the decision-making process, the decision makers' perceived satisfaction toward the process may have been affected by their perceived satisfaction toward the decision outcome. Future research should therefore reconsider the most appropriate time for collecting process satisfaction.

Conclusion

Because human beings have bounded rationality, our decision-making processes are strongly affected by cognitive biases (Tversky & Kahneman, 1974). This study introduced a de-bias function intended to reduce the negative consequences of confirmation bias in the computer-supported decision-making context. Relationships between counter-argument and decision outcomes were hypothesized. With data collected from 187 experienced financial investors, three of the four proposed hypotheses were supported (the relationship between providing CMCA and process satisfaction was not). We showed how the de-bias function can trigger individual learning by leading decision makers to challenge their current mental models. This result contributes to academics and practitioners in several ways.

Implications for research

First, our results indicate that decision support systems with specific functions can stimulate learning by reforming decision makers' mental models. How users can learn through the use of information systems has long been ignored by information systems research (Alavi & Leidner, 2001). In this study, we focused on two different approaches to learning: mental-model maintenance and mental-model building. However, biases emerge when decision makers attempt to maintain current mental models.
This research shows that, with pre-designed functions, users may challenge their initial beliefs and reform their mental models.

Second, there was no difference in process satisfaction with or without counter-argument. This indicates that although a new function may be useful for minimizing overconfidence and guiding the decision maker toward a balanced decision, people may resist the de-bias function during the decision-making process.
The presence of counter-arguments may interrupt decision makers' information reading and degrade their processing fluency. This implies that, although counter-arguments allow decision makers to challenge their assumptions, an unwanted emotional side effect may also appear. Future research may explore other emotional consequences of using de-biasing functions embedded in computer-supported decision-making tools.

Implications for practice

Learning takes place when learners enhance their current mental models or build new ones. It is vital to build an environment that helps individuals avoid cognitive bias and that supports the building and reforming of mental models. Computer-supported learning is popular and broadly adopted by the contemporary education system. In addition to providing various types of knowledge more efficiently or effectively through the support of information technology, approaches such as counter thinking, which provide a stimulus for learners to challenge their existing mental models, should be adopted by educators or embedded into learning systems.

For a decision-support-system designer, de-bias functions should be included in the system design. Our study shows that systems should provide functions to prevent bias and support learning. However, designers should also note that such de-bias functions may cause unwanted effects. For example, in our study we provided one counter-argument whenever subjects read a piece of news. The pop-up counter-argument may lead to negative emotions, such as annoyance or confusion, and too many counter-arguments or too much information may increase information overload. Therefore, the frequency and timing of counter-arguments are important, and system designers should also consider the type of task and other factors.

Limitations and future research

This study is not without limitations. First, a between-group comparison was conducted, and a subjective evaluation of perceived mental-model building was used to evaluate the learning effect. Some may argue that although this approach captures part of the learning effect, a pre- and post-experiment comparison should be made to truly reflect how counter-arguments change mental models. Further research should extend this study by employing a within-subjects experiment and examining changes in mental models objectively. Second, this was a cross-sectional study, and the counter-arguments were unexpected by the subjects; people may get used to their appearance and begin to ignore them automatically (e.g., Handzic & Tolhurst, 2002). Future research may extend this study with a longitudinal design to examine this possibility. Third, only a financial decision-making task was used. Since confirmation bias applies to various decision-making tasks, and counter-arguments may serve as a useful tool for eliminating it, future research should extend the proposed concept to other decision-making areas to test the effect of counter-argument on decision making. Fourth, only one de-biasing function was included in this study. A given bias may be eliminated through different approaches under different settings; future research should include other de-biasing functions and compare their effects under different settings.
Lastly, although we have illustrated the effect of confirmation bias and a method to prevent it, we did so with a simple experimental design. Future studies may examine this issue in more complicated circumstances, such as group or social decision-making processes.

References

Alavi, M., & Leidner, D. E. (2001). Research commentary: Technology-mediated learning: A call for greater depth and breadth of research. Information Systems Research, 12(1), 1–10.

Argyris, C., & Schön, D. (1978). Organizational learning: A theory of action approach. Boston, MA: Addison-Wesley.

Arnott, D. (2006). Cognitive biases and decision support systems development: A design science approach. Information Systems Journal, 16(1), 55–78.

Barber, B. M., & Odean, T. (2002). Online investors: Do the slow die first? The Review of Financial Studies, 15(S2), 455–487.

Bhandari, G., Deaves, R., & Hassanein, K. (2008). Debiasing investors with decision support systems: An experimental investigation. Decision Support Systems, 46(1), 399–410.
Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). Shared mental models in expert team decision making. In N. J. Castellan, Jr. (Ed.), Individual and group decision making (pp. 221–246). Hillsdale, NJ: Lawrence Erlbaum Associates.

Craik, K. J. W. (1943). The nature of explanation. Cambridge, UK: Cambridge University Press.

Davis, F. D., & Kottemann, J. E. (1994). User perceptions of decision support effectiveness: Two production planning experiments. Decision Sciences, 25(1), 57–76.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.

Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 422–444). Cambridge, UK: Cambridge University Press.

Gagne, R. M. (1977). The conditions of learning. New York: Holt, Rinehart and Winston.

Green, S. G., & Taber, T. D. (1980). The effects of three social decision schemes on decision group process. Organizational Behavior and Human Performance, 25(1), 97–106.

Grønhaug, K., & Falkenberg, J. S. (1989). Exploring strategy perceptions in changing environments. Journal of Management Studies, 26(4), 349–359.

Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis (6th ed.). Englewood Cliffs, NJ: Prentice Hall.

Handzic, M., & Tolhurst, D. (2002). Evaluating an interactive learning environment in management education. Educational Technology & Society, 5(3), 113–122.

Heath, C. (1996). Do people prefer to pass along good or bad news? Valence and relevance of news as predictors of transmission propensity. Organizational Behavior and Human Decision Processes, 68(2), 79–94.

Hoch, S. J., & Deighton, J. (1989). Managing what consumers learn from experience. Journal of Marketing, 53(2), 1–20.

Holyoak, K. J. (1984). Analogical thinking and human intelligence. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (pp. 199–223). Hillsdale, NJ: Erlbaum.

Hung, S. Y., Ku, Y. C., Liang, T. P., & Lee, C. J. (2007). Regret avoidance as a measure of DSS success: An exploratory study. Decision Support Systems, 42(4), 2093–2106.

Kiesler, S., & Sproull, L. (1982). Managerial response to changing environments: Perspectives on problem sensing from social cognition. Administrative Science Quarterly, 27(4), 548–570.

March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.

Melone, N. P., McGuire, T. W., Hinson, G. B., & Yee, K. Y. (1993). The effect of decision support systems on managerial performance and decision confidence. Proceedings of the 26th Hawaii International Conference on System Sciences, 482–489.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.

Norman, D. A. (1982). Learning and memory. San Francisco: W. H. Freeman and Company.

Patel, A., Kinshuk, & Russell, D. (2000). Intelligent tutoring tools for cognitive skill acquisition in life long learning. Educational Technology & Society, 3(1), 32–40.

Piaget, J. (1954). The construction of reality in the child. New York: Basic Books.

Quinn, J. B. (1980). Strategies for change: Logical incrementalism. Homewood, IL: Richard D. Irwin.

Rouse, W., & Morris, N. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100(3), 349–363.

Russo, J. E., Medvec, V. H., & Meloy, M. G. (1996). The distortion of information during decisions. Organizational Behavior and Human Decision Processes, 66(1), 102–110.
Sanders, G. L., & Courtney, J. F. (1985). A field study of organizational factors influencing DSS success. MIS Quarterly, 9(1), 77–93.

Santana, M. (1995). Managerial learning: A neglected dimension in decision support systems. Proceedings of the 28th Annual Hawaii International Conference on System Sciences, 82–91.

Schonpflug, W., & Battmann, W. (1988). The costs and benefits of coping. In S. Fisher & J. Reason (Eds.), Handbook of life stress, cognition, and health (pp. 699–713). New York: Wiley.
Sinitsa, K. (2000). Learning individually: A life-long perspective. Educational Technology & Society, 3(1), 17–23.

Speier, C., Vessey, I., & Valacich, J. S. (2003). The effects of interruptions, task complexity and information presentation on computer-supported decision-making performance. Decision Sciences, 34(4), 771–797.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.

Vandenbosch, B., & Higgins, C. (1994). The measurement of learning from executive information systems. Proceedings of the Twenty-Seventh Hawaii International Conference on System Sciences, 572–585.

Vandenbosch, B., & Higgins, C. (1996). Information acquisition and mental models: An investigation into the relationship between behaviour and learning. Information Systems Research, 7(2), 198–214.

Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. The Quarterly Journal of Experimental Psychology, 12(3), 129–140.

Zohar, D. (1999). When things go wrong: The effect of daily work hassles on effort, exertion, and negative mood. Journal of Occupational and Organizational Psychology, 72(3), 265–283.