BEEMER AND GREGG: DYNAMIC INTERACTION IN DECISION SUPPORT

The purpose of this study is to evaluate how dynamic interaction is used in the context of eCommerce mashups. The remainder of this paper is structured as follows. First, a review of eCommerce decision support mashups and end-user mashup literature is presented, and the theoretical background supporting dynamic interaction is discussed. Next, five hypotheses are developed involving dynamic interaction, diagnosticity, confidence, and intention. Then, an experiment is designed and conducted to evaluate the research model. The results of the experiment and a post hoc analysis of decision quality are then presented. This paper concludes with a discussion of the study's contributions and implications for future research.

II. THEORETICAL BACKGROUND

Prior researchers have found that, when decision tools are applied to unstructured domains, their inability to justify solutions can result in low confidence in the decision recommendations. There are two main schools of thought on how to overcome this lack of confidence. The first focuses on developing more robust explanation facilities to justify the system's solution in unstructured domains. The second declares that "the need for interaction between the system and the user has increased, mainly, to enhance the acceptability of the reasoning process and of the solutions proposed by the system" [24, p. 1]. Through dynamic interaction with the user, the system is able to track with the user's iterative cognition process in solving unstructured decisions and to involve the user's opinion in the system's logic, which gives users a sense of ownership (and ultimately confidence) in the solution.

A. Dynamic Interaction and Unstructured Decision Support

Dynamic interaction's underpinnings are found in system control theory, which spans many academic disciplines ranging from engineering to economics and is primarily focused on influencing the behavior of dynamic systems. Specifically stated, "Control theory is the area of application-oriented mathematics that deals with the basic principles underlying the analysis and design of control systems. To control an object means to influence its behavior so as to achieve a desired goal" [72, p. 1]. The majority of control theory applications incorporate some variation of a feedback loop. Control feedback loops have three general phases: 1) inputting values; 2) processing input and calculating output; and 3) evaluating output and, if necessary, iterating back to step 1) and adjusting the input values.

Researchers have found that KBS can be effective in supporting unstructured decisions when they are designed with feedback loops, which allow the user to influence the behavior of the system so as to achieve the desired solution by evaluating alternative solutions. One example is a traditional KBS that incorporated an iterative interface designed to help logistics professionals achieve more efficient shipment processes. In the eCommerce domain, interactive decision aids have produced strong positive effects on both the quality and efficiency of the decision-making process.

Fig. 1. Substrata of dynamic interaction.

Using control theory, Beemer and Gregg developed a measurement scale to quantify the support of iterative decision making in decision tools that operate in unstructured domains. That paper defined dynamic interaction as a formative second-order construct with the following three substrata: 1) inclusive; 2) incremental; and 3) iterative. Inclusive refers to the system's ability to include user input in the KBS's cognition process. Incremental refers to the ability of the system to break larger problems into smaller, more manageable pieces, which are incrementally updated and then aggregated together. Finally, iterative refers to the decision tool's support for an iterative decision-making process. Fig. 1 shows a conceptualization of dynamic interaction.

The inclusion of dynamic interaction in a decision-making tool fundamentally changes the way users evaluate information and influences their perceived reliability and perceived usefulness of the system. However, it also has the potential to change the user's perceptions of the information provided by the tool as well as their evaluation of the decisions made as a result of using a more interactive decision support tool.

B. End-User Programming in Mashups

Mashups are ideal for investigating dynamic interaction in eCommerce domains because they support iterative user interfaces that track with the user's iterative decision process. As with the iterative nature of DSS designed for unstructured domains, researchers have postulated that, when developing mashups for unstructured domains, end-users "actually work iteratively on data, switching from aligning and cleaning up the data to using the data and back, as they get to know the data better over time" [p. 13]. However, previous mashup research has focused on extending the capabilities and functionality of this new technology as opposed to evaluating the dynamic decision-making processes that mashups support. As mashups have begun to mature, academic researchers have recently identified the need to evaluate mashups in business domains such as decision support. Walczak et al. conducted an eCommerce decision experiment to compare a traditional search engine to a mashup in terms of confidence in the decision, time, ease of finding information, and knowledge acquisition; however, they did not explicitly address the iterative processes inherent in mashup decision making.

A major challenge in creating mashups is to seamlessly package mashup technologies in such a way that nontechnical users can easily and effectively create mashup applications. Two different approaches are commonly taken to enabling end-user mashup development. The first approach is passive
76 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, VOL. 43, NO. 1, JANUARY 2013

by nature and focuses on designing plugins that work with the user's current browser to observe what the user is viewing and to suggest related sources for potential mashing. A second approach to end-user mashup development is proactive by nature and is necessary when the mashup process becomes more complicated (e.g., process modeling or advanced interface integration). Tuchinda et al. present a tool that enables users to develop mashups in complicated integration domains by first providing examples of what the end mashup should look like; the tool then aims to mimic the format of the end result, allowing mashups to be developed by end-users who do not have programming experience. Tatemura et al. take a similar "by example" approach in developing a tool that allows users to mash up disparate data sources by creating abstracted target schemas that are populated based on examples provided by the user. Mashroom is another end-user mashing application that is based on the nested relational model and allows users to iteratively construct mashups through continuous refinement.

This study focuses on proactive mashups, namely, websites that are devoted to the mashup process. The advantage of using these preexisting mashup websites is that hundreds of different mashups are available at these sites, supporting differing levels of dynamic interaction with the user.

C. Diagnosticity

Diagnosticity is a well-documented research stream in the consumer behavior literature. This research examines users' beliefs about the usefulness of particular information for making a decision. The principles of diagnosticity suggest that the likelihood that information is used for decision making is: 1) positively associated with the accessibility of the information in memory; 2) positively associated with the diagnosticity/usefulness of the information in memory; and 3) negatively associated with the diagnosticity/usefulness and accessibility of alternative information. Understanding diagnosticity in DSS domains is important because, without a significant level of diagnosticity, a DSS is less likely to effectively support the decision-making process.

Jiang and Benbasat examined diagnosticity in an eCommerce context. They defined diagnosticity as a consumer's perception of how helpful a website is in fostering understanding of the products being listed. They found that presentation format influences perceived diagnosticity. For example, a virtual product experience (VPE), in which consumers are empowered with visual and functional control, was shown to increase perceived diagnosticity. In addition, product presentation in a video-with-narration format was also found to increase diagnosticity as compared to video-without-narration or a static-picture format. Researchers have also examined the impact of diagnosticity on user confidence in the decision through postpurchase product evaluations. Kempf and Smith indicate that consumers are more likely to be confident in their decisions if their product experience is more diagnostic. More recently, researchers have found that perceived diagnosticity of the website positively influences users' confidence calibration and users' intention to purchase.

D. Consumer Confidence

Confidence is another important construct essential for understanding the impact of a DSS on the decision-making process. Confidence is generally described as a state of being certain either that a decision is correct or that a chosen course of action is the best or most effective. Consumer confidence has substantial practical implications, as an individual's confidence in a belief or decision has been shown to influence their decision process. Researchers have found that confidence is positively related to decision satisfaction.

Koriat and Goldsmith found that confidence affects an individual's memory processes, as their subjective confidence determines whether they are willing to report information from memory. Similarly, Russo and Shoemaker indicate that one's confidence in the quality of a particular decision can affect both the selection and implementation stages of the decision-making process. Both Koriat and Goldsmith and Russo and Shoemaker describe situations of underconfidence, which can significantly decrease a decision maker's willingness to act on a decision.

Another problem in consumer decision situations is overconfidence. The overconfidence effect is a well-established bias from psychology in which someone's subjective confidence in their judgments is reliably greater than their objective accuracy, particularly when confidence is relatively high. Researchers have found that people tend to exhibit greater overconfidence if the task is difficult. Thus, overconfidence can be a significant problem in an unstructured decision domain like eCommerce. Both overconfidence and underconfidence are problems in DSS design.

III. HYPOTHESIS DEVELOPMENT

Dynamic interaction research suggests that the ability to use a decision tool to iteratively develop and evaluate solutions for unstructured problems can impact a wide variety of decision outcomes. This is similar to the phase theorem from decision science, which identifies the distinct phases that individuals go through when solving complicated or unstructured problems: 1) problem identification; 2) assimilating necessary information; 3) developing possible solutions; 4) solution evaluation; and 5) solution selection. Individuals iteratively repeat these phases and compare new information to their current knowledge of the decision domain.

Researchers have suggested that, when developing mashups, users work iteratively on the data as they get to know the data better over time. This process maps directly to phases 2, 3, and 4 of the phase theorem. Prior research has found that the ability to iteratively use a decision tool to evaluate alternatives has a significant influence on perceived reliability and perceived usefulness. An antecedent construct similar to perceived reliability is diagnosticity, which refers to the degree to which retrieved information is useful to decision makers in developing reliable judgments. In the domain of eCommerce, diagnosticity can be further defined as "the extent to which a consumer believes that the shopping experience is helpful to evaluate a product" [35, p. 111]. In general,
diagnosticity is high whenever the consumer feels that the information allows him or her to categorize the product or service clearly into one group (e.g., high quality or low quality). As such, it is hypothesized that the ability to inclusively, incrementally, and iteratively evaluate mashup information through dynamic interaction will positively influence perceived diagnosticity.

H1: The user's ability to iteratively develop the mashup through dynamic interaction will have a positive influence on perceived diagnosticity.

The influence of diagnosticity on confidence and satisfaction is well documented. For example, Lynch et al. found that the extent to which a decision maker can evaluate a product will determine their confidence in their evaluation. Similarly, Kempf and Smith found that the more diagnostic (i.e., the more information available) an individual's product evaluation is, the more confident they are in their decision. As such, these higher perceptions of diagnosticity are believed to strengthen a decision maker's beliefs in their decision. Researchers have observed that an interface with greater diagnosticity provides the user with more information cues and thus a better understanding of product information, enhancing the user's cognitive evaluation of the product. Research also suggests that confidence is positively related to the quality of the decision. Koriat and Goldsmith found that participants' willingness to report information was based upon their confidence in the accuracy of their answers and thus influenced the participants' overall accuracy. In eCommerce, confidence in item attributes and item choice is related to consumer product satisfaction. In other words, if the consumer is able to thoroughly evaluate a product, satisfaction with the product should be inherently present in the user's level of confidence in their selection of the product.

H2: The perceived diagnosticity of the mashup will have a positive influence on the user's decision confidence.

Fig. 2. Research model.

Initially, researchers postulated that decision makers execute the steps of the phase theorem in a sequential, linear fashion. Later, it was discovered that this is only true in certain decision domains. In structured decision domains that have a definable "right" solution, decision makers do execute the phase theorem linearly, much like a decision tree. However, in unstructured domains that contain outcome uncertainty, the decision maker iterates through steps 2, 3, and 4 of the phase theorem by assimilating new information, developing alternative solutions, and comparing the alternatives. This process is repeated until one of the following occurs: 1) the decision maker experiences information overload and cannot assimilate any more information, or 2) a time constraint on the decision is reached, requiring the decision to be made. In the domain of eCommerce, the decision maker experiences uncertainty with purchasing decisions because there is a risk that the seller is being untruthful about the quality of the product. As such, it is hypothesized that the mashup's ability to track with the user's iterative decision-making process through dynamic interaction will have a positive influence on the user's intention to use the mashup.

H3: The user's ability to iteratively develop the mashup through dynamic interaction will have a positive influence on their intention to use the mashup.

Additionally, diagnosticity (the amount of information a decision maker possesses) is believed to influence a user's intention to use the information. Accessibility-diagnosticity research states that the probability that information is used for decision making is influenced by the following: 1) the accessibility of the information in memory; 2) the accessibility of alternative information in memory; and 3) the diagnosticity of the information compared to alternative information. If the information is more diagnostic, it is more likely to be used in decision making. In eCommerce domains, a website's diagnosticity has been conceptualized as a cognitive belief in the website, along with other beliefs such as compatibility and enjoyment. Therefore, if a website has higher perceived diagnosticity and thus is more helpful to the consumer (by providing more information) when evaluating a product, the consumer is more likely to use the website.

H4: The diagnosticity of the mashup will have a positive influence on the user's intention to use the mashup.

A participant's willingness to use a tool is influenced by their confidence in the accuracy of the information provided by the tool and, thus, their confidence in their decision when using the tool. In purchasing decisions, confidence in item attributes and item choice is related to consumer product satisfaction. When a consumer is able to thoroughly evaluate a product, they are more likely to experience confidence in their decision and thus are more likely to use the decision tool, which improves their decision confidence. Therefore, it is hypothesized that increased confidence will have a positive influence on the user's intention to use the mashup. The proposed research model for this study is shown in Fig. 2.

H5: Increased confidence in the user's decision will have a positive influence on their intention to use the mashup.
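The iterative process described above (assimilate new information, develop an alternative, compare alternatives, and stop on information overload or a time constraint) can be sketched in a few lines of Python. This is a minimal illustration only; the function, its names, its data, and its thresholds are hypothetical assumptions, not part of the study's method.

```python
# Minimal sketch of the iterative decision loop described above.
# All names, data, and thresholds are hypothetical illustrations.

def iterative_decision(sources, score, time_limit, capacity):
    """Iterate phases 2-4 of the phase theorem until information
    overload (capacity) or a time constraint (time_limit) is reached."""
    assimilated = []    # phase 2: information gathered so far
    alternatives = []   # phase 3: candidate solutions developed so far
    best = None
    for step, item in enumerate(sources, start=1):
        assimilated.append(item)             # phase 2: assimilate
        alternatives.append(item)            # phase 3: develop an alternative
        best = max(alternatives, key=score)  # phase 4: compare alternatives
        if len(assimilated) >= capacity:     # stop 1: information overload
            break
        if step >= time_limit:               # stop 2: time constraint reached
            break
    return best

# Example: choose among a stream of laptop offers, preferring lower price.
offers = [{"price": 700}, {"price": 640}, {"price": 610}, {"price": 655}]
choice = iterative_decision(offers, score=lambda o: -o["price"],
                            time_limit=10, capacity=3)
print(choice)  # -> {'price': 610}
```

Note that the loop stops after the third offer here because the hypothetical overload capacity is reached, mirroring stopping condition 1 above.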
IV. EXPERIMENT DESIGN

To evaluate the research model shown in Fig. 2, a 4 × 1 online research experiment was designed. In the experiment, subjects made a consumer purchasing decision using an online decision support mashup and then answered survey questions about their perceptions of the decision process.

A. eCommerce Mashups

Web 2.0 is a trend for Web applications that emphasizes services, participation, scalability, remixability, and collective intelligence. Since Web 2.0 applications facilitate user involvement in contributing to information sources, there has been a vast increase in the amount of information and knowledge sources on the Web. The amount of information now available on the Web can lead to information overload and an inability to apply the best available information to a particular decision-making task. eCommerce is a good example of where this problem is prevalent, as consumers desire information access while making decisions, and such information access produces a more satisfied consumer.

Developers have begun building eCommerce decision support mashups to address information overload and to make relevant information available to the consumer. eCommerce mashups can be categorized by the decision they are designed to support. The first decision is "what to buy?" 109things.com is a mashup interface for Amazon.com designed to support this decision; it allows users to select numerous items and then compare them to one another. The second decision is "where to buy?" Ugux.com/shopping is a mashup that allows users to select items and then compare Amazon.com and EBay.com in terms of price, warranty, and shipping. Recently, developers have begun designing mashups to address both of these decisions. Earlymisser.com is an example of such a mashup and enables the user to evaluate multiple products from multiple sellers. In addition to classifying eCommerce mashups by the question they address, they can also be classified by the mashup functionality they provide.

A review of 568 eCommerce mashups, obtained from programmableweb.com/mashups, revealed three different "mashability" traits prevalent in eCommerce mashups: including multiple sellers, incremental comparison, and the ability to iteratively incorporate different decision attributes. Depending on the context and what the mashup is designed for, mashups can contain one, two, or all three of these traits. Bestsportdeals.com is an example of a mashup that includes multiple sellers. A more complicated mashup is Pricegrabber.com, which includes both a multiple-seller mashup and iterative remashability. Finally, Mysimon.com and Shopper.com are two of the most complex mashups, which include multiple-seller mashups, iterative remashability, and incremental comparison mashup functionality. Shopper.com's comparison mashup functionality allows the user to view the similarities and differences between multiple products. Based on the review of the 568 eCommerce mashups at programmableweb.com, eight different mashups were selected (two mashups for each of the four substrata of the experiment), as illustrated in Table I.

TABLE I. EXPERIMENT DESIGN

The first set of mashup tools consisted of mashup-like store interfaces that lacked dynamic interaction; they are not true mashups because only single sellers are included in the user interface. The second grouping consisted of mashups that include multiple sellers. The next grouping consisted of mashups that include multiple sellers and have iterative functionality. Finally, the last grouping consisted of mashups that include multiple sellers, have iterative remashability, and provide incremental comparison mashup functionality. Two real-world mashup sites were included in each of the experiment's strata. Table I contains the mashups and the corresponding dynamic interaction functionality of each substratum.

B. Measurement Scales

The measures for diagnosticity were taken from Jiang and Benbasat's study on VPE. Items on confidence were derived from the work of Hess et al. on calibration and confidence in online shopping environments. Items for intention were taken from the well-established intention scale that is part of the technology acceptance model. The actual measurement items for these constructs are listed in Table II. Table III contains the measurement items that were derived for dynamic interaction from Beemer and Gregg. Since dynamic interaction is a relatively new construct in the IS literature, and because this study applies the construct to a new domain (eCommerce mashups), two pretests were conducted to refine and validate the derived measurement items. The pretest participants consisted of ten IT professionals.

The purpose of the first pretest was to evaluate internal validity by having participants perform a substratum clustering procedure to group like items. Participants were given three envelopes, each containing the name and definition of one of dynamic interaction's three substrata. Next, they were given 15 index cards (each containing one of the scale items) and were instructed to match each item with one of the substrata definitions. A fourth envelope, labeled "does not fit anywhere," was provided so that participants could discard items that they felt did not match anywhere. The categorization data were then cluster analyzed by placing in the same cluster the items that six or more respondents placed in the same category. "The clusters are considered to be a reflection of the domain
substrata for each construct and serve as a basis of assessing coverage, or representativeness, of the item pools" [17, p. 325]. Items 3, 4, 6, 11, and 15 did not belong to any cluster and were thus removed. This left the inclusive, incremental, and iterative substrata with three, four, and three items, respectively. Researchers suggest that, when modeling second-order factor models (like dynamic interaction is), each first-order construct should have an equal number of indicators. Therefore, the incremental cluster was refined by dropping item seven because it was the lowest loading indicator for this substratum.

TABLE II. DERIVED MEASUREMENT ITEMS FOR DIAGNOSTICITY, CONFIDENCE, SATISFACTION, AND INTENTION

TABLE III. DERIVED DYNAMIC INTERACTION SCALE ITEMS
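The card-sorting rule just described, under which an item joins a substratum's cluster only when six or more of the ten pretest respondents placed it in the same category, can be expressed as a short Python sketch. The placement data below are invented for illustration; the actual pretest responses were not published.

```python
# Sketch of the substratum card-sorting analysis described above: an item
# is clustered with a substratum only if six or more of the ten pretest
# respondents placed it in that category. All placement data are invented.
from collections import Counter

AGREEMENT = 6  # minimum number of respondents (out of ten) in agreement

def cluster_items(placements):
    """placements maps an item number to the list of categories chosen by
    each respondent. Items without a qualifying majority are dropped."""
    clusters = {}
    for item, categories in placements.items():
        category, votes = Counter(categories).most_common(1)[0]
        if votes >= AGREEMENT and category != "does not fit anywhere":
            clusters[item] = category
    return clusters

# Hypothetical placements for three of the fifteen scale items:
placements = {
    1: ["inclusive"] * 8 + ["iterative"] * 2,
    3: ["inclusive"] * 4 + ["incremental"] * 4
       + ["does not fit anywhere"] * 2,
    5: ["iterative"] * 7 + ["does not fit anywhere"] * 3,
}
print(cluster_items(placements))  # -> {1: 'inclusive', 5: 'iterative'}
```

Item 3 is dropped because no category reaches the six-respondent threshold, which mirrors how items without a clear cluster were removed from the scale.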
TABLE IV. RESPONSE BIAS ANALYSIS AND DESCRIPTIVE STATISTICS

The purpose of the second pretest was to evaluate the substrata coverage of the selected mashup tools. To lighten their workload, the pretest participants were split into two equal groups, each asked to evaluate four of the eight mashups. The first group was asked to evaluate mashups 1-4 from Table I, using the same definitions from the first pretest for inclusive, incremental, and iterative, to identify the functionality that the mashups possessed. The second group was asked to perform the same task but was given mashups 5-8 instead. All of the classifications were aggregated; overall, the pretest participants classified the mashups with 80% accuracy in accordance with Table I, which suggests that the experiment's substrata classifications were properly assigned in the experiment design.

C. Data Collection Procedures

To conduct the experiment, a hypothetical scenario was created for purchasing a laptop. This decision domain was selected for the following reasons: 1) it is a complex domain that contains several different specifications; 2) it contains uncertainty, as with any online purchasing decision; and 3) many mashup sites exist for purchasing computers. An invitation was sent to 450 undergraduate and graduate students. Because of the widespread use of computers in college curriculums, it can be assumed that the students are potential laptop consumers. Of the 450 invitations sent, there were 114 respondents, yielding a 25.3% response rate. There were 73 male respondents and 41 female respondents, and the average age of all respondents was 27. Participants were randomly sent to one of the eight mashup tools in Table I and were given a scenario in which they were asked to use the mashup tool to find a laptop for under $650, with 4 GB of RAM and a 2-GHz processor. Upon using the mashup decision tool, the participants were given a survey composed of the revised measurement scale items from Tables II and III. Each item was measured on a seven-point Likert scale.

Three steps were taken to first prevent and then evaluate the existence of common method bias. First, the online experiment was designed to guarantee response anonymity, and the measurements of predictor and criterion variables were separated. Second, at the suggestion of Podsakoff and Organ, a post hoc factor analysis, also known as Harman's single-factor test, was performed. If common method bias is present, we would expect to see a single factor emerge from the factor analysis that accounts for most of the covariance in the independent and criterion variables. The factor analysis extracted four factors with eigenvalues greater than one, which align with the four constructs in the hypothesis model (dynamic interaction, diagnosticity, confidence, and intention). Additionally, no general factor was apparent in the unrotated factor structure, with the first factor accounting for less than 33% of the variance. Last, to determine whether response bias was present among early and late responders, the means for each construct were calculated for the first 30 and the last 30 respondents. As illustrated in Table IV, there is no significant difference between early and late responders. Furthermore, three control variables were collected (age, gender, and graduate versus undergraduate status), but none had significant differences between groupings. Thus, the proactive design of the questionnaire, the results of the post hoc factor analysis, and the analysis of the early-late responders all suggest that common method bias is not a great concern in this study.

V. DATA ANALYSIS

Visual PLS, version 1.04b, a partial least squares (PLS) structural equation modeling (SEM) software package, was used to evaluate the hypothesis model. The decision to use PLS was based upon several considerations. PLS can be used to estimate models that use both reflective and formative indicators, allows for modeling latent constructs under conditions of nonnormality, and is appropriate for small-to-medium sample sizes. A common heuristic used to determine the appropriate sample size for a PLS model is to take the dependent construct with the largest number of constructs impacting it and then multiply the number of impacting paths by ten. Using this heuristic, with intention being influenced by the three substrata of dynamic interaction (inclusive, incremental, and iterative), diagnosticity, and confidence, the minimum sample size to evaluate the hypothesis model for this study would be 50. Therefore, the sample size of 114 that was collected is adequate for this study.

The psychometric properties of the research model were evaluated by examining item loadings, internal consistency, and discriminant validity. Researchers suggest that item loadings and internal consistencies greater than 0.70 are considered acceptable. As can be seen from the shaded cells in Table V, all item loadings surpass this threshold. Internal consistency is evaluated by a construct's composite reliability score. The composite reliability scores are located in the leftmost column of Table VI and are more than adequate for each construct. There are two parts to evaluating discriminant validity. First, each item should load higher on its respective construct than on the other constructs in the model. Second, the average variance
extracted (AVE) for each construct should be higher than the interconstruct correlations. In Table V, by comparing the shaded cells to the nonshaded cells, we can see that all items load higher on their respective constructs than on the other constructs in the research model. Likewise, in Table VI, by comparing the shaded cells to the nonshaded cells, we can see that the AVE for each construct is higher than the interconstruct correlations without exception. Overall, these two comparisons suggest that the model has sufficient discriminant validity.

TABLE V. LOADINGS AND CROSS LOADINGS

TABLE VI. INTERNAL CONSISTENCY AND DISCRIMINANT VALIDITY

Fig. 3. PLS SEM results.

TABLE VII. SUMMARY OF HYPOTHESIS TESTS

The results of the PLS SEM analysis are shown in Fig. 3. As prescribed by Beemer and Gregg, dynamic interaction was modeled as a formative second-order construct using the hierarchical component model and thus has an R2 of 1.0 because of the repeated indicators used in this approach. Diagnosticity, confidence, and intention had R2 values of 0.51, 0.33, and 0.61, respectively. This means that 51% of the variation in diagnosticity is explained by dynamic interaction, 33% of the variation in confidence is explained by dynamic interaction and diagnosticity, and 61% of the variation in intention is explained by dynamic interaction, diagnosticity, and confidence. As Fig. 3 and Table VII show, four of the five hypotheses were supported. Hypotheses 1, 2, and 4 are significant at the 0.01 level, and hypothesis 5 is significant at the 0.05 level. The relationship between dynamic interaction and intention was not directly supported. Instead, the results of this study suggest that the influence of dynamic interaction on intention is fully mediated by diagnosticity and confidence.

VI. POST HOC ANALYSIS OF DECISION QUALITY AND CALIBRATION

A shortcoming of the current study, and of Beemer and Gregg, is that the consequential construct "intention" is used in both research models to evaluate the impact of dynamic interaction. Intention is merely an opinion that the user holds as to whether they intend to use the system. Furthermore, even if intention were captured in terms of actual use, there would still remain the question of dynamic interaction's actual effectiveness in improving decision quality. The alignment between an individual's decision confidence and the quality of their decision is referred to as calibration, which has received limited attention in eCommerce research. To address the shortcoming of using an opinion-oriented consequential construct in "intention," and to evaluate the existence of calibration, a post hoc analysis was performed to evaluate the significance of the relationship between confidence and decision quality.

When completing the survey, the respondents were required to report the make, model, and price of the laptop that they selected during the decision process. Google Products (www.google.com/products) was used to retrieve the amount of RAM and the processor speed for each laptop; then, Amazon (www.amazon.com) was used to retrieve consumer product reviews for each laptop. On Amazon, consumers can rate each product from one to five stars, and Amazon reports the average stars for each product. The average stars for each laptop were recorded and then aggregated for each mashup category. As a result, the following data points were collected for each survey response: memory (RAM), processor speed (in gigahertz), cost, and consumer review, which were aggregated to represent decision quality, as shown in Fig. 4.

To evaluate calibration (the alignment between an individual's decision confidence and the quality of their decision), a post hoc analysis was performed. PLS (Visual PLS, version 1.04b) was used to evaluate the relationship between the four confidence scale items from Table II and the four decision quality data points shown in Fig. 4.
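The four decision-quality data points can be combined in many ways, and the paper does not publish its exact aggregation formula. The sketch below assumes equal-weight min-max normalization, with cost inverted so that a cheaper laptop scores higher, purely for illustration; the weights, scaling, and sample data are all assumptions.

```python
# Hypothetical aggregation of the four decision-quality data points
# (RAM, processor speed, cost, and review stars). The paper does not give
# its exact formula; equal weights and min-max scaling are assumed here.

def minmax(values, invert=False):
    """Scale values to 0..1; optionally invert so smaller is better."""
    lo, hi = min(values), max(values)
    if hi == lo:
        scaled = [0.0] * len(values)
    else:
        scaled = [(v - lo) / (hi - lo) for v in values]
    return [1.0 - s for s in scaled] if invert else scaled

def decision_quality(laptops):
    """Return one 0..1 composite score per laptop; higher is better."""
    ram = minmax([l["ram_gb"] for l in laptops])
    ghz = minmax([l["ghz"] for l in laptops])
    cost = minmax([l["cost"] for l in laptops], invert=True)  # cheaper wins
    stars = minmax([l["stars"] for l in laptops])
    return [sum(parts) / 4 for parts in zip(ram, ghz, cost, stars)]

# Invented data in the shape of the reported scenario (under $650, etc.):
laptops = [
    {"ram_gb": 4, "ghz": 2.0, "cost": 620, "stars": 4.5},
    {"ram_gb": 6, "ghz": 2.2, "cost": 649, "stars": 4.0},
    {"ram_gb": 4, "ghz": 1.8, "cost": 580, "stars": 3.5},
]
scores = decision_quality(laptops)  # the second laptop scores highest here
```

Any monotone scoring of the four data points would serve the same illustrative purpose; min-max scaling is used only because it keeps each component on a comparable 0..1 range.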
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, VOL. 43, NO. 1, JANUARY 2013

The relationship between confidence and decision quality was significant at the 95% confidence level. With dynamic interaction (indirectly) influencing confidence through diagnosticity, and with the significant relationship between confidence and decision quality, this suggests that incorporating dynamic interaction in eCommerce mashups increases decision quality.

Fig. 4. Post hoc decision quality measure.

VII. DISCUSSION AND LIMITATIONS

The purpose of this study was to evaluate the relationships between dynamic interaction, diagnosticity, confidence, and intention in the context of eCommerce mashups. Four of the five hypotheses were supported, providing support for dynamic interaction's nomological validity. The relationship between dynamic interaction and diagnosticity was supported. This provides evidence supporting the notion that the user's ability to combine information from multiple sources, and iteratively organize it, improves their ability to use the information in the decision process. The hypothesized relationships between diagnosticity, confidence, and intention were also supported, suggesting that improving diagnosticity improves the user's perception of their ability to make reliable judgments using the tool. The only hypothesis that was not supported was the relationship between dynamic interaction and intention. This suggests that the diagnosticity and confidence resulting from the dynamic interaction serve to increase intention, but not the dynamic interaction itself.

This study answered some important questions but raised some interesting ones as well. To date, only one other study is known to have evaluated dynamic interaction. Beemer and Gregg developed the measurement scale for dynamic interaction and then evaluated its nomological validity by testing hypothesized relationships between perceived usefulness, perceived reliability, and intention. There are two significant limitations to the measurement scale of Beemer and Gregg, both of which are addressed by this study. The first limitation is that the decision domain used was not a business domain. This study showed that dynamic interaction is relevant for (eCommerce) business decisions. The second limitation is that only three of the potential constructs in dynamic interaction's nomological net were evaluated. This study expanded dynamic interaction's nomological net by evaluating consequential constructs from psychology (diagnosticity), decision science (confidence), and IS literature (intention).

Both Beemer and Gregg and this study found intention to be a significant indirect consequential construct of dynamic interaction. Future research could benefit from combining the research models from these two studies to evaluate the simultaneous relationships between dynamic interaction → perceived reliability, perceived usefulness, and diagnosticity → intention.

From an academic perspective, this study provides several significant contributions. To our knowledge, two of the five hypothesized relationships had never been evaluated before: dynamic interaction → diagnosticity and dynamic interaction → intention. Another important contribution of this study was to further evaluate the role that dynamic interaction plays in decision support tools. Dynamic interaction plays a significant role in explaining variation in diagnosticity, confidence, and intention, which suggests that it may be an important IS construct with implications in other system domains.

To date, the majority of mashup literature has been practitioner oriented and has focused on extending the capabilities and functionality of this new technology. However, the current body of mashup literature lacks insight into the underlying cognitive constructs that affect how the user assimilates and evaluates the information provided by the tool. The results of this study provide an initial insight into these underlying cognitive factors and suggest that the incorporation of an iterative use case (via dynamic interaction) indirectly fosters confidence and intention and improves decision quality. This provides insight for sellers on how to create mashup tools that are more likely to be used, in that the relationship between confidence and intention was supported. In the current eCommerce advertising model, website use can generate revenue even if no purchase is actually made when third-party sponsored links are deployed.

One limitation of this study was the assumption that the undergraduate and graduate student subjects had familiarity with the product domain of laptop computers. If a substantial number of the student subjects were not familiar with laptop computers, this could introduce underlying factors influencing confidence that were not controlled for in this study. For example, Herr et al. and Park and Lee found that customers with low product knowledge may be more inclined to use a comparison tool. However, the survey instrument contained a text box for comments or questions, and none of the respondents asked for clarification on the product domain, which suggests that this limitation did not have a substantial impact on the results of the study. Future research may overcome this limitation by examining dynamic interaction and mashups in a real-world setting.

A second limitation of the study is that users' perceptions of dynamic interaction were not corroborated with evidence from how they actually used the mashup tool. The purpose of this study was to determine how user perceptions of dynamic interaction influenced their perceptions of how helpful a mashup tool was in understanding and evaluating the products listed on the website and, in turn, how this influenced their confidence in their decisions and their intention to buy. As such, it was decided to use subjective measures to evaluate user perceptions of the dynamic interaction capabilities of the two mashup tools they were using. However, it is also possible to evaluate how users actually interact with a mashup tool to see if their perceptions of dynamic interaction match the way they actually used the mashup tool. Future studies could have subjects use the mashup tool in a laboratory setting where their actual inclusion of data, incremental solutions, and iterative decision making could be observed and analyzed in comparison to their perceptions of the dynamic interaction capabilities of the mashup tools.
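The calibration analysis above relates self-reported confidence to the objective quality of the chosen laptop; the study itself fit this relationship with Visual PLS. As a rough stand-in, a Pearson correlation over per-respondent confidence and quality scores captures the same intuition — the function names, the sample data, and the 0.3 cutoff below are illustrative assumptions, not values from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def calibration_check(confidence, quality, threshold=0.3):
    """Well-calibrated respondents report confidence that tracks the
    objective quality of their decision. The 0.3 threshold is an
    arbitrary illustration, not a criterion used in the study."""
    r = pearson_r(confidence, quality)
    return r, r >= threshold
```

For example, confidence ratings of [2, 3, 4, 5, 5] paired with quality scores of [0.2, 0.35, 0.5, 0.7, 0.9] yield a strongly positive correlation, so the group would be flagged as well calibrated; a near-zero or negative correlation would indicate over- or underconfidence somewhere in the sample.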
The post hoc analysis in this study observed a significant relationship between confidence and decision quality. This raises some interesting questions that could benefit from future research. It would be interesting to evaluate the role of dynamic interaction in the decision process from an objective perspective while observing the direct influences on decision quality.

VIII. CONCLUSION

Over the past few years, as eCommerce has grown into an industry of over $100 billion, a phenomenon described as Web 2.0 has emerged. Web 2.0 is a new trend for Web applications that emphasizes services, participation, scalability, remixability, and collective intelligence. One important Web 2.0 technology being used in the eCommerce domain is mashups, and thus, it is important that researchers understand the factors that impact the use of these systems. The majority of mashup literature to date is practitioner oriented, which further motivates the need for the academic evaluation of these tools that this study provides.

This study evaluated the consumer's use of mashups in an eCommerce decision domain. Existing measurement scales for dynamic interaction, diagnosticity, confidence, and intention were used to evaluate the role that dynamic interaction plays in the decision-making process. The measurement scale for dynamic interaction was then applied to the domain of eCommerce mashups, and the results of this study provide strong support for the development of inclusive, incremental, and iterative functionality in eCommerce decision support tools. The empirical evaluations conducted in this paper revealed that, for eCommerce decision support, dynamic interaction has a significant impact on diagnosticity, confidence, and intention. A post hoc decision quality analysis was performed to detect whether over- or underconfidence was prevalent in the decision domain when using the mashup tools. The results were consistent with the evaluation of the hypothesis model, in that as dynamic interaction functionality increased, so did decision quality. As such, this study also provides evidence that dynamic interaction is a legitimate IS construct that can be used by future researchers working on unstructured decision support domains.

REFERENCES

[1] R. Agarwal and E. Karahanna, "Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage," MIS Quart., vol. 24, no. 4, pp. 665–695, Dec. 2000.
[2] M. Albinola, L. Baresi, M. Carcano, and S. Guinea, "Mashlight: A lightweight mashup framework for everyone," in Proc. Int. World Wide Web Conf., Madrid, Spain, 2009.
[3] V. Arnold, N. Clark, P. A. Collier, S. A. Leech, and S. G. Sutton, "The differential use and effect of knowledge-based system explanations in novice and expert judgment decisions," MIS Quart., vol. 30, no. 1, pp. 79–97, Mar. 2006.
[4] P. S. Aulakh and E. F. Gencturk, "International principal–agent relationships—Control, governance, and performance," Ind. Market. Manage., vol. 29, no. 6, pp. 521–538, Nov. 2000.
[5] J. V. Baranski and W. M. Petrusic, "The calibration and resolution of confidence in perceptual judgments," Perception Psychophys., vol. 55, no. 4, pp. 412–428, Apr. 1994.
[6] B. A. Beemer, "Dynamic interaction: A measurement development and empirical evaluation of knowledge based systems and Web 2.0 decision support mashups," Ph.D. dissertation, Univ. Colorado, Denver, CO, May 2010.
[7] B. A. Beemer and D. G. Gregg, "Advisory systems to support decision making," in International Handbook on Decision Support Systems, 2008.
[8] B. A. Beemer and D. G. Gregg, "Mashups: A literature review and classification framework," Future Internet J., vol. 1, no. 1, pp. 59–87, Dec. 2009.
[9] B. A. Beemer and D. G. Gregg, "Dynamic interaction in knowledge based systems: An exploratory investigation and empirical evaluation," Decision Support Syst., vol. 49, no. 4, pp. 386–395, Nov. 2010.
[10] A. Bhattacherjee, "Individual trust in online firms: Scale development and initial test," J. Manage. Inf. Syst., vol. 19, no. 1, pp. 211–241, 2002.
[11] P. F. Bone, "Word-of-mouth effects on short-term and long-term product judgments," J. Bus. Res., vol. 32, no. 3, pp. 213–223, Mar. 1995.
[12] N. Brewer, A. Keast, and A. Rishworth, "The confidence–accuracy relationship in eyewitness identification: The effects of reflection and disconfirmation on correlation and confidence," J. Exp. Psychol. Appl., vol. 8, no. 1, pp. 44–56, Mar. 2002.
[13] O. Brim, G. C. David, C. Glass, D. E. Lavin, and N. Goodman, Personality and Decision Processes. Stanford, CA: Stanford Univ. Press, 1962.
[14] W. Chin, "Partial least squares for researchers: An overview and presentation of recent advances using the PLS approach," in Proc. Int. Conf. Inf. Syst., Brisbane, Australia, 2000, lecture slides.
[15] W. Chin and P. Newsted, "Structural equation modeling analysis with small samples using partial least squares," in Statistical Strategies for Small Sample Research. Thousand Oaks, CA: Sage, 1999, pp. 307–341.
[16] W. Chin, B. Marcolin, and P. Newsted, "A partial least squares latent variable modeling approach for measuring interaction effects: Results from a Monte Carlo simulation study and voice mail emotion/adoption study," in Proc. Int. Conf. Inf. Syst., Cleveland, OH, 1996.
[17] F. D. Davis, "Perceived usefulness, perceived ease of use, and user acceptance of information technology," MIS Quart., vol. 13, no. 3, pp. 319–340, Sep. 1989.
[18] R. M. Dawes, "Confidence in intellectual vs. confidence in perceptual judgments," in Similarity and Choice: Papers in Honor of Clyde Coombs. Bern, Switzerland: Hans Huber, 1980, pp. 327–345.
[19] R. Ennals and M. Garofalakis, "Mashmaker: Mashups for the masses," in Proc. Int. Conf. Manage. Data, 2007, pp. 1116–1118.
[20] R. Ennals and D. Gay, "User-friendly functional programming for mashups," in Proc. Int. Conf. Funct. Program., 2007, pp. 223–234.
[21] D. C. Fain and J. O. Pedersen, "Sponsored search: A brief history," in Proc. 2nd Workshop Sponsored Search Auctions, Ann Arbor, MI, 2006.
[22] J. M. Feldman and J. G. Lynch, "Self-generated validity and other effects of measurement on belief, attitude, intention, and behavior," J. Appl. Psychol., vol. 73, no. 3, pp. 421–435, 1988.
[23] G. Forslund, "Toward cooperative advice-giving systems: A case study in knowledge based decision support," IEEE Expert, vol. 10, no. 4, pp. 56–62, Aug. 1995.
[24] V. Furtado, Developing Interaction Capabilities in Knowledge-Based Systems via Design Patterns, 2004.
[25] D. Gefen, E. Karahanna, and D. W. Straub, "Trust and TAM in online shopping: An integrated model," MIS Quart., vol. 27, no. 1, pp. 51–90, Mar. 2003.
[26] D. Gefen, V. Rao, and N. Tractinsky, "The conceptualization of trust, risk and their relationship in electronic commerce: The need for clarifications," in Proc. Hawaii Int. Conf. Syst. Sci., 2003, p. 10.
[27] M. M. Goode, "Predicting consumer satisfaction from CD players," J. Consum. Behav., vol. 1, no. 4, pp. 323–335, Jun. 2002.
[28] D. Gregg and S. Walczak, "Auction Advisor: Online auction recommendation and bidding decision support system," Decision Support Syst., vol. 41, no. 2, pp. 449–471, Jan. 2006.
[29] G. Häubl and V. Trifts, "Consumer decision making in online shopping environments: The effects of interactive decision aids," Market. Sci., vol. 19, no. 1, pp. 4–21, Jan. 2000.
[30] P. Herr, F. Kardes, and J. Kim, "Effects of word-of-mouth and product attribute information on persuasion: An accessibility–diagnosticity perspective," J. Consum. Res., vol. 17, no. 4, pp. 454–462, Mar. 1991.
[31] T. J. Hess, F. Tang, and J. D. Wells, "Confidence and confidence with online shopping," in Proc. 9th SIG IS Cognit. Res. Exchange Workshop, Saint Louis, MO, 2009.
[32] M. Heitmann, D. R. Lehmann, and A. Herrmann, "Choice goal attainment and decision and consumption satisfaction," J. Market. Res., vol. 44, no. 2, pp. 234–250, May 2007.
[33] T. Hornung, K. Simon, and G. Lausen, "Mashing up the deep Web," in Proc. Int. Conf. Web Inf. Syst. Technol., 2008, pp. 58–66.
[34] D. Huynh, R. Miller, and D. Karger, "Potluck: Data mash-up tool for casual users," in Proc. Int. Conf. World Wide Web, 2007, pp. 737–746.
[35] Z. Jiang and I. Benbasat, "Virtual product experience: Effects of visual and functional control on perceived diagnosticity and flow in electronic shopping," J. Manage. Inf. Syst., vol. 21, no. 3, pp. 111–147, 2004.
[36] J. Karimi, T. Somers, and A. Bhattacherjee, "The impact of ERP implementations on business process outcomes: A factor-based study," J. Manage. Inf. Syst., vol. 24, no. 1, pp. 101–134, 2007.
[37] G. M. Kasper, "A theory of decision support system design for user calibration," Inf. Syst. Res., vol. 7, no. 2, pp. 215–232, Jun. 1996.
[38] G. Kauffman, "A theory of symbolic representation in problem solving," J. Mental Imag., vol. 9, no. 2, pp. 51–69, 1985.
[39] D. Kempf and R. E. Smith, "Consumer processing of product trial and the influence of prior advertising: A structural modeling approach," J. Market. Res., vol. 35, no. 3, pp. 325–338, Aug. 1998.
[40] G. Keren, "Calibration and probability judgments: Conceptual and methodological issues," Acta Psychol., vol. 77, no. 3, pp. 217–273, Oct. 1991.
[41] B. Kidwell, D. Hardesty, and T. Childers, "Emotional calibration effects on consumer choice," J. Consum. Res., vol. 35, no. 4, pp. 611–621, Dec. 2008.
[42] C. N. Kim, K. H. Yang, and J. Kim, "Human decision-making behavior and modeling effects," Decision Support Syst., vol. 45, no. 3, pp. 517–527, Jun. 2008.
[43] A. Koriat and M. Goldsmith, "Monitoring and control processes in the strategic regulation of memory accuracy," Psychol. Rev., vol. 103, no. 3, pp. 490–517, Jul. 1996.
[44] A. Koschmider, V. Torres, and V. Pelechano, "Elucidating the mashup hype: Definitions, challenges, methodical guide and tools for mashups," in Proc. Int. World Wide Web Conf., New York, 2009.
[45] O. Kwon, "Multi-agent system approach to context-aware coordinated Web services under general market mechanism," Decision Support Syst., vol. 41, no. 2, pp. 380–399, Jan. 2006.
[46] R. J. Lewicki, D. J. McAllister, and R. J. Bies, "Trust and distrust: New relationships and realities," Acad. Manage. Rev., vol. 23, no. 3, pp. 438–458, 1998.
[47] F. L. Lewis, Applied Optimal Control and Estimation. Upper Saddle River, NJ: Prentice-Hall, 1992.
[48] H. C. Lau and W. T. Tsui, "An iterative heuristics expert system for enhancing consolidation shipment process in logistics operations," Intell. Inf. Process., vol. 228, pp. 279–289, 2006.
[49] M. Lin and S. Lee, "Incremental update on sequential patterns in large databases by implicit merging and efficient counting," Inf. Syst., vol. 29, no. 5, pp. 385–404, Jul. 2004.
[50] J. Lynch, H. Marmorstein, and M. Weigold, "Choices from sets including remembered brands: Use of recalled attributes and prior overall evaluations," J. Consum. Res., vol. 15, no. 2, pp. 169–184, Sep. 1988.
[51] K. Matzler, A. Wurtele, and B. Renzl, "Dimensions of price satisfaction: A study in the retail banking industry," Int. J. Bank Market., vol. 24, no. 4, pp. 216–231, 2006.
[52] D. Merrill, "Mashups: The new breed of web app," IBM Web Architecture Technical Library, 2006.
[53] G. Menon, P. Raghubir, and N. Schwarz, "Behavioral frequency judgments: An accessibility–diagnosticity framework," J. Consum. Res., vol. 22, no. 2, pp. 212–228, Sep. 1995.
[54] H. Mintzberg, D. Raisinghani, and A. Theoret, "The structure of 'unstructured' decision processes," Admin. Sci. Quart., vol. 21, no. 2, pp. 246–275, Jun. 1976.
[55] S. Murugesan, "Understanding Web 2.0," IT Prof., vol. 9, no. 4, pp. 34–41, Jul./Aug. 2007.
[56] T. Nestler, "Towards a mashup-driven end-user programming of SOA-based applications," in Proc. Int. Conf. Inf. Integr. Web-Based Appl. Serv., 2008, pp. 551–554.
[57] E. Obadia, Web 2.0 Marketing, 2007. [Online]. Available: http://marketing20.blogspot.com/2007/01/eCommerce-revenue-over-100-billion.html
[58] T. O'Reilly, What Is Web 2.0, 2005. [Online]. Available: http://oreilly.com/web2/archive/what-is-web-20.html
[59] T. O'Reilly and J. Musser, "Web 2.0 principles and best practices," O'Reilly Radar, 2006.
[60] G. Pallier, "The role of individual differences in the accuracy of confidence judgments," J. Gen. Psychol., vol. 129, no. 3, Jul. 2002.
[61] C. Park and T. Lee, "Information direction, website reputation and eWOM effect: A moderating role of product type," J. Bus. Res., vol. 62, no. 1, pp. 61–67, Jan. 2009.
[62] N. Pollock, "Knowledge management: Next step to competitive advantage—Organizational excellence," Program Manager, 2001.
[63] P. M. Podsakoff, S. B. MacKenzie, and J. Y. Lee, "Common method biases in behavioral research: A critical review of the literature and recommended remedies," J. Appl. Psychol., vol. 88, no. 5, pp. 879–903, Oct. 2003.
[64] P. M. Podsakoff and D. W. Organ, "Self-reports in organizational research: Problems and prospects," J. Manage., vol. 12, pp. 69–82, 1986.
[65] M. Raudsepp, Ideal Feedback Loop, 2007. [Online]. Available: http://en.wikipedia.org/wiki/File:Ideal_feedback_model.svg#filelinks
[66] J. Russo and P. Schoemaker, Confident Decision Making. London, U.K.: Piatkus, 1992.
[67] M. Sabbouh, J. Higginson, S. Semy, and D. Gagne, "Mashup scripting language," in Proc. Int. World Wide Web Conf., 2007, pp. 1305–1306.
[68] R. Shiller, Irrational Exuberance. Princeton, NJ: Princeton Univ. Press, 2000.
[69] R. E. Smith, "Integrating information from advertising and trial: Processes and effects on consumer response to product information," J. Market. Res., vol. 30, no. 2, pp. 204–219, May 1993.
[70] P. W. Smith, R. A. Feinberg, and D. J. Burns, "An examination of classical conditioning principles in an ecologically valid advertising context," J. Market. Theory Pract., vol. 6, no. 1, pp. 63–72, 1998.
[71] H. Sneed, "Integrating legacy software into a service oriented architecture," in Proc. Conf. Softw. Maintenance Reeng., 2006, pp. 3–14.
[72] E. D. Sontag, Mathematical Control Theory. New York: Springer-Verlag, 1998.
[73] J. Tatemura, A. Sawires, O. Po, S. Chen, K. Candan, D. Argrawal, and M. Goveas, "Mashup feeds: Continuous queries over web services," in Proc. Int. Conf. Manage. Data, 2007, pp. 1128–1130.
[74] R. Tuchinda, P. Szekely, and C. A. Knoblock, "Building mashups by example," in Proc. Int. Conf. Intell. User Interfaces, 2008, pp. 139–148.
[75] A. Vance, C. Elie-Dit-Cosaque, and D. Straub, "Examining trust in information technology artifacts: The effects of system quality and culture," J. Manage. Inf. Syst., vol. 24, no. 4, pp. 73–100, 2008.
[76] A. Vancea, M. Grossniklaus, and M. Norrie, "Database-driven mashups," in Proc. Int. Conf. Web Eng., 2008, pp. 162–174.
[77] S. Walczak, D. L. Kellog, and D. G. Gregg, "An innovative service to support decision-making in multi-criteria environments," Int. J. Inf. Syst. Serv. Sect., vol. 2, no. 4, pp. 39–56, Oct. 2010.
[78] G. Wang, S. Yang, and Y. Han, "Mashroom: End-user mashup programming using nested tables," in Proc. Int. World Wide Web Conf., 2009, pp. 861–870.
[79] E. Witte, "Field research on complex decision-making processes—The phase theorem," Int. Studies Manage. Org., vol. 2, no. 2, pp. 156–182, 1972.
[80] J. Wong and J. Hong, "Making mashups with Marmite: Towards end-user programming for the Web," in Proc. Human Factors Comput. Syst. Conf., 2007, pp. 1435–1444.
[81] L. Xiong and L. Liu, "A reputation-based trust model for peer-to-peer eCommerce communities," in Proc. ACM Conf. Elect. Commerce, 2003, pp. 228–229.
[82] J. F. Yates, Judgment and Decision Making. Englewood Cliffs, NJ: Prentice-Hall, 1990.

Brandon A. Beemer received the M.S. and Ph.D. degrees in computer science and information systems from the University of Colorado, Denver. He is an Enterprise Technical Engineer with McKesson Provider Technologies, San Francisco, CA. His current research is focused on decision support in unstructured domains through dynamic interaction. His work has been published in journals including Future Internet and Decision Support Systems.

Dawn G. Gregg received the B.S. degree in mechanical engineering from the University of California, Irvine, the M.B.A. degree from Arizona State University, Phoenix, and the M.S. degree in information management and the Ph.D. degree in computer information systems from Arizona State University, Tempe. She is an Associate Professor of information systems and entrepreneurship with the University of Colorado, Denver. Her current research seeks to improve the quality and usability of Web-based information. Her work has been published in journals including MIS Quarterly, International Journal of Electronic Commerce, IEEE Transactions on Systems, Man, and Cybernetics, Communications of the ACM, and Decision Support Systems.