Iannacci Cornford BAM_2017
1. On the quest for multi-methods in IS
evaluation: A Qualitative
Comparative Analysis
By Federico Iannacci & Tony Cornford
2. Outline
• Theoretical background: QCA
• Approaches to IS evaluation
• Focal theory: DeLone & McLean model
• Methodology
• Analysis & findings
• Implications
3. QCA vs. Replication
• Both are case-oriented approaches
• Replication either confirms or disconfirms findings across multiple cases
• QCA uses counterfactual thinking to simplify complex solutions in a theoretically-guided manner
4. QCA vs. Replication (2)
• C*D + A*B + aB → O (empirical findings)
• C*D + B → O (simplified solution, obtained via a remainder, i.e., a conjecture)
• O = outcome of interest
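The simplification on this slide can be checked by brute force over all truth assignments; a minimal sketch in Python, where the two functions simply transcribe the slide's Boolean recipes (upper case = presence, lower case = absence):

```python
from itertools import product

# Complex solution from the slide: C*D + A*B + aB -> O
def o_complex(a, b, c, d):
    return (c and d) or (a and b) or ((not a) and b)

# Simplified solution: C*D + B -> O
def o_simple(a, b, c, d):
    return (c and d) or b

# The two recipes agree on every logically possible configuration,
# since A*B + aB reduces to B
for row in product([False, True], repeat=4):
    assert o_complex(*row) == o_simple(*row)
print("C*D + A*B + aB is equivalent to C*D + B")
```

In actual QCA the payoff of counterfactual thinking is that remainder rows, for which no case exists, may be conjectured to produce the outcome, licensing such a reduction even when not every configuration has been observed.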
5. QCA vs. Variance Theories
• Variance theories: the cause (IV) is necessary and sufficient for the outcome (DV)
• QCA: INUS causation (a cause is an Insufficient but Necessary part of a more complex recipe that is itself Unnecessary but Sufficient for the outcome)
• QCA revolves around multiple (many recipes/paths), conjunctural (each recipe is a combination of conditions) causation
6. Approaches to IS evaluation
• Evaluation as a judgment of the worth of an IT system or project
• Variance approaches: experimental and quasi-experimental designs striving for universality and, therefore, generalisability
• Contextualist (or processual) approaches: single or multiple case studies within or across settings
7. QCA as an evaluation approach
• It extends the logic of variance theories: the social and the technical react to one another in a conjunctural (or systemic) fashion (complex interaction effects)
• It blends with process-tracing techniques: it enables the study of processes in context (by returning to individual cases)
8. Focal Theory
• We used a simplified version of D&M because of our focus on organisational actors (e.g., Training Organisations, Beneficiary Organisations, Managing Authorities, etc.)
10. Methodology
• Sequential research design based on within-case analysis (inductive coding of data) and cross-case analysis (QCA) of seven cases (i.e., Austria, England, Flanders, France, Germany, Greece & Hungary)
• Inductive coding produced both the causal variables (or conditions) and the outcome through an iterative dialogue between the D&M model and the data
11. Methodology (2)
QCA deployed in four steps:
• 1) calibration (scoring) of the outcome and causal conditions to construct a truth table
• 2) determination of outcome values based on a frequency threshold of 1 case (rows with consistency scores above 0.90 were assigned an outcome value of 1, and 0 otherwise)
• 3) counterfactual analysis based on "what if" thought experiments
• 4) interpretation of findings by returning to the cases to trace causal processes and unravel causal (interaction) mechanisms
12. Truth-Table Analysis
Truth table for Positive Impact:

IQ  SQ  Cases with         Outcome code         Consistency       PRI                Product
        membership > 0.50  (consistency-based)
0   0   2                  0                    2/3.5    = 0.57   0/1.5     = 0.00   0.00
0   1   1                  1                    2.5/2.75 = 0.91   0.50/0.75 = 0.67   0.61
1   0   0                  Remainder            2/2.25   = 0.89   0.25/0.50 = 0.50   0.44
1   1   0                  Remainder            2/2      = 1.00   0.25/0.25 = 1.00   1.00

Consistency(Xi ≤ Yi) = Σ[min(Xi, Yi)] / Σ(Xi)
PRI(Xi ≤ Yi) = {Σ[min(Xi, Yi)] - Σ[min(Xi, Yi, yi)]} / {Σ(Xi) - Σ[min(Xi, Yi, yi)]}
Product = Consistency × PRI
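The consistency and PRI formulas from the table can be sketched directly in Python; the membership vectors X and Y below are hypothetical illustrations, not the study's data:

```python
def consistency(X, Y):
    """Consistency(Xi <= Yi) = sum(min(Xi, Yi)) / sum(Xi)."""
    return sum(min(x, y) for x, y in zip(X, Y)) / sum(X)

def pri(X, Y):
    """PRI = [sum min(Xi,Yi) - sum min(Xi,Yi,yi)] / [sum Xi - sum min(Xi,Yi,yi)],
    where yi = 1 - Yi is membership in the negated outcome."""
    overlap = sum(min(x, y) for x, y in zip(X, Y))
    both = sum(min(x, y, 1 - y) for x, y in zip(X, Y))
    return (overlap - both) / (sum(X) - both)

# Hypothetical memberships in a truth-table row (X) and the outcome (Y)
X = [0.75, 0.49, 0.25]
Y = [1.00, 0.25, 0.49]
print(round(consistency(X, Y), 2), round(pri(X, Y), 2))
```

Rows whose consistency exceeds the 0.90 threshold (as in step 2 of the methodology) would be coded 1 on the outcome.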
13. Truth-Table Analysis (2)
• iq*SQ + SQ*IQ + IQ*sq → Impact
• SQ (iq + IQ) + IQ (SQ + sq) → Impact
• SQ + IQ → Impact
• Incorporation of remainders produces a simplified solution (based on counterfactual thinking)
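The reduction on this slide can be verified exhaustively (upper case = presence, lower case = absence, as in the editor's notes):

```python
from itertools import product

# Complex solution: iq*SQ + SQ*IQ + IQ*sq -> Impact
def impact_complex(iq_present, sq_present):
    return (((not iq_present) and sq_present)
            or (sq_present and iq_present)
            or (iq_present and (not sq_present)))

# Parsimonious solution: SQ + IQ -> Impact
def impact_simple(iq_present, sq_present):
    return sq_present or iq_present

# Both solutions agree on all four configurations of the truth table
for iq_p, sq_p in product([False, True], repeat=2):
    assert impact_complex(iq_p, sq_p) == impact_simple(iq_p, sq_p)
print("iq*SQ + SQ*IQ + IQ*sq reduces to SQ + IQ")
```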
14. Returning to the cases
Country/
Region
INFORMATION QUALITY SYSTEM QUALITY
Comprehensiveness
(Ideal Type: No
indicators missing;
i.e., financial, output,
result and impact
indicators are
present)
Consistency
(Ideal type:
All indicators
have
consistent
definitions.
The system
of indicators
is based on a
concerted
approach to
monitoring
as set out by
the EU; i.e.,
consistently-
defined
input, output
and result
indicators)
Currency
(Ideal
Type: All
indicators
are
regularly
collected
and
updated;
i.e., all
indicators
are
recorded in
a regular
fashion and
updated in
accordance
with new
information
needs)
Compatibility
(Ideal Type: IT
systems are
fully
compatible;
i.e., able to
communicate
thanks to
transmissions
of structured
data, well laid
out data
standards and
interoperability
across
interfaces)
Reliability
(Ideal Type:
IT systems,
components
and/or
procedures
are fully
dependable;
e.g., no data
losses, no
systems
breakdowns,
seamless
functionality,
etc.)
Automation
(Ideal Type:
IT systems
are fully
automated;
i.e., only use
pre-
programmed
verification
of data
entries and
automated
matching of
data
records)
Austria 0.75 1.00 1.00 0.75 0.75 0.49
England 0.49 0.25 0.25 0.25 0.25 0.25
Flanders 0.75 0.25 0.75 0.49 0.75 0.49
France 0.49 0.49 0.49 0.49 0.25 0.25
Germany 0.75 0.25 0.75 0.75 1.00 0.75
Greece 0.75 0.25 0.75 0.49 0.25 0.49
Hungary 0.49 0.25 0.25 0.49 0.49 0.49
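Aggregating each country's membership in IQ and SQ by the rule of the minimum (see the editor's notes) can be sketched as follows, using the calibration scores from this slide:

```python
# Calibration scores from the slide: (Comprehensiveness, Consistency, Currency,
#                                     Compatibility, Reliability, Automation)
scores = {
    "Austria":  (0.75, 1.00, 1.00, 0.75, 0.75, 0.49),
    "England":  (0.49, 0.25, 0.25, 0.25, 0.25, 0.25),
    "Flanders": (0.75, 0.25, 0.75, 0.49, 0.75, 0.49),
    "France":   (0.49, 0.49, 0.49, 0.49, 0.25, 0.25),
    "Germany":  (0.75, 0.25, 0.75, 0.75, 1.00, 0.75),
    "Greece":   (0.75, 0.25, 0.75, 0.49, 0.25, 0.49),
    "Hungary":  (0.49, 0.25, 0.25, 0.49, 0.49, 0.49),
}

# Rule of the minimum: overall membership in IQ (first three conditions)
# and SQ (last three) is the lowest component score
iq = {country: min(s[:3]) for country, s in scores.items()}
sq = {country: min(s[3:]) for country, s in scores.items()}

# Best instances match the editor's notes: Austria for IQ, Germany for SQ
print(max(iq, key=iq.get), max(sq, key=sq.get))  # -> Austria Germany
```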
17. Implications
• Testing a version of the D&M model using QCA transcends the Qual-Quant divide
• Developing typological theories of monitoring-systems success: well-oiled vs. hybrid machines
• Causal mechanisms as complex interaction systems (producing holistic effects not inherent in their individual parts)
• Tracing processes (temporality is often overlooked in variance theories)
Editor's Notes
* = conjunction (AND); plus sign = OR; capital letters = presence; lower-case letters = absence
Remainder= Non-empirical finding (an empty truth table row)
Based on counterfactual analysis, we can ask what if questions: what if we had an instance (a case) where non-a * B were present? This would allow us to obtain a simplified (less complex) solution. Clearly, there will be what if questions that are theoretically corroborated (easy counterfactuals) but also difficult counterfactuals (non-backed up by relevant theories).
Unsurprisingly, variance approaches are mostly quantitative and processual approaches are mostly qualitative.
The D&M model: 1) fits our empirical setting, where monitoring data is transformed into validated information, thus producing efficient (or inefficient) outcomes (e.g., efficiency savings, satisfied stakeholders, etc.); 2) it combines both variance and process theories, thus fitting QCA as a multi-method approach; 3) it is yet to be tested using QCA (or non-variance approaches).
Monitoring systems are interaction technologies where Training Providers (or Project Managers) are expected to enter data in their IT systems. This data is then transmitted to Beneficiary Organisations for the purpose of validation (and cross-checking) of data entries. Next, Beneficiary Organisations relay such data to Managing/Paying Authorities (and their Monitoring Committees) who aggregate it and transmit it to the European Commission.
Truth-tables list all logically-possible combinations of causal conditions. Next, we assigned each case to rows in which its membership score exceeded 0.50, thus obtaining both non-remainder (populated) and remainder (empty) rows.
“What if” thought experiments ran like this: what would be the outcome of interest like if there were cases that populated the empty rows (or remainders)? Counterfactual analysis enabled us to include remainder rows, thus moving beyond empirical regularities.
IQ: captures the various facets of the data (i.e., comprehensiveness or scope, consistency and currency)
SQ: captures the more technical features of the monitoring system (e.g., compatibility, reliability or dependability and automation or pre-programming)
The outcome code is not the outcome of interest itself but indicates whether the row (and its associated cases) is a consistent subset of the outcome of interest (1 = yes; 0 = no).
The best instances of SQ and IQ are Germany and Austria respectively (based on aggregating individual scores according to the rule of the minimum)
Different dimensions of IQ and SQ were extrapolated through a dialogue between data and theory (D&M) and subsequently scored as depicted above.
Black circles indicate the presence of a condition. Large circles indicate core (necessary) conditions; small ones, peripheral (contingent) conditions. Blank spaces indicate ‘don’t care’, that is, situations where causal conditions may be either present or absent.
We conceptualise Germany as a well-oiled machine and Austria as a hybrid machine which is transitioning from manual to automated validations
These findings show that compatibility compensates for the lack of consistency in Germany (thanks to the exchange of structured data across the interface). Conversely, consistency compensates for the lack of automation in Austria (provided that there are consistent definitions of data, human beings can compensate for the lack of automation).
There are two distinct processes to the outcome of interest depending on whether validation is automated (Germany) or not (Austria).