A Coding Scheme For Analysing Problem-Solving Processes Of First-Year Engineering Students
This article was downloaded by: [Lisa Benson]
On: 19 March 2014, At: 08:07
Publisher: Taylor & Francis
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered
office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
European Journal of Engineering Education
Publication details, including instructions for authors and
subscription information:
http://www.tandfonline.com/loi/ceee20
A coding scheme for analysing
problem-solving processes of first-year
engineering students
Sarah J. Grigg & Lisa C. Benson
Engineering and Science Education, Clemson University, Clemson, SC, USA
Published online: 17 Mar 2014.
To cite this article: Sarah J. Grigg & Lisa C. Benson (2014): A coding scheme for analysing problem-
solving processes of first-year engineering students, European Journal of Engineering Education,
DOI: 10.1080/03043797.2014.895709
To link to this article: http://dx.doi.org/10.1080/03043797.2014.895709
PLEASE SCROLL DOWN FOR ARTICLE
Taylor & Francis makes every effort to ensure the accuracy of all the information (the
"Content") contained in the publications on our platform. However, Taylor & Francis,
our agents, and our licensors make no representations or warranties whatsoever as to
the accuracy, completeness, or suitability for any purpose of the Content. Any opinions
and views expressed in this publication are the opinions and views of the authors,
and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content
should not be relied upon and should be independently verified with primary sources
of information. Taylor and Francis shall not be liable for any losses, actions, claims,
proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or
howsoever caused arising directly or indirectly in connection with, in relation to or arising
out of the use of the Content.
This article may be used for research, teaching, and private study purposes. Any
substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing,
systematic supply, or distribution in any form to anyone is expressly forbidden. Terms &
Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions
The purpose of this research is to establish a standardised method for analysing student problem-
solving processes that can be used in the evaluation of problem-solving performances. This paper
details the methodology used to develop a structured scheme for coding the solution processes of
first-year students solving engineering problems independently and presents the coding scheme
as a credible instrument for assessing first-year engineering students' problem-solving skills using
a mixed model methodology (Tashakkori and Teddlie 1998).
2. Literature review
Much research has been conducted on problem-solving from a variety of perspectives. This review
of relevant literature describes various models that have been proposed to explain the problem-
solving process, factors that have been shown to impact problem-solving success in the educational
problem-solving context and analysis tools that have been used by other researchers in the study
of problem-solving.
2.1. Problem-solving models
Several theoretical frameworks describe problem-solving in contexts as diverse as insight in
creativity (Wallas 1926), heuristics in mathematics (Polya 1957) and gaming strategies
in chess (Simon and Simon 1978). Wallas' model describes creative problem-solving in four
stages: (1) preparation, (2) incubation, (3) inspiration and (4) verification (Wallas 1926). The first
widely accepted problem-solving methodology is credited to George Polya, who describes the
act of problem-solving in four steps: (1) understanding the problem, (2) devising a plan, (3) car-
rying out the plan and (4) looking back or reviewing (Polya 1957). However, like other heuristic
models, the implication that problem-solving is a linear process that can be memorised is flawed;
problem-solvers may iteratively transition back to previous steps (Wilson et al. 1993).
A more recent model depicts problem-solving as a seven-stage cycle that emphasises the itera-
tive nature of the cycle (Pretz et al. 2003). The stages include: (1) recognise/identify the problem,
(2) define and represent the problem mentally, (3) develop a solution strategy, (4) organise knowl-
edge about the problem, (5) allocate resources for solving the problem, (6) monitor progress
towards the goals and (7) evaluate the solution for accuracy. While this structure gives a more
complete view of the stages of problem-solving, in practice, there is much variability in how
people approach the problem and how well each of the stages is completed, if at all (Wilson
et al. 1993).
The stages listed above are based on Sternberg's Triarchic Theory of Human Intelligence that
breaks analytical intelligence, the form of intelligence utilised in problem-solving, into three com-
ponents: meta-components, performance components and knowledge acquisition components.
Meta-components (metacognition) are higher level executive functions that consist of planning,
monitoring and evaluating the problem-solving process. Performance components are the cogni-
tive processes that perform operations such as making calculations, comparing data or encoding
information that are used to generate new knowledge. Knowledge acquisition components are the
processes used to gain or store new knowledge (Sternberg 1985).
2.2. Factors influencing problem-solving success
Research in problem types and strategies has shown that characteristics of the problem such
as the complexity or structure of the problem (Jonassen and Hung 2008), the person such as
prior experiences (Kirton 2003) and reasoning skills (Jonassen and Hung 2008), the process such
as cognitive and metacognitive actions (Sternberg 1985; Greeno and Riley 1987) and strategies
(Nickerson 1994), and the environment such as the social context (Woods et al. 2000) all influence
problem-solving performance. In the search for behaviours that promote proficiency in problem-solving,
much research has focused on classifying variations in performance between expert and
novice problem-solvers (Hutchinson 1988) presumably because expert problem solutions exhibit
more successful application of problem-solving skills. Experts have been shown to be up to four
times faster at determining a solution than novices, even though experts also take time to pause
between retrieving equations or chunks of information (Chi et al. 1981) and spend more time than
novices in the problem representation phase of the problem-solving process (Pretz et al. 2003).
Experts also exhibit dramatically different approaches to solving the problem (Chi et al. 1981)
and organise their information differently than novices, displaying larger chunking of information
than novices (Larkin et al. 1980).
While expert strategies are linked with higher level performances, methods used by experts
to solve problems are not necessarily transferable to novices due to cognitive requirements of
these strategies. For example, cognitive overload has been noted as a factor in some of the major
hindrances to achieving problem-solving proficiency, including the inability to solve the prob-
lem without acquiring more information, lack of awareness of performance errors and resistance
to changing a selected method or representation (Wang and Chiew 2010). First-year engineer-
ing students, who have limited experience solving even well-defined problems, tend to access
information stored in memory in a piecemeal rather than systematic or organised fashion (Sweller
1988). Inefficient mapping of information requires more cognitive effort when attempting to iden-
tify relevant information needed to solve problems. This can force students into a state of cognitive
overload, which may limit their ability to complete the problem at all, but also reduces the amount
of cognitive capacity available for metacognitive tasks, such as assessing the reasonableness of
their answers (Wang and Chiew 2010; Sweller 1988). While strategies such as problem decom-
position or subgoaling can be used to alleviate some of the cognitive demand required to solve
a problem (Nickerson 1994), problem-solvers can become too reliant on strategies or use them
inappropriately, leading to a decrement in performance (Matlin 2001). Some of the best ways
of improving performance are promoting heightened awareness of performance errors,
identifying the source of those errors and personalising instruction to address problem areas
(Stigler and Hiebert 2009).
2.3. Analysis tools in prior research
While several independent coding schemes have been developed to analyse problem-solving,
most have addressed written work in conjunction with a think-aloud (Artzt and Armour-Thomas
1992; Litzinger et al. 2010; Wong et al. 2002; Weston et al. 2001). These coding schemes are
tailored to analyse the students' verbal expressions of their work, not necessarily the elements
explicitly contained in the artefact itself, i.e. the students' actual problem solution by which they
communicate their problem-solving competencies in the classroom. In addition, studies have
shown that thinking out loud while working on a problem can alter a student's thought process
(Ericsson and Simon 1998).
Coding schemes for assessing students' think-alouds are not readily applicable to the assess-
ment of the written problem solutions. Yet, written data are rich in many ways, and analysing
tasks explicitly enacted under authentic problem-solving conditions can reveal strategies or errors
that occur organically and that may impact problem-solving success. As an alternative to cur-
rent methods of assessing problem-solving processes, we utilise a method that has been utilised
in human factors research in the analysis of both physical and cognitive processes: task analysis.
Task analysis methods originated with the work of Gilbreth and Taylor (Gilbreth 1914, Taylor
1911), whose work-study approaches were traditionally used to evaluate and improve the effi-
ciency of workers (Stammers and Shepherd 1990). The definition of task analysis has been
broadened to include the qualitative assessment of humans interacting with a system or pro-
cess to understand how to better match the demands of the task to the capabilities of the human
(Wickens et al. 1998). The subcomponent tasks obtained from task analyses, referred to as ele-
ments, often serve as inputs for other forms of data analysis including error analysis and process
charting techniques (Stanton et al. 2005). Research in mathematics education describes the impor-
tance of error analysis as providing the opportunity to diagnose learning difficulties and develop
criteria for differentiating education, so that instructors can tailor education to individual students
to improve their performance and understanding (Radatz 1980). While there is no consensus on
what constitutes an element, typically they are defined as discrete, measurable and repeatable
units of activity, and it is at the user's discretion to assign elements that are appropriately sized
for their intended analysis (Stammers and Shepherd 1990).
3. Research purpose
This research establishes a standard method for analysing engineering student problem-solving
processes that can be used in the evaluation of problem-solving performances in a way that is
not context dependent. By analysing the processes and strategies used by first-year engineering
students during problem-solving attempts and measuring their association with errors committed,
researchers can identify which behaviours to emphasise and which to discourage to promote
problem-solving proficiency.
Analysing multiple studentsâ problem-solving attempts enables the identification of variations
in processes and the evaluation of problem-solving performances. However, to enable comparisons
of processes over time and across contexts, such as when measuring skills development, analysis
must be conducted using consistent methods and common, dependable instruments that are free
from problem-specific features. The development and testing of such an analysis method is an
arduous process, one that requires a critical perspective and iterative refinement. The development
process and coding scheme are presented in this manuscript.
4. Research methods
A task analysis approach was utilised to develop a taxonomy of component and subcomponent
tasks used in the completion of problem solutions, along with a taxonomy of errors and strategies
used by first-year engineering students. Together, these components comprise a coding scheme
that can be used to analyse the problem-solving process and assess performance. Measures of
rater agreement were used to assess dependability, generalizability and repeatability of the coding
scheme by testing multiple solutions from the same problem, different problems and with a new
rater, respectively. Credibility of the coding scheme was assessed by comparing summarised
coding data to results found in previous research on problem-solving tasks.
4.1. Educational environment and participants
Problem solutions were collected from first-year engineering students in an introductory engineer-
ing course taught in a studio setting using active learning techniques. Studio classrooms differ
from traditional lecture halls in that students sit with a small number of peers (6–8) at tables,
which facilitates group interaction unlike traditional lecture halls, which utilise rows of seats that
are quite often stationary. Learning environments that facilitate student interactions (i.e. studio
environments) are effective in achieving student-centred and inquiry-based learning, which are
approaches for building students' problem-solving and laboratory skills (Prince and Felder 2006).
An example of this type of learning environment is the Student-Centred Active Learning
Environments for Undergraduate Programs (SCALE-UP) approach, which has been applied in science,
engineering, technology and mathematics courses (Oliver-Hoyo and Beichner 2004). Research on
the outcomes of SCALE-UP in engineering and mathematics showed that students participating
in SCALE-UP classrooms exhibited higher levels of efficacy with the course material (Benson
et al. 2010). There was evidence of improved academic performance, conceptual understand-
ing and skills development in students participating in SCALE-UP classrooms compared with
traditional lecture-based instruction. Affective outcomes associated with SCALE-UP classrooms
include higher attendance, completion and retention rates as well as more favourable perceptions
of the course overall (Beichner et al. 2007; Benson et al. 2010). While students were regularly
encouraged to work with their peers on in-class activities, students completed the problems in
this study independently. Students in the course sections taught by a member of the research team
were invited to participate in this study.
Studentsâ solutions to problems were collected for three problems completed throughout the
semester, with data collected for four semesters. Perceived mental workload was assessed by
administering the NASA-TLX, a validated measure of task load (Farmer and Brownson 2003,
Hart 2006), which students completed after each problem for two of the
four semesters. Data from these surveys were used to establish the relative difficulty of the three
problems (Grigg and Benson, 2012a, 2012b, 2012c).
Data from one problem from the first semester were utilised for the initial development of the
coding scheme (n = 24). Subsequent tests for generalizability and refinements were made using
solutions from all three problems from the first semester (n = 68). Finally, the original data-
set from the first problem was used to check the credibility of the coding scheme (n = 24) by
comparing results obtained to those obtained in previous research efforts. Coding of subsequent
semesters' data demonstrated that the coding scheme was adequate for a larger data-set (n = 187).
4.2. Engineering problems
Problems were chosen based on characteristics that would ensure moderate problem difficulty for
students in a first-year engineering classroom, who are building their engineering knowledge base
and process skills. The chosen problems (shown in Figure A1 in Appendix 1) struck a balance of
being well structured enough to limit the cognitive load on the students but remain challenging
and provide multiple perspectives to solving the problem in accordance with the guidelines for
problem-based learning (Jonassen and Hung 2008). All problems (1) had a constrained context,
including pre-defined elements (problem inputs), (2) allowed the use of multiple procedures or
algorithms and (3) had a single correct answer (Jonassen 2004). Three problems were selected
that reflected different types of well-structured problems, as defined by Jonassen (2010). One
originated from the course textbook (Stephan et al. 2010) and two were developed by the project
team. All three problems were story problems, in which the student is presented with a narrative
that embeds the values needed to obtain the final answer (Jonassen 2010). Problem 1 involved a
multi-stage solar energy conversion system and required calculation of the efficiency of one stage
given input and output values for the other stages (Stephan et al. 2010). Problem 2 required students
to solve for values of components in a given electrical circuit. This problem, developed by the
project team, also contained a Rule-Using/Rule Induction portion (a problem having one correct
solution but multiple rules governing the process (Jonassen 2010)), where students were asked
to determine an equivalent circuit based on a set of given constraints. Problem 3 involved total
pressure calculations and required students to solve for values within the system, and conversion
between different unit systems. Average perceived mental workload scores on the NASA-TLX
survey were highest for the solar efficiency problem (47.04 out of 100), followed by the resistor
problem (43.54 out of 100) and the total pressure problem (41.93 out of 100) (Grigg and Benson
2012a, 2012b, 2012c). These three problems are included in Appendix 1.
4.3. Tablet PC technology and data collection software
To capture problem-solving processes for analysis, students wrote their solution attempts on a
Tablet PC using custom-designed software called MuseInk (Bowman and Benson 2011, Grigg
and Benson 2011). The software allows students to work problems on a Tablet PC, and stores the
digital Ink so that it can be replayed, annotated and exported as data to a database for analysis.
Students work through problems much as they would with pen and paper, with the added benefit
of having electronic access to their work, while researchers are provided with a comprehensive
expression of the problem-solving attempt from beginning to end including work that was erased.
The MuseInk software differs from other tablet-based instructional applications, such as DyKnow
and Classroom Presenter, which allow insertion of pen-based feedback into student work in
real time but cannot archive data in a way that is useful for research purposes (DyKnow 2012;
University of Washington). Other instructional applications, such as CogSketch (Forbus et al.
2008) and BeSocratic (Bryfczynski et al. 2012), can provide students with automated feedback
and guidance in an interactive mode; however, they do not have the capability of importing a
coding scheme or directly inserting codes into students' work.
4.4. Coding scheme development
The coding scheme development process progressed through 10 steps. These steps are illustrated
in Figure 1. The results section details the findings of the testing and refinement stage.
Figure 1. Depiction of the steps to develop a coding scheme for qualitative data analysis.
Figure 2. Summary of coding scheme elements (tasks, errors, strategies and accuracy).
5. Results and discussion
A robust coding scheme for the analysis of problem-solving skills of first-year engineering
students was developed, and the methodology used to develop and test these complex processes is
described in the following sections. Using this coding scheme, students' solutions were analysed
based on actions taken as a result of cognitive and metacognitive processes, errors, approach
strategies and solution accuracy.
5.1. Coding scheme
The primary result of this investigation is the coding scheme itself, which is summarised in
Figure 2. The following sections describe the major categories of codes and their significance
to the analysis of problem-solving performance. A more thorough description of each code is
included in Appendix 2.
5.1.1. Process element codes
Process elements are the backbone of the coding scheme and describe what the student is doing
as depicted in the problem solution. For codes related to process elements, the basic structure set
forth in the coding scheme by Wong et al. was used as an a priori framework. The framework was
originally used to code students' videotaped activity while studying mathematical materials during
a concurrent think-aloud to assess processing differences in students based on self-explanation
training versus a control group that did not receive self-explanation training (Wong et al. 2002).
Similar to Sternberg's (1985) theoretical framework, this coding scheme separated elements into
categories of knowledge access (KA), knowledge generation (KG) and self-management (SM).
• KA codes describe instances of the student retrieving knowledge not explicitly stated in the
problem statement.
• KG codes describe instances of transforming bits of information to form new connections or
relationships.
• SM codes describe tasks related to assessing the current state of problem-solving activity. SM
comprises the subcategories of planning, monitoring, revising and evaluating.
The Wong et al. coding scheme provided the structure needed to distinguish between instances of
retrieval of information from cognitive resources (KA), cognitive elements (KG) and metacog-
nitive elements (self-management). We segmented the self-management category according
to elements of planning, monitoring, evaluating and revising the solution in accordance with
Hartman's (2001) definition of the executive management aspects of metacognition.
5.1.2. Error codes
Error codes indicate instances where an incorrect element first occurs; errors always occur in
conjunction with a process element. For codes relating to errors, we utilised a structure derived
from error detection literature in accounting, where it is common to classify errors as conceptual
and mechanical errors (Owhoso et al. 2002, Ramsay 1994). A category for management errors to
capture errors was added in metacognitive processes.
• Conceptual errors describe instances of misunderstanding of the problem and/or underlying
fundamental concepts.
• Mechanical errors describe instances of operation errors, such as calculation errors.
• Management errors describe instances of problems managing information, including
identification of given information, transcribing values or erasing correct work.
With this error-coding structure, errors related to students' understanding of engineering concepts
and their computational skills can be identified separately, allowing researchers to pinpoint hin-
drances to learning. For example, the case in which a student does not use the efficiency equation
properly because s/he did not understand the concept of efficiency is a much different case than
if the student erred in using the equation because s/he has difficulty manipulating equations due
to weak algebra skills or inattention to details.
5.1.3. Strategy codes
Strategy codes are one-time-use codes that describe the overall approach taken to solve the
problem. For strategy codes, we utilised a subset of strategies that appeared most applicable
to story problems from the compilation described in "Thinking and Problem Solving" (Nickerson
1994). The subset was refined from the broader list of strategies identified by Nickerson over the
course of reviewing multiple problems completed by different students with different academic
backgrounds. Specifically, six strategies appeared most frequently within the sample of problem
solutions. These include:
(1) Problem Decomposition (segmentation) – which involves breaking down a complex problem
to ease analysis (Nickerson 1994).
(2) Clustering (chunking) – which involves grouping similar information into larger units (Chi
et al. 1981).
(3) Means-End Analysis – which involves beginning with the identification of a goal state and
the current state followed by the problem-solver making efforts to reduce the gap between
states (Nickerson 1994).
(4) Forward Chaining – which is similar to Means-End Analysis but involves a direct path between
the current and goal states (Nickerson 1994).
Some problems could also be classified according to an apparent lack of strategy.
(5) Plug-and-chug – which involves inserting given values into an equation and producing an
answer without necessarily understanding the reasons for doing so (Wankat 1999).
(6) Guess-and-check – which is a slightly more sophisticated approach where the
problem-solver checks that the values inserted into an equation yield the correct units or
checks for reasonableness of the solution (Wankat 1999).
Plug-and-chug and guess-and-check strategies are considered beginner-level strategies. Problem
decomposition and means-end analysis strategies are considered intermediate-level strategies
while clustering and forward chaining are considered advanced strategies. These strategies can
be used as a proxy measure of "level of expertise" as has been used in much problem-solving
research in the past.
5.1.4. Solution accuracy codes
Solution accuracy codes describe the accuracy of the final answer and are used once in each
solution. While standard answer states of "Correct" and "Incorrect" could be used to describe the
accuracy of the problem solution, two additional codes were included to describe solutions for
a more fine-grained analysis of accuracy. "Correct but Missing/Incorrect Units" is a subset of
"Correct" in which the numerical value of the solution is correct, but the units are missing or the
answer is given in a unit other than what was required, such as 120 seconds when the correct
answer should have been reported in minutes. "Incomplete" is a subset of "Incorrect" indicating
that no final answer was obtained.
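For readers who wish to tabulate coded solutions, the four code categories described above can be captured in a small data model. The following Python sketch is illustrative only: all class and field names are our own invention, not part of the article or the MuseInk software.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class ProcessCategory(Enum):
    """Process element codes (Section 5.1.1)."""
    KA = "knowledge access"      # retrieving knowledge not in the problem statement
    KG = "knowledge generation"  # forming new connections or relationships
    SM = "self-management"       # planning, monitoring, revising, evaluating


class ErrorCategory(Enum):
    """Error codes (Section 5.1.2); errors co-occur with a process element."""
    CONCEPTUAL = "conceptual"    # misunderstanding of underlying concepts
    MECHANICAL = "mechanical"    # operation errors such as miscalculation
    MANAGEMENT = "management"    # information-handling errors


class Strategy(Enum):
    """One-time-use strategy codes (Section 5.1.3)."""
    PROBLEM_DECOMPOSITION = "problem decomposition"
    CLUSTERING = "clustering"
    MEANS_END_ANALYSIS = "means-end analysis"
    FORWARD_CHAINING = "forward chaining"
    PLUG_AND_CHUG = "plug-and-chug"
    GUESS_AND_CHECK = "guess-and-check"


class Accuracy(Enum):
    """One-time-use solution accuracy codes (Section 5.1.4)."""
    CORRECT = "correct"
    CORRECT_WRONG_UNITS = "correct but missing/incorrect units"  # subset of correct
    INCORRECT = "incorrect"
    INCOMPLETE = "incomplete"                                    # subset of incorrect


@dataclass
class CodedElement:
    """A single coded event within a replayed solution."""
    timestamp_s: float                     # position in the solution timeline
    process: ProcessCategory
    error: Optional[ErrorCategory] = None  # present only when an error first occurs


@dataclass
class CodedSolution:
    """One student's complete coded problem solution."""
    elements: List[CodedElement]
    strategy: Strategy
    accuracy: Accuracy
```

A structure of this kind makes it straightforward to count elements per category or cross-tabulate errors against strategies when summarising many solutions.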
5.2. Assessment of dependability, generalizability and repeatability
Measures of rater agreement were used to assess dependability, generalizability and repeatability
of the coding scheme by testing multiple solutions from the same problem, different problems
and coding with a new rater, respectively. Coded solutions were assessed on three criteria: (1)
code agreement (i.e. Did all coders associate this particular code with the element?), (2) code
frequency (i.e. Did all coders code the same number of elements?) and (3) code timing (i.e. Were
elements coded by coders consistently at the same point within a solution?).
Inter-rater reliability was calculated based on overall agreement rates for all coded elements, as
shown in calculation (1), and adjusted overall agreement rate, which accounts for discrepancies
due to missing data by measuring agreement only for elements coded by all coders, as shown in
calculation (2) (Gwet 2010). "Agreement" was defined as an instance where a code was identified
by all coders. "Missing data" was defined as a code that one coder applied but another did not,
which may be considered an important omission, depending on how the analysis is used to answer
the research question.
Overall Agreement Rate = (number of ratings in agreement) / (total number of ratings),   (1)

Adjusted Overall Agreement Rate = (number of ratings in agreement) / (total number of ratings − number of ratings with missing data).   (2)
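Calculations (1) and (2) can be implemented directly. In the sketch below, each coder's output is assumed to be a mapping from element identifiers to the code applied; this data format is our assumption and is not specified in the article.

```python
from typing import Dict, Tuple


def agreement_rates(coder_a: Dict[int, str],
                    coder_b: Dict[int, str]) -> Tuple[float, float]:
    """Return (overall agreement, adjusted overall agreement) for two coders.

    An element appearing in only one coder's mapping counts as missing data.
    """
    all_elements = set(coder_a) | set(coder_b)
    jointly_coded = set(coder_a) & set(coder_b)
    in_agreement = sum(1 for e in jointly_coded if coder_a[e] == coder_b[e])
    missing = len(all_elements) - len(jointly_coded)

    overall = in_agreement / len(all_elements)               # calculation (1)
    adjusted = in_agreement / (len(all_elements) - missing)  # calculation (2)
    return overall, adjusted
```

Note that elements one coder identifies but the other misses lower the overall rate while leaving the adjusted rate unaffected, which is exactly the distinction drawn in the text.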
Overall agreement answers the question, "To what degree are the codes assigned to a solution the
same across raters?" This approach examines the degree to which coders can both identify and code
elements precisely. Overall agreement rates are used to determine the ability of coders to identify
elements and complete the coding process consistently. Adjusted overall agreement (dropping
missing data from the analysis) answers the question, "Given a set of identified elements, do
two coders code elements similarly?" Instances with missing data may be dropped to examine
the consistency of the application of the codes themselves, but this is not as good a measure of coder
performance. For example, an instance where a coder fails to identify an element and neglects
to apply a code signals a need for better coder training. In contrast, an instance of disagreement
about what to code a jointly identified element indicates the need for additional clarity around
the meaning and interpretation of the codes, which is of larger concern in the coding scheme
development process.
5.2.1. Assessing dependability of the coding protocol
Dependability of the coding protocol was evaluated using data from three solutions of the same
problem (the efficiency problem). Inter-rater reliability was calculated by examining overall agree-
ment. Initial results showed an overall agreement rate of 55%, with individual agreement rates of
77%, 55% and 42% for the three solutions. Reviewing the instances of disagreements was instru-
mental in identifying inconsistencies in applying codes and revealing missing problem features
that were not captured in the initial coding scheme.
Two more iterations of revisions were conducted before reaching a satisfactory level of inter-
rater reliability. Initial inter-rater reliability for the second round was "Moderate", with overall
agreement lower than the initial round of coding (for the three solutions, inter-rater agreement
was 73%, 40% and 25%, for an overall agreement rate of 41%). This round of coding revealed
that, with the addition of new codes and reconfiguring code categories, there was confusion with
the use of specific codes as well as the frequency of assigning codes. The coding protocol was
clarified and documented in the codebook. To improve the inter-rater reliability of coders, a round-
robin review cycle was implemented for the next round of coding, in which each coded problem
was reviewed by a second coder as an internal check of adherence to proper coding procedures.
The third round of coding showed great improvement of overall agreement rates with an "almost
perfect" rating. Across the three solutions, agreement rates were 100%, 96% and 85%, for an
overall agreement rate of 92%.
As is shown in the second round of assessment, not all refinements result in automatic improve-
ments (Table 1). This iterative cycle of testing was crucial in refining the coding scheme to
a quality that others can depend on for their own assessments.
5.2.2. Establishing generalizability of coding scheme
Generalizability was evaluated by analysing solutions from three problems with different
contexts and features. All three problems differed in context, but were similar
Table 1. Results of iterative refinement on overall coder agreement scores for the efficiency problem.

Round                                              Overall (%)   Solution 1 (%)   Solution 2 (%)   Solution 3 (%)
Round 1: Initial coding scheme (three solutions)   55            77               55               42
Round 2: New codes added (three solutions)         41            73               40               25
Round 3: Added review cycle (three solutions)      92            100              96               85
Table 2. Inter-rater reliability for the two original raters, calculated as overall agreement and
adjusted overall agreement (missing data removed), with Cohen's kappa coefficients.

           Retain missing codes             Drop missing codes
Problem    Overall agreement (%)    κ       Adjusted overall agreement (%)    κ
1          88                       0.86    98                                0.98
2          74                       0.72    90                                0.89
3          74                       0.72    97                                0.96
Average    78.7                     0.77    95.0                              0.94
Table 3. Inter-rater reliability for the two original coders plus the new coder, calculated as
overall agreement and adjusted overall agreement.

           Retain missing codes             Drop missing codes
Problem    Overall agreement (%)    κ       Adjusted overall agreement (%)    κ
1          70                       0.72    98                                0.97
2          62                       0.65    94                                0.94
3          38                       0.47    94                                0.93
Average    57                       0.61    95                                0.95
in terms of processes required to solve them. For example, problem-finding tasks were limited
since all problems were defined for the students. Also, no physical construction was required, as
all problems could be solved using mathematical means. This iteration of coding was important
to ensure that the coding scheme was robust enough to be used for a variety of problems within
engineering contexts.
Both overall agreement and adjusted overall agreement (removing instances of missing data)
were calculated, along with Cohen's kappa coefficients for both measures of agreement. Two of
the three original coders conducted this round of coding. The overall agreement rate was
'substantial' (0.769) and the adjusted overall agreement rate was 'almost perfect' (0.942)
(Gwet 2010), indicating that when an element was coded by both coders, they very likely
assigned the same code. A summary of results is given in Table 2.
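The kappa statistic reported alongside the agreement rates corrects observed agreement for agreement expected by chance. A minimal sketch of the standard two-coder computation, with invented code lists:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two aligned code lists: observed agreement
    corrected for the agreement expected by chance."""
    n = len(codes_a)
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: sum over codes of the product of each
    # coder's marginal proportions for that code.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical example: 3 of 4 elements agree, but both coders use
# similar code distributions, so some agreement is expected by chance.
kappa = cohens_kappa(["KA", "KA", "KG", "KG"],
                     ["KA", "KA", "KG", "SM"])
```

Because chance agreement is subtracted out, kappa sits slightly below the raw agreement rate, matching the pattern in Tables 2 and 3 where κ trails the corresponding percentage.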
5.2.3. Assessing repeatability of coding scheme with new rater
To assess whether the coding protocol was repeatable, another team member who had no previous
involvement in developing the coding scheme was brought in to code solutions. Inter-rater
agreement remained acceptable. Again, both overall agreement and adjusted overall agreement
(removing instances of missing data) were calculated, along with Cohen's kappa coefficients
for both measures. As given in Table 3, inter-rater reliability decreased with the addition of
the new coder, but remained 'substantial' for problems 1 and 2, and 'almost perfect' when
adjusted to remove data points with missing ratings. In problem 3, a sizeable portion of the
solution was not coded because the student iterated through the same element repeatedly;
these iterations were not captured by the new coder, resulting in only 'fair' agreement. For
the efficiency problem, from which the coding scheme was initially developed, the agreement
rate was 70% (98% adjusted). The agreement rate was 62% (94% adjusted) for the circuit
problem and 38% (94% adjusted) for the pressure problem, leading to an overall agreement
rate of 57% (95% adjusted). Overall, Cohen's kappa coefficients were 0.614 (0.948 adjusted),
a 'substantial' level of inter-rater reliability and a 'near perfect' level on adjusted scores. These
inter-rater reliability measures were encouraging, showing that the coding scheme is robust and
detailed enough to achieve high reliability between raters. By the end of scheme development
Table 4. Average number of codes by approach strategy.

Strategy group    Sample size   Avg codes   Time to completion (min)   Avg KA codes   Avg KG codes   Avg SM codes   Avg answer codes
Plug-and-chug     2             10          5.1                        1              3              2.5            1
Guess-and-check   3             33.3        20.7                       1.7            13.7           8              1
Segmentation      15            31.2        17.7                       3.4            8.3            9.1            1.6
Chunking          4             21.3        14.3                       2.5            5.8            5.8            1.5
Table 5. Average number of errors by expertise level (as indicated by strategy).

Strategy group    Sample size   Avg conceptual errors   Avg mechanical errors   Avg management errors   Probability of success
Plug-and-chug     2             1                       0                       1                       0
Guess-and-check   3             4                       2                       4.7                     0
Segmentation      15            5.3                     1                       4.8                     0.5
Chunking          4             3.5                     2                       3.3                     0.6
and training, coders were consistently assigning the same codes, though some confusion
remained for the new coder about when to assign a code (i.e. coding each instance of a task,
even when tasks were iteratively written and erased).
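The adjusted measure used above simply drops element pairs in which one coder assigned no code, as happened when the new coder missed iterated elements. A small sketch with hypothetical data in which the second coder skipped two elements:

```python
MISSING = None  # stands in for an element a coder left uncoded

def agreement_rates(codes_a, codes_b):
    """Return (overall, adjusted) agreement for two aligned code lists;
    adjusted drops any pair in which either coder's code is missing."""
    pairs = list(zip(codes_a, codes_b))
    overall = sum(a == b and a is not MISSING for a, b in pairs) / len(pairs)
    complete = [(a, b) for a, b in pairs if MISSING not in (a, b)]
    adjusted = sum(a == b for a, b in complete) / len(complete)
    return overall, adjusted

# Hypothetical: the new coder (coder_a) did not code two iterated elements.
coder_a = ["KA", "KG", "SM", MISSING, MISSING]
coder_b = ["KA", "KG", "SM", "KG", "SM"]
overall, adjusted = agreement_rates(coder_a, coder_b)
```

This mirrors the pattern in Table 3: missing codes drag the overall rate down sharply while the adjusted rate, computed only on elements both coders addressed, stays high.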
5.2.4. Establishing credibility of coding scheme
The data from the analysis of the first problem (the efficiency problem) were used to check that
the results obtained were in line with those obtained by others when examining problem solving
with respect to expertise (Chi et al. 1981). Tables 4 and 5 summarise the data used to make this
comparison.
Solutions were divided into four groups based on the approach strategy evidenced in each
solution, as a proxy for expertise level. (The remaining two strategies were not found in
this sample of solutions.) Results indicate that students who used a plug-and-chug strategy were
not successful, but had the fewest codes, the fewest errors and the shortest time to
completion. These results can be explained by limited awareness of performance problems. The
other two novice groups (guess-and-check and segmentation) mirrored results identified in
the previous literature (Chi et al. 1988) as characteristic of novice performance, including longer
time to completion, more errors and a lower probability of success than the more expert level
of performance (the chunking group). Our results indicated faster completion times for more
expert performance, though the difference between the expert and novice groups was more
moderate than the fourfold speed difference observed in the research by Chi et al. (Average
completion times for the 'novice' groups, guess-and-check and segmentation, were
20.72 minutes and 17.71 minutes, respectively, compared with 14.32 minutes for the 'expert'
performance group, chunking.) Our research supports the claim that novices commit more
errors: the guess-and-check and segmentation groups committed an average of 10.67 and 11.1
errors respectively, compared with an average of 8.83 errors for the chunking group. This
indicates that the coding scheme provides a reasonable assessment of problem-solving
performance, as indicated by the relative expertise of the students.
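Group-level summaries like those in Tables 4 and 5 reduce to grouping coded solutions by strategy and averaging. A sketch with invented per-solution records (the fields mirror the tables; the numbers do not reproduce the study's data):

```python
from collections import defaultdict

# Hypothetical per-solution records: strategy group, total errors
# committed, and whether the solution was successful.
records = [
    {"strategy": "chunking", "errors": 8, "success": True},
    {"strategy": "chunking", "errors": 10, "success": True},
    {"strategy": "guess-and-check", "errors": 12, "success": False},
    {"strategy": "guess-and-check", "errors": 9, "success": False},
]

# Bucket solutions by strategy group.
groups = defaultdict(list)
for r in records:
    groups[r["strategy"]].append(r)

# Average error count and success rate per strategy group.
summary = {
    strategy: {
        "avg_errors": sum(r["errors"] for r in rs) / len(rs),
        "p_success": sum(r["success"] for r in rs) / len(rs),
    }
    for strategy, rs in groups.items()
}
```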
5.3. Solution assessment
Students' solutions were analysed based on the actions they took as a result of their cognitive and
metacognitive processes, errors, approach strategies and solution accuracy. An example of
step-by-step coding of a student's solution to the efficiency problem is shown in Appendix 3.
Using this coding scheme to evaluate problem solutions enabled several different types of
analyses. Comparisons were made to identify key tasks associated with successful problem
solutions (Grigg and Benson 2012a, 2012b, 2012c), to identify tasks and errors associated with
extreme levels of perceived mental workload (Grigg and Benson 2012a, 2012b, 2012c), and to
evaluate the effects of prior academic experiences on process variations (Grigg and Benson
2012a, 2012b, 2012c). This research indicated that planning activities had positive benefits for
students' problem-solving success, while the use of visual representations was associated with
lower mental workload during problem-solving attempts. Conceptual errors had the highest
association with unsuccessful solutions; mechanical errors were also significantly associated
with unsuccessful solutions. Having pre-engineering experience did not have a significant
impact on successfully solving problems, yet having previously completed a calculus course
was significantly related to successful problem solving, even though none of the problems
required calculus to solve.
6. Implications for research and instruction
While the coding scheme was developed and tested using a set of well-defined story problems
typical of a first-year engineering course, it is likely transferable to the study of other types of problems
because of the general nature of the categories of processes, errors and strategies. Problem-solving
activities within various contexts require similar cognitive and metacognitive processes, evoke
similar types of errors and utilise similar approach strategies. However, the generalizability of the
coding scheme to problems beyond well-defined story problems within the first-year engineering
context is an area for future and ongoing research. Much work has gone into adapting critical ele-
ments of the coding scheme into an instructional tool that quantitatively assesses problem-solving
performance for both well-defined and more open-ended problems. The outcome of this research
is an assessment that supports teaching successful problem-solving strategies (Grigg et al. 2013).
The assessment tool maps the presented coding scheme to a problem-solving cycle (Pretz et al.
2003) and allows instructors to provide individualised feedback to students on their problem-
solving proficiency by summarising processes and errors committed as well as what stages of
the problem-solving cycle presented the biggest challenges. To date, this assessment tool has
been used to assess problem solutions in three different undergraduate engineering courses as
part of ongoing testing and refinement. The major advantage of the assessment tool over tradi-
tional grading methods is that it can provide personalised feedback to students about their level of
problem-solving proficiency as well as pinpoint skill deficiencies that need attention. It is believed
that this personalised feedback will improve studentsâ awareness of their cognitive processes and
their problem-solving performance. Because the criteria are consistent for each problem, student
performance could be tracked over time. This would be enhanced by a streamlined process for
digitising assessment data, which is currently under development. Once a large set of solution
assessments have been collected for a variety of problem types across several courses, regres-
sion models will be used to determine combinations of tasks that contribute to more successful
problem-solving performance.
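The planned regression modelling can be sketched in miniature. The example below fits a simple logistic regression by hand-rolled gradient ascent to invented task-count data; the feature choices, counts and outcomes are all hypothetical, and the authors' eventual models may differ.

```python
import math

# Toy design matrix of per-solution task counts:
# [planning codes, visual representations, conceptual errors].
# y marks successful solutions. All numbers are invented.
X = [[2, 1, 0], [0, 0, 3], [3, 2, 1], [1, 0, 4]]
y = [1, 0, 1, 0]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient ascent on the log-likelihood
# (no regularisation, fixed step size).
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
        w = [wj + 0.1 * (yi - p) * xj for wj, xj in zip(w, xi)]

def predict(xi):
    """Predicted probability that a solution with these counts succeeds."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
```

On data like this, the fitted weights assign positive weight to planning and visual-representation counts and negative weight to conceptual errors, which is the kind of task-combination signal such models would surface.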
Acknowledgements
The authors thank Michelle Cook, Catherine McGough, Jennifer Parham-Mocello, David Bowman and Roy Pargas for
their assistance with data collection and development of the coding scheme.
Funding
This work was supported through a grant from the National Science Foundation [EEC 0935163].
References
Artzt, Alice F., and Eleanor Armour-Thomas. 1992. "Development of a Cognitive-Metacognitive Framework for Protocol Analysis of Mathematical Problem Solving in Small Groups." Cognition and Instruction 9 (2): 137–175.
Beichner, R., J. Saul, D. Abbott, J. Morse, D. Deardorff, R. Allain, S. Bonham, M. Dancy, and J. Risley. 2007. "Student-Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) Project." In Research-Based Reform of University Physics Vol. 1, edited by E. F. Redish and P. J. Cooney, 1–42. College Park, MD: American Association of Physics Teachers. http://www.per-central.org/document/ServeFile.cfm?ID=4517
Benson, L. C., M. K. Orr, S. B. Biggers, W. F. Moss, M. W. Ohland, and S. D. Schiff. 2010. "Student-Centered Active, Cooperative Learning in Engineering." International Journal of Engineering Education 26 (5): 1097–1110.
Bowman, D., and L. Benson. 2011. "MuseInk: Seeing and Hearing a Freshman Engineering Student Think." Paper read at 2010 ASEE annual conference, Louisville, KY.
Bryfczynski, S., R. P. Pargas, M. Cooper, and M. Klymkowsky. 2012. "Analyzing and Visualizing Student Work with BeSocratic." Paper read at proceedings of the 50th annual southeast regional conference, ACM, Tuscaloosa, AL.
Chi, Michelene T., Paul J. Feltovich, and Robert Glaser. 1981. "Categorization and Representation of Physics Problems by Experts and Novices." Cognitive Science 5 (2): 121–152.
Chi, Michelene T., Robert Glaser, and Marshall J. Farr. 1988. The Nature of Expertise. Hillsdale, NJ: L. Erlbaum Associates.
DyKnow. "The DyKnow Advantage." Accessed December 17, 2012. http://www.dyknow.com.
Ericsson, K. Anders, and Herbert A. Simon. 1998. "How to Study Thinking in Everyday Life: Contrasting Think-Aloud Protocols with Descriptions and Explanations of Thinking." Mind, Culture, and Activity 5 (3): 178–186.
Farmer, Eric, and Adam Brownson. 2003. Review of Workload Measurement, Analysis and Interpretation Methods. Technical Report CARE-Integra-TRS-130-02-WP2, EUROCONTROL.
Forbus, K., J. Usher, A. Lovett, K. Lockwood, and J. Wetzel. 2008. "CogSketch: Open-domain Sketch Understanding for Cognitive Science Research and for Education." Paper read at proceedings of the fifth eurographics workshop on sketch-based interfaces and modeling, Annecy, France.
Gilbreth, Lillian. 1914. The Psychology of Management. New York, NY: Sturgis & Walton.
Greeno, James G., and Mary S. Riley. 1987. "Processes and Development of Understanding." In Metacognition, Motivation, and Understanding, edited by F. E. Weinert and R. H. Kluwe, 289–313. Hillsdale, NJ: Lawrence Erlbaum Associates.
Grigg, S., and L. Benson. 2011. "Work in Progress: Robust Engineering Problems for the Study of Problem Solving Strategies." Proceedings of the Frontiers in Education conference, Rapid City, SD.
Grigg, Sarah J., and Lisa C. Benson. 2012a. "Effects of Student Strategies on Successful Problem Solving." ASEE annual conference, San Antonio, TX.
Grigg, Sarah J., and Lisa C. Benson. 2012b. "How Does Academic Preparation Influence How Engineering Students Solve Problems?" Paper read at frontiers in education, Seattle, Washington.
Grigg, Sarah J., and Lisa C. Benson. 2012c. "Using the NASA-TLX to Assess First Year Engineering Problem Difficulty." Paper read at industrial and systems engineering research conference, Orlando, FL.
Grigg, S., J. Van Dyken, L. Benson, and B. Morkos. 2013. "Process Analysis as a Feedback Tool for Development of Engineering Problem Solving Skills." Paper read at 2013 ASEE annual conference, Atlanta, GA.
Gwet, Kilem L. 2010. Handbook of Inter-Rater Reliability: The Definitive Guide to Measuring the Extent of Agreement upon Multiple Raters. 2nd ed. Gaithersburg, MD: Advanced Analytics, LLC.
Hart, Sandra G. 2006. "NASA-Task Load Index (NASA-TLX); 20 Years Later." Paper read at human factors and ergonomics society annual meeting proceedings, San Francisco, CA.
Hartman, Hope J. 2001. "Developing Students' Metacognitive Knowledge and Skills." In Metacognition in Learning and Instruction: Theory, Research and Practice, edited by H. J. Hartman, 33–68. Dordrecht: Kluwer Academic Publishers.
Hutchinson, Sally A. 1988. "Education and Grounded Theory." In Qualitative Research in Education: Focus and Methods, edited by R. R. Sherman and R. B. Webb. New York: Falmer Press.
Jablokow, Kathryn W. 2007. "Engineers as Problem-Solving Leaders: Embracing the Humanities." IEEE Technology and Society Magazine 26 (4): 29–35.
Jonassen, David H. 2004. Learning to Solve Problems. San Francisco, CA: Pfeiffer.
Jonassen, David H. 2010. "Research Issues in Problem Solving." Paper read at 11th international conference on education research, Seoul, Korea.
Jonassen, David H., and Woei Hung. 2008. "All Problems are not Equal: Implications for Problem-Based Learning." The Interdisciplinary Journal of Problem-Based Learning 2 (2): 6–28.
Jonassen, David, Johannes Strobel, and Chwee Beng Lee. 2006. "Everyday Problem Solving in Engineering: Lessons for Engineering Educators." Journal of Engineering Education 95 (2): 139–151.
Kirton, M. J. 2003. Adaption-Innovation in the Context of Diversity and Change. Hove: Routledge.
Larkin, Jill, John McDermott, Dorothea P. Simon, and Herbert A. Simon. 1980. "Expert and Novice Performance in Solving Physics Problems." Science 208 (4450): 1335–1342.
Litzinger, Thomas A., Peggy Van Meter, Carla M. Firetto, Lucas J. Passmore, Christine B. Masters, Stephen R. Turns, Gary L. Gray, Francesco Costanzo, and Sarah E. Zappe. 2010. "A Cognitive Study of Problem Solving in Statics." Journal of Engineering Education 99 (4): 337–353.
Matlin, Margaret W., ed. 2001. Cognition. 5th ed. Hoboken, NJ: John Wiley & Sons.
National Academy of Engineering. 2004. The Engineer of 2020: Visions of Engineering in the New Century. Washington, DC: National Academies Press.
Nickerson, Raymond S. 1994. "The Teaching and Thinking of Problem Solving." In Thinking and Problem Solving, edited by R. J. Sternberg. San Diego, CA: Academic Press.
Oliver-Hoyo, M., and R. Beichner. 2004. "SCALE-UP: Bringing Inquiry-Guided Methods to Large Enrollment Courses." In Teaching and Learning through Inquiry: A Guidebook for Institutions and Instructors, edited by V. S. Lee, 51–70. Sterling, VA: Stylus Publishing.
Owhoso, Vincent E., William F. Messier, and John G. Lynch Jr. 2002. "Error Detection by Industry-Specialised Teams During Sequential Audit Review." Journal of Accounting Research 40 (3): 883–900.
Polya, George. 1957. How to Solve It. Garden City, NY: Doubleday.
Pretz, Jean E., Adam J. Naples, and Robert J. Sternberg. 2003. "Recognizing, Defining, and Representing Problems." In The Psychology of Problem Solving, edited by J. E. Davidson and R. J. Sternberg, 3–30. Cambridge, UK: Cambridge University Press.
Prince, M., and R. M. Felder. 2006. "Inductive Teaching and Learning Methods: Definitions, Comparisons, and Research Bases." Journal of Engineering Education 95 (2): 123–138.
Radatz, Hendrik. 1980. "Students' Errors in the Mathematical Learning Environment." For the Learning of Mathematics 1 (1): 16–20.
Ramsay, Robert J. 1994. "Senior/Manager Difference in Audit Workpaper Review Performance." Journal of Accounting Research 32 (1): 127–135.
Rugarcia, Armando, Richard M. Felder, Donald R. Woods, and James E. Stice. 2000. "The Future of Engineering Education I. A Vision for a New Century." Chemical Engineering Education 34 (1): 16–25.
Simon, Dorothea P., and Herbert A. Simon. 1978. "Individual Differences in Solving Physics Problems." In Children's Thinking: What Develops? edited by R. S. Siegler, 325–348. NJ: Lawrence Erlbaum.
Stammers, Robert B., and Andrew Shepherd. 1990. "Task Analysis." In Evaluation of Human Work, edited by J. R. Wilson and E. N. Corlett, 144–168. Philadelphia, PA: Taylor & Francis.
Stanton, Neville A., Paul M. Salmon, Guy H. Walker, Chris Baber, and Daniel P. Jenkins. 2005. Human Factors Methods: A Practical Guide for Engineering and Design. Burlington, VT: Ashgate.
Stephan, E. A., W. J. Park, B. L. Sill, D. R. Bowman, and M. H. Ohland. 2010. Thinking Like an Engineer: An Active Learning Approach. Upper Saddle River, NJ: Pearson Education.
Sternberg, Robert J. 1985. Beyond IQ: A Triarchic Theory of Human Intelligence. New York: Cambridge University Press.
Stigler, J. W., and J. Hiebert. 2009. The Teaching Gap: Best Ideas from the World's Teachers for Improving Education in the Classroom. New York, NY: Free Press.
Sweller, John. 1988. "Cognitive Load during Problem Solving: Effects on Learning." Cognitive Science 12 (2): 257–285.
Tashakkori, Abbas, and Charles Teddlie. 1998. Mixed Methodology. Vol. 46. Applied Social Research Methods Series. Thousand Oaks, CA: Sage Publications.
Taylor, F. W. 1911. The Principles of Scientific Management. New York, NY: Harper & Brothers.
University of Washington. "Classroom Presenter 3." Accessed December 17, 2012. http://classroompresenter.cs.washington.edu/.
Wallas, Graham. 1926. The Art of Thought. London: J. Cape.
Wang, Yingxu, and Vincent Chiew. 2010. "On the Cognitive Process of Human Problem Solving." Cognitive Systems Research 11 (1): 81–92.
Wankat, Phillip C. 1999. "Reflective Analysis of Student Learning in a Sophomore Engineering Course." Journal of Engineering Education 88 (2): 195–203.
Weston, Cynthia, Terry Gandell, Jacinthe Beauchamp, Lynn McAlpine, Carol Wiseman, and Cathy Beauchamp. 2001. "Analyzing Interview Data: The Development and Evolution of a Coding System." Qualitative Sociology 24 (3): 381–400.
Wickens, C. D., S. E. Gordon, and Y. Liu. 1998. An Introduction to Human Factors Engineering. New York, NY: Longman.
Wilson, James W., Maria L. Fernandez, and Nelda Hadaway. 1993. "Mathematical Problem Solving." Chapter 4 in Research Ideas for the Classroom: High School Mathematics, edited by P. S. Wilson. New York, NY: MacMillan. http://www.recsam.edu.my/Mathematical_Problem_Solving.pdf
Wong, Regina M. F., Michael J. Lawson, and John Keeves. 2002. "The Effects of Self-Explanation Training on Students' Problem Solving in High-School Mathematics." Learning and Instruction 12 (1): 233–262.
Woods, Donald R., Richard M. Felder, Armando Rugarcia, and James E. Stice. 2000. "The Future of Engineering Education III. Developing Critical Skills." Chemical Engineering Education 34 (2): 108–117.
About the authors
Sarah J. Grigg is a Lecturer in General Engineering at Clemson University. Her research focuses on process improvement
and error mitigation across various contexts, including engineering education, healthcare and transportation. She received
PhD, M.S. and B.S. degrees in Industrial Engineering and an MBA from Clemson University.
Lisa C. Benson is an Associate Professor of Engineering and Science Education at Clemson University, with a joint
appointment in Bioengineering. Her research focuses on how student motivation affects learning experiences. Her projects
involve assessing and studying the interaction of motivation, problem-solving strategies and knowledge transfer. Other
projects in the Benson group include utilising Tablet PCs to enhance and assess learning, implementing student-centred
active learning, and incorporating engineering into secondary science and mathematics classrooms. Her education includes
a B.S. in Bioengineering from the University of Vermont and M.S. and PhD in Bioengineering from Clemson University.
Appendix 1. Engineering problems under analysis.
Figure A1. (a) The efficiency problem (reprinted with permission from Pearson Education); (b) the resistor
problem and (c) the pressure problem, written by members of the research team.
Appendix 2. Description of coding scheme elements
Knowledge access
  Identify equation: Equation with variables, no values
  Implicit equation identification: No formal equation shown; values inserted initially
  Identified assumption: Explicit statement of assumption or self-imposed constraint
  Identify prior knowledge: Identifying outside knowledge to solve the problem
  Identify conversion factor: Explicit listing of conversion factor(s) used
  Use conversion factor: Ex: 1 ft = 12 in, so 4 ft => 48 in

Knowledge generation
  Draw a picture/diagram: Flow diagram, schematic, sketch, Venn diagram, etc.
  Make a table: Organising similar data in lists
  Relate variables: Assigning relationships in the system, showing connections, inserting known values in a diagram
  Manipulate equation: Solving an equation for another variable
  Derive units: Ex: 4 ft × 12 in/1 ft = 48 in
  Plug values in equation: Inserting given or derived values
  Document math: Documentation of mathematical calculations
  Solve intermediate value: Getting a sub-answer

Self-management
  Planning
    Restate problem: Summarising in phrases or sentences
    Identify known value: Defining variables by given values from the problem statement
    Identify unknown value: Explicitly identifying what is being solved for
    Identify constraint: Information from the problem statement (Ex: only one of each type of resistor)
  Revising
    Labelling/renaming: Clarifying documentation, relabelling variables
    Erase work: Transition or correction (not fixing penmanship)
    Abandon process/start over: Completely changing gears
  Evaluating
    Check accuracy: Plugging the answer back in and checking
    Identify final answer: Boxed/underlined/circled answer
  Monitoring
    Identify error: Correcting or erasing a previous error

Conceptual errors
  Incorrectly relate variables: Ex: P1out = P2in, P2out = P3in
  Misuse governing equation: Error in equation (Ex: flipped variables or sign)
  Incorrect visual/graphic representation: Misrepresenting underlying concepts
  Incorrect assumptions: Placing or misusing constraints on the system, or assumptions not given in the problem statement

Mechanical errors
  Incorrectly manipulate equation: Algebra problem
  Incorrect calculation: Plugging numbers into the calculator incorrectly
  Incorrect unit derivation: Error in deriving units

Management errors
  Incorrect known value: Inserting the wrong number for a variable
  Incorrect unknown value: Solving for the wrong variable
  Ignored problem constraints: Not conforming to constraints given in the problem statement
  Irrelevant information: Using values that are not given and not needed
  Inconsistent transcription: Correct information rewritten incorrectly (miscopy)
  Inconsistent units: Mismatch of units in a calculation (Ex: mixing English and SI units in an equation)
  Incorrect unit assignment: Labelling units on a value incorrectly (arbitrarily, with no other documentation)
  Using incorrectly generated information: Using an incorrect equation or value calculated in a previous part of the problem
  Missing units throughout: No (or few) units used in calculations throughout
  Erasing correct work: Correcting a 'mistake' that is not really wrong

Strategies
  Plug-and-chug: Plugging numbers into equations without understanding why
  Guess-and-check: Trying values and seeing what gives good answers
  Work backwards: Choosing steps based on a known solution
  Utilise a similar problem: Referring to or working from an example in a book or notes
  Segmentation: Discovering or acknowledging multiple parts to a problem (AKA problem decomposition or subgoaling)
  Chunking: Collapsing multiple parts into one step
  Means-ends analysis: Working to minimise differences between the end goal and starting point
  Forward chaining: Planning out a path to solve the problem
  Specialisation/extreme cases: Considering abstract or extreme forms of the problem

Solution accuracy
  Correct answer: Correctly calculating the final answer
  Correct but missing/incorrect units: Correct value with no or incorrect units
  Incorrect answer: Solving for the wrong variable, skipped steps
  Incomplete: No final answer produced
Appendix 3. Example of coded student work: step-by-step coding of solutions
These snapshots at different points in time throughout the solution show the student's work and the
associated codes inserted at relevant points in the work.