School of Chemistry
4th Year Research Project (F14RPC)
Title: Developing a Tool to Quantitatively Describe the Conceptual Understanding Content of 1st Year Organic Chemistry Examinations
Student’s name: Jake Turner
Student ID Number: 4181525
Supervisor: Dr. June McCombie
Assessor: Prof. Katharine Reid
Personal Tutor: Dr. Darren Walsh
I hereby certify that this project report is my own
work:
Student’s signature:
Please ensure this document is date stamped when
handed in to the SSO.
Acknowledgments
I would like to thank my supervisor, Dr. June McCombie, who from the beginning helped to
shape this work into its current form, and provided me with the resources and motivation
necessary to succeed.
I would also like to thank my assessor, Prof. Katharine Reid, whose mid-way feedback played a
crucial role in ensuring this work was as focused as possible.
I also extend a large thank you to my fellow researchers, Sandi and Jess, who’ve attentively
watched what must amount to several hours of presentations, and have sparked many
interesting discussions that have helped form these results.
I would also like to thank all of the lecturers who participated in this study. You were all
exceedingly friendly, welcoming, and willing to help.
Above all I thank my wife, Zoe, who has supported me in countless ways as I crafted this
study. Without her, this work would not exist.
Abstract
In this work, a pilot study was conducted to develop a method for obtaining a quantitative
description of how effectively conceptual understanding is examined (if at all) in first year
organic chemistry examinations. These results are compared with qualitative data obtained
from the authors of the exams (the lecturers), revealing their attitudes towards conceptual
understanding and contrasting their perceptions of how conceptual understanding features in
the exams with the reality.
Table of Contents
Acknowledgments...................................................................................................................... ii
Abstract ......................................................................................................................................iii
Introduction................................................................................................................................1
Section 1 - Literature Review ....................................................................................................4
Conceptual Understanding.....................................................................................................4
Conceptual Understanding in Organic Chemistry .................................................................5
Concepts in Organic Chemistry .............................................................................................6
Section 2 - Research Methodology..........................................................................................10
Interviewing Lecturers .........................................................................................................10
Conducting the First Interviews.......................................................................................10
Conducting the Second Interviews ..................................................................................10
Analysing Past Exam Papers ...............................................................................................18
Section 3 – Results and Discussion .........................................................................................31
Generated Codes ..................................................................................................................31
Individual Module Results...................................................................................................37
F11MSB...........................................................................................................................37
F11MSP ...........................................................................................................................41
F11OMC..........................................................................................................................45
F11OSS............................................................................................................................49
F11SOS............................................................................................................................51
Overall Results and Conclusion...........................................................................................54
Section 4 - Future Work...........................................................................................................57
References..................................................................................................................................iii
Appendix 1 – Interview Transcriptions .....................................................................................v
Lecturer 1 – Interview 2.........................................................................................................v
Lecturer 2 – Interview 2........................................................................................................ ix
Lecturer 3 – Interview 2........................................................................................................xii
Appendix 2 – Figures and Tables ............................................................................................xvi
Classification of Reaction Codes (from Figure 18) .............................................................xvi
Individual Aspect Frequencies in F11MSB.......................................................................xviii
Individual Aspect Frequencies in F11MSP ......................................................................... xx
Individual Aspect Frequencies in F11OMC .......................................................................xxii
Introduction
On July 1st 2015, Jo Johnson, the Minister of State for Universities and Science,
delivered a speech in which he outlined plans for the Teaching Excellence Framework
(TEF)1. The TEF is a proposed system, like that of the Research Excellence Framework
(REF), in which universities are judged by the quality of their teaching, recognising
excellence and providing clear incentives to improve.
Johnson gave a variety of reasons behind his proposal, including tackling degree
classification inflation, increasing students’ engagement in their courses and providing
incentives for institutions to increase the retention and progression of disadvantaged students.
The core of TEF would be clear outcome-centred criteria and metrics, assessed by an
independent quality body. The exact nature of this framework, Johnson said, was yet to be
designed, and consultation was scheduled to take place for an autumn publication of the
green paper.
A week later, George Osborne stated in his Summer Budget 2015 speech that
universities that demonstrate excellent teaching would be able to raise their fees over the
current £9,000 cap2.
This prompted concern that the TEF would lead to a vicious cycle where universities
would prioritize meeting TEF standards (becoming ‘TEF-able’), much like how they
currently deal with REF (becoming ‘REF-able’), and not focus on improving teaching
standards. Furthermore, as the nature of the metrics the TEF would operate under was
currently unknown, predictions arose that the TEF would become ‘data-driven’.
For an example of an education inspectorate that is data-driven, one can look to
Ofsted. It currently strives to collect more data, year after year. Some stakeholders say this
leads to a system where children are replaced with data-points, and policies are introduced
that improve said data points. In theory, this should lead to improvements in the child’s
learning, but as the data starts covering wider ranges, this idealism can become lost amongst
the tide of statistics. The worry, therefore, was that under the TEF regime, changes to
university education would be made because it is good for the TEF, and important changes
would be ignored on the grounds they would not be represented in the TEF metrics.
In November 2015, the green paper was presented to Parliament under the name
“Fulfilling our Potential: Teaching Excellence, Social Mobility and Student Choice”. There
was clarification on the aims and rationale for the TEF, along with a briefly proposed model
for how the metrics would function. In summary, the proposed TEF will occur in stages over a
number of years, with increasing levels of the TEF award becoming available.
In the first year, a Quality Assessment (QA) review will award Level 1 of the TEF to
institutions that meet or exceed the expectations for quality and standards in England, and
allows application for financial incentives in the academic year of 2018/2019. A ‘successful
QA review’ was defined as the most recent review undertaken by the Quality Assurance
Agency for Higher Education (QAA) or an equivalent review used for course designation.
Having a Level 1 award entitles universities to raise their course fees for new students.
In the second year, higher level TEF awards become available. These higher levels
will be granted upon the successful assessment of the institution through as-of-yet undecided
metrics and criteria. Though unknown, the means of assessment are outlined to be
independent from Government, instead being handled by a panel of experts. This panel is
proposed to consist of academic experts in learning and teaching, student representatives, and
employer/professional representatives.
In order for this assessment process to be straightforward and robust, and because
there is no single direct measurement of excellent teaching, the green paper proposes using a
common set of metrics from quality assured national datasets. In recognition of the fears
people had about the TEF becoming ‘data-driven’, the green paper states that these metrics
alone will not give a complete picture of excellent teaching, and therefore proposes to ask
institutions to supplement the assessment panel with additional evidence.
Self-reflection upon teaching standards is already prevalent within most universities,
although there currently exists very little in the way of tools to perform this analysis. If a
standard system of awarding teaching excellence is to be successful, tools of this nature must
be developed so that the universities can provide convincing, clear evidence that they not
only currently exhibit high teaching standards, but that they are also committed to developing
them.
As employability is one of the key aspects of the TEF, it seems likely that the
classification of the degree a student is awarded, and consequently the methods used to
decide upon that classification, will attract the attention of the independent TEF assessment
board. With the incredibly long list of courses supplied by the country’s institutions, it makes
sense to transfer the responsibility of proving the worth of their exams onto the administering
schools.
In summary, tools to deconstruct and evaluate assessments need to be developed in
order to provide evidence of excellent teaching.
With the correct tools, an institution could provide evidence that they are committed
to assessing the conceptual understanding held by their students, over the more traditional
rote learning or algorithmic problem solving abilities. In the post-graduation world, real
problems demand ways of thinking that are often far removed from tradition, and possessing
a conceptual understanding of the material used during day-to-day work is essential if an
employee is to be cost effective, especially in the science sector. Jo Johnson stressed that he
wanted to provide ways for students to determine their value for money, and giving them a
way of assessing their future worth as an employee certainly achieves this.
In addition, if an exam is proven to test conceptual understanding of the topics it
assesses, high marks will imply that the teaching of the content was focused on developing
conceptual understanding. From the mark distribution of a population of exam recipients and
an understanding of the content of the exam, judgements might be made as to the teaching
quality.
Research into conceptual understanding within chemistry started in 1987, when
Nurrenbern and Pickering showed a difference in ability to solve conceptual and numerical
problems3. In 1990, Pickering went on to show that this difference arose from two
educational goals: conceptual understanding and algorithmic problem solving, rather than
some innate difference in ability4. Conceptual understanding has since been the subject of
intensive research, though no analysis has yet been performed on the conceptual
understanding content of actual assessment materials from universities.
This initial pilot study therefore aims to probe conceptual understanding in exams of a
single area of first year study: organic chemistry. This area was chosen partly due to the
familiarity I have with the material, and partly due to the general view that organic chemistry
is a ‘gatekeeper’ module, either making or breaking students with its vast number of complex
reactions, largely unencountered by the average post A-level chemist.
Section 1 - Literature Review
Conceptual Understanding
Research has been progressively uncovering the effects and benefits of conceptual
understanding for a long time, although until recently the definition of conceptual
understanding was not set in stone. More often than not, it was simply a case of ‘knowing-it-
when-you-saw-it’. Research was guided by the idea that conceptual understanding is the
ability to move between different levels of understanding. These levels were the macroscopic,
particulate and symbolic5. Further levels, such as ‘process’6 and ‘quantum’7 have been
proposed since.
A recent paper by Holme et al. took an interesting approach to resolving the
definition, crowd-sourcing it from over 1,300 instructors8. This new five-part definition has
not yet been used in the literature as a basis for looking further into conceptual understanding.
I will therefore be using this five-part definition as a lens through which to view conceptual
understanding within university education. Whilst the definition is broad yet concise, its
‘Problem Solving’ aspect relies on the term ‘critical thinking’, which suffers from the same
type of vague, varied definition that ‘conceptual understanding’ used to. This makes using this
aspect of the definition an exercise in interpretation, ironically
exactly what the paper was aiming to put a stop to.
It should be noted that this definition of conceptual understanding is explicitly stated
to be for general chemistry, and lacks discussion on how organic-specific content (e.g.
reaction mechanisms) fits in. Therefore during this work, it was necessary at times to depart
from this definition or expand it.
Aspects of this definition have been explored before, both in respect to teaching and
assessment. Student generated analogies have been shown to increase their conceptual
understanding of halogen reactions9, which in the light of the new definition can be seen as
the Translate aspect (translating the behaviour of electrons into a macroscopic world analogy
counterpart) reinforcing the ability of the student to Predict/Explain (‘Which of the reactions
will happen?’) with Depth (they were able to communicate their reasoning with language
that demonstrated skills beyond rote memorization).
Dori et al. have succeeded in developing a module for teaching quantum mechanics
utilising a visual-conceptual approach7, 10. They focus on the ability to Translate between
visual representations.
Cooper et al. assessed students’ Depth and ability to Translate their understanding of
intermolecular forces11.
Raviolo developed an assessment of solubility equilibrium12. His assessment requires
the student to Transfer their knowledge to a novel AgCl salt solution, Predict/Explain with
Depth the process that takes place, and Translate their ideas into graphical representations.
The power of this definition is clear; entire learning/assessment cycles can be broken
down according to specific aspects of the greater conceptual understanding whole. It
therefore should be possible to look at isolated assessment material, and analyse the
relationships to aspects of conceptual understanding.
Conceptual Understanding in Organic Chemistry
It has been said that “much of what is chemistry exists at a molecular level and is not
accessible to direct perception”13. This lack of direct perception is a problem for the student,
so it is not surprising that the majority of research into conceptual understanding takes place
around concepts that are either directly observable, or easily represented in some form of
diagram, resulting in a large amount of research into the area of conceptual understanding
within physical chemistry. Research into conceptual understanding of organic chemistry is a
relatively young area14, and unsurprisingly, most of this research is focused on reaction
mechanisms.
Bhattacharyya and Bodner identified that, like the mathematical problems in
Nurrenbern’s work3, students can produce correct answers to mechanistic tasks without
having an understanding of the chemical concepts15.
Ferguson and Bodner expanded on this by probing how students made sense of the
arrow-pushing formalism16. They found that in many students’ minds, the curly arrows were
not being used as a powerful construct to understand the mechanism of the reaction; they
were just pushing arrows around until they obtained a product.
Kraft, Strickland and Bhattacharyya investigated the cues organic chemistry graduate
students obtain during mechanism tasks, and the reasoning processes induced by those cues17.
They found that the students exhibited a poor interpretation of the reaction mechanisms
provided to them, which cued them to rely primarily upon a case-based reasoning (CBR)
approach, where they tried to relate the given problem to a more familiar case. They also
relied upon a rules-based reasoning (RBR) approach, which led to an under-accounting of
the relevant variables in the problem. The most successful students utilised a model-based
reasoning (MBR) approach. This work suggests that further instruction on how to reason with
concepts is necessary for the students, not just an increase in their conceptual understanding.
Grove, Cooper and Cox investigated scenarios in which the students chose to use
mechanisms to solve problems, rather than being forced to. They found that in complex
problems, mechanistic thinking increased the chance of success, but didn’t help
significantly with simple problems18. They also highlighted that an alarming number of
students (51%) didn’t use mechanisms, or only used one, to solve the six problems, indicating
there is a need to better understand the barriers that students face in trying to use mechanisms
and the curved-arrow notation.
Not all research into conceptual understanding within organic chemistry is mechanism
focused. Arellano and Towns investigated students’ understanding of the alkyl halide
functional group and its reactions, finding that students had trouble classifying substances as
bases and/or nucleophiles, assessing the basic or nucleophilic strength of substances and
accurately describing the steps that take place or the reactive intermediates that form during
alkyl halide reaction mechanisms19. This seems to suggest that problems with conceptual
understanding in organic chemistry originate much deeper than at a mechanistic level.
In conclusion, the current state of research into conceptual understanding in organic
chemistry seems to be focused around thinking methodology, particularly when solving
mechanistic problems. There are currently no studies that investigate how organic chemistry
is assessed during undergraduate studies.
Concepts in Organic Chemistry
During the initial stages of the project, it was my intention to investigate conceptual
understanding in a much wider area than just examinations. In order to accomplish this, I
planned to present a few topics to students in the form of conceptual questions, ultimately
trying to find particular areas of difficulty, hoping to connect this data to the outcome of the
examination analysis. I explored various topics, looking for those which lent themselves well
to the aspects of the five-part definition of conceptual understanding. This meant possessing
qualities such as being representable in multiple forms or determining aspects of a chemical
system, so that students would be able to predict the outcome of changes. However, I realized
that such selectivity was not possible when it came to actually teaching all the topics required
for a first year course, so I decided instead to investigate the most important concepts. To aid this
process, three lecturers were interviewed (see ‘Conducting the First Interviews’ below).
During the process of deciding upon concepts to probe, it became clear to me that I
needed to refine my idea of what exactly a concept was in the context of organic chemistry.
Was it right to equate something like pKa, which is essentially just an equation, with a whole
mechanistic reaction? Considering that pKa is one of the many factors that need to be taken
into account when looking at nucleophilic substitution reactions, it isn’t fair to say that a
whole organic mechanism is a ‘concept’ in the same way individual aspects of the
mechanisms are. Although this direction of the project was ultimately diverted towards
examinations, the reasoning that follows contributed heavily to the interpretation of the
results.
Enabling a student to devise and explain mechanisms for entirely new reactions is a
major goal for organic chemistry education. Recent research has shown the difference in the
use of rule-based reasoning (RBR), case-based reasoning (CBR) and model-based reasoning
(MBR) between students during mechanistic problems17. The study shows that students are
likely to resort to CBR 50% of the time, where they look towards mechanisms they are
familiar with to solve unfamiliar mechanisms. This worked well for some familiar reactions,
but lacked true predictive power, as the students often forced the problem to match their pre-
conceived case.
The most successful participants used higher order thinking models to proceed
through the reaction stepwise, considering how each variable will interact with the others
during each step.
With this in mind, it is possible to frame the goal of organic chemistry education as to
provide students with the conceptual understanding of many different models (pKa,
resonance, carbocation stability etc.) so that they can work through novel mechanistic
problems with a MBR approach. The conceptual understanding of each model can be broken
down into the five-part definition, whereas the concept of the mechanism itself is a complex
interplay between these aspects, the details of which are yet to be revealed by research (see
Section 4 - Future Work).
In Holme’s paper, Depth is described as thinking devoid of rote memorisation or
algorithmic execution8. This accurately describes the process of MBR, instead of the
memorisation tactics typical of RBR and CBR. For example, a student might be asked to
propose a mechanism for a substitution reaction of a benzene derivative. They must consider
the following models: relative reactivity of the benzene derivative compared to unsubstituted
benzene, the site where the substitution will occur, and the flow of electrons between the two
species. There are problems involved with this interpretation, as it becomes a matter of debate
as to what constitutes a model. In the given example, the concepts behind the site where
substitution will occur could be broken down into inductive and mesomeric effects, or
expanded to include the reaction energy profile of substitution at that position, where the
lowest energy transition state will lead to the major product.
A standard must therefore be employed to prevent varying interpretation. One
possibility is ‘one mark = one model’. This assumes that the teacher wrote the question with
this in mind, and hasn’t deemed a specific demonstration of knowledge as worth multiple
marks. This standard is fast to perform and objective, leaving no room for interpretation, but
would probably not be very accurate or precise.
Another possibility is that only what the candidate is being asked to demonstrate will
be considered as a model. This still leaves room for interpretation, but doesn’t restrict itself to
the assumption that every mark is a demonstration of a model. However, performing this level of
analysis would be very time consuming. This issue could be further resolved by discussion
with the teacher who wrote the question, as the breakdown of marks could be assigned to the
models. Ultimately, due to the change of staff over the years, both of these standards are
impractical, although for future examinations it would be easy to obtain an exact measure of
the intended depth of the question from the author themselves.
Application of these models to the current mechanistic situation can be viewed as the
Transfer aspect. True to the aspect’s description, MBR requires the student to recognise the
novelty of the situation.
During the mechanism, many steps may look like they are able to proceed in multiple
directions. This requires the student to Predict/Explain where the reaction will go through
MBR. If the problem requires a prediction only, RBR or CBR may be viable approaches,
although if the student is also asked to explain why the reaction proceeded in that fashion, the
conceptual understanding of the model and how it applies to the current chemical situation
can be more thoroughly assessed. Even if there are no plausible alternate routes the reaction
may proceed down, if the student has no prompt of a product to aim for they are required to
Predict/Explain.
Although the mechanism is portrayed through two-dimensional drawings, often the
key to deducing the correct outcome exists in the third dimension. The student must therefore
demonstrate representational competence, or the ability to Translate between various
representations. An example of this is the elimination reactions E1 and E2 of halogen
substituted cyclohexane rings. To accurately solve a mechanistic problem like this, the
student must translate the perfect hexagonal representation into the chair conformations, and
then determine how the p-orbitals lie in respect to the leaving group and the eliminated
hydrogen, finally translating their chair structure back into the perfect hexagonal structure.
Spatial reasoning plays a huge part in organic chemistry, and being able to translate back and
forth between representations like the Newman projection is vital.
Other representations could include energy profiles, or a graphical depiction of the
orbitals involved in the reaction.
The process of devising the correct mechanism, be it the electron flow, the reagents,
the stereochemically correct outcome, or the entire mechanism, is the Problem Solving
aspect. In line with Holme’s definition, critical thinking is required for effectively solving
mechanistic problems8.
The separation and relationship between models and mechanisms is an important line
of investigation to carry over to future research. The time constraints of this study will not
allow extensive linking of the teaching process to the conceptual understanding of the
models, and then to the extent of conceptual understanding assessed in exam papers, but the
connection made here will allow this in future work.
Section 2 - Research Methodology
Interviewing Lecturers
Conducting the First Interviews
In order to investigate topics that were suitable to use during the probing of
conceptual understanding, I interviewed three 1st year lecturers who were willing to be
involved in this study. To preserve anonymity, they will be referred to as Lecturers 1-3
consistently throughout this report. Even though the direction of this study changed to focus
upon the examination analysis, the information discussed below serves to provide a more
complete picture of the participants.
I asked the lecturers what they thought the most important concepts in organic
chemistry were and which concepts they thought students understood the least.
The two questions elicited the same response in all three interviews: there seemed to
be an agreement that the most important fundamental concepts were the ones least
understood. There was also a clear alignment between what the lecturers thought of as the
most important, and the topics they were responsible for teaching.
Areas such as organic structure (drawing molecules, functional groups and
nomenclature), stereochemistry, pKa, carbonyl chemistry, and HOMO-LUMO interactions
were seen as the most important and least understood amongst the three lecturers.
Conducting the Second Interviews
Whilst the first interviews were focused on concepts, I wanted to gain a better picture
of how the lecturers viewed conceptual understanding.
The second interview was composed of four questions:
• In your own words, could you please describe conceptual understanding in the context of organic chemistry?
• How would you structure an exam question to test conceptual understanding?
• Assume a paper has 100 available marks; how many of these marks should come from rote learning, algorithmic problem solving and conceptual understanding?
• As a percentage, how much of your lecture content appears in the exam?
The first question is designed to allow a direct comparison to the definition
formulated by Holme et al 8. The answers could highlight areas of potential improvement, as
a fuller awareness of what conceptual understanding is would allow staff to adjust their
teaching styles to promote conceptual learning, and therefore demonstrate a higher standard
of teaching excellence. That being said, if the answers to this question reveal holes in the
definition, it doesn’t mean that the teaching reflects those holes. It could be that the lecturers
are implicitly aware of certain aspects, or simply didn’t see it as necessary to express in
words an aspect that seems obvious.
The second question probes how the lecturer views the examination as a tool, and
might reveal new ideas that would be useful when looking at past exam papers. It is also an
extension to the first question, but requires the lecturer to think about conceptual
understanding from the position of testing for it, rather than just describing what it is. My
intention is to take answers to both the first and second question when looking at the
lecturers’ scores for their definitions of conceptual understanding. This is in contrast to
Holme’s paper, where the teachers are only asked to define conceptual understanding. My
reasoning is that in the survey sent out by Holme, there was a section of questions that
“focused on conceptual understanding through topics taught and question structure.”8 This
section may have helped the teachers refine their ideas of conceptual understanding before
giving a definition. Combined with the fact that they had as much time as they needed to
answer the question and freedom to refine their answers, I gained the impression that the
teachers in the study by Holme et al.8 generated much fuller answers than the lecturers I was
interviewing would be able to manage in a time restricted, verbal environment. I therefore felt
it necessary to include a further prompt to think about conceptual understanding from a
different perspective.
The third question reveals the intention of the lecturer to examine their students’
conceptual knowledge. When writing the question, I expected that all of the answers would place
a large proportion of the exam under ‘conceptual understanding’. These answers will provide
a useful set of numbers to compare actual papers to, potentially revealing whether there is an
agreement between the intended content of an exam and the actual content.
The final question was asked to see if there was any sort of weighting applied to
concepts. If not all were examined every year, it would be interesting to look at the factors
that determine whether a concept appears on an exam.
For complete transcripts of the interviews, please see ‘Appendix 1 – Interview
Transcriptions’.
Results
In order to aid readability during this analysis, certain quotes have been lightly edited to
remove extraneous verbal filler.
Lecturer 1
Lecturer 1 saw conceptual understanding as “[…] just understanding the basic ideas
and concepts to allow students, or to allow people to work out problems.” The rubric in
Holme’s work gives this definition a score of 1 under the Problem Solving fragment. The rest
of the answer is devoid of additional fragments, but they describe an idea that any concept
introduced after the first year is simply a combination of the basic organic chemistry
concepts. They felt that if a person truly understood the first year, they would be able to
problem solve their way through every other year. They presented this as idealism, fully
aware that the other years are necessary, but it is interesting to think about what the
fundamental ‘axioms’ of organic chemistry might be.
When asked about how they would structure an exam question, they responded “I
would make it a problem solving exercise. […] They would have to solve a specific problem,
something similar to what they’d of seen in lectures/tutorials, but they shouldn’t be able to do
that problem well unless they understand the key concepts.”
This is in line with their definition of conceptual
understanding and their focus on problem solving. At this point, I wanted to clarify what they
meant by ‘problem solving’. They responded that in order to problem solve they “really have
to be drawing on their core knowledge”, and “they can’t just regurgitate learned
information”. This answer hints at another fragment, Depth, as the lecturer is recognizing
that there needs to be a deep understanding devoid of memorization. As I interpret it, this
answer is enough to assign 1 point in the Depth fragment, but as this question was an
extension of another, and was not asked of the other lecturers, I cannot include this point in
the total score for this lecturer. The other lecturers may have been able to provide additional
fragment descriptions with further prompts, but did not get the opportunity, therefore only the
one point for Problem Solving will be counted.
For the third question, I was expecting relatively short, confident answers. In reality,
this question seemed to prompt a lot of thought and dialogue. Lecturer 1 initially answered
that between 10% and 20% of the paper would be recalling information, and the rest would
be testing conceptual understanding. They enquired how algorithms applied to organic
chemistry, to which I responded that I had seen very simple reaction mechanisms taught
algorithmically, where the student identifies the electrophile, then the nucleophile, then
pushes arrows from the nucleophile to the electrophile, etc. The lecturer was slightly taken
aback by this, as I had described the way they taught the nucleophilic substitution reactions.
They expanded upon this by identifying that to go along with such an algorithmic question,
they would probe the students’ conceptual understanding by asking them to explain particular
properties like nucleophilicity. Again, there is a hint of the Predict/Explain fragment in this
answer, but I cannot include it in the score for the reasons I outlined previously.
Finally, the lecturer answered that 60% of their lecture content would make it into the
exam. However, they seemed very unsure. They reasoned that a lot of the lecturers at the
beginning of the course are introducing fundamental concepts, a lot of which won’t be
specifically examined, but will be implicit within other questions. When I asked about
the other 40%, they confirmed that there are simply certain topics that do not get examined,
but the student must possess knowledge of these to answer other questions in full.
Lecturer 2
Lecturer 2 described conceptual understanding as “[…] being able to solve a problem
by working out the answer from first principles, and getting to the answer that way, instead of
thinking ‘Ahh I’ve seen that in a book’, and recalling it just from memory.” This earns a
score of 1 in the Problem Solving fragment, and 1 in the Depth fragment.
They claimed that structuring an exam question to test conceptual understanding is
“fairly easily done” by “ask[ing] for an explanation of something”. This gains the definition
of conceptual understanding a point in the Predict/Explain fragment. They also go on to
describe an example, a question on the reactivities of esters and amides, and said they would
“like to see a simple diagram and two or three lines of text” in the explanation. This answer
hints at the Translate fragment, with the use of images and text to explain a concept, but as
they don’t specifically mention that transitioning between representations is an aspect of
conceptual understanding, I will not assign them this point.
They extended their answer by adding that a question could “ask for a mechanism to
be supplied where it had been seen before in terms of the concept, but this is a different
example.” They said students would “have to understand that it’s that mechanism which
applies there, and then draw it.” This demonstrates an awareness of the full Transfer
fragment, where a student must apply their knowledge to a novel situation, gaining them
another 2 points.
For the third question, the lecturer seemed very unsure. They described the paper
having three sections: A, B and C. Section A was compulsory, and contained “just under half
of the marks”. These questions were said to contain mostly memorisation or algorithmic
questions, though the lecturer did not give a figure to this ratio. Interestingly, they
commented on two types of question archetypes (see the section ‘Analysing Past Exam
Papers’): ‘Provide Product’ and ‘Nomenclature from structure’, labelling them as rote
memorisation and algorithmic respectively. The student must then choose between answering
section B or C. These sections were said to contain questions that ask “How does this work?”,
but that if mechanistic questions were asked, they “probably still would be a bit more towards
the recalling it out of the notes side of it”, but would test conceptual understanding by asking
for explanations. Again, they did not provide a figure to the ratio of marks in this section.
Overall, they claimed “it’s about roughly half [conceptual understanding] and half
[rote memorisation/algorithmic]. Maybe slightly more towards the algorithmic and recalling
side of it, in the first year.” Interestingly, they commented that in the fourth year of study, the
conceptual understanding content in an exam should increase. This is in contrast to Lecturer
3’s answer (see below).
The lack of absolute figures provided by Lecturer 2 makes this a difficult result to
quantify. I think this is entirely down to me as the interlocutor, as I didn’t ask further
questions or clarify. In order to compare this answer with the others, I will assume that 55%
of the paper was either rote memorisation or algorithmic problem solving, and 45% is
conceptual understanding.
The final question was also answered vaguely. They responded by stating that the
model answers to an exam are perhaps 5 pages in length, whereas the lecture notes are about
100 pages, bringing the total content of the exam to 5% of the lecture material. They
supplemented their answer with the reasoning that in lectures, they would “give three or four
examples of one particular thing, [whereas] in the exam we’d just be asking for one”, and that
“some of [the lecture material is] just other background, some of its examples”. This answer
highlights that the question is flawed, and that there is too much room for interpretation.
Their answer fails to take into consideration any of the ways these concepts interact, where
the student must have knowledge about many topics which, whilst not being explicitly
examined, do contribute to the ability to answer questions on topics that are explicitly
examined. The answer also doesn’t separate lecture material that aims to teach conceptual
understanding from material that provides background.
Lecturer 3
Lecturer 3 described conceptual understanding as “an understanding of the concepts,
rather than relying on some kind of rote learning, or committing things to memory…” and
that “if you understand the concepts, then you can apply those general concepts to unfamiliar
situations, so that you can rationalize things you haven’t seen before.”
This answer gains a score of 2 for Transfer and 1 for Depth.
In response to the second question, they stated “best questions are problems, rather
than ‘Tell me everything you know about… ‘X’ or ‘Y’’”. This answer gains an additional
point in the Problem Solving fragment.
In regards to the types of questions, they thought a good balance would be 50%
conceptual understanding, 25% rote memorisation and 25% algorithmic problem solving.
They stated, in contrast to Lecturer 2, that in the fourth year the rote memorisation aspects
will take up as much as 50% of the marks, reasoning that in such specialised topics it is hard
to set problems. Most of the special topics that they are referring to, however, are not organic
chemistry based, so this is not an answer I can use.
Finally, they said that the content of the lecture material that appears in the exam will
vary according to the stage of the course. In the first year, they expected all of the course
content to be examined over three iterations of the paper, with only a small number of
questions changing each year. In the second year they teach a problem-based spectroscopy
course, so they expect all of the methods they teach to be utilized on an exam.
Overall Results
Before comparing these results to Holme’s work, it should be highlighted that the
rubric developed by Holme is specifically for conceptual understanding in general chemistry,
not specifically organic chemistry. As such, various fragments in Holme’s work reflect ideas
that are only loosely applicable, such as the specific mention of translating through
Johnstone’s domains (symbolic, microscopic, macroscopic). However, the majority of the
rubric is totally applicable to organic chemistry, and the rubric must be the same if
comparisons between this work and Holme’s results are to be made.
Figure 2 shows the total scores of the three lecturers. Compared to Holme’s results in
Figure 1, it can be seen that Lecturers 2 and 3 actually scored very highly. Only ~75 of the
1,395 instructors scored 4 points, and fewer than 25 scored 5 points. This indicates that the
lecturers have an above average idea of what conceptual understanding means in regards to
organic chemistry. The low score of Lecturer 1 reflects the mode score in Holme’s work. As
Holme points out, this does not mean that all other aspects of conceptual understanding do
not factor into their working definition, but I do think that these scores will reflect on their
teaching style.
Figure 3 shows the breakdown of the lecturers’ scores into each aspect of conceptual
understanding. It can be seen that the most well described aspect is Transfer, with two
lecturers gaining the full two points, whilst the least frequently described fragment was the
Translate aspect, which wasn’t described at all. These results reflect the results in Holme’s
paper. It is possible that the Translate aspect was too obvious to mention, or that multiple
representations of the same concept are seen as a means of explaining observations, rather
than as building up a translational fluency within the student. Analysis of the lectures
delivered by these lecturers could provide insight into why these aspects are favoured or
largely unmentioned, but this lies beyond the scope of this work (see ‘Section 4 - Future Work’).
In contrast, the most frequently described fragment was Problem Solving, whilst in
Holme’s work this was the second least frequently described.
Figure 1 – Total scores of the instructors in Holme’s work (chart: total score of the conceptual
understanding definitions provided by the 1,395 general chemistry instructors, as scored
following the rubric developed by Holme et al.).
Figure 2 – The total scores (0–5) of the three lecturers’ definitions of conceptual understanding
in organic chemistry, as scored following the rubric developed by Holme et al.
Figure 3 – The scores obtained by the three lecturers’ definitions within each of the five aspects
of conceptual understanding.
Figure 4 – The number of definition fragments (by score) used to define conceptual
understanding by the 1,395 general chemistry instructors who provided a definition.
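To make the tallying behind Figures 2 and 3 explicit, the short Python sketch below totals the rubric points assigned in the preceding subsections, per lecturer and per aspect. It is purely illustrative (the actual bookkeeping for this study was done by hand); the score dictionary simply restates the points awarded above and introduces no new data.

from collections import defaultdict

ASPECTS = ["Transfer", "Depth", "Predict/Explain", "Problem Solving", "Translate"]

# Points awarded to each lecturer's definition, per aspect (0-2 per aspect),
# as assigned in the subsections above.
scores = {
    "Lecturer 1": {"Problem Solving": 1},
    "Lecturer 2": {"Problem Solving": 1, "Depth": 1, "Predict/Explain": 1, "Transfer": 2},
    "Lecturer 3": {"Transfer": 2, "Depth": 1, "Problem Solving": 1},
}

# Total score per lecturer (the data plotted in Figure 2).
totals = {lecturer: sum(points.values()) for lecturer, points in scores.items()}

# Score per aspect per lecturer (the data plotted in Figure 3).
by_aspect = defaultdict(dict)
for lecturer, points in scores.items():
    for aspect in ASPECTS:
        by_aspect[aspect][lecturer] = points.get(aspect, 0)

print(totals)          # {'Lecturer 1': 1, 'Lecturer 2': 5, 'Lecturer 3': 4}
print(dict(by_aspect))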
Analysing Past Exam Papers
It is my aim in this section to describe the methodology adhered to during the largest
aspect of this project; the analysis of 17 organic chemistry exam papers given to 1st year
students. Due to the modular format of the course, it was often the case that organic questions
were spread through multiple exams. It is my intention to look at all exam questions that
relate to organic chemistry, but where it is not possible to do so it shall be made explicit.
The primary goal of this pilot study was to systematically deconstruct individual exam
questions in order to inspect which aspects of conceptual understanding, if any, are assessed.
I used a grounded theory approach to generate codes from the exam papers. Grounded
theory is a systematic methodology in which codes are generated by a researcher upon
reviewing data (see ‘Coding Exam Papers Using Grounded Theory’ below).
Initially, I focused on first classifying the question as conceptual, rote learning or
algorithmic, and then trying to construct a thought process required to answer the question.
For example, a question might require the student to first recognise a functional group from
nomenclature, and then think about the reactivity of said functional group, finally drawing the
product of a reaction between the functional group and another. The code would thus read
“Structure from nomenclature, functional group reactivity, draw product”. Then I attempted
to list additional, unordered codes that related to the question, e.g. “SN2, Leaving group
stability”.
This method had several problems. Firstly, classifying a question as conceptual, rote
or algorithmic was not always simple; a mechanistic problem, for example, may be intended
to test conceptual knowledge of the electron flow in a particular system, but it could easily be
the case that the student has memorized the system due to it being highlighted as important
during a lecture. The line between conceptual understanding and rote memorization is a
blurry one, particularly at an early stage in a university course, as there are simply some
things that the student must learn, and as a consequence, those topics tend to be examined via
asking the student to recall what they have learnt. Secondly, the route the student mentally
travels to arrive at an answer is sure to vary among different students. Although simple low-
mark questions may have an obvious route, a more complex, multi-model question may have
several routes. Therefore it became clear that it is incorrect to assign a definitive order to the
codes. Thirdly, attempting to list every possible code associated with the question, as
grounded theory requires the coder to do, proved overwhelming. I noticed a stark contrast
between codes that arose from the question archetype (the way it was structured and the form
of answer it demanded) and the codes that arose from the concepts within the question. The
resulting codes felt disjointed, and pulling a conclusion from the data would have been
difficult, if not impossible.
The combination of these problems led me to discard my initial results. It was clear
that a new approach was needed that still operated under grounded theory, but removed some
of the complexity. I decided the best way to proceed was to split apart the question archetype
from the conceptual content. Eventually, I concluded that the concepts present within each
question provided no additional information on how conceptual understanding is assessed. As
such, this work is concerned with assessing the presence and properties of conceptual
understanding-based questions, and not the concepts present within them, although the
relationship between the two is a possible route for future work (Section 4 - Future Work).
Coding Exam Papers Using Grounded Theory
I used Microsoft Excel® to perform the grounded theory coding. A table was
constructed to show the title of the paper, the question identifier/number, the marks available,
and the codes I associated with the question archetypes.
Once I had listed all the words/phrases I could relate to the question, I proceeded onto
the next. Upon finishing the second question, I would look back at the first and compare the
codes I had generated, looking for any that crossed over. Once the sets of codes had been
altered (if at all), I would advance on to the next question. This cycle was repeated until the
end of the paper, when I would advance onto the next. It quickly became apparent that
although there is a large amount of question archetype variance within each exam, the overall
content doesn’t change much from year to year. The consequence of this was that a set of
codes generated from early papers in each module tended to be able to completely describe
exams set years later. I suspect that this is very much intended, and arises from the notion that
a student will revise by completing past exam papers, thus will feel more relaxed in the actual
exam if the format (a consequence of the way the paper is organised and its question
archetypes) remains largely unchanged.
Another important aspect of coding using grounded theory is the constant use of
memos. Throughout the coding I kept written notes on my thoughts for personal use later to
help connect the codes together.
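As an illustration of the record kept for each question during this cycle, the Python sketch below mirrors the columns of the Excel table described above (paper title, question identifier, marks available, unordered codes) and the constant-comparison step of checking a newly coded question against earlier ones. The question identifiers, marks and codes shown are hypothetical placeholders, not values taken from the real coding.

from dataclasses import dataclass, field

@dataclass
class CodedQuestion:
    paper: str                                # title of the exam paper
    question_id: str                          # question identifier/number
    marks: int                                # marks available for the question
    codes: set = field(default_factory=set)   # unordered archetype codes

def shared_codes(earlier, current):
    """Constant-comparison step: report the codes a newly coded question shares
    with an earlier one, so the two code sets can be reconciled before moving on."""
    return earlier.codes & current.codes

# Hypothetical records only; the real coding was kept in a spreadsheet.
q1 = CodedQuestion("F11MSB 2010-2011", "1(a)", 4, {"Reaction", "Provide Product"})
q2 = CodedQuestion("F11MSB 2010-2011", "1(b)", 5, {"Reaction", "Explain Order"})
print(shared_codes(q1, q2))   # {'Reaction'}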
Defining a ‘Question’
As a consequence of the variation in the authors of the exam papers, questions are
often formatted differently. In one paper, there may only be four main questions which are
worth 20 marks each, but each of these questions are broken down into smaller 2 – 5 mark
sub-questions (labelled (a), (b), (c), … or (i), (ii), (iii), …), whereas another paper may
number all of these questions as individual. This is largely an unimportant difference, and
only really serves to guide the student along the paper, making it explicit which questions are
at least semi-related to each other in concept.
For the purpose of analysis, a question will be considered separate if there is a
separate mark indicator. For example, Figure 5 and Figure 6 below would be treated as a
single question, and three individual questions respectively.
Figure 5 – An example of a single question with two subsections (from F11MSB 2010-2011).
Figure 6 – An example of three sub-questions that are considered separate (from F11MSP 2008-2009).
If a single question includes multiple identical aspects, such as in Figure 7 below
(where the student is asked to assign the hybridisation state and lone pair positions of two
different molecules), the question is said to have a multiplicity. In the example below, the
multiplicity of the question is 2.
Figure 7 – An example of a question showcasing a multiplicity of 2 (from F11MSB 2010-2011).
Defining ‘Question Archetypes’
If examinations consist of a set of tools designed to deconstruct and assess the
conceptual understanding of the student, it is wise to obtain an inventory of the toolbox.
These tools take the form of question archetypes.
Figure 8 below shows two questions that probe differing topics: reactivity towards
SN2 reactions, and acidity of phenols:
Figure 8 – Two questions with the archetypes “Sort Molecule According To Property” and “Explain
Order”.
These two questions involve very different concepts, but the overall archetype of the
question is the same. The student is expected to sort the molecules according to a given
property, and then explain why the order is such. If the student was only asked to sort the
molecules, with no explanation, it stands to reason that there will be no separation between
candidates who know why the order exists, and those who guess an order. Question
archetypes are clearly important, but can they be connected with specific aspects of
conceptual understanding?
Criteria for Sorting Codes into Aspects of Conceptual Understanding
Before the codes that were generated can be sorted into aspects of conceptual
understanding, the definitions of each aspect must be reviewed.
Box 1 shows the conceptual
understanding aspect definitions as
described in Holme’s paper8. These will
be used as the basis for sorting codes
into categories.
The Transfer aspect is difficult
to form meaningful criteria for, as the
novelty of a chemical situation to a
certain student is unpredictable, due
largely to the large variety in exposure
and experience amongst a class. A
question could be considered novel if it
doesn’t appear in another exam paper,
rendering it impossible for the student to
have seen it, although this assumes all
students look through all past exam
papers, which is often not the case.
Furthermore, it is impossible to know
which chemical situations were presented to the students during their course. As the
definition for this aspect is based upon an unknowable, unquantifiable state of the question
(the novelty), this aspect must be ignored.
The Depth fragment is equally difficult to handle. The definition suggests that any
question that has no aspect of rote memory or algorithmic problem solving can be categorised
under Depth. This means that it is unlikely any code will be generated that is directly Depth
related, but a rudimentary score can be calculated as an anti-rote memory/algorithmic
problem solving score, simply the inverse of the number of codes belonging to these two
categories.
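One possible reading of this rudimentary score, assuming ‘the inverse’ is taken as the proportion of a question’s codes that fall outside the rote-memory and algorithmic categories, is sketched below in Python. The example codes and category assignments are illustrative only (the labelling of ‘Provide Product’ as rote memorisation and ‘Nomenclature from structure’ as algorithmic follows Lecturer 2’s comments earlier), and this is not necessarily the exact calculation used in the analysis.

def depth_proxy(codes, rote_or_algorithmic):
    """Fraction of a question's codes that are neither rote-memory nor algorithmic."""
    if not codes:
        return 0.0
    flagged = sum(1 for code in codes if code in rote_or_algorithmic)
    return 1 - flagged / len(codes)

# Hypothetical example: two of the three codes are rote/algorithmic, so the proxy is 1/3.
example_codes = ["Provide Product", "Nomenclature from structure", "Explain Order"]
rote_or_algorithmic = {"Provide Product", "Nomenclature from structure"}
print(round(depth_proxy(example_codes, rote_or_algorithmic), 2))   # 0.33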
Holme’s paper doesn’t expand on the definition of a problem, which has many
definitions across many disciplines, but explicitly mentions critical thinking and reasoning.
Transfer – Apply core chemistry ideas to chemical situations that are novel to the student.
Depth – Reason about core chemistry ideas using skills that go beyond mere rote memorization or algorithmic problem solving.
Predict/Explain – Expand situational knowledge to predict and/or explain behaviour of chemical systems.
Problem Solving – Demonstrate the critical thinking and reasoning involved in solving problems including laboratory measurement.
Translate – Translate across scales and representations.
Box 1 – Definitions of the aspects of conceptual understanding presented in ‘Defining Conceptual Understanding in General Chemistry’8.
Definitions of critical thinking vary, but seem to overlap into the process of
conceptualising, analysing, evaluating and applying information gathered from some sort of
experience, be it observation or communication, and then using the results of this thinking
process to guide actions or decisions. In summary, this definition is vague, but hints that there
must be some ‘real-world’ aspect to the question for it to be considered a problem. In the
context of organic chemistry, this will mainly take the form of questions dealing with the
outcome of a chemical situation. Codes will therefore be assessed as Problem Solving if they
require the student to critically think about data in order to generate a solution to a problem in
a real-world experimental context.
The Translate and Predict/Explain aspects are well defined. Codes that require the
student to make some prediction given a set of factors, or explain a result or prediction in
terms of factors, will be categorised as Predict/Explain, whereas codes related to generating
or considering an alternative representation to one that is given will be categorised as
Translate.
Criteria for Sorting Reactions
During the initial stages of coding the exam papers, every mechanistic question was treated as equal, differentiating only between questions involving single or multiple 'concepts'. As the coding developed, I moved away from looking at concepts and started deconstructing the way in which they are examined, through the processes that the questions require the student to execute. Once I started deconstructing the mechanistic questions, I realised that they belonged to a broader class of questions: reactions. Where I had previously seen a difference between asking a student to predict a product and asking them to predict the flow of electrons in the system, I now saw a relationship tying them together. They were two incomplete parts of a puzzle, and shared some common pieces.
When a reaction is broken down, it usually has only six aspects: starting material(s), product(s), reagent(s), condition(s), a name, and a mechanism. In order to assess a student's conceptual understanding of these reactions, combinations of these elements can be presented, often with some elements missing, and the student must exercise different modes of thinking to fill in the blanks, or to explain some aspect of the system.
By noting whether a question gives, partially gives, demands, or leaves out each of these six aspects, a complete description of the question can be assigned.
Of the 18 papers I looked at, 120 questions were coded as “Reaction”. Some of these
questions contained multiple different reactions (the multiplicity of the question), bringing the
total number of reactions to 204.
Each aspect of the reaction within the question was given a score according to the key in Table 1. The count of each score within each aspect is shown in Table 2.
0      Aspect is not present in the question and not demanded in the answer.
1      Aspect is fully present in the question (complete molecular structures, or unambiguous chemical formula (e.g. LiAlH4, H2O, etc.)).
1b     Aspect is partially present in the question (ambiguous chemical formula (e.g. C5H11O), nomenclature).
1b/2   Aspect is partially present and fully demanded in the answer.
2      Aspect is fully demanded from the student.
Table 1 – The key used when coding the reaction questions.
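As an illustration only, a single reaction question could be encoded against this key as follows; the Python representation, aspect names and function are hypothetical conveniences (the actual coding was recorded by hand), but the score values follow Table 1.

```python
# Illustrative sketch only: encode one reaction question against the Table 1 key.
REACTION_ASPECTS = ["starting_material", "product", "reagent",
                    "condition", "mechanism", "name"]
VALID_SCORES = {"0", "1", "1b", "1b/2", "2"}

def encode_reaction(**scores):
    missing = [a for a in REACTION_ASPECTS if a not in scores]
    invalid = {a: s for a, s in scores.items() if s not in VALID_SCORES}
    if missing or invalid:
        raise ValueError(f"missing aspects {missing}, invalid scores {invalid}")
    return {aspect: scores[aspect] for aspect in REACTION_ASPECTS}

# e.g. "draw the mechanism for the reaction of X with Y to give Z": starting
# material, product and reagent fully given; mechanism fully demanded.
example = encode_reaction(starting_material="1", product="1", reagent="1",
                          condition="0", mechanism="2", name="0")
```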
                       0    1    1b   1b/2   2
Starting Material(s)   0    169  8    6      21
Product(s)             1    97   0    23     83
Reagent(s)             8    91   35   0      70
Condition(s)           172  11   1    0      20
Mechanism              108  4    0    7      85
Name                   164  36   0    0      4
Table 2 – The count of each score given to each aspect in the sample of reaction questions (N=204).
From this, it can be seen that the starting material(s), product(s), and reagent(s) are most likely to be fully given in the question, whilst the condition(s), mechanism and name of the reaction are mostly absent from the questions.
Table 3 lists every unique combination of reaction aspects found in the 204 sample questions, along with their frequency and total marks assigned. The frequency was generated as the sum of the multiplicities for a given reaction code. This means that if a single question with a single mark has three reactions in it, all with the same code (a multiplicity of 3), this will be counted as 3 separate reactions. The total number of marks attributed to each code was generated by summing the marks for each occurrence of the code, ignoring the multiplicity of the question.
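A minimal sketch of this aggregation rule is given below; the field names and data structure are assumptions made for illustration, not the format actually used to record the coding.

```python
# Minimal sketch of the aggregation rule: frequency sums question multiplicities,
# marks are summed once per question (ignoring multiplicity). Field names assumed.
from collections import defaultdict

def aggregate(questions):
    frequency, total_marks = defaultdict(int), defaultdict(int)
    for q in questions:
        frequency[q["archetype"]] += q["multiplicity"]
        total_marks[q["archetype"]] += q["marks"]
    return frequency, total_marks

# A 1-mark question containing three reactions of the same archetype counts as
# 3 towards frequency but only 1 towards marks.
freq, marks = aggregate([{"archetype": "1 1 2 0 0 0", "multiplicity": 3, "marks": 1}])
print(freq["1 1 2 0 0 0"], marks["1 1 2 0 0 0"])  # 3 1
```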
Starting Material(s)  Product(s)  Reagent(s)  Condition(s)  Mechanism  Name  Frequency of occurrence  Marks
1 1 1 0 2 0 14 63
1 1b/2 1 0 2 0 5 50
1 2 1 0 2 0 11 49
1 1 2 0 0 0 32 42
1 1 2 0 2 0 7 42
1 1 1 0 2 1 7 39
2 2 2 2 2 1 4 37
1 2 1b 0 0 0 14 23
1 2 1 0 1b/2 0 7 23
1 2 1 0 0 0 14 22
1 1 1b 0 2 1 4 22
1 1 1 1 2 0 3 18
1 2 1 0 2 1 3 15
1 1 2 0 2 1 1 15
1 2 1 0 0 1 5 14
1 1 2 2 0 0 7 13
1 2 1 0 1 0 4 12
1 2 2 0 2 1 2 10
1 1 1b 1 2 0 2 9
1 2 1b 0 2 0 2 9
1b 2 2 2 0 0 4 8
1 2 0 0 2 1 2 8
1 1b/2 1 1 0 0 2 8
1 1b/2 1 1 2 0 2 8
1 1b/2 1 0 0 0 7 7
1 1b/2 1b 0 0 0 2 7
2 1 1 0 0 0 4 6
1b/2 1b/2 1b 0 0 0 2 6
1 2 1b 0 0 1 1 6
1 2 1b 0 2 1 1 6
1 1 0 0 0 2 1 6
1 1 1b 0 2 0 1 6
2 1 2 0 0 0 5 5
1 1 0 0 2 2 1 5
2 1 1b 0 0 0 2 4
1b/2 1b/2 1 0 0 0 2 4
1 1 1b 0 2 2 1 4
1b 1b/2 0 0 2 0 1 4
1 1 0 1b 2 0 1 4
2 2 2 0 2 0 1 4
1b 1 2 1 2 0 1 3
1 2 0 1 0 0 1 3
1b 2 1b 0 2 1 1 2
1b 2 1b 0 2 0 1 2
1 0 0 0 2 0 1 2
1b/2 1 2 0 0 0 1 2
1b/2 1 1 0 0 0 1 2
1 1 1b 0 0 2 1 1
Table 3 – Every unique combination of aspects within 204 reaction questions, sorted by their total marks
available.
The relationship between frequency and mark was plotted, shown in Figure 9. From
this graph, three distinct ‘zones’ arose. The green zone includes points with a total amount of
marks above 30, or a frequency of above 10. The blue zone includes points with a total
amount of marks between 11 and 30, or a frequency between 4 and 10. The third uncoloured
zone contains codes with a total amount of marks below 11 or a frequency below 4. These zones were assigned based on visual inspection of the graph alone.
These zones are useful when considering the importance placed on the codes by the
examiners; more frequently occurring codes, or codes that are awarded more marks are
considered to be important. Therefore codes in the green zone are more important than codes
in the blue zone, and codes in the uncoloured zone are considered least important.
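The zone assignment described above can be expressed as a simple rule; the sketch below is illustrative only, using the thresholds read off Figure 9, and the function name is mine.

```python
# Illustrative sketch of the zone assignment, using the thresholds stated above.
def assign_zone(total_marks, frequency):
    if total_marks > 30 or frequency > 10:
        return "green"        # most important archetypes
    if 11 <= total_marks <= 30 or 4 <= frequency <= 10:
        return "blue"         # moderately important archetypes
    return "uncoloured"       # least important archetypes

print(assign_zone(63, 14))    # green (e.g. the most common archetype in Table 3)
print(assign_zone(23, 7))     # blue
print(assign_zone(4, 1))      # uncoloured
```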
Figure 9 – A scatter graph showing the relationship between frequency of occurrence and the marks
awarded to each archetype.
Figure 10 shows how the reaction questions can be broken down. The two main
aspects of the reaction, starting material(s) and product(s), are connected by a reaction
mechanism. If either the starting material(s) or product(s) are missing from the reaction, the
student must use their understanding of the chemical systems to either synthesise or
retrosynthesise the correct molecules. Questions that provide a starting material and demand
a product can be thought of as occurring in the ‘forward’ direction, whereas questions that
demand a starting material that will be transformed into a given product can be considered as
occurring in the 'backward' direction. Reagents and conditions are used as supplementary information to produce the correct answers. The mechanism of the reaction only occurs in the forward
direction. Alternatively, the question may provide both starting material(s) and product(s),
and then require the student to consider the process of the reaction, usually to generate the
correct reagent or mechanism. These three types of questions are similar in that they test the student's understanding of the process of the reaction. Of the 204 reactions analysed in this
work, 96 (47.1%) were forward reactions, 13 (6.4%) were backwards reactions, and 85
(41.7%) gave both starting materials and products. As these test the understanding of the
process of the reaction, these were labelled “Reaction Process” questions.
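Using the Table 1 scores, the forward/backward/Reaction Process/Whole Reaction split described above can be sketched roughly as follows; the helper names, and the treatment of partially-given ('1b') and partially-demanded ('1b/2') scores, are my own illustrative assumptions rather than a formal part of the method.

```python
# Rough sketch of the directional classification described above.
# Scores follow the Table 1 key; treating "1"/"1b" as given and
# "2"/"1b/2" as demanded is an assumption made for illustration.
GIVEN, DEMANDED = {"1", "1b"}, {"2", "1b/2"}

def classify_reaction(r):
    sm_demanded = r["starting_material"] in DEMANDED
    prod_demanded = r["product"] in DEMANDED
    if sm_demanded and prod_demanded:
        return "Whole Reaction"   # student supplies both ends themselves
    if prod_demanded:
        return "forward"          # starting material given, product demanded
    if sm_demanded:
        return "backward"         # product given, starting material demanded
    return "both given"           # the process (reagent/mechanism) is probed

# forward, backward and both-given questions are all "Reaction Process" questions.
example = {"starting_material": "1", "product": "2", "reagent": "1",
           "condition": "0", "mechanism": "2", "name": "0"}
print(classify_reaction(example))  # forward
```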
[Figure 9 (scatter plot): Marks vs Frequency of Occurrence for Each Unique Combination of Reaction Aspects; x-axis – Frequency, y-axis – Marks; one point per unique archetype.]
The 10 remaining reactions required the student to generate both the starting
material(s) and product(s) from either the name of a reaction (e.g. Addition-Elimination) or a
reagent, and the student must generate their own examples of starting materials, products,
reagents and conditions. As these tested the student’s knowledge of the whole of the reaction,
these were labelled “Whole Reaction” questions.
As there is no current research into how reactions test conceptual understanding,
differentiating amongst reactions which test conceptual understanding and those that don’t
proved difficult. Ultimately, I decided upon the following criteria (a code sketch follows the list):
- Reactions that involve a transformation, in either the forward or backward direction, test the conceptual understanding of how the functional group(s) behave in the novel chemical system.
- Any reaction that demands a mechanism from the student tests conceptual understanding of how the electrons in the system behave.
- Therefore, the only reaction questions that do not test conceptual understanding are those which ask for reagents and/or conditions only. These test the rote memory of the student.
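A rough sketch of these criteria in code is given below; it is illustrative only and reuses the same score-handling assumptions as the earlier sketches.

```python
# Rough sketch of the conceptual-understanding criteria above.
# A reaction counts as testing conceptual understanding if it involves a
# transformation (a demanded starting material or product) or demands a
# mechanism; reactions asking only for reagents/conditions test rote memory.
DEMANDED = {"2", "1b/2"}

def reaction_tests_cu(r):
    transformation = (r["starting_material"] in DEMANDED or r["product"] in DEMANDED)
    return transformation or r["mechanism"] in DEMANDED

forward = {"starting_material": "1", "product": "2", "reagent": "1",
           "condition": "0", "mechanism": "0", "name": "0"}
reagent_recall = {"starting_material": "1", "product": "1", "reagent": "2",
                  "condition": "0", "mechanism": "0", "name": "0"}
print(reaction_tests_cu(forward), reaction_tests_cu(reagent_recall))  # True False
```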
[Figure 10 (schematic): Starting Material and Products connected by Forward / Backward arrows and a Mechanism; the Reaction Process and Whole Reaction question types are marked on the diagram.]
Figure 10 – Diagram showing the aspects of a reaction question.
165 (80.9%) of the reaction codes were classified as testing conceptual understanding,
whereas 39 (19.1%) of the reaction codes were classified as not testing conceptual
understanding. These results are expanded upon in the section “Individual Module
Results”.
Section 3 – Results and Discussion
Generated Codes
50 unique codes were generated over 17 full exam papers, shown in Table 4. They
were sorted into categories according to the criteria outlined in Section 2 - Research
Methodology, ‘Criteria for Sorting Codes’. Where necessary, a description of the code is
shown, along with the justification for the particular category assignment.
Category: Rote Memory
Justification: These codes all ask the student to generate information that is impossible to generate from first principles, and therefore must be recalled from memory.
- Assign Bond Angles: Label a figure of a molecule with its bond angles.
- Assign Functional Group: Name a molecule's functional group.
- Assign Hybridisation: Determine the hybridisation state of given atoms within a molecule.
- Assign Lone Pairs: Label a molecule with the position of any lone pairs of electrons.
- Assign Orbitals: Label a molecule with atomic/molecular orbitals relevant to the question or reactivity.
- Assign Reagents: Match reagents to their reactions.
Justification: The information requested cannot be generated in the exam, and therefore must be learned beforehand.
- Describe Problem With Reaction: Describe the real-world problems with performing a given reaction.
- Describe Reaction: Describe the 'details' of a given reaction.
Justification: These codes assess the depth of a student's organic chemistry awareness, which the student must recall through rote memory during the test.
- Provide Examples: Generate examples that fulfil criteria.
- Provide Reaction: Generate a reaction that fulfils criteria.
- Recall Definition: N/A

Category: Algorithmic Problem Solving
Justification: The cognitive processes demanded by these codes require the student to apply a process to a group or individual case, determining a description of the state(s) of the molecule(s).
- Assign Electron Configuration: Write the electron configuration of an atom within a molecule.
- Assign Polarity: Describe the polarity of a molecule.
- Assign Property: Assign a molecule a descriptor as requested in the question, describing a general property of that molecule (e.g. Coloured).
- Assign Rate Determining Step: State which of a given set of steps determines the overall rate of a reaction.
- Assign Stereochemistry: Assign a stereochemical descriptor to a molecule.
- Assign Term: Assign a molecule a given descriptor as requested in the question, not limited to a property of the molecule (e.g. Aldehyde, Ketone).
- Assign Value: Assign a molecule a value corresponding to a property of that molecule (e.g. pKa).
Justification: Determining these answers requires the student to follow an algorithmic process.
- Derive Rate Equation: Formulate the equation used to determine the rate of a given reaction.
- Determine Limiting Reagent: N/A
- Determine Stability: Assess how stable a molecule is, often in comparison to other similar molecules.
- Formulate Equilibrium Expression: Formulate the equation used to describe the equilibrium of a reversible reaction.
Justification: IUPAC naming is an algorithmic process.
- Nomenclature: Assign standard IUPAC names to molecules.
Justification: Answers are generated through applying information in the question to an algorithm (usually an equation).
- Numeric Problem: N/A

Category: Translate
Justification: The student is translating their knowledge of the movement of individual electrons into a general 'smeared out' picture.
- Describe Electron Distribution: Summarise the distribution of electrons in a molecule.
Justification: Drawing the required information requires the student to either manipulate a three-dimensional image in their mind, or to generate an alternate representation from a given one.
- Draw Diagram: Draw an unspecified diagram.
- Draw Intermediate: Draw a reaction intermediate.
- Draw Isomers: Draw isomeric pairs (unspecified isomerism).
- Draw Newman Projection: N/A
- Draw Orbital Interaction: Draw how given orbitals will interact during a reaction.
- Draw Resonance Diagrams: N/A
- Draw Stereoisomers: Draw isomeric pairs (specified stereoisomerism).
- Draw Structure From Descriptor: Draw a molecule from a given descriptor (e.g. Nucleophile, Carboxylic Acid).
- Draw Transition State: Draw a transition state of a molecular system during a given reaction.
Justification: The student must translate a poorly drawn molecule into a well-drawn one.
- Redraw Structure: Redraw a presented molecule in a more correct manner.

Category: Predict/Explain
Justification: The student is asked to fully explain a concept that is related to the behaviour of a chemical system.
- Explain Concept: N/A
Justification: These codes require the student to predict and/or explain an aspect of a chemical system.
- Explain Observation: N/A
- Explain Order: Explain the ordering of a list of molecules, sorted according to a property (e.g. Reactivity, pKa).
- Explain Property: Explain the reasons behind a given property of a molecule (e.g. high pKa).
- Explain Reaction Condition: Explain the reasoning behind a given reaction condition (e.g. performed at 0°C).
- Explain Reaction Outcome: N/A
- Explain Regiochemistry: Explain why a product in a given reaction possesses the regiochemistry that it does.
- Explain Stereochemistry: Explain why a product in a given reaction possesses the stereochemistry that it does.
- Explain Variable Interaction: Explain how two variables will interact in a system (e.g. 'Temperature' and 'Rate of Reaction').
- Predict Reaction Outcome: N/A
- Predict Stereochemistry: N/A
Justification: This code requires a prediction of how variables in a chemical system will interact to affect the overall property that is being sorted against.
- Sort Molecules According To Property: Sort a list of molecules according to a given property (e.g. pKa).

Category: Problem Solving
Justification: These codes directly ask for the solutions to a presented real-world problem.
- Provide Conditions To Favour Pathway: Provide a set of conditions to favour a specific reaction pathway (e.g. stereochemical outcome).
- Solve Problem: Generate a solution to the given problem with a given reaction.

Category: Reaction
- Reaction: N/A
- CU Reaction: A reaction that assesses conceptual understanding.
- Non-CU Reaction: A reaction that assesses rote memory.

Table 4 – The 51 codes generated from the grounded theory coding of 17 first year organic chemistry papers, with a brief descriptor and justification of category where necessary.
Table 5 shows the frequency of each code through the entire set of exam papers looked at in this work. 23 of the codes occur between 1 and 5 times, together making up 4.9% of the 754 codes. This could mean these codes are too specific and could be condensed through combination with other codes, or alternatively the questions these codes belong to could have received negative feedback, prompting the exam authors to leave such questions out of the following paper.
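A minimal sketch of how such low-frequency codes might be flagged for review is shown below; the dictionary is a small excerpt of Table 5 and the threshold of five occurrences mirrors the range quoted above.

```python
# Minimal sketch: flag codes whose total occurrence count falls in a low range,
# marking them as candidates for merging with other codes.
def rare_codes(occurrences, max_count=5):
    return sorted(code for code, n in occurrences.items() if 1 <= n <= max_count)

tallies = {"Draw Orbital Interaction": 1, "Recall Definition": 3,
           "Explain Order": 5, "Nomenclature": 53, "CU Reaction": 165}
print(rare_codes(tallies))
# ['Draw Orbital Interaction', 'Explain Order', 'Recall Definition']
```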
Code Occurrences
Draw Orbital Interaction 1
Provide Reaction 1
Determine Limiting Reagent 1
Draw Intermediate 1
Assign Reagents 1
Assign Rate Determining Step 1
Describe Problem With Reaction 1
Explain Reaction Condition 1
Predict Stereochemistry 1
Explain Reaction Outcome 1
Describe Reaction 1
Explain Regiochemistry 1
Derive Rate Equation 1
Explain Stereochemistry 1
Provide Conditions To Favour Pathway 2
Numeric Problem 2
Draw Stereoisomers 2
Predict Reaction Outcome 2
Formulate Equilibrium Expression 2
Assign Value 2
Recall Definition 3
Describe Electron Distribution 4
Determine Stability 4
Explain Order 5
Assign Electron Configuration 6
Assign Property 6
Solve Problem 6
Draw Newman Projection 7
Explain Variable Interaction 8
Draw Transition State 8
Assign Functional Group 8
Draw Structure From Descriptor 9
Explain Property 10
Assign Bond Angles 13
Assign Polarity 13
Assign Orbitals 14
Draw Resonance Diagrams 16
Assign Lone Pairs 17
Provide Examples 22
Explain Concept 22
Draw Isomers 22
Sort Molecules According To Property 22
Draw Diagram 25
Explain Observation 31
Redraw Structure 37
Assign Hybridisation 39
Assign Term 42
Nomenclature 53
Assign Stereochemistry 59
CU Reaction 165
Non-CU Reaction 39
Grand Total 761
Table 5 – The total occurrences of each code within all 17 exam papers.
Figure 11 shows how the codes are distributed once sorted into their aspects of
conceptual understanding, rote memory, algorithmic problem solving, or reaction.
Figure 11 – A pie chart showing the distribution of codes within the aspects of conceptual understanding
for the 17 papers looked at in this work.
[Figure 11 (pie chart): Number of Codes Assigned in Each Conceptual Understanding Aspect in 17 1st Year Organic Chemistry Papers – Rote Memory: 120 (16%); Algorithmic: 192 (25%); Translate: 132 (17%); Predict/Explain: 105 (14%); Problem Solving: 8 (1%); CU Reaction: 165 (22%); Non-CU Reaction: 39 (5%).]
Individual Module Results
F11MSB
‘Molecular Structure and Bonding’ was a module that aimed to teach the basics of
structure and bonding in both general chemistry and organic chemistry. As this work is only
concerned with organic material, certain questions in these papers are irrelevant and will not
be included in the analysis.
Seven papers from 2007 - 2013 were available from the past paper archive, with a
total of 87 marked organic questions. Questions 7 – 10 in section A along with all of section
C are concerned with general structure and bonding, and thus will not be analysed.
Generated Codes
23 different codes were generated, shown in Table 6 below in order of frequency. For
a description of each code see Table 4.
Rote Memory
Assign Orbitals
Recall Definition
Provide Examples
Assign Lone Pairs
Assign Hybridisation
Algorithmic
Numeric Problem
Assign Value
Determine Stability
Assign Term
Nomenclature
Assign Stereochemistry
Translate
Draw Resonance Diagrams
Describe Electron Distribution
Draw Isomers
Draw Diagram
Redraw Structure
Predict/Explain
Predict Reaction Outcome
Explain Order
Explain Property
Sort Molecules According To Property
Explain Concept
Problem Solving Solve Problem
Reaction Reaction
Table 6 – The codes generated during the analysis of seven F11MSB past exam papers, as organised into
categories, and then ordered by frequency of occurrence.
223 codes were assigned in
total within the 7 papers, averaging ≈32
codes per paper. 83 marks in total were
available for each paper (only
including the questions looked at in this
work). Therefore the average amount
of marks per code is ≈2.6.
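The per-module arithmetic used here (and for the modules that follow) is simply a pair of averages; a minimal sketch, with illustrative names and rounding, is given below.

```python
# Minimal sketch of the per-module summary arithmetic: average codes per paper
# and average marks per code.
def module_summary(total_codes, n_papers, total_marks):
    return round(total_codes / n_papers), round(total_marks / total_codes, 1)

# F11MSB: 223 codes over 7 papers, with 83 organic marks per paper (7 x 83 = 581).
print(module_summary(total_codes=223, n_papers=7, total_marks=7 * 83))  # (32, 2.6)
```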
The total amount of codes
generated per paper can be seen in
Figure 12. The steady rise in the
number of codes doesn’t indicate an
increase in cognitive demand required
from the students, as only the
‘Algorithmic’ category is increasing in
size. Figure 13 shows how the variety
(how many of the 23 generated codes
were assigned to each paper) generally
decreases over time as the ‘Algorithmic’ category becomes dominant.
The frequency of each code was plotted
for the whole module (see Figure 14). The most
frequent code was ‘Assign Stereochemistry’,
with 52 counts (23% of the codes), whilst the
least frequent was 'Predict Reaction Outcome’
with only 1 count.
The abundance of ‘Assign
Stereochemistry’ is due to the fact that the
questions on the subject of stereochemistry
always have a multiplicity of at least two
(matching up to the stereochemical pair), and
that there are many types of stereochemistry
presented in the module lectures (Cis/Trans-isomerism, Enantiomers, Diastereoisomers).
[Figure 13 (bar chart): Number of Unique Codes Generated Upon Analysis of Seven F11MSB Past Exam Papers (2007–2013); x-axis – Year of Paper; y-axis – Number of unique codes present in paper.]
Figure 13 - The number of unique codes generated upon analysis of each F11MSB exam paper.
[Figure 12 (stacked bar chart): Total Number of Codes Assigned to Each Paper in Each Aspect of Conceptual Understanding in F11MSB (2007–2013); x-axis – Year of Paper; y-axis – Number of codes assigned; series – Rote Memory, Algorithm, Translate, Predict/Explain, Problem Solving, CU Reaction.]
Figure 12 - The total number of codes generated under each aspect during the analysis of seven F11MSB past exam papers.
Figure 14 – The frequency that each code was generated during the analysis of seven F11MSB past exam
papers.
Figure 15 shows the size of each category of code in the module once the generated
codes were sorted into groups.
58% of the module (130 codes) was composed of archetypes that were either ‘Rote
Memory’ or ‘Algorithmic’ associated. This is a larger amount than any of the interviewed
lecturers suggested, although this figure is an average across all the years the module ran, and
also ignores the non-organic parts of the paper. As ‘Depth’ is defined as reasoning about
core ideas using skills other than rote memory or algorithmic problem solving, 42% of the
module can be considered to test ‘Depth’.
[Figure 14 (bar chart): Total Occurrences of Codes in F11MSB (2007–2013); one bar per code, showing total occurrences across the seven papers.]
‘Translate’ was the biggest
group out of the five conceptual
understanding aspects, which
indicates it is easy to examine, and
possibly that it was considered the
most important aspect for the
young organic chemist. Indeed, the
ability to fluently move between
representations is crucial for an
area of chemistry that is so
concerned with 3D structures.
There was a relatively small number of 'Reaction' codes, making up only 8% of the data set, but all reactions present were found to relate to conceptual understanding. The papers are therefore more focused on the individual concepts that will go on to construct the set of tools the organic student will use in the future, rather than on treating organic reactions as a complex interplay of many concepts.
Frequency of Each Aspect (2007-2013)
Figure 12 also shows how the frequency of the total number of codes in each 'aspect'
changed throughout the timeline of the module. For a clearer representation of these results,
please see ‘Appendix 2 – Figures’.
The ‘Rote Memory’, ‘Algorithmic’, ‘Translate’ and ‘Problem Solving’ aspects
generally increase in frequency over time, whilst the ‘Predict/Explain’ aspect generally
decreases.
[Figure 15 (pie chart): Counts of Code in F11MSB Grouped by Conceptual Understanding Aspects (2007–2013) – Rote Memory: 45 (20%); Algorithmic: 85 (38%); Translate: 43 (19%); Predict/Explain: 30 (14%); Problem Solving: 3 (1%); CU Reaction: 17 (8%).]
Figure 15 – A pie chart showing the size of each category of code assigned during the analysis of seven F11MSB past exam papers.
F11MSP
‘Mechanism, Synthesis and Pi-Bond Chemistry’ was a module that aimed to continue
building up students’ knowledge of fundamental organic reactions, with a focus on
mechanisms. All questions on each paper are relevant to this work.
Four papers from 2007 – 2010 were available from the past paper archive, with a total
of 90 marked questions.
Generated Codes
30 codes were generated, shown in Table 7 below, sorted into their groups and in order
of frequency.
Rote Memory
Assign Reagents
Describe Reaction
Describe Problem With Reaction
Provide Examples
Provide Reaction
Assign Orbitals
Assign Bond Angles
Assign Hybridisation
Algorithmic
Assign Rate Determining Step
Assign Stereochemistry
Nomenclature
Translate
Draw Orbital Interaction
Describe Electron Distribution
Draw Newman Projection
Redraw Structure
Draw Transition State
Draw Resonance Diagrams
Draw Diagram
Predict/Explain
Explain Reaction Condition
Explain Regiochemistry
Predict Stereochemistry
Explain Reaction Outcome
Explain Order
Explain Variable Interaction
Explain Property
Explain Concept
Sort Molecules According To Property
Explain Observation
Problem Solving Provide Conditions To Favour Pathway
Reaction
CU Reaction
Non-CU Reaction
Table 7 - The codes generated during analysis of four F11MSP past exam papers, organized into
categories and sorted by frequency of occurrence.
In comparison to F11MSB, a larger variety of codes was generated, indicating that these papers are more varied. As more codes have been
generated that were categorised under
‘Translate’ and ‘Predict/Explain’, it can
be concluded that the F11MSP papers are
inherently better at testing conceptual
understanding due to an increase in the
number of ways the student must
demonstrate it.
There is the same number of codes in the combined Rote Memory and Algorithmic categories as in F11MSB, showing that there is still a certain amount of 'you just have to know it' content, although now the Rote Memory category is larger.
A total of 208 codes were assigned
within the 4 papers, averaging 52 codes per
paper. The total marks for each paper was 120.
Therefore the average amount of marks per
code is ≈2.3. This is approximately the same as
F11MSB. In conclusion to this rudimentary
analysis, F11MSP uses a wider variety of
methods to test conceptual understanding, but
places equal worth on each method.
Figure 16 shows how many codes were assigned to each F11MSP paper. The general decrease in the quantity of codes that were assigned shows how the cognitive load on the student decreases, mostly due to the reduction of Translate and Predict/Explain codes.
[Figure 16 (stacked bar chart): Number of Codes Assigned to Each Paper in Each Aspect of Conceptual Understanding in F11MSP (2007–2010); x-axis – Year of Paper; y-axis – Number of codes assigned; series – Rote Memory, Algorithm, Non-CU Reaction, Translate, Predict/Explain, Problem Solving, CU Reaction.]
Figure 16 - The total amount of codes assigned during analysis of four F11MSP past papers, sorted into their categories.
[Figure 17 (bar chart): Number of Unique Codes in Each Paper in F11MSP (2007–2010); x-axis – Year of Paper; y-axis – Number of unique codes used.]
Figure 17 – The number of unique codes assigned during analysis of each of the four F11MSP past exam papers.
Figure 17 shows the variety of codes present within the paper, plotted over the life
span of the module. An almost 50% reduction in the variety of codes occurs, from 19
different codes in 2007 – 2008, to only 10 different codes in 2010 – 2011. This is additional
evidence that the module becomes less effective at assessing conceptual understanding over
time.
The frequency of each code was plotted for the whole module (see Figure 19). The
most frequent code was ‘CU Reaction’, with 82 counts (39% of the codes). This reinforces
the claims of the module description that this module is focused on mechanistic thinking.
Twelve codes appear only once over the module. The large number of single occurrence
codes can be traced to the low number of papers (N=4), as there is less time for question
archetypes to be recycled, but could also indicate that the generated codes were too specific
and need generalising.
Figure 18 shows the size of
each group once the codes are
collapsed into their relevant aspects.
30% of the module (62 codes) was
composed of codes that were either
‘Rote Memory’, ‘Algorithmic’ or
‘Non-CU Reaction’ associated. This
is a smaller amount than Lecturers 2
and 3 suggested. As ‘Depth’ is
defined as reasoning about core ideas
using skills other than rote memory or
algorithmic problem solving, 70% of
the module can be considered to test
‘Depth’. For the non-reaction type
questions, the most frequently examined aspect of conceptual understanding was
‘Predict/Explain’ (39 codes, 19%), in contrast to F11MSB where the most examined aspect
was ‘Translate’. This also reinforces the claims of the module (in regards to a focus on
mechanisms), as the ability to correctly predict the evolution of a chemical system is arguably
the most important part of organic chemistry reactions, and the main way to accomplish
accurate predictions is through the mechanisms.
Figure 18 - A pie chart showing the size of each category of code generated during the analysis of four F11MSP past exam papers.
[Pie chart data: Counts of Code in F11MSP Grouped by Conceptual Understanding Aspects (2007–2010) – Rote Memory: 20 (10%); Algorithmic: 23 (11%); Translate: 23 (11%); Predict/Explain: 39 (19%); Problem Solving: 2 (1%); CU Reaction: 82 (39%); Non-CU Reaction: 19 (9%).]
Approximately 19% of the reactions present within the papers were classified as reactions that do not assess an aspect of conceptual understanding, as students are asked to recall reagents without being asked to demonstrate any mechanistic thinking.
The main way the Predict/Explain aspect was assessed was through asking the
student to explain observations.
Figure 19 – The frequency of each code generated upon analysis of four F11MSP past exam papers.
Frequency of Each Aspect (2007-2010)
Figure 16 also shows how the size of each aspect changes over time. For a clearer
representation of this data, please see ‘Appendix 2 – Figures’. A general decrease is observed
in all categories.
[Figure 19 (bar chart): Total Occurrences of Codes in F11MSP (2007–2010); one bar per code, showing total occurrences across the four papers.]
F11OMC
‘Reactivity of Organic Molecules and Coordination Chemistry’ was a module that
aimed primarily to give an introduction to the fundamental synthetic transformations
involving oxidation, reduction and carbon-carbon bond formation in organic synthesis, along
with several transition metal topics, which won’t be covered in this work.
Three papers from 2007 – 2009 were available from the past paper archive, with a
total of 57 marked organic questions over 2 sections (A/B). The 2007 – 2008 paper was
corrupted past question 1 in section B. The surviving questions will still be analysed, but any comment on the differences between the papers will be restricted to the latter two papers.
Generated Codes
24 different codes were generated, shown in Table 8 below in order of frequency. For
a description of each code see Table 4.
Rote Memory
Assign Bond Angles
Assign Functional Group
Assign Hybridization
Assign Lone Pairs
Assign Orbitals
Algorithmic
Assign Electron Configuration
Assign Polarity
Assign Property
Assign Term
Derive Rate Equation
Translate
Draw Diagram
Draw Intermediate
Draw Isomers
Draw Newman Projection
Draw Resonance Diagrams
Draw Transition State
Predict/Explain
Explain Observation
Explain Property
Explain Stereochemistry
Explain Variable Interaction
Predict Reaction Outcome
Sort Molecules According To Property
Problem Solving Solve Problem
Reaction Reaction
Table 8 - The codes generated during analysis of three F11OMC past exam papers, as organized into categories and sorted by frequency of occurrence.
In comparison to the other papers,
the amount of unique codes generated
closely resembles F11MSB, though the
Translate and Predict/Explain aspects
are expanded upon, whilst the
‘Algorithmic’ category is reduced in size.
153 codes were assigned in total.
For the two non-corrupted papers, the total
number of codes assigned was 124,
averaging 62 codes per paper. 205 marks
were assigned over the two papers,
averaging ≈3.3 marks per code. This is
more than both F11MSP and F11MSB.
Therefore, F11OMC uses a smaller variety
of methods to test conceptual
understanding, but places a higher worth
on each method.
Figure 20 shows the total number
of codes assigned to each paper. The
low amount of codes in the year 2007 –
2008 is due to the corruption of the
document. The decrease in number of
codes between 2008 and 2009 is due
mainly to the reduction of Rote
Memory codes, though the
Predict/Explain and Problem Solving
categories also decrease in size.
However, the Translate category
increases in size to compensate. Figure 21
shows the increase in the variety of
codes used, with 3 additional unique
codes generated during analysis of the
2009 – 2010 paper. Hence, whilst the cognitive load on the student decreases and fewer aspects of conceptual understanding are assessed, the paper becomes more varied.
[Figure 20 (stacked bar chart): Total Number of Codes Assigned to Each Paper in Each Aspect of Conceptual Understanding in F11OMC (2007–2009); x-axis – Year of Paper; y-axis – Number of codes assigned; series – Rote Memory, Algorithmic, Translate, Predict/Explain, Problem Solving, CU Reaction.]
Figure 20 - The total number of codes generated under each aspect during analysis of three F11OMC past papers.
[Figure 21 (bar chart): Number of Unique Codes Generated Upon Analysis of Three F11OMC Past Exam Papers (2007–2009); x-axis – Year of Paper; y-axis – Number of unique codes present in paper.]
Figure 21 - The number of unique codes generated upon analysis of each F11OMC past exam paper.
The frequency of each code was plotted for the whole module (see Figure 22). The most frequently used code was 'Reaction' with 26 counts, whilst six of the codes (Assign Lone Pairs, Derive Rate Equation, Draw Intermediate, Explain Stereochemistry, Predict Reaction Outcome and Solve Problem) were used only once.
Figure 22 – The frequency that each code was generated during the analysis of three F11OMC past exam
papers.
[Figure 22 (bar chart): Total Occurrences of Codes in F11OMC (2007–2009); one bar per code, showing total occurrences across the three papers.]
Figure 24 shows the size of each aspect once the codes are collapsed from all three
papers. Figure 23 shows the size of each group if the corrupted paper is ignored.
49% of the codes generated were Rote Memory/Algorithmic associated, meaning
51% of the module tests the ‘Depth’ aspect. Translate was the most extensively assessed
aspect of conceptual understanding, whilst Problem Solving was the least extensively
assessed.
All ‘Reaction’ codes that were generated were later determined to assess conceptual
understanding.
Frequency of each aspect (2007-2009)
Figure 20 also shows how the size of each aspect changes each year. For a clearer
representation of this data, please see ‘Appendix 2 – Figures’.
Whilst the ‘Translate’ and ‘Reaction’ codes increase in size, the ‘Rote Memory’,
‘Problem Solving’ and ‘Predict/Explain’ codes decrease in frequency.
[Figure 24 (pie chart): Counts of Code in F11OMC Grouped by Conceptual Understanding Aspects (2007–2009) – Rote Memory: 29 (19%); Algorithmic: 47 (31%); Translate: 32 (21%); Predict/Explain: 18 (12%); Problem Solving: 1 (0%); Reaction: 26 (17%).]
Figure 24 – A pie chart showing the size of each category of code assigned during the analysis of three F11OMC past exam papers.
[Figure 23 (pie chart): Counts of Code in F11OMC Grouped by Conceptual Understanding Aspects (2008/2009) – Rote Memory: 25 (20%); Algorithmic: 36 (29%); Translate: 28 (23%); Predict/Explain: 11 (9%); Problem Solving: 1 (1%); Reaction: 23 (18%).]
Figure 23 – A pie chart showing the size of each category of code assigned during the analysis of the two uncorrupted F11OMC past exam papers.
F11OSS
No description of 'Organic Structure and Stereochemistry' was available from the
module catalogue.
Two papers were available, from the years 2007 – 2008 and 2009 – 2010.
Generated Codes
16 codes were generated, shown in Table 9. For a description of each code see Table 4.
Rote Memory
Assign Hybridisation
Assign Orbitals
Algorithmic
Formulate Equilibrium Expression
Assign Stereochemistry
Assign Term
Nomenclature
Translate
Draw Diagram
Draw Isomers
Draw Newman Projection
Draw Resonance Diagrams
Predict/Explain
Explain Concept
Explain Observation
Sort Molecules According To Property
Problem Solving Solve Problem
Reaction
CU Reaction
Non-CU Reaction
Table 9 – The codes generated upon analysis of two F11OSS past papers, organised into categories and
sorted according to frequency of occurrence.
94 codes in total were assigned to the
two papers, for an average of 47 codes per
paper. 250 marks were available over the
two papers, meaning each code was worth
≈2.7 marks, approximately the same as in
F11MSB, and less than in F11OMC.
Figure 25 shows the relative
abundance of each category of code over the
two papers. The variety of the papers was approximately equal, with 15 and 14 of the 16 generated codes used in each paper respectively.
The frequency of each code was plotted for
the whole module (see Figure 26).
Figure 25 – The total number of codes generated under each aspect during analysis of two F11OSS past exam papers.
[Stacked bar chart: Total Number of Codes Assigned to Each Paper in Each Aspect of Conceptual Understanding in F11OSS (2007/2009); x-axis – Year of Paper; y-axis – Number of codes assigned; series – Rote Memory, Algorithmic, Non-CU Reaction, Translate, Predict/Explain, Problem Solving, CU Reaction.]
‘CU Reaction’ was the most frequently generated code, with 22 occurrences, whilst
six codes were only generated twice. This is unsurprising given the small number of past
papers available.
Figure 26 – The frequency of each generated code during the analysis of two F11OSS exam papers.
Figure 27 shows the size of each
category of code in the module once the
generated codes were sorted into groups.
38% of the codes that were
generated were classified as ‘Rote
Memory’, ‘Algorithmic’, or ‘Non-CU
Reaction’. As ‘Depth’ is defined as
reasoning about core ideas using skills
other than rote memory or algorithmic
problem solving, 62% of the module can
be considered to test ‘Depth’.
‘Translate’ was the largest group
out of the five aspects of conceptual
understanding, making up 27% of the codes.
[Figure 26 (bar chart): Total Occurrences of Codes in F11OSS (2007/2009); one bar per code, showing total occurrences across the two papers.]
[Figure 27 (pie chart): Counts of Code in F11OSS Grouped by Conceptual Understanding Aspect (2007/2009) – Rote Memory: 4 (4%); Algorithmic: 24 (26%); Translate: 25 (27%); Predict/Explain: 9 (10%); Problem Solving: 2 (2%); CU Reaction: 22 (23%); Non-CU Reaction: 8 (8%).]
Figure 27 – A pie chart showing the size of each category of code generated during the analysis of two F11OSS past exam papers.
Approximately 27% of reaction codes were classified as not assessing any aspect of
conceptual understanding.
Frequency of Each Aspect (2007/2009)
Table 10 shows the size of each category of code that was generated upon
analysis of each paper from F11OSS. The ‘Algorithmic’, ‘Translate’ and ‘Non-CU Reaction’
categories decrease in size, whilst the ‘Predict/Explain’ category increases.
              Rote Memory  Algorithmic  Translate  Predict/Explain  Problem Solving  CU Reaction  Non-CU Reaction
2007 - 2008        2            13          13            4                1              11             5
2009 - 2010        2            11          12            5                1              11             3
Table 10 – The size of each category of codes generated during analysis of two F11OSS past exam papers.
F11SOS
19 codes were generated, shown in Table 11.
Rote Memory
Assign Lone Pairs
Assign Bond Angles
Assign Hybridisation
Assign Orbitals
Provide Examples
Algorithmic
Assign Stereochemistry
Determine Limiting Reagent
Nomenclature
Translate
Describe Electron Distribution
Draw Resonance Diagrams
Draw Transition State
Redraw Structure
Draw Stereoisomers
Draw Diagram
Predict/Explain
Explain Concept
Sort Molecules According To
Property
Explain Observation
Explain Variable Interaction
Reaction
CU Reaction
Non-CU Reaction
Table 11 – The 19 codes generated through analysis of 24 F11SOS questions over two years.
84 codes in total were generated
during analysis of both F11SOS past
papers, for an average of 42 codes per
paper. 141 marks were available across
both papers, meaning each code was
worth ≈1.7 marks. This is the lowest of
any of the modules analysed in this work.
Figure 28 shows the total number of codes generated during analysis of both past papers. More codes were generated during analysis of the 2010 – 2011 paper, showing an increase in the cognitive demand placed on the student. Furthermore, this increased cognitive demand is focused on the 'Predict/Explain' aspect of conceptual understanding, and 'Algorithmic' codes are replaced by 'Rote Memory' or 'Non-CU Reaction' codes.
The variety of codes within the
papers also increases, from 11 codes
to 14 codes. Hence, the second paper
requires more conceptual
understanding, and is more varied than
the first.
Figure 30 shows how frequently
each code was generated during
analysis of the two F11SOS exam
papers. The most abundant code was
‘CU Reaction’, whilst 9 of the codes
were only generated once.
Figure 29 shows the size of each
category of code that was generated.
56% of the codes were classified as either ‘Rote Memory’, ‘Algorithmic’ or ‘Non-CU
Reaction’. Therefore, 44% of the paper can be considered to assess ‘Depth’. Of the aspects of
[Figure 28 (stacked bar chart): Total Number of Codes Assigned to Each Paper in Each Aspect of Conceptual Understanding in F11SOS (2009–2010); x-axis – Year of Paper; y-axis – Number of codes generated; series – Rote Memory, Algorithmic, Non-CU Reaction, Translate, Predict/Explain, CU Reaction.]
[Figure 29 (pie chart): Counts of Code in F11SOS Grouped by Conceptual Understanding Aspect (2009–2010) – Rote Memory: 22 (26%); Algorithmic: 13 (16%); Translate: 9 (11%); Predict/Explain: 10 (12%); CU Reaction: 18 (21%); Non-CU Reaction: 12 (14%).]
Figure 28 – The total number of codes generated under each
aspect during the analysis of two F11SOS past exam papers.
Figure 29 - A pie chart showing the size of each category of code assigned during analysis of two F11SOS past exam papers.
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version
Project Final Version

More Related Content

Viewers also liked

Marcador digestivo
Marcador digestivoMarcador digestivo
Marcador digestivo
zeta30
 
Carolinas Core Lab BCI Timeline - Rev 4 12-08-14
Carolinas Core Lab BCI Timeline - Rev 4  12-08-14Carolinas Core Lab BCI Timeline - Rev 4  12-08-14
Carolinas Core Lab BCI Timeline - Rev 4 12-08-14
Bin Kallan
 
[Done] star theory
[Done] star theory[Done] star theory
[Done] star theory
vren88
 

Viewers also liked (17)

玩具小貨卡分享
玩具小貨卡分享玩具小貨卡分享
玩具小貨卡分享
 
Marcador digestivo
Marcador digestivoMarcador digestivo
Marcador digestivo
 
Maternal II A 4º BIM professora Cássia
Maternal II A 4º BIM professora CássiaMaternal II A 4º BIM professora Cássia
Maternal II A 4º BIM professora Cássia
 
Search & Earned Media | Brightedge Share 2013 Conference
Search & Earned Media | Brightedge Share 2013 ConferenceSearch & Earned Media | Brightedge Share 2013 Conference
Search & Earned Media | Brightedge Share 2013 Conference
 
Pesquisa de Imagem e Influência Digital
Pesquisa de Imagem e Influência DigitalPesquisa de Imagem e Influência Digital
Pesquisa de Imagem e Influência Digital
 
Medialogue no Social Analytics Summit 2014
Medialogue no Social Analytics Summit 2014Medialogue no Social Analytics Summit 2014
Medialogue no Social Analytics Summit 2014
 
Kperry6
Kperry6Kperry6
Kperry6
 
Folleto adultos y profesionales 2015
Folleto adultos y profesionales 2015Folleto adultos y profesionales 2015
Folleto adultos y profesionales 2015
 
Estufas practil
Estufas practilEstufas practil
Estufas practil
 
RANURAS DE EXPANSIÓN...HERNAN
RANURAS DE EXPANSIÓN...HERNANRANURAS DE EXPANSIÓN...HERNAN
RANURAS DE EXPANSIÓN...HERNAN
 
Country paper
Country paperCountry paper
Country paper
 
Carolinas Core Lab BCI Timeline - Rev 4 12-08-14
Carolinas Core Lab BCI Timeline - Rev 4  12-08-14Carolinas Core Lab BCI Timeline - Rev 4  12-08-14
Carolinas Core Lab BCI Timeline - Rev 4 12-08-14
 
Cómo hacer publicidad en Facebook
Cómo hacer publicidad en FacebookCómo hacer publicidad en Facebook
Cómo hacer publicidad en Facebook
 
Pon Fesr 12810 "Realizzazione ambienti digitali"
Pon Fesr 12810  "Realizzazione ambienti digitali"Pon Fesr 12810  "Realizzazione ambienti digitali"
Pon Fesr 12810 "Realizzazione ambienti digitali"
 
DA_KarnatakaStateGov
DA_KarnatakaStateGovDA_KarnatakaStateGov
DA_KarnatakaStateGov
 
[Done] star theory
[Done] star theory[Done] star theory
[Done] star theory
 
Презентація Попкової Т.М.
Презентація Попкової Т.М.Презентація Попкової Т.М.
Презентація Попкової Т.М.
 

Similar to Project Final Version

DENG Master Improving data quality and regulatory compliance in global Inform...
DENG Master Improving data quality and regulatory compliance in global Inform...DENG Master Improving data quality and regulatory compliance in global Inform...
DENG Master Improving data quality and regulatory compliance in global Inform...
Harvey Robson
 
Reflective report For hospitality study.pdf
Reflective report For hospitality study.pdfReflective report For hospitality study.pdf
Reflective report For hospitality study.pdf
4934bk
 
Plants and Other ResourcesFor purposes of external reporting, no.docx
Plants and Other ResourcesFor purposes of external reporting, no.docxPlants and Other ResourcesFor purposes of external reporting, no.docx
Plants and Other ResourcesFor purposes of external reporting, no.docx
randymartin91030
 
Centre-for-Homeopathic-Education-HER-14
Centre-for-Homeopathic-Education-HER-14Centre-for-Homeopathic-Education-HER-14
Centre-for-Homeopathic-Education-HER-14
Nabeel Zaidi
 
Hey Carzetta,  You did a beautiful job on your char.docx
Hey Carzetta,  You did a beautiful job on your char.docxHey Carzetta,  You did a beautiful job on your char.docx
Hey Carzetta,  You did a beautiful job on your char.docx
hanneloremccaffery
 
Assessment Matters Newsletter_November 2015 (3)
Assessment Matters Newsletter_November 2015 (3)Assessment Matters Newsletter_November 2015 (3)
Assessment Matters Newsletter_November 2015 (3)
Tom Kohntopp
 
mapping-quality-summer-internships-washington-dc
mapping-quality-summer-internships-washington-dcmapping-quality-summer-internships-washington-dc
mapping-quality-summer-internships-washington-dc
Natalia Trujillo
 
Police and Fire On-Line Courseware Training Trends and Evaluation Study
Police and Fire On-Line Courseware Training Trends and Evaluation StudyPolice and Fire On-Line Courseware Training Trends and Evaluation Study
Police and Fire On-Line Courseware Training Trends and Evaluation Study
Interact Business Group
 

Similar to Project Final Version (20)

Exploring scholarship and scholarly activity in college-based Higher Education
Exploring scholarship and scholarly activity in college-based Higher EducationExploring scholarship and scholarly activity in college-based Higher Education
Exploring scholarship and scholarly activity in college-based Higher Education
 
DENG Master Improving data quality and regulatory compliance in global Inform...
DENG Master Improving data quality and regulatory compliance in global Inform...DENG Master Improving data quality and regulatory compliance in global Inform...
DENG Master Improving data quality and regulatory compliance in global Inform...
 
Reflective report For hospitality study.pdf
Reflective report For hospitality study.pdfReflective report For hospitality study.pdf
Reflective report For hospitality study.pdf
 
Qa handbook
Qa handbookQa handbook
Qa handbook
 
Petroc-HER-14
Petroc-HER-14Petroc-HER-14
Petroc-HER-14
 
Plants and Other ResourcesFor purposes of external reporting, no.docx
Plants and Other ResourcesFor purposes of external reporting, no.docxPlants and Other ResourcesFor purposes of external reporting, no.docx
Plants and Other ResourcesFor purposes of external reporting, no.docx
 
Centre-for-Homeopathic-Education-HER-14
Centre-for-Homeopathic-Education-HER-14Centre-for-Homeopathic-Education-HER-14
Centre-for-Homeopathic-Education-HER-14
 
Hey Carzetta,  You did a beautiful job on your char.docx
Hey Carzetta,  You did a beautiful job on your char.docxHey Carzetta,  You did a beautiful job on your char.docx
Hey Carzetta,  You did a beautiful job on your char.docx
 
ACTION RESEARCH RATIONALE OF THE STUDY LAC SESSION.docx
ACTION RESEARCH RATIONALE OF THE STUDY LAC SESSION.docxACTION RESEARCH RATIONALE OF THE STUDY LAC SESSION.docx
ACTION RESEARCH RATIONALE OF THE STUDY LAC SESSION.docx
 
Learning Outcomes.pdf
Learning Outcomes.pdfLearning Outcomes.pdf
Learning Outcomes.pdf
 
Assessment Matters Newsletter_November 2015 (3)
Assessment Matters Newsletter_November 2015 (3)Assessment Matters Newsletter_November 2015 (3)
Assessment Matters Newsletter_November 2015 (3)
 
mapping-quality-summer-internships-washington-dc
mapping-quality-summer-internships-washington-dcmapping-quality-summer-internships-washington-dc
mapping-quality-summer-internships-washington-dc
 
What is high quality education and care
What is high quality education and careWhat is high quality education and care
What is high quality education and care
 
Improving quality in the early years executive summary
Improving quality in the early years executive summaryImproving quality in the early years executive summary
Improving quality in the early years executive summary
 
Police and Fire On-Line Courseware Training Trends and Evaluation Study
Police and Fire On-Line Courseware Training Trends and Evaluation StudyPolice and Fire On-Line Courseware Training Trends and Evaluation Study
Police and Fire On-Line Courseware Training Trends and Evaluation Study
 
Benchmarking for technology enhanced learning
Benchmarking for technology enhanced learningBenchmarking for technology enhanced learning
Benchmarking for technology enhanced learning
 
BSU-AQR-Case-Study
BSU-AQR-Case-StudyBSU-AQR-Case-Study
BSU-AQR-Case-Study
 
The Impact of Educational Practice Project
The Impact of Educational Practice ProjectThe Impact of Educational Practice Project
The Impact of Educational Practice Project
 
Feedback as dialogue and learning technologies: can e-assessment be formative?
Feedback as dialogue and learning technologies: can e-assessment be formative?Feedback as dialogue and learning technologies: can e-assessment be formative?
Feedback as dialogue and learning technologies: can e-assessment be formative?
 
backman chapter 3.pdf
backman chapter 3.pdfbackman chapter 3.pdf
backman chapter 3.pdf
 

Project Final Version

  • 1. i School of Chemistry 4th Year Research Project (F14RPC) Title: Developing a Tool to Quantitatively Describe the Conceptual Understanding Content of 1st Year Organic Chemistry Examinations Student’s name: Jake Turner Student ID Number: 4181525 Supervisor: Dr. June McCombie Assessor: Prof. Katharine Reid Personal Tutor: Dr. Darren Walsh I hereby certify that this project report is my own work: Student’s signature: Please ensure this document is date stamped when handed in to the SSO.
  • 2. ii Acknowledgments I would like to thank my supervisor, Dr. June McCombie, who from the beginning helped to shape this work into its current form, and provided me with the resources and motivation necessary to succeed. I would also like my assessor, Prof. Katharine Reid, whose mid-way feedback played a crucial role in ensuring this work was as focused as possible. I also extend a large thank you to my fellow researchers, Sandi and Jess, who’ve attentively watched what must amount to several hours of presentations, and have sparked many interesting discussions that have helped form these results. I would also like thank all of the lecturers who participated in this study. You were all exceedingly friendly, welcoming, and willing to help. Above all I thank my wife, Zoe, who has supported me in countless ways as I crafted this study. Without her, this work would not exist.
  • 3. iii Abstract In this work, a pilot study was conducted to develop a method for obtaining a quantitative description of how effectively conceptual understanding is examined (if at all), and a comparison of these results is made to qualitative data obtained from the authors of the exams (the lecturers), revealing their attitudes towards conceptual understanding, and performing a comparison between their perceptions of the state of conceptual understanding in the exams with the reality.
Introduction

On July 1st 2015, Jo Johnson, the Minister of State for Universities and Science, delivered a speech in which he outlined plans for the Teaching Excellence Framework (TEF)1. The TEF is a proposed system, analogous to the Research Excellence Framework (REF), in which universities are judged by the quality of their teaching, recognising excellence and providing clear incentives to improve. Johnson gave a variety of reasons behind his proposal, including tackling degree classification inflation, increasing students' engagement in their courses and providing incentives for institutions to increase the retention and progression of disadvantaged students. The core of the TEF would be clear outcome-centred criteria and metrics, assessed by an independent quality body. The exact nature of this framework, Johnson said, was yet to be designed, and consultation was scheduled to take place ahead of an autumn publication of the green paper.

A week later, George Osborne stated in his Summer Budget 2015 speech that universities that demonstrate excellent teaching would be able to raise their fees over the current £9,000 cap2. This prompted concern that the TEF would lead to a vicious cycle in which universities would prioritize meeting TEF standards (becoming 'TEF-able'), much as they currently deal with the REF (becoming 'REF-able'), rather than focusing on improving teaching standards. Furthermore, as the nature of the metrics the TEF would operate under was unknown, predictions arose that the TEF would become 'data-driven'. For an example of an education inspectorate that is data-driven, one can look to Ofsted, which strives to collect more data year after year. Some stakeholders say this leads to a system where children are replaced with data points, and policies are introduced that improve said data points. In theory, this should lead to improvements in the child's learning, but as the data starts covering wider ranges, this idealism can become lost amongst the tide of statistics. The worry, therefore, was that under the TEF regime changes to university education would be made because they are good for the TEF, and important changes would be ignored on the grounds that they would not be represented in the TEF metrics.

In November 2015, the green paper was presented to Parliament under the name "Fulfilling our Potential: Teaching Excellence, Social Mobility and Student Choice".
There was clarification of the aims and rationale for the TEF, along with a briefly proposed model for how the metrics would function. In summary, the proposed TEF will occur in stages over a number of years, with increasing levels of the TEF award becoming available. In the first year, a Quality Assessment (QA) review will award Level 1 of the TEF to institutions that meet or exceed the expectations for quality and standards in England, and allows application for financial incentives in the academic year 2018/2019. A 'successful QA review' was defined as the most recent review undertaken by the Quality Assurance Agency for Higher Education (QAA) or an equivalent review used for course designation. Holding a Level 1 award entitles universities to raise their course fees for new students.

In the second year, higher-level TEF awards become available. These higher levels will be granted upon the successful assessment of the institution through as-yet-undecided metrics and criteria. Though unknown, the means of assessment are outlined to be independent of Government, instead being handled by a panel of experts. This panel is proposed to consist of academic experts in learning and teaching, student representatives, and employer/professional representatives. In order for this assessment process to be straightforward and robust, and because there is no single direct measurement of excellent teaching, the green paper proposes using a common set of metrics from quality-assured national datasets. In recognition of the fears people had about the TEF becoming 'data-driven', the green paper states that these metrics alone will not give a complete picture of excellent teaching, and therefore proposes to ask institutions to supplement the assessment panel with additional evidence.

Self-reflection upon teaching standards is already prevalent within most universities, although there currently exists very little in the way of tools to perform this analysis. If a standard system of awarding teaching excellence is to be successful, tools of this nature must be developed so that universities can provide convincing, clear evidence that they not only currently exhibit high teaching standards, but are also committed to developing them. As employability is one of the key aspects of the TEF, it seems likely that the classification of the degree a student is awarded, and consequently the methods used to decide upon that classification, will attract the attention of the independent TEF assessment board. With the incredibly long list of courses supplied by the country's institutions, it makes sense to transfer the responsibility of proving the worth of their exams onto the administering schools.
In summary, tools to deconstruct and evaluate assessments need to be developed in order to provide evidence of excellent teaching. With the correct tools, an institution could provide evidence that it is committed to assessing the conceptual understanding held by its students, over the more traditional rote learning or algorithmic problem-solving abilities. In the post-graduation world, real problems demand ways of thinking that are often far removed from tradition, and possessing a conceptual understanding of the material used during day-to-day work is essential if an employee is to be cost effective, especially in the science sector. Jo Johnson stressed that he wanted to provide ways for students to determine their value for money, and giving them a way of assessing their future worth as an employee certainly achieves this. In addition, if an exam is proven to test conceptual understanding of the topics it assesses, high marks will imply that the teaching of the content was focused on developing conceptual understanding. From the mark distribution of a population of exam recipients and an understanding of the content of the exam, judgements might be made as to the teaching quality.

Research into conceptual understanding within chemistry began in 1987, when Nurrenbern and Pickering showed a difference in ability to solve conceptual and numerical problems3. In 1990, Pickering went on to show that this difference arose from two educational goals, conceptual understanding and algorithmic problem solving, rather than some innate difference in ability4. Conceptual understanding has since been the subject of intensive research, though no analysis has yet been performed on the conceptual understanding content of actual assessment materials from universities. This initial pilot study therefore aims to probe conceptual understanding in exams of a single area of first-year study: organic chemistry. This area was chosen partly due to the familiarity I have with the material, and partly due to the general view that organic chemistry is a 'gatekeeper' module, either making or breaking students with its vast number of complex reactions, largely unencountered by the average post-A-level chemist.
Section 1 - Literature Review

Conceptual Understanding

Research has been progressively uncovering the effects and benefits of conceptual understanding for a long time, although until recently the definition of conceptual understanding was not set in stone. More often than not, it was simply a case of 'knowing it when you saw it'. Research was guided by the idea that conceptual understanding is the ability to move between different levels of understanding. These levels were the macroscopic, particulate and symbolic5. Further levels, such as 'process'6 and 'quantum'7, have been proposed since.

A recent paper by Holme et al. took an interesting approach to resolving the definition, crowd-sourcing it from over 1,300 instructors8. This new five-part definition has not yet been used in the literature as a basis for looking further into conceptual understanding. I will therefore be using this five-part definition as a lens through which to view conceptual understanding within university education. Whilst the definition is broad yet concise, it does suffer in that the 'Problem Solving' aspect relies on the term 'critical thinking', which suffers from the same type of vague, varied definition that 'conceptual understanding' used to. This makes using this aspect of the definition an exercise in interpretation, ironically exactly what the paper was aiming to put a stop to. It should be noted that this definition of conceptual understanding is explicitly stated to be for general chemistry, and lacks discussion of how organic-specific content (e.g. reaction mechanisms) fits in. Therefore, during this work it was necessary at times to depart from this definition or expand it.

Aspects of this definition have been explored before, with respect to both teaching and assessment. Student-generated analogies have been shown to increase students' conceptual understanding of halogen reactions9, which in the light of the new definition can be seen as the Translate aspect (translating the behaviour of electrons into a macroscopic-world analogy counterpart) reinforcing the ability of the student to Predict/Explain ('Which of the reactions will happen?') with Depth (they were able to communicate their reasoning with language that demonstrated skills beyond rote memorization).
  • 9. 5 Dori et al. have succeeded in developing a module for teaching quantum mechanics utilising a visual-conceptual approach7, 10. They focus on the ability to Translate between visual representations. Cooper et al. assessed student’s Depth and ability to Translate their understanding of intermolecular forces11. Raviolo developed an assessment of solubility equilibrium12. His assessment requires the student to Transfer their knowledge to a novel AgCl salt solution, Predict/Explain with Depth the process that takes place, and Translate their ideas into graphical representations. The power of this definition is clear; entire learning/assessment cycles can be broken down according to specific aspects of the greater conceptual understanding whole. It therefore should be possible to look at isolated assessment material, and analyse the relationships to aspects of conceptual understanding. Conceptual Understanding in Organic Chemistry It has been said that “much of what is chemistry exists at a molecular level and is not accessible to direct perception”13. This lack of direct perception is a problem for the student, so it is not surprising that the majority of research into conceptual understanding takes place around concepts that are either directly observable, or easily represented in some form of diagram, resulting in a large amount of research into the area of conceptual understanding within physical chemistry. Research into conceptual understanding of organic chemistry is a relatively young area14, and unsurprisingly, most of this research is focused on reaction mechanisms. Bhattacharyya and Bodner identified that, like the mathematical problems in Nurrenbern’s work3, students can produce correct answers to mechanistic tasks without having an understanding of the chemical concepts15. Ferguson and Bodner expanded on this by probing how students made sense of the arrow-pushing formalism16. They found that in many students’ minds, the curly arrows were not being used as a powerful construct to understand the mechanism of the reaction; they were just pushing arrows around until they obtained a product. Kraft, Strickland and Bhattacharyya investigated the cues organic chemistry graduate students obtain during mechanism tasks, and the reasoning processes induced by those cues17. They found that the students exhibited a poor interpretation of the reaction mechanisms
provided to them, which cued them to rely primarily upon a case-based reasoning (CBR) approach, where they tried to relate the given problem to a more familiar case. They also relied upon a rules-based reasoning (RBR) approach, which led to an under-accounting of the relevant variables in the problem. The most successful students utilised a model-based reasoning (MBR) approach. This work suggests that further instruction on how to reason with concepts is necessary for the students, not just an increase in their conceptual understanding.

Grove, Cooper and Cox investigated scenarios in which students chose to use mechanisms to solve problems, rather than being forced to. They found that in complex problems mechanistic thinking increased the chance of success, but did not help significantly with simple problems18. They also highlighted that an alarming number of students (51%) did not use mechanisms, or only used one, to solve the six problems, indicating there is a need to better understand the barriers that students face in trying to use mechanisms and the curved-arrow notation.

Not all research into conceptual understanding within organic chemistry is mechanism-focused. Arellano and Towns investigated students' understanding of the alkyl halide functional group and its reactions, finding that students had trouble classifying substances as bases and/or nucleophiles, assessing the basic or nucleophilic strength of substances, and accurately describing the steps that take place or the reactive intermediates that form during alkyl halide reaction mechanisms19. This seems to suggest that problems with conceptual understanding in organic chemistry originate much deeper than at a mechanistic level.

In conclusion, the current state of research into conceptual understanding in organic chemistry seems to be focused around thinking methodology, particularly when solving mechanistic problems. There are currently no studies that investigate how organic chemistry is assessed during undergraduate studies.

Concepts in Organic Chemistry

During the initial stages of the project, it was my intention to investigate conceptual understanding in a much wider area than just examinations. In order to accomplish this, I planned to present a few topics to students in the form of conceptual questions, ultimately trying to find particular areas of difficulty, hoping to connect this data to the outcome of the examination analysis. I explored various topics, looking for those which lent themselves well to the aspects of the five-part definition of conceptual understanding.
This meant possessing qualities such as being representable in multiple forms or determining aspects of a chemical system, so that students would be able to predict the outcome of changes. However, I realized that such selectivity was not possible when it came to actually teaching all the topics required for a first-year course, and thus that I should investigate the most important concepts. To aid this process, three lecturers were interviewed (see 'Conducting the First Interviews' below).

During the process of deciding upon concepts to probe, it became clear to me that I needed to refine my idea of what exactly a concept was in the context of organic chemistry. Was it right to equate something like pKa, which is essentially just an equation, with a whole mechanistic reaction? Considering that pKa is one of the many factors that need to be taken into account when looking at nucleophilic substitution reactions, it is not fair to say that a whole organic mechanism is a 'concept' in the same way individual aspects of the mechanisms are. Although this direction of the project was ultimately diverted towards examinations, the reasoning that follows contributed heavily to the interpretation of the results.

Enabling a student to devise and explain mechanisms for entirely new reactions is a major goal of organic chemistry education. Recent research has shown the difference in the use of rule-based reasoning (RBR), case-based reasoning (CBR) and model-based reasoning (MBR) between students during mechanistic problems17. The study shows that students are likely to resort to CBR 50% of the time, where they look towards mechanisms they are familiar with to solve unfamiliar mechanisms. This worked well for some familiar reactions, but lacked true predictive power, as the students often forced the problem to match their preconceived case. The most successful participants used higher-order thinking models to proceed through the reaction stepwise, considering how each variable will interact with the others during each step.

With this in mind, it is possible to frame the goal of organic chemistry education as providing students with the conceptual understanding of many different models (pKa, resonance, carbocation stability, etc.) so that they can work through novel mechanistic problems with an MBR approach. The conceptual understanding of each model can be broken down into the five-part definition, whereas the concept of the mechanism itself is a complex interplay between these aspects, the details of which are yet to be revealed by research (see Section 4 - Future Work).
In Holme's paper, Depth is described as thinking devoid of rote memorisation or algorithmic execution8. This accurately describes the process of MBR, as opposed to the memorisation tactics typical of RBR and CBR. For example, a student might be asked to propose a mechanism for a substitution reaction of a benzene derivative. They must consider the following models: the relative reactivity of the benzene derivative compared to unsubstituted benzene, the site where the substitution will occur, and the flow of electrons between the two species.

There are problems involved with this interpretation, as it becomes a matter of debate as to what constitutes a model. In the given example, the concepts behind the site where substitution will occur could be broken down into inductive and mesomeric effects, or expanded to include the reaction energy profile of substitution at that position, where the lowest-energy transition state will lead to the major product. A standard must therefore be employed to prevent varying interpretation. One possibility is 'one mark = one model'. This assumes that the teacher wrote the question with this in mind, and has not deemed a specific demonstration of knowledge as worth multiple marks. This standard is fast to apply and objective, leaving no room for interpretation, but would probably not be very accurate or precise. Another possibility is that only what the candidate is being asked to demonstrate will be considered as a model. This still leaves room for interpretation, but does not restrict itself to the assumption that every mark is a demonstration of a model; performing this level of analysis would, however, be very time-consuming. This issue could be further resolved by discussion with the teacher who wrote the question, as the breakdown of marks could be assigned to the models. Ultimately, due to the change of staff over the years, both of these standards are impractical, although for future examinations it would be easy to obtain an exact measure of the intended depth of the question from the author themselves.

Application of these models to the current mechanistic situation can be viewed as the Transfer aspect. True to the aspect's description, MBR requires the student to recognise the novelty of the situation. During the mechanism, many steps may look as though they are able to proceed in multiple directions. This requires the student to Predict/Explain, through MBR, where the reaction will go.
If the problem requires a prediction only, RBR or CBR may be viable approaches, although if the student is also asked to explain why the reaction proceeded in that fashion, the conceptual understanding of the model and how it applies to the current chemical situation can be more thoroughly assessed. Even if there are no plausible alternative routes the reaction may proceed down, if the student has no prompt of a product to aim for they are still required to Predict/Explain.

Although the mechanism is portrayed through two-dimensional drawings, often the key to deducing the correct outcome exists in the third dimension. The student must therefore demonstrate representational competence, or the ability to Translate between various representations. An example of this is the E1 and E2 elimination reactions of halogen-substituted cyclohexane rings. To accurately solve a mechanistic problem like this, the student must translate the perfect hexagonal representation into the chair conformations, then determine how the p-orbitals lie with respect to the leaving group and the eliminated hydrogen, and finally translate their chair structure back into the perfect hexagonal structure. Spatial reasoning plays a huge part in organic chemistry, and being able to translate back and forth between representations like the Newman projection is vital. Other representations could include energy profiles, or a graphical depiction of the orbitals involved in the reaction.

The process of devising the correct mechanism, be it the electron flow, the reagents, the stereochemically correct outcome, or the entire mechanism, is the Problem Solving aspect. In line with Holme's definition, critical thinking is required for effectively solving mechanistic problems8.

The separation of, and relationship between, models and mechanisms is an important line of investigation to carry over to future research. The time constraints of this study will not allow extensive linking of the teaching process to the conceptual understanding of the models, and then to the extent of conceptual understanding assessed in exam papers, but the connection made here will allow this in future work.
Section 2 - Research Methodology

Interviewing Lecturers

Conducting the First Interviews

In order to investigate topics that were suitable to use during the probing of conceptual understanding, I interviewed three 1st year lecturers who were willing to be involved in this study. To preserve anonymity, they will be referred to as Lecturers 1-3 consistently throughout this report. Even though the direction of this study changed to focus upon the examination analysis, the information discussed below serves to provide a more complete picture of the participants.

I asked the lecturers what they thought the most important concepts in organic chemistry were and which concepts they thought students understood the least. The two questions elicited the same response in all three interviews: there seemed to be an agreement that the most important fundamental concepts were the ones least understood. There was also a clear alignment between what the lecturers thought of as the most important topics and the topics they were responsible for teaching. Areas such as organic structure (drawing molecules, functional groups and nomenclature), stereochemistry, pKa, carbonyl chemistry, and HOMO-LUMO interactions were seen as the most important and least understood amongst the three lecturers.

Conducting the Second Interviews

Whilst the first interviews were focused on concepts, I wanted to gain a better picture of how the lecturers viewed conceptual understanding. The second interview was composed of four questions:

- In your own words, could you please describe conceptual understanding in the context of organic chemistry?
- How would you structure an exam question to test conceptual understanding?
- Assuming a paper has 100 available marks, how many of these marks should come from rote learning, algorithmic problem solving and conceptual understanding?
- As a percentage, how much of your lecture content appears in the exam?
The first question is designed to allow a direct comparison to the definition formulated by Holme et al.8 The answers could highlight areas of potential improvement, as a fuller awareness of what conceptual understanding is would allow staff to adjust their teaching styles to promote conceptual learning, and therefore demonstrate a higher standard of teaching excellence. That being said, if the answers to this question reveal holes in the definition, it does not mean that the teaching reflects those holes. It could be that the lecturers are unconsciously aware of certain aspects, or simply did not see it as necessary to express an aspect that seems obvious in words.

The second question probes how the lecturer views the examination as a tool, and might reveal new ideas that would be useful when looking at past exam papers. It is also an extension of the first question, but requires the lecturer to think about conceptual understanding from the position of testing for it, rather than just describing what it is. My intention is to take the answers to both the first and second questions into account when scoring the lecturers' definitions of conceptual understanding. This is in contrast to Holme's paper, where the teachers are only asked to define conceptual understanding. My reasoning is that in the survey sent out by Holme, there was a section of questions that "focused on conceptual understanding through topics taught and question structure."8 This section may have helped the teachers refine their ideas of conceptual understanding before giving a definition. Combined with the fact that they had as much time as they needed to answer the question and the freedom to refine their answers, I gained the impression that the teachers in the study by Holme et al.8 generated much fuller answers than the lecturers I was interviewing would be able to manage in a time-restricted, verbal environment. I therefore felt it necessary to include a further prompt to think about conceptual understanding from a different perspective.

The third question reveals the intention of the lecturer to examine their students' conceptual knowledge. When writing the question, I expected that all of the answers would place a large proportion of the exam under 'conceptual understanding'. These answers will provide a useful set of numbers to compare actual papers to, potentially revealing whether there is agreement between the intended content of an exam and the actual content.

The final question was asked to see if there was any sort of weighting applied to concepts. If not all were examined every year, it would be interesting to look at the factors that determine whether a concept appears on an exam. For complete transcripts of the interviews, please see 'Appendix 1 – Interview Transcriptions'.
Results

In order to aid readability during this analysis, certain quotes will be editorialized by removing extraneous verbal filler.

Lecturer 1

Lecturer 1 saw conceptual understanding as "[…] just understanding the basic ideas and concepts to allow students, or to allow people to work out problems." The rubric in Holme's work gives this definition a score of 1 under the Problem Solving fragment. The rest of the answer is devoid of additional fragments, but they describe an idea that any concept introduced after the first year is simply a combination of the basic organic chemistry concepts. They felt that if a person truly understood the first year, they would be able to problem-solve their way through every other year. They presented this as idealism, fully aware that the other years are necessary, but it is interesting to think about what the fundamental 'axioms' of organic chemistry might be.

When asked how they would structure an exam question, they responded "I would make it a problem solving exercise. […] They would have to solve a specific problem, something similar to what they'd have seen in lectures/tutorials, but they shouldn't be able to do that problem well unless they understand the key concepts." This is in line with their definition of conceptual understanding and their focus on problem solving. At this point, I wanted to clarify what they meant by 'problem solving'. They responded that in order to problem-solve, students "really have to be drawing on their core knowledge", and "they can't just regurgitate learned information". This answer hints at another fragment, Depth, as the lecturer is recognizing that there needs to be a deep understanding devoid of memorization. As I interpret it, this answer is enough to assign 1 point in the Depth fragment, but as this question was an extension of another, and was not asked of the other lecturers, I cannot include this point in the total score for this lecturer. The other lecturers may have been able to provide additional fragment descriptions with further prompts, but did not get the opportunity; therefore only the one point for Problem Solving will be counted.
For the third question, I was expecting relatively short, confident answers. In reality, this question seemed to prompt a lot of thought and dialogue. Lecturer 1 initially answered that between 10% and 20% of the paper would be recalling information, and the rest would be testing conceptual understanding. They enquired how algorithms applied to organic chemistry, to which I responded that I had seen very simple reaction mechanisms taught algorithmically, where the student identifies the electrophile, then the nucleophile, then pushes arrows from the nucleophile to the electrophile, etc. The lecturer was slightly taken aback by this, as I had described the way they taught the nucleophilic substitution reactions. They expanded upon this by identifying that, to go along with such an algorithmic question, they would probe the students' conceptual understanding by asking them to explain particular properties like nucleophilicity. Again, there is a hint of the Predict/Explain fragment in this answer, but I cannot include it in the score for the reasons I outlined previously.

Finally, the lecturer answered that 60% of their lecture content would make it into the exam. However, they seemed very unsure. They reasoned that a lot of the lectures at the beginning of the course introduce fundamental concepts, many of which will not be specifically examined, but will be implicit within other questions. When I asked about the other 40%, they confirmed that there are simply certain topics that do not get examined, but the student must possess knowledge of these to answer other questions in full.

Lecturer 2

Lecturer 2 described conceptual understanding as "[…] being able to solve a problem by working out the answer from first principles, and getting to the answer that way, instead of thinking 'Ahh I've seen that in a book', and recalling it just from memory." This earns a score of 1 in the Problem Solving fragment, and 1 in the Depth fragment.

They claimed that structuring an exam question to test conceptual understanding is "fairly easily done" by "ask[ing] for an explanation of something". This gains the definition of conceptual understanding a point in the Predict/Explain fragment. They also went on to describe an example, a question on the reactivities of esters and amides, and said they would "like to see a simple diagram and two or three lines of text" in the explanation. This answer hints at the Translate fragment, with the use of images and text to explain a concept, but as they do not specifically mention that transitioning between representations is an aspect of conceptual understanding, I will not assign them this point.
  • 18. 14 they don’t specifically mention that transitioning between representations is an aspect of conceptual understanding, so I will not assign them this point. They extended their answer by adding that a question could “ask for a mechanism to be supplied where it had been seen before in terms of the concept, but this is a different example.” They said students would “have to understand that it’s that mechanism which applies there, and then draw it.” This demonstrates an awareness of the full Transfer fragment, where a student must apply their knowledge to a novel situation, gaining them another 2 points. For the third question, the lecturer seemed very unsure. They described the paper having three section; A, B and C. Section A was compulsory, and contained “just under half of the marks”. These questions were said to contain mostly memorisation or algorithmic questions, though the lecturer did not give a figure to this ratio. Interestingly, they commented on two types of question archetypes (see the section ‘Analysing Past Exam Papers’); ‘Provide Product’ and ‘Nomenclature from structure’, labelling them as rote memorisation and algorithmic respectively. The student must then choose between answering section B or C. These sections were said to contain questions that ask “How does this work?”, but that if mechanistic questions were asked, they “probably still would be a bit more towards the recalling it out of the notes side of it”, but would test conceptual understanding by asking for explanations. Again, they did not provide a figure to the ratio of marks in this section. Overall, they claimed “it’s about roughly half [conceptual understanding] and half [rote memorisation/algorithmic]. Maybe slightly more towards the algorithmic and recalling side of it, in the first year.” Interestingly, they commented that in the fourth year of study, the conceptual understanding content in an exam should increase. This is in contrast to Lecturer 3’s answer (see below). The lack of absolute figures provided by Lecturer 2 makes this a difficult result to quantify. I think this is entirely down to me as the interlocutor, as I didn’t ask further questions or clarify. In order to compare this answer with the others, I will assume that 55% of the paper was either rote memorisation or algorithmic problem solving, and 45% is conceptual understanding. The final question was also answered vaguely. They responded by stating that the model answers to an exam are perhaps 5 pages in length, whereas the lecture notes are about 100 pages, bringing the total content of the exam to 5% of the lecture material. They
  • 19. 15 supplemented their answer with the reasoning that in lectures, they would “give three or four examples of one particular thing, [whereas] in the exam we’d just be asking for one”, and that “some of [the lecture material is] just other background, some of its examples”. This answer highlights that the question is flawed, and that there is too much room for interpretation. Their answer fails to take into consideration any of the ways these concepts interact, where the student must have knowledge about many topics which, whilst not being explicitly examined, do contribute to the ability to answer questions on topics that are explicitly examined. The answer also doesn’t separate lecture material that aims to teach conceptual understanding from material that provides background. Lecturer 3 Lecturer 3 described conceptual understanding as “an understanding of the concepts, rather than relying on some kind of rote learning, or committing things to memory…” and that “if you understand the concepts, then you can apply those general concepts to unfamiliar situations, so that you can rationalize things you haven’t seen before.” This answer gains a score of 2 for Transfer and 1 for Depth. In response to the second question, they stated “best questions are problems, rather than ‘Tell me everything you know about… ‘X’ or ‘Y’’”. This answer gains an additional point in the Problem Solving fragment. In regards to the types of questions, they thought a good balance would be 50% conceptual understanding, 25% rote memorisation and 25% algorithmic problem solving. They stated, in contrast to Lecturer 2, that in the fourth year the rote memorisation aspects will take up as much as 50% of the marks, reasoning that in such specialised topics it is hard to set problems. Most of the special topics that they are referring to, however, are not organic chemistry based, so this is not an answer I can use. Finally, they said that the content of the lecture material that appears in the exam will vary according to the stage of the course. In the first year, they expected all of the course content to be examined over three iterations of the paper, with only a small number of questions changing each year. In the second year they teach a problem-based spectroscopy course, so they expect all of the methods they teach to be utilized on an exam.
Overall Results

Before comparing these results to Holme's work, it should be highlighted that the rubric developed by Holme is specifically for conceptual understanding in general chemistry, not specifically organic chemistry. As such, various fragments in Holme's work reflect ideas that are only loosely applicable, such as the specific mention of translating through Johnstone's domains (symbolic, microscopic, macroscopic). However, the majority of the rubric is entirely applicable to organic chemistry, and the rubric must be the same if comparisons between this work and Holme's results are to be made.

Figure 2 shows the total scores of the three lecturers. Compared to Holme's results in Figure 1, it can be seen that Lecturers 2 and 3 actually scored very highly. Only ~75 of the 1,395 instructors scored 4 points, and fewer than 25 scored 5 points. This indicates that the lecturers have an above-average idea of what conceptual understanding means in regard to organic chemistry. The low score of Lecturer 1 reflects the mode score in Holme's work. As Holme points out, this does not mean that all other aspects of conceptual understanding do not factor into their working definition, but I do think that these scores will reflect on their teaching style.

Figure 1 – Total scores of the conceptual understanding definitions provided by the 1,395 general chemistry instructors, as scored following the rubric developed by Holme et al.
Figure 2 – Total scores of the three lecturers' definitions of conceptual understanding in organic chemistry, as scored following the rubric developed by Holme et al.

Figure 3 shows the breakdown of the lecturers' scores into each aspect of conceptual understanding. It can be seen that the most well-described aspect is Transfer, with two lecturers gaining the full two points, whilst the least frequently described fragment was the Translate aspect, which was not described at all. These results reflect the results in Holme's paper.
It is possible that the Translate aspect was too obvious to mention, or that multiple representations of the same concept are seen as means of explaining observations, rather than as building up a translational fluency within the student. Analysis of the lectures delivered by these lecturers could provide insight into why these aspects are favoured or largely unmentioned, but this lies beyond the scope of this work (see 'Section 4 - Future Work'). In contrast, the most frequently described fragment was Problem Solving, whilst in Holme's work this was the second least frequently described.

Figure 3 – The scores obtained by the three lecturers within each aspect of conceptual understanding.
Figure 4 – The number of definition fragments (by score) used to define conceptual understanding by the 1,395 general chemistry instructors who provided a definition.
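As a check on the arithmetic, the scoring described above can be reproduced programmatically. The snippet below is a minimal Python sketch and is not part of the original analysis (which was done by hand); the per-aspect points are exactly those assigned to each lecturer in the discussion above.

    # Per-aspect rubric points assigned to each lecturer's definition (from the discussion above).
    ASPECTS = ["Transfer", "Depth", "Predict/Explain", "Problem Solving", "Translate"]

    scores = {
        "Lecturer 1": {"Problem Solving": 1},
        "Lecturer 2": {"Problem Solving": 1, "Depth": 1, "Predict/Explain": 1, "Transfer": 2},
        "Lecturer 3": {"Transfer": 2, "Depth": 1, "Problem Solving": 1},
    }

    for lecturer, per_aspect in scores.items():
        total = sum(per_aspect.get(aspect, 0) for aspect in ASPECTS)
        breakdown = ", ".join(f"{aspect}: {per_aspect.get(aspect, 0)}" for aspect in ASPECTS)
        print(f"{lecturer}: total = {total} ({breakdown})")

Summing the points assigned above in this way gives totals of 1, 5 and 4 for Lecturers 1, 2 and 3 respectively.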
Analysing Past Exam Papers

It is my aim in this section to describe the methodology adhered to during the largest aspect of this project: the analysis of 17 organic chemistry exam papers given to 1st year students. Due to the modular format of the course, it was often the case that organic questions were spread through multiple exams. It is my intention to look at all exam questions that relate to organic chemistry, but where it is not possible to do so it shall be made explicit.

The primary goal of this pilot study was to systematically deconstruct individual exam questions in order to inspect which aspects of conceptual understanding, if any, are assessed. I used a grounded theory approach to generate codes from the exam papers. Grounded theory is a systematic methodology in which codes are generated by a researcher upon reviewing data (see 'Coding Exam Papers Using Grounded Theory' below).

Initially, I focused on first classifying the question as conceptual, rote learning or algorithmic, and then on constructing the thought process required to answer the question. For example, a question might require the student to first recognise a functional group from nomenclature, then think about the reactivity of said functional group, and finally draw the product of a reaction between that functional group and another. The code would thus read "Structure from nomenclature, functional group reactivity, draw product". I then attempted to list additional, unordered codes that related to the question, e.g. "SN2, Leaving group stability".

This method had several problems. Firstly, classifying a question as conceptual, rote or algorithmic was not always simple; a mechanistic problem, for example, may be intended to test conceptual knowledge of the electron flow in a particular system, but it could easily be the case that the student has memorized the system due to it being highlighted as important during a lecture. The line between conceptual understanding and rote memorization is a blurry one, particularly at an early stage in a university course, as there are simply some things that the student must learn, and as a consequence those topics tend to be examined by asking the student to recall what they have learnt. Secondly, the route the student mentally travels to arrive at an answer is sure to vary among different students. Although simple low-mark questions may have an obvious route, a more complex, multi-model question may have several routes. It therefore became clear that it is incorrect to assign a definitive order to the codes. Thirdly, attempting to list every possible code associated with the question, as grounded theory requires the coder to do, proved overwhelming.
I noticed a stark contrast between codes that arose from the question archetype (the way it was structured and the form of answer it demanded) and codes that arose from the concepts within the question. The resulting codes felt disjointed, and pulling a conclusion from the data would have been difficult, if not impossible.

The combination of these problems led me to discard my initial results. It was clear that a new approach was needed that still operated under grounded theory, but removed some of the complexity. I decided the best way to proceed was to split apart the question archetype from the conceptual content. Eventually, I concluded that the concepts present within each question provided no additional information on how conceptual understanding is assessed. As such, this work is concerned with assessing the presence and properties of conceptual understanding-based questions, and not the concepts present within them, although the relationship between the two is a possible route for future work (Section 4 - Future Work).

Coding Exam Papers Using Grounded Theory

I used Microsoft Excel® to perform the grounded theory coding. A table was constructed to show the title of the paper, the question identifier/number, the marks available, and the codes I associated with the question archetypes. Once I had listed all the words/phrases I could relate to the question, I proceeded on to the next. Upon finishing the second question, I would look back at the first and compare the codes I had generated, looking for any that crossed over. Once the sets of codes had been altered (if at all), I would advance on to the next question. This cycle was repeated until the end of the paper, when I would advance on to the next paper.

It quickly became apparent that although there is a large amount of question archetype variance within each exam, the overall content does not change much from year to year. The consequence of this was that a set of codes generated from early papers in each module tended to be able to completely describe exams set years later. I suspect that this is very much intended, and arises from the notion that a student will revise by completing past exam papers, and thus will feel more relaxed in the actual exam if the format (a consequence of the way the paper is organised and its question archetypes) remains largely unchanged.

Another important aspect of coding using grounded theory is the constant use of memos. Throughout the coding I kept written notes on my thoughts, for personal use later in helping to connect the codes together.
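The coding table itself is simple enough to be mirrored in a few lines of code. The sketch below is a hypothetical Python equivalent of that spreadsheet (the actual coding was carried out in Excel); the field names and the example question identifier are illustrative only.

    from dataclasses import dataclass, field

    @dataclass
    class CodedQuestion:
        """One row of the coding table: a single exam question and its archetype codes."""
        paper: str                  # exam paper, e.g. "F11MSB 2010-2011"
        question: str               # question identifier within the paper (hypothetical example: "3(b)")
        marks: int                  # marks available for the question
        codes: set[str] = field(default_factory=set)   # archetype codes assigned so far

    def rename_code(table: list[CodedQuestion], old: str, new: str) -> None:
        """Constant comparison step: when two codes turn out to describe the same
        archetype, fold the older wording into the newer one across the whole table."""
        for row in table:
            if old in row.codes:
                row.codes.discard(old)
                row.codes.add(new)

Revisiting earlier questions after each newly coded one, as described above, then amounts to repeated calls to rename_code as the set of codes settles.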
  • 24. 20 Defining a ‘Question’ As a consequence of the variation in the authors of the exam papers, questions are often formatted differently. In one paper, there may only be four main questions which are worth 20 marks each, but each of these questions are broken down into smaller 2 – 5 mark sub-questions (labelled (a), (b), (c), … or (i), (ii), (iii), …), whereas another paper may number all of these questions as individual. This is largely an unimportant difference, and only really serves to guide the student along the paper, making it explicit which questions are at least semi-related to each other in concept. For the purpose of analysis, a question will be considered separate if there is a separate mark indicator. For example, Figure 5 and Figure 6 below would be treated as a single question, and three individual questions respectively. Figure 5 – An example of a single question with two subsections (from F11MSB 2010-2011). Figure 6 – An example of three sub-questions that are considered separate (from F11MSP 2008-2009).
If a single question includes multiple identical aspects, such as in Figure 7 below (where the student is asked to assign the hybridisation state and lone pair positions of two different molecules), the question is said to have a multiplicity. In the example below, the multiplicity of the question is 2.

Figure 7 – An example of a question showcasing a multiplicity of 2 (from F11MSB 2010-2011).

Defining 'Question Archetypes'

If examinations consist of a set of tools designed to deconstruct and assess the conceptual understanding of the student, it is wise to obtain an inventory of the toolbox. These tools take the form of question archetypes. Figure 8 below shows two questions that probe differing topics: reactivity towards SN2 reactions, and the acidity of phenols.

Figure 8 – Two questions with the archetypes "Sort Molecule According To Property" and "Explain Order".
  • 26. 22 These two questions involve very different concepts, but the overall archetype of the question is the same. The student is expected to sort the molecules according to a given property, and then explain why the order is such. If the student was only asked to sort the molecules, with no explanation, it stands to reason that there will be no separation between candidates who know why the order exists, and those who guess an order. Question archetypes are clearly important, but can they be connected with specific aspects of conceptual understanding?
Criteria for Sorting Codes into Aspects of Conceptual Understanding

Before the codes that were generated can be sorted into aspects of conceptual understanding, the definitions of each aspect must be reviewed. Box 1 shows the conceptual understanding aspect definitions as described in Holme's paper8. These will be used as the basis for sorting codes into categories.

Box 1 – Definitions of the aspects of conceptual understanding presented in 'Defining Conceptual Understanding in General Chemistry'8:
- Transfer: Apply core chemistry ideas to chemical situations that are novel to the student.
- Depth: Reason about core chemistry ideas using skills that go beyond mere rote memorization or algorithmic problem solving.
- Predict/Explain: Expand situational knowledge to predict and/or explain behaviour of chemical systems.
- Problem Solving: Demonstrate the critical thinking and reasoning involved in solving problems, including laboratory measurement.
- Translate: Translate across scales and representations.

The Transfer aspect is difficult to form meaningful criteria for, as the novelty of a chemical situation to a certain student is unpredictable, owing largely to the wide variety in exposure and experience amongst a class. A question could be considered novel if it does not appear in another exam paper, rendering it impossible for the student to have seen it, although this assumes all students look through all past exam papers, which is often not the case. Furthermore, it is impossible to know which chemical situations were presented to the students during their course. As the definition for this aspect is based upon an unknowable, unquantifiable state of the question (its novelty), this aspect must be ignored.

The Depth fragment is equally difficult to handle. The definition suggests that any question that has no aspect of rote memory or algorithmic problem solving can be categorised under Depth. This means that it is unlikely any code will be generated that is directly Depth-related, but a rudimentary score can be calculated as an anti-rote memory/algorithmic problem solving score: simply the inverse of the number of codes belonging to these two categories.

Holme's paper does not expand on the definition of a problem, which has many definitions across many disciplines, but explicitly mentions critical thinking and reasoning.
Definitions of critical thinking vary, but seem to overlap around the process of conceptualising, analysing, evaluating and applying information gathered from some sort of experience, be it observation or communication, and then using the results of this thinking process to guide actions or decisions. In summary, this definition is vague, but hints that there must be some 'real-world' aspect to the question for it to be considered a problem. In the context of organic chemistry, this will mainly take the form of questions dealing with the outcome of a chemical situation. Codes will therefore be assessed as Problem Solving if they require the student to think critically about data in order to generate a solution to a problem in a real-world experimental context.

The Translate and Predict/Explain aspects are well defined. Codes that require the student to make some prediction given a set of factors, or to explain a result or prediction in terms of factors, will be categorised as Predict/Explain, whereas codes related to generating or considering an alternative representation to one that is given will be categorised as Translate.

Criteria for Sorting Reactions

During the initial stages of coding the exam papers, every mechanistic question was treated as equal, differentiating only between questions involving single or multiple 'concepts'. As the coding developed, I moved away from looking at concepts and started deconstructing the way in which they are examined, through the processes that the questions require the student to execute. Once I started deconstructing the mechanistic questions, I realised that they belonged to a broader class of questions: reactions. Where I had previously seen a difference between asking a student to predict a product and asking them to predict the flow of electrons in the system, I now saw a relationship tying them together. They were two incomplete parts of a puzzle, and shared some common pieces.

When a reaction is broken down, it usually has only six aspects: starting material(s), product(s), reagent(s), condition(s), a name, and a mechanism. In order to assess a student's conceptual understanding of these reactions, combinations of these elements can be presented, often along with missing elements, and the student must exercise different modes of thinking to fill in the blanks, or to explain some aspect of the system. By noting whether a question gives, partially gives, demands, or leaves out each of these six aspects, a complete description of the question can be assigned.
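One way to make this six-aspect description concrete is to record one score per aspect using the key set out in Table 1 below. The snippet is a minimal, hypothetical Python representation of a single coded reaction; it is an illustration only, not the notation used in the coding spreadsheet.

    # Scores follow the key in Table 1 (below): how much of each aspect the question
    # gives, and how much it demands from the student in the answer.
    SCALE = ("0", "1", "1b", "1b/2", "2")
    ASPECTS = ("starting_material", "product", "reagent", "condition", "mechanism", "name")

    # Hypothetical example: starting material and reagent given in full, product and
    # mechanism fully demanded, no conditions or reaction name involved.
    example_reaction = {
        "starting_material": "1",
        "product": "2",
        "reagent": "1",
        "condition": "0",
        "mechanism": "2",
        "name": "0",
    }
    assert set(example_reaction) == set(ASPECTS) and set(example_reaction.values()) <= set(SCALE)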
Of the 18 papers I looked at, 120 questions were coded as "Reaction". Some of these questions contained multiple different reactions (the multiplicity of the question), bringing the total number of reactions to 204. Each aspect of the reaction within a question was given a score according to the key in Table 1. The count of each score within each aspect is shown in Table 2.

0 – Aspect is not present in the question and not demanded in the answer
1 – Aspect is fully present in the question (complete molecular structures, or an unambiguous chemical formula, e.g. LiAlH4, H2O)
1b – Aspect is partially present in the question (an ambiguous chemical formula, e.g. C5H11O, or nomenclature)
1b/2 – Aspect is partially present in the question and fully demanded in the answer
2 – Aspect is fully demanded of the student

Table 1 – The key used when coding the reaction questions.

Aspect               |   0 |   1 |  1b | 1b/2 |   2
Starting Material(s) |   0 | 169 |   8 |    6 |  21
Product(s)           |   1 |  97 |   0 |   23 |  83
Reagent(s)           |   8 |  91 |  35 |    0 |  70
Condition(s)         | 172 |  11 |   1 |    0 |  20
Mechanism            | 108 |   4 |   0 |    7 |  85
Name                 | 164 |  36 |   0 |    0 |   4

Table 2 – The count of each score given to each aspect in the sample of reaction questions (N = 204).

From this, it can be seen that the starting material(s), product(s) and reagent(s) are most likely to be fully given in the question, whilst the condition(s), mechanism and name of the reaction are mostly absent from the questions.
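Given one record per reaction in the form sketched earlier (a question with multiplicity 3 contributing three records), the counts in Table 2 reduce to a simple tally. A minimal Python sketch, assuming a hypothetical reactions list of dictionaries keyed by the six aspects:

    from collections import Counter

    ASPECTS = ("starting_material", "product", "reagent", "condition", "mechanism", "name")

    def tally_scores(reactions):
        """Count how often each score (0, 1, 1b, 1b/2, 2) was assigned to each aspect."""
        counts = {aspect: Counter() for aspect in ASPECTS}
        for reaction in reactions:
            for aspect in ASPECTS:
                counts[aspect][reaction[aspect]] += 1
        return counts

    # For the 204 reactions coded here, counts["mechanism"]["2"] would be 85 and
    # counts["condition"]["0"] would be 172 (compare Table 2).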
Table 3 lists every unique combination of reaction aspects found in the 204 sample questions, along with their frequency and the total marks assigned. The frequency was generated as the sum of the multiplicities for a given reaction code: if a single question worth a single mark has three reactions in it, all with the same code (a multiplicity of 3), this is counted as 3 separate reactions. The total number of marks attributed to each code was generated by summing the marks for each occurrence of the code, ignoring the multiplicity of the question.

(SM = Starting Material(s), Prod = Product(s), Reag = Reagent(s), Cond = Condition(s), Mech = Mechanism, Freq = frequency of occurrence.)

SM   | Prod | Reag | Cond | Mech | Name | Freq | Marks
1    | 1    | 1    | 0    | 2    | 0    | 14   | 63
1    | 1b/2 | 1    | 0    | 2    | 0    | 5    | 50
1    | 2    | 1    | 0    | 2    | 0    | 11   | 49
1    | 1    | 2    | 0    | 0    | 0    | 32   | 42
1    | 1    | 2    | 0    | 2    | 0    | 7    | 42
1    | 1    | 1    | 0    | 2    | 1    | 7    | 39
2    | 2    | 2    | 2    | 2    | 1    | 4    | 37
1    | 2    | 1b   | 0    | 0    | 0    | 14   | 23
1    | 2    | 1    | 0    | 1b/2 | 0    | 7    | 23
1    | 2    | 1    | 0    | 0    | 0    | 14   | 22
1    | 1    | 1b   | 0    | 2    | 1    | 4    | 22
1    | 1    | 1    | 1    | 2    | 0    | 3    | 18
1    | 2    | 1    | 0    | 2    | 1    | 3    | 15
1    | 1    | 2    | 0    | 2    | 1    | 1    | 15
1    | 2    | 1    | 0    | 0    | 1    | 5    | 14
1    | 1    | 2    | 2    | 0    | 0    | 7    | 13
1    | 2    | 1    | 0    | 1    | 0    | 4    | 12
1    | 2    | 2    | 0    | 2    | 1    | 2    | 10
1    | 1    | 1b   | 1    | 2    | 0    | 2    | 9
1    | 2    | 1b   | 0    | 2    | 0    | 2    | 9
1b   | 2    | 2    | 2    | 0    | 0    | 4    | 8
1    | 2    | 0    | 0    | 2    | 1    | 2    | 8
1    | 1b/2 | 1    | 1    | 0    | 0    | 2    | 8
1    | 1b/2 | 1    | 1    | 2    | 0    | 2    | 8
1    | 1b/2 | 1    | 0    | 0    | 0    | 7    | 7
1    | 1b/2 | 1b   | 0    | 0    | 0    | 2    | 7
2    | 1    | 1    | 0    | 0    | 0    | 4    | 6
1b/2 | 1b/2 | 1b   | 0    | 0    | 0    | 2    | 6
1    | 2    | 1b   | 0    | 0    | 1    | 1    | 6
1    | 2    | 1b   | 0    | 2    | 1    | 1    | 6
1    | 1    | 0    | 0    | 0    | 2    | 1    | 6
1    | 1    | 1b   | 0    | 2    | 0    | 1    | 6
2    | 1    | 2    | 0    | 0    | 0    | 5    | 5
1    | 1    | 0    | 0    | 2    | 2    | 1    | 5
2    | 1    | 1b   | 0    | 0    | 0    | 2    | 4
1b/2 | 1b/2 | 1    | 0    | 0    | 0    | 2    | 4
1    | 1    | 1b   | 0    | 2    | 2    | 1    | 4
1b   | 1b/2 | 0    | 0    | 2    | 0    | 1    | 4
1    | 1    | 0    | 1b   | 2    | 0    | 1    | 4
2    | 2    | 2    | 0    | 2    | 0    | 1    | 4
1b   | 1    | 2    | 1    | 2    | 0    | 1    | 3
1    | 2    | 0    | 1    | 0    | 0    | 1    | 3
1b   | 2    | 1b   | 0    | 2    | 1    | 1    | 2
1b   | 2    | 1b   | 0    | 2    | 0    | 1    | 2
1    | 0    | 0    | 0    | 2    | 0    | 1    | 2
1b/2 | 1    | 2    | 0    | 0    | 0    | 1    | 2
1b/2 | 1    | 1    | 0    | 0    | 0    | 1    | 2
1    | 1    | 1b   | 0    | 0    | 2    | 1    | 1

Table 3 – Every unique combination of aspects within the 204 reaction questions, sorted by the total marks available.

The relationship between frequency and marks was plotted, as shown in Figure 9. From this graph, three distinct 'zones' arose. The green zone includes points with a total number of marks above 30, or a frequency above 10. The blue zone includes points with a total number of marks between 11 and 30, or a frequency between 4 and 10. The third, uncoloured zone contains codes with a total number of marks below 11 or a frequency below 4. These zones were assigned based on the visual appearance of the graph alone. The zones are useful when considering the importance placed on the codes by the examiners: more frequently occurring codes, or codes that are awarded more marks, are considered to be more important. Therefore codes in the green zone are more important than codes in the blue zone, and codes in the uncoloured zone are considered least important.
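The aggregation behind Table 3, and a literal reading of the zone thresholds above, can be written down compactly. This is a minimal Python sketch under stated assumptions (each coded question carries its marks, its multiplicity and its six aspect scores); in the original analysis the zones were drawn by eye, so borderline points are resolved here simply by testing the green criteria first.

    from collections import defaultdict

    def aggregate(questions):
        """Frequency (sum of multiplicities) and total marks (counted once per question)
        for every unique combination of the six reaction aspects."""
        freq, marks = defaultdict(int), defaultdict(int)
        for q in questions:
            key = tuple(q["aspects"])        # e.g. ("1", "1", "1", "0", "2", "0")
            freq[key] += q["multiplicity"]
            marks[key] += q["marks"]
        return freq, marks

    def zone(total_marks, frequency):
        """Importance zone for one aspect combination (thresholds as described above)."""
        if total_marks > 30 or frequency > 10:
            return "green"
        if 11 <= total_marks <= 30 or 4 <= frequency <= 10:
            return "blue"
        return "uncoloured"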
Figure 9 – A scatter graph showing the relationship between frequency of occurrence and the marks awarded to each archetype (marks vs frequency of occurrence for each unique combination of reaction aspects; each point is one unique archetype).

Figure 10 shows how the reaction questions can be broken down. The two main aspects of the reaction, the starting material(s) and product(s), are connected by a reaction mechanism. If either the starting material(s) or product(s) are missing from the reaction, the student must use their understanding of the chemical system to either synthesise or retrosynthesise the correct molecules. Questions that provide a starting material and demand a product can be thought of as occurring in the 'forward' direction, whereas questions that demand a starting material that will be transformed into a given product can be considered as occurring in the 'backward' direction. Reagents and conditions are supplementary, used to produce the correct answers. The mechanism of the reaction only occurs in the forward direction. Alternatively, the question may provide both starting material(s) and product(s), and then require the student to consider the process of the reaction, usually to generate the correct reagent or mechanism. These three types of questions are similar in that they test the student's understanding of the process of the reaction. Of the 204 reactions analysed in this work, 96 (47.1%) were forward reactions, 13 (6.4%) were backward reactions, and 85 (41.7%) gave both starting materials and products. As these test the understanding of the process of the reaction, they were labelled "Reaction Process" questions.
The 10 remaining reactions required the student to generate both the starting material(s) and product(s) from either the name of a reaction (e.g. Addition-Elimination) or a reagent, and the student must generate their own examples of starting materials, products, reagents and conditions. As these tested the student's knowledge of the whole of the reaction, they were labelled "Whole Reaction" questions. As there is no current research into how reactions test conceptual understanding, differentiating between reactions that test conceptual understanding and those that do not proved difficult. Ultimately, I decided upon the following criteria (illustrated in the sketch after Figure 10):
• Reactions that involve a transformation, in either the forward or backward direction, test the conceptual understanding of how the functional group(s) behave in the novel chemical system.
• Any reaction that demands a mechanism from the student tests conceptual understanding of how the electrons in the system behave.
• Therefore, the only reaction questions that do not test conceptual understanding are those which ask for reagents and/or conditions only. These test the rote memory of the student.
Figure 10 – Diagram showing the aspects of a reaction question (starting material and products linked by forward/backward transformations and a mechanism, giving 'Reaction Process' and 'Whole Reaction' question types).
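The criteria above amount to a small decision procedure. The sketch below, in Python, shows one way it could be expressed using the Table 1 scores; the field names, the GIVEN/DEMANDED groupings and the treatment of 'Whole Reaction' questions as transformations are assumptions made for illustration, not the exact manual coding procedure used when reading the papers.

# Scores follow Table 1: "0", "1", "1b", "1b/2", "2".
GIVEN = {"1", "1b", "1b/2"}    # aspect at least partially present in the question
DEMANDED = {"1b/2", "2"}       # aspect required in the answer

def direction(code):
    # Classify a coded reaction as forward, backward, reaction process or whole reaction.
    start, product = code["starting_material"], code["product"]
    if start in GIVEN and product in DEMANDED:
        return "forward"
    if product in GIVEN and start in DEMANDED:
        return "backward"
    if start in GIVEN and product in GIVEN:
        return "reaction process"
    return "whole reaction"    # neither given: built from a reaction name or a reagent

def tests_conceptual_understanding(code):
    # Criteria above: transformations and demanded mechanisms test conceptual
    # understanding; reagent/condition-only questions test rote memory.
    transformation = direction(code) in ("forward", "backward", "whole reaction")
    demands_mechanism = code["mechanism"] == "2"
    return transformation or demands_mechanism

example = {"starting_material": "1", "product": "2", "reagent": "1",
           "condition": "0", "mechanism": "0", "name": "0"}
print(direction(example), tests_conceptual_understanding(example))  # forward True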
165 (80.9%) of the reaction codes were classified as testing conceptual understanding, whereas 39 (19.1%) of the reaction codes were classified as not testing conceptual understanding. These results are expanded upon in the section "Individual Module Results".
Section 3 – Results and Discussion

Generated Codes
50 unique codes were generated over 17 full exam papers, shown in Table 4. They were sorted into categories according to the criteria outlined in Section 2 - Research Methodology, 'Criteria for Sorting Codes'. Where necessary, a description of the code is shown, along with the justification for the particular category assignment.

Category: Rote Memory
Justification: These codes all ask the student to generate information that is impossible to generate from first principles, and therefore must be recalled from memory.
Assign Bond Angles – Label a figure of a molecule with its bond angles.
Assign Functional Group – Name a molecule's functional group.
Assign Hybridisation – Determine the hybridisation state of given atoms within a molecule.
Assign Lone Pairs – Label a molecule with the position of any lone pairs of electrons.
Assign Orbitals – Label a molecule with the atomic/molecular orbitals relevant to the question or reactivity.
Assign Reagents – Match reagents to their reactions.
Justification: The information requested cannot be generated in the exam, and therefore must be learned beforehand.
Describe Problem With Reaction – Describe the real-world problems with performing a given reaction.
Describe Reaction – Describe the 'details' of a given reaction.
Justification: These codes assess the depth of a student's organic chemistry awareness, which the student must recall through rote memory during the test.
Provide Examples – Generate examples that fulfil criteria.
Provide Reaction – Generate a reaction that fulfils criteria.
Recall Definition – N/A

Category: Algorithmic Problem Solving
Justification: The cognitive processes demanded by these codes require the student to apply a process to a group or individual case, determining a description of the state(s) of the molecule(s).
Assign Electron Configuration – Write the electron configuration of an atom within a molecule.
Assign Polarity – Describe the polarity of a molecule.
Assign Property – Assign a molecule a descriptor as requested in the question, describing a general property of
that molecule (e.g. Coloured).
Assign Rate Determining Step – State which of a given set of steps determines the overall rate of a reaction.
Assign Stereochemistry – Assign a stereochemical descriptor to a molecule.
Assign Term – Assign a molecule a given descriptor as requested in the question, not limited to a property of the molecule (e.g. Aldehyde, Ketone).
Assign Value – Assign a molecule a value corresponding to a property of that molecule (e.g. pKa).
Justification: Determining these answers requires the student to follow an algorithmic process.
Derive Rate Equation – Formulate the equation used to determine the rate of a given reaction.
Determine Limiting Reagent – N/A
Determine Stability – Assess how stable a molecule is, often in comparison to other similar molecules.
Formulate Equilibrium Expression – Formulate the equation used to describe the equilibrium of a reversible reaction.
Justification: IUPAC naming is an algorithmic process.
Nomenclature – Assign standard IUPAC names to molecules.
Justification: Answers are generated through applying information in the question to an algorithm (usually an equation).
Numeric Problem – N/A

Category: Translate
Justification: The student is translating their knowledge of the movement of individual electrons into a general 'smeared out' picture.
Describe Electron Distribution – Summarise the distribution of electrons in a molecule.
Justification: Drawing the required information requires the student either to manipulate a three-dimensional image in their mind, or to generate an alternate representation from a given one.
Draw Diagram – Draw an unspecified diagram.
Draw Intermediate – Draw a reaction intermediate.
Draw Isomers – Draw isomeric pairs (unspecified isomerism).
Draw Newman Projection – N/A
Draw Orbital Interaction – Draw how given orbitals will interact during a reaction.
Draw Resonance Diagrams – N/A
Draw Stereoisomers – Draw isomeric pairs (specified stereoisomerism).
Draw Structure From Descriptor – Draw a molecule from a given descriptor (e.g. Nucleophile, Carboxylic Acid).
Draw Transition State – Draw a transition state of a molecular system during a given reaction.
Justification: The student must translate a poorly drawn molecule into a well-drawn one.
Redraw Structure – Redraw a presented molecule in a more correct manner.

Category: Predict/Explain
Justification: The student is asked to fully explain a concept that is related to the behaviour of a chemical system.
Explain Concept – N/A
Justification: These codes require the student to predict and/or explain an aspect of a chemical system.
Explain Observation – N/A
Explain Order – Explain the ordering of a list of molecules, sorted according to a property (e.g. Reactivity, pKa).
Explain Property – Explain the reasons behind a given property of a molecule (e.g. a high pKa).
Explain Reaction Condition – Explain the reasoning behind a given reaction condition (e.g. performed at 0 °C).
Explain Reaction Outcome – N/A
Explain Regiochemistry – Explain why a product in a given reaction possesses the regiochemistry it does.
Explain Stereochemistry – Explain why a product in a given reaction possesses the stereochemistry it does.
Explain Variable Interaction – Explain how two variables will interact in a system (e.g. 'Temperature' and 'Rate of Reaction').
Predict Reaction Outcome – N/A
Predict Stereochemistry – N/A
Justification: This code requires a prediction of how variables in a chemical system will interact to affect the overall property that is being sorted against.
Sort Molecules According To Property – Sort a list of molecules according to a given property (e.g. pKa).
Category: Problem Solving
Justification: These codes directly ask for the solution to a presented real-world problem.
Provide Conditions To Favour Pathway – Provide a set of conditions to favour a specific reaction pathway (e.g. stereochemical outcome).
Solve Problem – Generate a solution to the given problem with a given reaction.

Category: Reaction
Justification: N/A
CU Reaction – A reaction that assesses conceptual understanding.
Non-CU Reaction – A reaction that assesses rote memory.
Table 4 – The 51 codes generated from the grounded theory coding of 17 first year organic chemistry papers, with a brief descriptor and justification of category where necessary.

Table 5 shows the frequency of each code across the entire set of exam papers looked at in this work. 23 of the codes occur between 1 and 5 times, making up 4.9% of the 754 codes. This could mean these codes are too specific and could be condensed through combination with other codes, or alternatively the questions these codes belong to could have received negative feedback, prompting the exam authors to leave such questions out of the following paper (a short counting sketch of this share is given after Table 5).

Code | Occurrences
Draw Orbital Interaction | 1
Provide Reaction | 1
Determine Limiting Reagent | 1
Draw Intermediate | 1
Assign Reagents | 1
Assign Rate Determining Step | 1
Describe Problem With Reaction | 1
Explain Reaction Condition | 1
Predict Stereochemistry | 1
Explain Reaction Outcome | 1
Describe Reaction | 1
Explain Regiochemistry | 1
Derive Rate Equation | 1
Explain Stereochemistry | 1
Provide Conditions To Favour Pathway | 2
Numeric Problem | 2
Draw Stereoisomers | 2
Predict Reaction Outcome | 2
Formulate Equilibrium Expression | 2
Assign Value | 2
Recall Definition | 3
Describe Electron Distribution | 4
Determine Stability | 4
Explain Order | 5
Assign Electron Configuration | 6
Assign Property | 6
Solve Problem | 6
Draw Newman Projection | 7
Explain Variable Interaction | 8
Draw Transition State | 8
Assign Functional Group | 8
Draw Structure From Descriptor | 9
Explain Property | 10
Assign Bond Angles | 13
Assign Polarity | 13
Assign Orbitals | 14
Draw Resonance Diagrams | 16
Assign Lone Pairs | 17
Provide Examples | 22
Explain Concept | 22
Draw Isomers | 22
Sort Molecules According To Property | 22
Draw Diagram | 25
Explain Observation | 31
Redraw Structure | 37
Assign Hybridisation | 39
Assign Term | 42
Nomenclature | 53
Assign Stereochemistry | 59
CU Reaction | 165
Non-CU Reaction | 39
Grand Total | 761
Table 5 – The total occurrences of each code within all 17 exam papers.
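The rare-code share quoted before Table 5 can be recomputed directly from the table. A minimal sketch, assuming the table is held as a dictionary of occurrence counts (only a few entries are shown here; the full dictionary would hold every generated code):

# Share of codes that occur only 1-5 times, recomputed from Table 5.
occurrences = {
    "Draw Orbital Interaction": 1,
    "Provide Conditions To Favour Pathway": 2,
    "Recall Definition": 3,
    "Assign Stereochemistry": 59,
    "CU Reaction": 165,
}

rare = {code: n for code, n in occurrences.items() if 1 <= n <= 5}
total_assigned = sum(occurrences.values())
rare_assigned = sum(rare.values())

print(f"{len(rare)} rare codes, {100 * rare_assigned / total_assigned:.1f}% of all code assignments")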
Figure 11 shows how the codes are distributed once sorted into the aspects of conceptual understanding, rote memory, algorithmic problem solving, or reaction.

Figure 11 – A pie chart showing the distribution of codes within the aspects of conceptual understanding for the 17 papers looked at in this work (Rote Memory 120, 16%; Algorithmic 192, 25%; Translate 132, 17%; Predict/Explain 105, 14%; Problem Solving 8, 1%; CU Reaction 165, 22%; Non-CU Reaction 39, 5%).
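The breakdown in Figure 11 is obtained by mapping each assigned code to its aspect (Table 4) and counting. A minimal sketch, assuming a code-to-aspect dictionary and a flat list of code assignments, both heavily abbreviated here for illustration:

from collections import Counter

# Illustrative mapping from a few generated codes to their aspect; the full
# mapping follows Table 4. 'code_assignments' stands in for the per-question coding.
ASPECT = {
    "Assign Hybridisation": "Rote Memory",
    "Nomenclature": "Algorithmic",
    "Draw Isomers": "Translate",
    "Explain Observation": "Predict/Explain",
    "Solve Problem": "Problem Solving",
    "CU Reaction": "CU Reaction",
    "Non-CU Reaction": "Non-CU Reaction",
}

code_assignments = ["Nomenclature", "Draw Isomers", "CU Reaction", "Assign Hybridisation"]

aspect_counts = Counter(ASPECT[c] for c in code_assignments)
total = sum(aspect_counts.values())
for aspect, n in aspect_counts.most_common():
    print(f"{aspect}: {n} ({100 * n / total:.0f}%)")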
Individual Module Results

F11MSB
'Molecular Structure and Bonding' was a module that aimed to teach the basics of structure and bonding in both general chemistry and organic chemistry. As this work is only concerned with organic material, certain questions in these papers are irrelevant and will not be included in the analysis. Seven papers from 2007 - 2013 were available from the past paper archive, with a total of 87 marked organic questions. Questions 7 - 10 in Section A, along with all of Section C, are concerned with general structure and bonding, and thus will not be analysed.

Generated Codes
23 different codes were generated, shown in Table 6 below in order of frequency. For a description of each code see Table 4.

Rote Memory: Assign Orbitals; Recall Definition; Provide Examples; Assign Lone Pairs; Assign Hybridisation
Algorithmic: Numeric Problem; Assign Value; Determine Stability; Assign Term; Nomenclature; Assign Stereochemistry
Translate: Draw Resonance Diagrams; Describe Electron Distribution; Draw Isomers; Draw Diagram; Redraw Structure
Predict/Explain: Predict Reaction Outcome; Explain Order; Explain Property; Sort Molecules According To Property; Explain Concept
Problem Solving: Solve Problem
Reaction: Reaction
Table 6 – The codes generated during the analysis of seven F11MSB past exam papers, organised into categories and ordered by frequency of occurrence.
223 codes were assigned in total within the 7 papers, averaging ≈32 codes per paper. 83 marks in total were available in each paper (only including the questions looked at in this work). Therefore the average number of marks per code is ≈2.6. The total number of codes assigned per paper can be seen in Figure 12. The steady rise in the number of codes doesn't indicate an increase in the cognitive demand placed on the students, as only the 'Algorithmic' category is increasing in size. Figure 13 shows how the variety (how many of the 23 generated codes were assigned to each paper) generally decreases over time as the 'Algorithmic' category becomes dominant. The frequency of each code was plotted for the whole module (see Figure 14). The most frequent code was 'Assign Stereochemistry', with 52 counts (23% of the codes), whilst the least frequent was 'Predict Reaction Outcome' with only 1 count. The abundance of 'Assign Stereochemistry' is due to the fact that questions on the subject of stereochemistry always have a multiplicity of at least two (matching up the stereochemical pair), and that there are many types of stereochemistry presented in the module lectures (cis/trans isomerism, enantiomers, diastereoisomers).

Figure 12 – The total number of codes generated under each aspect during the analysis of seven F11MSB past exam papers (2007 - 2013).
Figure 13 – The number of unique codes generated upon analysis of each F11MSB exam paper (2007 - 2013).
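The per-module summary figures quoted here, and for the later modules, follow from the same simple arithmetic; a small sketch using the F11MSB numbers above:

# Per-module summary arithmetic, using the F11MSB figures quoted above.
total_codes = 223          # codes assigned across the module's papers
n_papers = 7
marks_per_paper = 83       # organic marks available per paper

codes_per_paper = total_codes / n_papers                      # ≈ 32
marks_per_code = (marks_per_paper * n_papers) / total_codes   # ≈ 2.6

print(f"codes per paper ≈ {codes_per_paper:.0f}, marks per code ≈ {marks_per_code:.1f}")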
Figure 14 – The frequency with which each code was generated during the analysis of seven F11MSB past exam papers (Assign Orbitals 2; Recall Definition 3; Provide Examples 6; Assign Lone Pairs 15; Assign Hybridisation 19; Numeric Problem 2; Assign Value 2; Determine Stability 4; Assign Term 12; Nomenclature 13; Assign Stereochemistry 52; Draw Resonance Diagrams 2; Describe Electron Distribution 2; Draw Isomers 3; Draw Diagram 3; Redraw Structure 33; Predict Reaction Outcome 1; Explain Order 3; Explain Property 4; Sort Molecules According To Property 8; Explain Concept 14; Solve Problem 3; CU Reaction 17).

Figure 15 shows the size of each category of code in the module once the generated codes were sorted into groups. 58% of the module (130 codes) was composed of archetypes that were either 'Rote Memory' or 'Algorithmic' associated. This is a larger proportion than any of the interviewed lecturers suggested, although this figure is an average across all the years the module ran, and also ignores the non-organic parts of the paper. As 'Depth' is defined as reasoning about core ideas using skills other than rote memory or algorithmic problem solving, 42% of the module can be considered to test 'Depth'.
'Translate' was the biggest group out of the five conceptual understanding aspects, which indicates it is easy to examine, and possibly that it was considered the most important aspect for the young organic chemist. Indeed, the ability to move fluently between representations is crucial for an area of chemistry that is so concerned with 3D structures. There was a relatively low number of 'Reaction' codes, which make up only 9% of the data set, but all reactions present were found to relate to conceptual understanding. The paper is therefore more focused on the individual concepts that will go on to construct the set of tools the organic student will use in the future, rather than looking at organic reactions as a complex interplay of many concepts.

Frequency of Each Aspect (2007-2012)
Figure 12 also shows how the frequency of the total number of codes in each 'aspect' changed throughout the timeline of the module. For a clearer representation of these results, please see 'Appendix 2 – Figures'. The 'Rote Memory', 'Algorithmic', 'Translate' and 'Problem Solving' aspects generally increase in frequency over time, whilst the 'Predict/Explain' aspect generally decreases.

Figure 15 – A pie chart showing the size of each category of code assigned during the analysis of seven F11MSB past exam papers (Rote Memory 45, 20%; Algorithmic 85, 38%; Translate 43, 19%; Predict/Explain 30, 14%; Problem Solving 3, 1%; CU Reaction 17, 8%).
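The 'Depth' percentages reported for each module follow from the aspect counts. A minimal sketch, assuming the F11MSB counts from Figure 15 and treating Rote Memory, Algorithmic and Non-CU Reaction codes as the non-'Depth' share:

# 'Depth' share for a module: the fraction of codes that are not Rote Memory,
# Algorithmic, or Non-CU Reaction. Counts below are the F11MSB figures (Figure 15).
aspect_counts = {
    "Rote Memory": 45,
    "Algorithmic": 85,
    "Translate": 43,
    "Predict/Explain": 30,
    "Problem Solving": 3,
    "CU Reaction": 17,
    "Non-CU Reaction": 0,   # none were found in F11MSB
}

NON_DEPTH = {"Rote Memory", "Algorithmic", "Non-CU Reaction"}

total = sum(aspect_counts.values())
depth = sum(n for aspect, n in aspect_counts.items() if aspect not in NON_DEPTH)
print(f"Depth: {100 * depth / total:.0f}% of {total} codes")  # ≈ 42% of 223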
F11MSP
'Mechanism, Synthesis and Pi-Bond Chemistry' was a module that aimed to continue building up students' knowledge of fundamental organic reactions, with a focus on mechanisms. All questions on each paper are relevant to this work. Four papers from 2007 – 2010 were available from the past paper archive, with a total of 90 marked questions.

Generated Codes
30 codes were generated, shown in Table 7 below, sorted into their groups and in order of frequency.

Rote Memory: Assign Reagents; Describe Reaction; Describe Problem With Reaction; Provide Examples; Provide Reaction; Assign Orbitals; Assign Bond Angles; Assign Hybridisation
Algorithmic: Assign Rate Determining Step; Assign Stereochemistry; Nomenclature
Translate: Draw Orbital Interaction; Describe Electron Distribution; Draw Newman Projection; Redraw Structure; Draw Transition State; Draw Resonance Diagrams; Draw Diagram
Predict/Explain: Explain Reaction Condition; Explain Regiochemistry; Predict Stereochemistry; Explain Reaction Outcome; Explain Order; Explain Variable Interaction; Explain Property; Explain Concept; Sort Molecules According To Property; Explain Observation
Problem Solving: Provide Conditions To Favour Pathway
Reaction: CU Reaction; Non-CU Reaction
Table 7 – The codes generated during the analysis of four F11MSP past exam papers, organised into categories and sorted by frequency of occurrence.
In comparison to F11MSB, a larger variety of codes was generated, showing that there is more variety to the paper. As more of the generated codes were categorised under 'Translate' and 'Predict/Explain', it can be concluded that the F11MSP papers are inherently better at testing conceptual understanding, due to an increase in the number of ways the student must demonstrate it. There are the same number of codes in the combined Rote Memory and Algorithmic categories, showing that there is still a certain amount of 'you just have to know it' content, although now the Rote Memory category is larger. A total of 208 codes were assigned within the 4 papers, averaging 52 codes per paper. The total marks for each paper was 120. Therefore the average number of marks per code is ≈2.3. This is approximately the same as F11MSB. In conclusion to this rudimentary analysis, F11MSP uses a wider variety of methods to test conceptual understanding, but places equal worth on each method. Figure 16 shows how many codes were assigned to each F11MSP paper. The general decrease in the quantity of codes assigned shows how the cognitive load on the student decreases, mostly due to the reduction of Translate and Predict/Explain codes.

Figure 16 – The total number of codes assigned to each paper in each aspect of conceptual understanding in F11MSP (2007 - 2010), sorted into categories.
Figure 17 – The number of unique codes present in each F11MSP paper (2007 - 2010).
Figure 17 shows the variety of codes present within the paper, plotted over the life span of the module. An almost 50% reduction in the variety of codes occurs, from 19 different codes in 2007 – 2008 to only 10 different codes in 2010 – 2011. This is additional evidence that the module becomes less effective at assessing conceptual understanding over time. The frequency of each code was plotted for the whole module (see Figure 19). The most frequent code was 'CU Reaction', with 82 counts (39% of the codes). This reinforces the claim of the module description that this module is focused on mechanistic thinking. Twelve codes appear only once over the module. The large number of single-occurrence codes can be traced to the low number of papers (N=4), as there is less time for question archetypes to be recycled, but could also indicate that the generated codes were too specific and need generalising. Figure 18 shows the size of each group once the codes are collapsed into their relevant aspects. 30% of the module (62 codes) was composed of codes that were either 'Rote Memory', 'Algorithmic' or 'Non-CU Reaction' associated. This is a smaller proportion than Lecturers 2 and 3 suggested. As 'Depth' is defined as reasoning about core ideas using skills other than rote memory or algorithmic problem solving, 70% of the module can be considered to test 'Depth'. For the non-reaction type questions, the most frequently examined aspect of conceptual understanding was 'Predict/Explain' (39 codes, 19%), in contrast to F11MSB where the most examined aspect was 'Translate'. This also reinforces the claims of the module (with regard to a focus on mechanisms), as the ability to correctly predict the evolution of a chemical system is arguably the most important part of organic chemistry reactions, and the main way to accomplish accurate predictions is through the mechanisms.

Figure 18 – A pie chart showing the size of each category of code generated during the analysis of four F11MSP past exam papers (Rote Memory 20, 10%; Algorithmic 23, 11%; Translate 23, 11%; Predict/Explain 39, 19%; Problem Solving 2, 1%; CU Reaction 82, 39%; Non-CU Reaction 19, 9%).
Approximately 19% of the reactions present within the papers were assessed as reactions that do not assess an aspect of conceptual understanding, as students are asked to recall reagents without being asked to demonstrate any mechanistic thinking. The main way the Predict/Explain aspect was assessed was through asking the student to explain observations.

Figure 19 – The frequency of each code generated upon analysis of four F11MSP past exam papers (Assign Reagents 1; Describe Reaction 1; Describe Problem With Reaction 1; Provide Examples 1; Provide Reaction 1; Assign Orbitals 5; Assign Bond Angles 5; Assign Hybridisation 5; Assign Rate Determining Step 1; Assign Stereochemistry 2; Nomenclature 20; Draw Orbital Interaction 1; Describe Electron Distribution 1; Draw Newman Projection 2; Redraw Structure 3; Draw Transition State 3; Draw Resonance Diagrams 4; Draw Diagram 9; Explain Reaction Condition 1; Explain Regiochemistry 1; Predict Stereochemistry 1; Explain Reaction Outcome 1; Explain Order 2; Explain Variable Interaction 2; Explain Property 3; Explain Concept 4; Sort Molecules According To Property 7; Explain Observation 17; Provide Conditions To Favour Pathway 2; CU Reaction 82; Non-CU Reaction 19).

Frequency of Each Aspect (2007-2010)
Figure 16 also shows how the size of each aspect changes over time. For a clearer representation of this data, please see 'Appendix 2 – Figures'. A general decrease is observed in all categories.
F11OMC
'Reactivity of Organic Molecules and Coordination Chemistry' was a module that aimed primarily to give an introduction to the fundamental synthetic transformations involving oxidation, reduction and carbon-carbon bond formation in organic synthesis, along with several transition metal topics, which won't be covered in this work. Three papers from 2007 – 2009 were available from the past paper archive, with a total of 57 marked organic questions over 2 sections (A/B). The 2007 – 2008 paper was corrupted past question 1 in Section B. These questions will still be analysed, but any comment on the differences between the papers will be restricted to the latter two papers.

Generated Codes
24 different codes were generated, shown in Table 8 below in order of frequency. For a description of each code see Table 4.

Rote Memory: Assign Bond Angles; Assign Functional Group; Assign Hybridisation; Assign Lone Pairs; Assign Orbitals
Algorithmic: Assign Electron Configuration; Assign Polarity; Assign Property; Assign Term; Derive Rate Equation
Translate: Draw Diagram; Draw Intermediate; Draw Isomers; Draw Newman Projection; Draw Resonance Diagrams; Draw Transition State
Predict/Explain: Explain Observation; Explain Property; Explain Stereochemistry; Explain Variable Interaction; Predict Reaction Outcome; Sort Molecules According To Property
Problem Solving: Solve Problem
Reaction: Reaction
Table 8 – The codes generated during analysis of three F11OMC past exam papers, organised into categories and sorted by frequency of occurrence.
In comparison to the other papers, the number of unique codes generated closely resembles F11MSB, though the Translate and Predict/Explain aspects are expanded upon, whilst the 'Algorithmic' category is reduced in size. 153 codes were assigned in total. For the two non-corrupted papers, the total number of codes assigned was 124, averaging 62 codes per paper. 205 marks were assigned over the two papers, averaging ≈3.3 marks per code. This is more than both F11MSP and F11MSB. Therefore, F11OMC uses a smaller variety of methods to test conceptual understanding, but places a higher worth on each method. Figure 20 shows the total number of codes assigned to each paper. The low number of codes in the year 2007 – 2008 is due to the corruption of the document. The decrease in the number of codes between 2008 and 2009 is due mainly to the reduction of Rote Memory codes, though the Predict/Explain and Problem Solving categories also decrease in size. However, the Translate category increases in size to compensate. Figure 21 shows the increase in the variety of codes used, with 3 additional unique codes generated during analysis of the 2009 – 2010 paper.

Figure 20 – The total number of codes generated under each aspect during analysis of three F11OMC past papers (2007 - 2009).
Figure 21 – The number of unique codes generated upon analysis of each F11OMC past exam paper (2007 - 2009).
Hence, whilst the cognitive load on the student decreases and fewer aspects of conceptual understanding are assessed, the paper becomes more varied. The frequency of each code was plotted for the whole module (see Figure 22). The most frequently used code was 'Reaction', with 26 counts, whilst six of the codes (Assign Lone Pairs, Derive Rate Equation, Draw Intermediate, Explain Stereochemistry, Predict Reaction Outcome and Solve Problem) were used only once.

Figure 22 – The frequency with which each code was generated during the analysis of three F11OMC past exam papers (Assign Lone Pairs 1; Assign Orbitals 3; Assign Bond Angles 6; Assign Functional Group 8; Assign Hybridisation 11; Derive Rate Equation 1; Assign Electron Configuration 6; Assign Property 6; Assign Polarity 13; Assign Term 21; Draw Intermediate 1; Draw Newman Projection 3; Draw Transition State 4; Draw Resonance Diagrams 5; Draw Diagram 8; Draw Isomers 11; Explain Stereochemistry 1; Predict Reaction Outcome 1; Explain Variable Interaction 2; Explain Property 3; Sort Molecules According To Property 3; Explain Observation 8; Solve Problem 1; CU Reaction 26).
Figure 24 shows the size of each aspect once the codes from all three papers are collapsed. Figure 23 shows the size of each group if the corrupted paper is ignored. 49% of the codes generated were Rote Memory/Algorithmic associated, meaning 51% of the module tests the 'Depth' aspect. Translate was the most extensively assessed aspect of conceptual understanding, whilst Problem Solving was the least extensively assessed. All 'Reaction' codes that were generated were later determined to assess conceptual understanding.

Frequency of Each Aspect (2007-2009)
Figure 20 also shows how the size of each aspect changes each year. For a clearer representation of this data, please see 'Appendix 2 – Figures'. Whilst the 'Translate' and 'Reaction' codes increase in size, the 'Rote Memory', 'Problem Solving' and 'Predict/Explain' codes decrease in frequency.

Figure 23 – A pie chart showing the size of each category of code assigned during the analysis of the two uncorrupted F11OMC past exam papers (Rote Memory 25, 20%; Algorithmic 36, 29%; Translate 28, 23%; Predict/Explain 11, 9%; Problem Solving 1, 1%; Reaction 23, 18%).
Figure 24 – A pie chart showing the size of each category of code assigned during the analysis of three F11OMC past exam papers (Rote Memory 29, 19%; Algorithmic 47, 31%; Translate 32, 21%; Predict/Explain 18, 12%; Problem Solving 1, 0%; Reaction 26, 17%).
F11OSS
No description of 'Organic Structure and Stereochemistry' was available from the module catalogue. Two papers were available, from the years 2007 – 2008 and 2009 – 2010.

Generated Codes
16 codes were generated, shown in Table 9. For a description of each code see Table 4.

Rote Memory: Assign Hybridisation; Assign Orbitals
Algorithmic: Formulate Equilibrium Expression; Assign Stereochemistry; Assign Term; Nomenclature
Translate: Draw Diagram; Draw Isomers; Draw Newman Projection; Draw Resonance Diagrams
Predict/Explain: Explain Concept; Explain Observation; Sort Molecules According To Property
Problem Solving: Solve Problem
Reaction: CU Reaction; Non-CU Reaction
Table 9 – The codes generated upon analysis of two F11OSS past papers, organised into categories and sorted according to frequency of occurrence.

94 codes in total were assigned to the two papers, for an average of 47 codes per paper. 250 marks were available over the two papers, meaning each code was worth ≈2.7 marks, approximately the same as in F11MSB and less than in F11OMC. Figure 25 shows the relative abundance of each category of code over the two papers. The variety of the papers was approximately equal, with 15 and 14 of the 16 generated codes used in each paper respectively. The frequency of each code was plotted for the whole module (see Figure 26).

Figure 25 – The total number of codes generated under each aspect during analysis of two F11OSS past exam papers (2007/2009).
'CU Reaction' was the most frequently generated code, with 22 occurrences, whilst six codes were only generated twice. This is unsurprising given the small number of past papers available.

Figure 26 – The frequency of each code generated during the analysis of two F11OSS exam papers (Assign Hybridisation 2; Assign Orbitals 2; Formulate Equilibrium Expression 2; Assign Stereochemistry 4; Assign Term 9; Nomenclature 9; Draw Diagram 2; Draw Newman Projection 2; Draw Resonance Diagrams 4; Draw Isomers 8; Draw Structure From Descriptor 9; Explain Concept 3; Explain Observation 3; Sort Molecules According To Property 3; Solve Problem 2; CU Reaction 22; Non-CU Reaction 8).

Figure 27 shows the size of each category of code in the module once the generated codes were sorted into groups. 38% of the codes that were generated were classified as 'Rote Memory', 'Algorithmic', or 'Non-CU Reaction'. As 'Depth' is defined as reasoning about core ideas using skills other than rote memory or algorithmic problem solving, 62% of the module can be considered to test 'Depth'. 'Translate' was the largest group out of the five aspects of conceptual understanding, making up 27% of the codes.

Figure 27 – A pie chart showing the size of each category of code generated during the analysis of two F11OSS past exam papers (Rote Memory 4, 4%; Algorithmic 24, 26%; Translate 25, 27%; Predict/Explain 9, 10%; Problem Solving 2, 2%; CU Reaction 22, 23%; Non-CU Reaction 8, 8%).
Approximately 27% of reaction codes were classified as not assessing any aspect of conceptual understanding.

Frequency of Each Aspect (2007/2009)
Table 10 shows the size of each category of code that was generated upon analysis of each paper from F11OSS. The 'Algorithmic', 'Translate' and 'Non-CU Reaction' categories decrease in size, whilst the 'Predict/Explain' category increases.

Paper | Rote Memory | Algorithmic | Translate | Predict/Explain | Problem Solving | CU Reaction | Non-CU Reaction
2007 - 2008 | 2 | 13 | 13 | 4 | 1 | 11 | 5
2009 - 2010 | 2 | 11 | 12 | 5 | 1 | 11 | 3
Table 10 – The size of each category of codes generated during analysis of two F11OSS past exam papers.

F11SOS
19 codes were generated, shown in Table 11.

Rote Memory: Assign Lone Pairs; Assign Bond Angles; Assign Hybridisation; Assign Orbitals; Provide Examples
Algorithmic: Assign Stereochemistry; Determine Limiting Reagent; Nomenclature
Translate: Describe Electron Distribution; Draw Resonance Diagrams; Draw Transition State; Redraw Structure; Draw Stereoisomers; Draw Diagram
Predict/Explain: Explain Concept; Sort Molecules According To Property; Explain Observation; Explain Variable Interaction
Reaction: CU Reaction; Non-CU Reaction
Table 11 – The 19 codes generated through analysis of 24 F11SOS questions over two years.
84 codes in total were generated during analysis of both F11SOS past papers, for an average of 42 codes per paper. 141 marks were available across both papers, meaning each code was worth ≈1.7 marks. This is the lowest of any of the modules analysed in this work. Figure 28 shows the total number of codes generated during analysis of both past papers. More codes were generated during analysis of the 2010 – 2011 paper, showing an increase in the cognitive demand placed on the student. Furthermore, this increased cognitive demand is focused onto the 'Predict/Explain' aspect of conceptual understanding, and 'Algorithmic' codes are replaced by 'Rote Memory' or 'Non-CU Reaction' codes. The variety of codes within the papers also increases, from 11 codes to 14 codes. Hence, the second paper requires more conceptual understanding, and is more varied than the first. Figure 30 shows how frequently each code was generated during analysis of the two F11SOS exam papers. The most abundant code was 'CU Reaction', whilst 9 of the codes were only generated once. Figure 29 shows the size of each category of code that was generated. 56% of the codes were classified as either 'Rote Memory', 'Algorithmic' or 'Non-CU Reaction'. Therefore, 44% of the paper can be considered to assess 'Depth'. Of the aspects of

Figure 28 – The total number of codes generated under each aspect during the analysis of two F11SOS past exam papers.
Figure 29 – A pie chart showing the size of each category of code assigned during analysis of the two F11SOS past exam papers (Rote Memory 22, 26%; Algorithmic 13, 16%; Translate 9, 11%; Predict/Explain 10, 12%; CU Reaction 18, 21%; Non-CU Reaction 12, 14%).