Reflection
This course focused on assessment in the online learning environment. Regardless of the generation of learners, online learning involves multiple technologies that have enabled us to deliver classes and assessments to widespread audiences; however, technology does not change the fundamentals essential to effective assessment. These fundamentals include the distinction between an assessment’s formative and summative roles, the implication that different types of knowledge, such as declarative and procedural knowledge, affect how one assesses learning outcomes, and issues of validity and generalizability (Oosterhof, Conrad, & Ely, 2008).
When designing and developing online assessments, there are three types of knowledge that test takers may be asked to demonstrate. Procedural knowledge is defined as knowing how to do something (Oosterhof et al., 2008) and can be divided into three subgroups: making discriminations, understanding concepts, and applying rules that govern relationships. Discrimination, the most basic of these skills, involves determining whether two things are the same or different.
Declarative knowledge refers to what a person knows or what facts can be recalled, and it is assertion oriented (Turban & Aronson, 1988). “Declarative knowledge is conscious and can often be verbalized. Metalinguistic knowledge, or knowledge about a linguistic form, is an example of declarative knowledge” (Connelly, 2003).
According to Mayer and Wittrock (2009), problem solving is “cognitive processing directed at achieving a goal when no solution method is obvious to the problem solver.” They also state that “problem solving is related to other terms such as thinking, reasoning, decision making, critical thinking, and creative thinking. Thinking refers to a problem solver’s cognitive processing, but it includes both directed thinking (which is problem solving) and undirected thinking (such as daydreaming).”
“Validity is the most central and essential question in the development and use of educational measures” (Oosterhof et al., 2008). Criterion-related evidence “indicates how well performance on a test correlates with performance on relevant criterion measures that are external to the test” (Oosterhof et al., 2008).
Oosterhof et al. (2008) define generalizability as “being concerned with inconsistencies between different samples of student proficiency that are or could be included in an assessment.” These inconsistencies arise particularly when more than one test item is designed to measure the same skill, and they can stem from the learner’s interest in the topic, his or her perception of what is being asked, or good or bad luck with guessing (Oosterhof et al., 2008).
Different learners will solve problems in a variety of ways. When writing curriculum and assessments, it is important for designers to take into account all the different ways students can express their learning.
The technologies I discovered throughout this course were the most exciting part for me! I prefer to use the Canvas Learning Management System to design assessments and tutorials. Canvas allows instructors to import and export grades, quizzes, and other course materials with ease. I took the opportunity to “play around” with Moodle and Blackboard, but felt Canvas met all my needs at the time.
Another technology that is new to me and that I enjoy using is VoiceThread (www.voicethread.com). Who knew that you could record an email through voice? I am still in awe of this program. I have used it when sending emails to colleagues and principals, although their reactions were mixed. Perhaps they are out of the instructional design loop and are not aware of multimedia instruction as “the presentation of material using both words and pictures with the intention of promoting learning” (Mayer, 2009). I hope they choose to upgrade their own technology toolbox.
Lastly, I had the option to work with the Adobe Creative Suite. When I started this degree program, purchasing this software was a requirement; that requirement has since been lifted. I think Adobe can do some really awesome things! However, I have not worked in it enough to consider myself an effective user. I still stumble around within the different programs trying to figure things out. I am hoping that my future career as an instructional designer will allow me to work with an Adobe Creative Suite guru who can teach me the ins and outs of the programs. I tend to learn better by doing.
Feedback is essential to student learning. It is more than a letter grade or a score on an assignment. Feedback consists of any response given to a student regarding his or her performance on a task or product; it is typically given by an instructor or a peer, usually in spoken or written form (Oliver, Yeo, & Tucker, 2012). “Students need to know how they are doing in their learning journey. They need not only grades but also descriptions of what they have done well, where they have gone wrong and suggestions on how to improve” (Costello & Crane, 2010).
Draper (2009) sorts feedback into six categories:
1. Technical knowledge
2. Effort
3. Method of learning about the task
4. Ability, trait, aptitude
5. Random
6. The judgment process was wrong
Across these six categories, the interpretation of feedback depends on the perspective of the learner. This is significant because if the instructor provides feedback within one category while the learner needs, wants, or expects it in another, the feedback system breaks down.
I also gained a new appreciation for the rubric. As a teacher, I have used rubrics repeatedly to grade assignments. However, my own interpretation of them was skewed until a classmate said, “View a rubric as a checklist of things that need to be included.” As basic as that sounds, it changed my perception of the rubric. When I work as a student, I use the rubric to guide what I include in the assignment. When I use a rubric for grading, I do more than just highlight it; I write specific comments related to the assignment.
According to Costello and Crane (2010), feedback should be SMART: Specific, Meaningful, Applicable, Reflective, and Timely. Feedback should provide the student with specific and meaningful examples in a timely manner. There are many ways to provide feedback to learners in a distance learning environment: instructors can use word-processing comments, email, feedback forms, and audio or video messages to students (Hatziapostolou & Paraskakis, 2010).
By far, this has been one of the most interesting and helpful classes I have taken at Walden University while pursuing my degree. I take away a great deal of new knowledge and am grateful for the experience. I can be a more effective teacher, and I feel secure in my pursuit of becoming an instructional designer.
References
Connelly (2003). Learning and teaching foreign languages. Retrieved from http://unt.unice.fr/uoh/learn_teach_FL/index.php?lang=eng&connexion=
Costello, J., & Crane, D. (2010). Technologies for learner-centered feedback. Retrieved from http://w3.stu.ca/stu/academic/departments/social_work/pdfs/CostelloandCrane.pdf
Draper, S. W. (2009). What are learners actually regulating when given feedback?
British Journal of Educational Technology, 40(2), 306-315. Retrieved from
EBSCOhost.
Hatziapostolou, T., & Paraskakis, I. (2010). Enhancing the impact of formative feedback on student learning through an online feedback system. Electronic Journal of e-Learning, 8(2), 111-122. Retrieved from EBSCOhost.
Mayer, R. E. (2009). Multimedia learning (2nd ed.). New York, NY: Cambridge University Press.
Mayer, R., & Wittrock, M. (2009). Problem solving. Retrieved from http://www.education.com/reference/article/problem-solving1/
Oliver, B., Yeo, S., & Tucker, B. (2012). Using eVALUate to improve student learning:
Providing feedback for student learning. Retrieved from
http://evaluate.curtin.edu.au/local/docs/5providing-feedback-for-studentlearning.pdf
Oosterhof, A., Conrad, R.-M., & Ely, D. P. (2008). Assessing learners online. Upper Saddle River, NJ: Pearson.
Turban, E., & Aronson, J. (1988). Decision support systems and intelligent systems. Upper Saddle River, NJ: Prentice Hall.