Cognitive Theory of Multimedia Learning, Krista Greear, CSUN 2017

Slide notes
  • Krista
  • Multimedia is presented as words and pictures.
    Your senses take in this information through the ears and eyes. Sensory memory then selects which words and pictures to move forward into working memory.
    Throughout this, color coding distinguishes the two channels: words belong to the auditory/verbal channel, and pictures belong to the visual/pictorial channel.
  • Selected words and images are processed in working memory. There is an interplay in which this information is organized and then integrated into long-term memory.
  • Krista
  • The dual-channel assumption is based on Baddeley’s (1986) theory of working memory and Paivio’s (1986; Clark & Paivio, 1991) dual coding theory.
  • The limited-capacity assumption is based on cognitive load theory (Sweller, 1988, 1994) and states that each subsystem of working memory has a limited capacity.
  • The active-processing assumption suggests that people construct knowledge in meaningful ways when they pay attention to the relevant material, organize it into a coherent mental structure, and integrate it with their prior knowledge (Mayer, 1996, 1999). (See the illustrative sketch of these three assumptions after these notes.)
  • There are limits to how much information the human brain can take in. Even now, as I talk, you are taking in both the words and the visuals, grappling with them in working memory, and organizing them so they can be committed to long-term memory. If I put too many words on the slide, little sticks. If I put too many pictures, there is a lot of information to process. If I put lots of words and pictures, it’s no good: the information and processing demands become overwhelming.
  • It feels like both channels are treated as equally weighted in CTML, when in reality they may not be.
  • Neuroplasticity: the ability of the brain to form and reorganize synaptic connections, especially in response to learning or experience, or following injury.
  • Why do we try to make theories and recommendations that generalize across everyone? What other options do we have?
  • Good: a direct connection to cognitive load theory.
  • How is #1 applicable to learners who are blind or visually impaired (BVI)?
    How does this work when a screen reader user interacts with ONE element in the digital environment at a time?
    For #3: is the animation accessible to learners who are blind and/or deaf?
  • Again, this is not helpful if the information within the animation isn’t available.
  • Are graphics really helpful? What about tactile graphics? Are tactile graphics still considered to be in the pictorial channel?
  • Umm… captioning anyone!?!?
  • Collision between Educational Technology and Accessibility
  • How does all of this fit together?
  • It feels like, because these ideas were created in industry-specific vacuums, there is intellectual tension.

    Research that seems to exclude people with disabilities vs. a framework (not a theory) that may not hold as much “weight” in academic curricula.
  • We, as an industry, need to not be separated or excluded from other industries. There needs to be more cross-collaboration and cross-disciplinary research across areas such as:

    Instructional design
    Marketing
    Neuroplasticity
    User Experience
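
A minimal sketch of the processing pipeline described in these notes, for illustration only (it is not part of the original deck): two channels of limited capacity feed working memory, and the selected material is integrated with prior knowledge. The function names and the capacity value are hypothetical, not taken from Mayer’s work.

```python
# Toy illustration of CTML's three assumptions (names and capacities are made up):
#   1. Dual channel: words and pictures enter separate channels.
#   2. Limited capacity: each channel holds only a few items at a time.
#   3. Active processing: selected items are organized and integrated with prior knowledge.

CHANNEL_CAPACITY = 4  # hypothetical per-channel limit (assumption 2)

def select(items, capacity=CHANNEL_CAPACITY):
    """Keep only as many items as the channel can hold (assumption 2)."""
    return items[:capacity]

def learn(words, pictures, prior_knowledge):
    """Route words and pictures through separate channels, then integrate (assumptions 1 and 3)."""
    verbal_channel = select(words)        # auditory/verbal channel
    pictorial_channel = select(pictures)  # visual/pictorial channel

    # Active processing: organize the selected material in working memory and
    # integrate it with prior knowledge to form a new mental model.
    working_memory = {"verbal": verbal_channel, "pictorial": pictorial_channel}
    long_term_memory = {**prior_knowledge, "new_model": working_memory}
    return long_term_memory

if __name__ == "__main__":
    result = learn(
        words=["narration 1", "narration 2", "narration 3", "narration 4", "narration 5"],
        pictures=["diagram A", "diagram B"],
        prior_knowledge={"topic": "multimedia learning"},
    )
    print(result)  # "narration 5" is dropped: the verbal channel was over capacity
```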

Slide transcript: Cognitive Theory of Multimedia Learning, Krista Greear, CSUN 2017

    1. Cognitive Theory of Multimedia Learning – Krista Greear, Assistant Director, Disability Resources for Students, greeark@uw.edu
    2. Backstory > Been in industry since 2007 > Working on Masters since 2014
    3. Agenda > Theory > Reaction > Analysis > So what?
    4. Cognitive Theory of Multimedia Learning > Hypothesis: learning from pictures and words is better than from words alone > How to maximize learning when using pictures and words
    5. Words > Printed text > Spoken text
    6. Pictures
    7. Cognitive Theory of Multimedia Learning
    8. Cognitive Theory of Multimedia Learning
    9. Why So Interesting? > Convert textbooks and documents into accessible formats > Convert videos into accessible formats > Work with websites as needed
    10. All I do is work with multimedia!
    11. (1) Dual-channel > a channel for processing visual/pictorial (pictures) > a separate channel for processing auditory/verbal (words)… – Baddeley’s theory of working memory – Paivio’s dual coding theory
    12. (2) Limited capacity > …each channel has a limited capacity and… – Sweller’s cognitive load theory
    13. (3) Active-processing > …active learning occurs when the learner engages in cognitive processing (Moreno & Mayer, 2002). – Cognitive theory
    14. What’s the Problem?
    15. (1) Dual-channel > a channel for processing visual/pictorial (pictures) > a separate channel for processing auditory/verbal (words)…
    16. Concerns > Assumes that both channels work similarly across all humans – Blind? Deaf? Auditory processing disorders? Deaf-Blind? Traumatic brain injuries? Learning disabilities? > What about tactile? Where’s that “channel”?
    17. (2) Limited capacity > …each channel has a limited capacity and…
    18. Likes > Emphasizes cognitive load theory
    19. Concerns > Does not account for differences in capacity in the two channels – Blind humans often listen to content 2-3 times faster than non-blind humans
    20. (3) Active-processing > …active learning occurs when the learner engages in cognitive processing (Moreno & Mayer, 2002).
    21. General Concerns > Neuroplasticity
    22. General Concerns > Individual differences – Although human brains all share the same basic recognition architecture and recognize things in roughly the same way, our recognition networks come in many shapes, sizes, and patterns. In anatomy, connectivity, physiology, and chemistry, each of us has a brain that is slightly different from everyone else’s. (Rose & Meyer, 2002, p. 17)
    23. “Evidence-based” Principles of CTML
    24. Coherence Principle > Use simpler visuals to promote understanding > Avoid irrelevant graphics, stories, and lengthy text > Avoid irrelevant videos, animations, music, stories, and lengthy narrations
    25. Contiguity Principle > Integrate text near the graphic on the screen > Avoid covering or separating information that must be integrated for learning > Allow learners to play an animation before or after reviewing a text description
    26. Segmentation Principle > Break content down into small topic chunks that can be accessed at the learner’s preferred rate (using a continue or next button) > Use a continue and replay button on animations that are segmented into short logical stopping points
    27. Multimedia Principle > Use relevant graphics and text to communicate content > Use explanatory visuals that show relationships among content topics to build deeper understanding
    28. Redundancy Principle > Do not present words as both onscreen text and narration when there are graphics on the screen
    29. Implications
    30. Universal Design for Learning > accommodate the widest spectrum of users without individual adaptation or specialized design (Rose & Meyer, 2002) > addressing the divergent needs of special populations increase[s] usability for everyone (p. 71)
    31. Intellectual Crisis > CTML (theory) vs UDL (framework) > Instructional Design vs Disability Services > Research vs reality
    32. Need Cross Collaboration