Narrated lecture available here: https://mix.office.com/watch/1qbqob5fta39i
Mini-lectures for a course on developing online learning media in the agricultural and natural resource context. Book referred to is Clark & Mayer's e-Learning and the Science of Instruction.
3. Add on-screen text when…
1. There are no pictures
2. The presentation is slow
3. Key words are contiguous with their graphic
4. New words, complex ideas
5. Working with second-language learners and certain disabilities
The redundancy principle tells us that on-screen text is an unhelpful distraction when audio narration is presented alongside graphics. Lady Redundant Woman may argue, differ, and disagree; however, there are in fact some contexts where on-screen text and audio words do work together. This is not one of them. This is a bad example.
In this presentation, we will define the redundancy principle, describe when it applies, and discuss why the popular idea of learning styles does not apply to this principle.
Hearing words can be described as the default setting for most humans. Most humans with normal hearing learn to understand words through spoken language long before they learn to read and write. For many, acquiring word-based information through audio remains the preferable mode until they become fluent readers. As you learned when we looked at the modality principle, hearing words is not only effective but also frees up our visual channel, allowing us to look at graphics, animations, and video simultaneously with the audio.
Just like the modality principle, the key idea here is to keep the visual channel open to process images, animations, and video rather than redundant on-screen text when you are providing words in the form of audio. When there are no visuals present, as in the slide we are looking at now, this rule changes.
There are five general exceptions in which on-screen text and audio narration can, or should, be presented together.
In the first two exceptions, on-screen text does not add to the learner's processing demands, either because there are no images competing with the words or because the pacing of the instruction allows learners to view and hear all elements. Typically the instruction would be self-paced rather than merely slow, so the individual can stay on a screen until they have finished.
Exceptions 3 and 4 actually support processing because the on-screen text signals key words and concepts. In fact, that is what is happening now: I want you to focus on the five exceptions to the redundancy principle, so I have summarized them concisely, and this summary accompanies both the audio narration, in which I elaborate, and the reading you have already completed. This also applies when a graphic is present, as long as the key words are placed contiguous to the graphic.
Finally, just as with the modality principle, learners who primarily speak another language and learners with specific disabilities often benefit from the narration also being available in a text format. Many people think this should be the case for most learners, in order to support both visual and auditory learners.
So, why do people think that presenting words verbally and in text is helpful rather than problematic?
The idea of learning styles is a popular concept which assumes that each of us has a dominant sensory modality. The example here is from the VARK model, which may be familiar to educators and non-educators alike.
Most human beings have approaches, like taking notes, that they connect with formal learning, but all humans, unless a disability affects one of these modes, use all of these approaches when they learn: they look, listen, touch, and interact with reading and writing.
This idea of having a distinct preference is undermined by two facts. First, even when using this type of classification, most people are represented as multimodal, meaning they have more than one preference rather than being uni-modal with a single strong preference. Second, the majority of people would be classified as visual learners, with 65% of the population falling into this category, followed by 30% auditory and a mere 5% with a kinesthetic preference.
In any case, when developing instruction, learners should be engaged across the senses, and what we are concerned about is managing the load of input coming through both the visual and verbal channels.