Adaptive Learning Environments


Paul De Bra

Published in: Education, Technology
  • This lecture is taken from what is normally a whole semester course (well… actually about 10 weeks max), so we have to cut corners here and there to make it fit in one lecture. Part of the solution is to put more information in more slides than we will actually cover during the lecture. We start by explaining a bit why adaptation is needed, and what “adaptive” means exactly. We then turn to the issue of user modeling, needed for adaptation. Regarding adaptation itself we only consider three main aspects: content, presentation and navigation. We will take a look at the architecture of the GRAPPLE environment that enables Learning Management Systems and Adaptive Learning Environments to work together. We also briefly discuss the issue of authoring, using a preview of what authoring tools in GRAPPLE will be like. During the workshop we will actually use a simplified authoring environment (that actually works). If we have time (depends on the number of questions during the lecture…) we will also show some examples of adaptive applications.
  • In the physical world we often encounter things that do not fit. This holds for one-size-fits-all garments like a hotel bathrobe and for the pedal trains in the Efteling, but in fact many items in the physical world are constructed for the average person. Just look around you to discover that tables, chairs, light switches, door handles, bathroom sinks, etc. have a standard height. And when something does not have that height we find it odd (like the light switches in some houses or offices). But there is no average person in our society. We experience that doorknobs and light switches are either too high or too low. Car seats are always too far forward or backward. The mirrors always need adjusting. Lots of things can be adjusted because they are not positioned correctly for us by default. When some aspect of a system can be adjusted to suit our needs we call it an adaptable system. In this lecture we go one step further: we want systems to automatically adapt themselves to their environment. A system that automatically adapts is what we call an adaptive system. Do we know a common example of an adaptive system? Yes we do: the human. The human is very good at adapting to his or her environment. We have learned to use non-adaptive systems by adapting ourselves. We can actually use the most unusable systems because we are so good at adaptation. But in this lecture we study systems that work the other way around: they do not require the human user to adapt but adapt themselves.
  • Here we show two signs that warn of the danger of (forest) fire. The “brandgevaar” (fire danger) sign in the Dutch woods is sometimes very appropriate and at other times just ridiculous. The problem is that when we see the sign often enough at times when it carries no meaningful information, we will ignore it when it really matters. The second image, however, is a variable sign that you always need to look at to check how serious the fire danger is at this moment. This sign is always informative, and therefore will not be ignored as easily. Because the sign is primitive it requires someone to actually go there to put up the red flag and move the pointer, but we could easily think of a networked sign that displays the appropriate message based on information it receives through a network. It can thus be made to automatically adapt to the real level of danger (as determined by the weather bureau). One of the things we do in adaptive learning environments is give warning signs that something is still too difficult or otherwise not (yet) advised to study, and to do that adaptively so that warnings are only given to the students who need them. Similarly we can give positive advice to study something to students for whom the topic should be recommended.
  • There are different levels or orders of automated behavior. Many systems are automatic in some way. The automatic transmission in a car, for instance, automatically changes gears depending on the speed of the car and the demand for power (as per the position of the throttle or gas pedal). These systems follow a fixed set of rules: a fixed formula over car speed, engine speed and throttle position determines which gear is needed (with some hysteresis built in as well). In an adaptive system the automatic behavior does not follow fixed rules, but rather rules that cause changes in the system behavior depending on the environment. An adaptable automatic transmission can be switched between normal and sports mode. An adaptive automatic transmission detects the driver's driving style and changes the formula for deciding when to change gears depending on whether the driver has a sporty style or not. In such a first-order adaptive system the rules for detecting the environment and the needed adaptation are fixed. In a second-order adaptive system the system gradually discovers what the best rules are for performing adaptation. There is no limit to these levels or orders of adaptation: a system may discover the best way to discover the best way to perform adaptation to the rules of an automatic system, etc. In this lecture we will mainly consider first-order adaptive systems, but we will also look towards the future with second-order adaptive systems. Adaptation to the environment can be interpreted in a broad sense. There is adaptive audio filtering (to cancel noise without affecting the music), adaptive network routing (to optimize throughput), adaptive lighting (in digital cameras, to bring out shadow detail), etc. In this course we concentrate on adaptation to the user or a group of users.
  • Information systems are typically used to manipulate information: entering information into a system and retrieving information. In this lecture we deal mainly (but not only) with the retrieval of information through searching and navigation, for the purpose of learning. Both the information we retrieve and the process of interacting with the information system can be adapted. When we are interacting with an IS the information we receive may depend not only on a direct action, like a click on a link on a website or specific keywords entered in a search form, but on many other aspects. The system may give different information depending on who we are: in an information source with a lot of technical content a beginner will receive different information from that presented to an expert. When we are looking for a restaurant or hotel we may receive different information depending on where we are and perhaps even what time of day it is. The system may record what we do and form an internal profile of each user that can be used to adapt the information. A “technology enhanced learning” system will keep track of our changing knowledge. A museum site will deduce our interest from what we see and ask for. Likewise with other recommender systems like on-line TV guides, movie recommenders, news sites, etc. The presentation format may also change depending on circumstances: a presentation on a PDA should look different from that on a computer screen. The process of interacting with the information may also be adapted to the individual. In a dialog or question/answer system our answers may influence the questions that follow. Our actions, especially errors, lead to (remedial) actions by the system. In websites the links that are shown or recommended may change based on who we are or what we did before. The workflow or order of tasks or steps may be adapted to our needs as well.
  • Adaptive systems have many advantages, mainly because the systems work better, and a better system leads to user satisfaction, which leads to returning customers. An adaptive information system can present all the information the user needs through fewer navigation steps and fewer queries, and by presenting only relevant information the user is not distracted and can thus finish in less time. There are no superfluous questions, no unnecessary extra navigation steps, etc. The expectation is also that in learning systems the learner has better retention of knowledge, and in any case the user has more confidence in the system. This better performance leads to increased satisfaction. A system that does what the user wants, and does it automatically, is always appreciated more than a system that makes errors. An adaptive IS gives relevant information and good advice. An adaptive interactive application, like a game (chess for instance), learns to interact better (play stronger) and does not repeat stupid moves. Good adaptation has economic benefits. Recommendations for products the user needs are more likely to result in a sale than advertisements for products the user does not need or does not want. Good adaptation of a museum site may attract people to visit the physical museum. Good adaptation of a TV guide may stimulate the user to watch TV (and more viewers means more advertising income). Other adaptive systems have economic benefits as well. An adaptive automatic transmission can learn which gear-shifting behavior leads to the lowest fuel consumption. Adaptive car radios have better reception, GSM and Internet routers can increase overall throughput, an adaptive heating system can heat a home with less fuel (by learning when not to heat an empty house), adaptive lighting systems can improve the atmosphere while conserving energy, etc.
  • Adaptation does have a dark side. For instance, it is well known that chess computers that learn how to beat the human player may become very bad chess players when they are trained against bad chess players. Adaptation is also a very personal thing: systems learn how to provide the right information to an individual. Other users may not be able to benefit at all from the behavior the system learned for one user. Adaptation of the second order may go completely wrong: when the system learns how to adapt, the danger is very real that it will start behaving in ways the designers did not anticipate, and in some movies the systems also learn to defend themselves and take over the world. But even in less extreme cases the best adaptation may not be successful. When a game learns a perfect strategy to beat the user, the game loses its attractiveness. It's just no fun to always lose. Another danger is that a system that thinks it knows which information the user needs may effectively prevent the user from accessing other information. This becomes censorship. A solution is to allow the user to override the advice of the system, or to give the user a way to tell the system the advice is wrong. However, it is very hard to design a system that can actually use the “this is wrong” message in a way that helps it decide what else it should present that is actually right.
  • This drawing illustrates how a user-adaptive system works. The system collects data about the user while the user is interacting with the system. These data are processed in order to create and update a user model. The user model is used by the system to adapt the interaction, for instance by adapting the information that is presented. It is important to note here that the user model is not just a log of the user's actions; it is not a log of interaction events. There are two reasons for this. There is a technical reason, namely that in order to perform meaningful adaptation the system needs fairly global information about the user. In the case of adaptation to the user's knowledge, for instance, the system will often adapt to the user's knowledge of major topics and to fairly global knowledge levels. Occasionally the system may perform small adaptations that depend on whether the user performed one specific action, like reading a specific Web page, but more often the adaptation will be based on aggregated information. There is also a legal reason, namely that in order to protect the user's privacy the law in some countries requires adaptive systems to remove the log of actions at the end of a session. The system is only allowed to maintain a user model that contains more global and thus less specific information. Fortunately, with user consent almost everything is allowed (as long as consent is not required to deliver a basic version of the same information service as the adaptive one).
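As a rough illustration of this observe → model → adapt loop, the following sketch keeps only aggregated per-concept knowledge (not an event log) and bases an adaptation decision purely on the model. All names here (UserModel, on_page_visit, annotate_link) and the 0.35 contribution are invented for the example, not taken from any real system.

```python
class UserModel:
    """Aggregated per-concept knowledge, not a raw log of interaction events."""
    def __init__(self):
        self.knowledge = {}  # concept -> knowledge value in [0, 1]

    def on_page_visit(self, concept, contribution=0.35):
        # Update aggregated knowledge; the raw event itself is discarded.
        old = self.knowledge.get(concept, 0.0)
        self.knowledge[concept] = min(1.0, old + contribution)

def annotate_link(model, concept, threshold=0.5):
    """Adapt the presentation based on the model: recommend or warn."""
    level = model.knowledge.get(concept, 0.0)
    return "recommended" if level >= threshold else "not yet advised"

model = UserModel()
model.on_page_visit("xml-basics")
model.on_page_visit("xml-basics")
print(annotate_link(model, "xml-basics"))  # prints: recommended
print(annotate_link(model, "xslt"))        # prints: not yet advised
```

The point of the sketch is the separation: observation updates the model, and adaptation reads only the model, never the event history.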
  • Whenever we are designing an adaptive application we need to ask ourselves a number of questions, related to the processes and data involved, as described on the previous slide. Why do we want adaptation? Why is a well-designed application (in whichever area we are aiming at) not good enough without adaptation? Very related: what can we actually adapt in the application? We need to think of what is feasible and what is not. (Having a door handle move up or down when a person approaches is not feasible, but creating a door handle that works for tall and short people and does not require adaptive functionality is.) What can we adapt to? And of course very related are the questions whether we can observe something from which we can conclude something about the user that we can then use for adaptation. What we observe can be something fairly global, like the user giving up and going away, or the user selecting a chapter of a course to study, or it can be something very detailed like eye movement or fingers moving towards a key but not yet pressing it. The main question is how we can translate observations into meaningful information about the user. As an exercise we can (in class) jointly try to answer these questions for a number of applications. All the applications mentioned are some form of interaction between a user and a computer that involves information. But we can of course also consider other types of applications, like a system that controls light, air and temperature in a building. We will revisit these questions later, and treat them in detail.
  • Forward and backward reasoning are two common techniques used in the process of collecting information about user actions, processing it into user model information, and then using that user model information to perform adaptation. The difference is first of all in what is stored: more low-level information like which events happened, or more high-level information like which concepts you know or which art style you like, etc. Forward reasoning means that you start drawing conclusions from the user's actions and store these conclusions, whether this high-level information about the user will ever be needed or not. Backward reasoning means that you only draw conclusions about the user when you really need them, at the time of deciding on the adaptation to be performed. Can you come up with advantages and drawbacks of both approaches? Think of storage needs, the time at which the processing is done, privacy/security issues, etc.
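The contrast between the two approaches can be sketched as follows. The event format ("visit:concept", "test:concept:score") and the scoring rule (a visit is worth 0.3, a test score overrides it) are toy assumptions invented for illustration.

```python
events = ["visit:intro", "visit:loops", "test:loops:0.8"]

# Forward reasoning: conclusions are drawn and stored as each event arrives;
# the high-level model is always up to date, whether it is needed or not.
forward_model = {}
for ev in events:
    parts = ev.split(":")
    if parts[0] == "test":
        forward_model[parts[1]] = float(parts[2])   # test score overrides
    elif parts[0] == "visit":
        forward_model.setdefault(parts[1], 0.3)     # a visit is weak evidence

# Backward reasoning: only the (low-level) log is kept; a conclusion is
# computed on demand, at the moment an adaptation decision needs it.
def knowledge_of(concept, log):
    score = 0.0
    for ev in log:
        parts = ev.split(":")
        if parts[1] != concept:
            continue
        if parts[0] == "test":
            score = float(parts[2])
        elif parts[0] == "visit":
            score = max(score, 0.3)
    return score

print(forward_model["loops"], knowledge_of("loops", events))  # 0.8 0.8
```

Both yield the same answer here; they differ in what is stored (conclusions vs. events), when the processing cost is paid, and which privacy constraints apply, exactly the trade-offs the slide asks about.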
  • There are many potential application areas for adaptive hypermedia systems. Some of these have been realized, either through research prototypes or through fully functional production systems. Unfortunately, some of these systems have not been done very well. As a result, some people already have a negative opinion about adaptive systems, just because the adaptive systems they have encountered are pretty bad, for example the Microsoft Office “intelligent paperclip”, or “Clippy”. The application areas in which successful adaptive systems have been developed include educational systems, information systems, help systems, information retrieval and filtering, and more. We will concentrate on these four types and give some examples. In one of the coming weeks we will study the educational systems in more detail and not discuss the others. In this lecture we mainly use three terms: AEH: Adaptive Educational Hypermedia, and AEHS: Adaptive Educational Hypermedia Systems; AES: Adaptive Educational Systems; ALE: Adaptive Learning Environments.
  • Adaptive educational hypermedia systems are derived from the area of intelligent tutoring systems. The main difference is that in an intelligent tutoring system the focus is on the tutor, who decides what the learner should study next. These decisions are based on what the learner reads and how he or she performs on tests. The main function of an intelligent tutoring system is often described as adaptive course sequencing. In an adaptive educational hypermedia application the focus is on the learner. The system provides guidance, but does not enforce a single reading sequence upon the learner. The biggest challenge in adaptive educational hypermedia is that the adaptation is largely based on the learner's knowledge, but that knowledge is constantly changing as the learner studies the course material. The “context” to which the system tries to adapt is thus a moving target. The tests in adaptive educational hypermedia serve two purposes. First of all they provide a means for the learner to verify his or her knowledge of the course material. But secondly, the learner's performance also gives the system a reliable estimate of the learner's knowledge. As the adaptation is based on that knowledge, it is important for the system to have a good estimate of it. In the following we are FIRST going to look at the issue of what to adapt to and SECOND at the issue of what to adapt.
  • The GRAPPLE project is the “latest” development in adaptive learning: bringing state-of-the-art adaptive learning environments and learning management systems together. Whereas the LMS is more management oriented, the ALE is more learner oriented. Learning takes place mostly in the ALE (through an adaptive course text, for instance) but some part of the process is done through the LMS (taking tests, handing in assignment work, and grading it). An event bus allows the ALE and LMS to communicate with each other, but also allows different LMSs to talk to each other, and different instantiations of the GRAPPLE ALE as well. An LMS and an ALE typically each store information about the user. Some user data is needed only for their internal operation. A lot of user model data is used by the ALE to perform adaptation and is of no concern to other applications. However, some user data needs to be exchanged between the LMS and ALE. The ALE may wish to be notified when a user takes a test, because the result of the test may influence the adaptation. The Grapple User Model Framework (GUMF) provides distributed access to user model data. It can also perform some data transformations: the result of a test in some LMS may be on a different scale than what the ALE wishes to use to decide upon adaptation. The adaptation is based on a conceptual adaptation model (CAM) that we will explain in detail later. Graphical authoring tools make it easy for authors to create a CAM. A compiler translates the CAM to the low-level adaptation rules that are used by the adaptation engine.
  • This picture shows the GRAPPLE architecture from the learner perspective. The learner appears to interact only with the LMS. However, first of all the identification and authentication of the user is communicated with the Shibboleth environment to ensure single sign-on between all GRAPPLE components. Then, when the user accesses an adaptive course text from the LMS, the course text is actually served by the GRAPPLE ALE (GALE), but this happens in such a way that the user may think everything is done by the LMS. GRAPPLE also has some additional features that we will not cover in this lecture: visualization of user model information, and adaptation to devices, for instance when switching between a workstation and a mobile device (PDA or smartphone).
  • A GRAPPLE author is mainly concerned with creating a domain model and a conceptual adaptation model for an adaptive course text. Graphical tools let him/her do this. Where the adaptation needs to be based on user model information not stored locally in the ALE, the GUMF is used. It needs to be told how to translate LMS data to GALE data. An author typically also needs to create (or retrieve) content for the course. GALE can access content stored locally or remotely on any web server. In some cases it may be possible to use the LMS as a content repository.
  • Most adaptive hypermedia systems are educational systems. It is therefore not surprising that adaptation to the user's knowledge is popular. A simple approach to adapting to knowledge is to distinguish a few global knowledge levels for the whole application, like “beginner”, “intermediate” and “expert”. While this may be sufficient when a user first starts using the application, it fails to take into account that the user is becoming more knowledgeable about the part he or she studied than about the rest. To solve this, most applications use an overlay model. The whole application is described using a set or hierarchy of concepts. The system keeps track of the user's increasing knowledge about these concepts. Adaptation, like link annotation or the inclusion of prerequisite explanations, can be based on the user's knowledge level for a specific topic. Sometimes the system may wish to have a very fine-grained representation of the concept structure, perhaps down to the page level or even individual fragments. This is needed, for instance, to perform adaptation that depends on having read a certain definition or having seen a certain picture. But in many cases a more coarse-grained representation is sufficient, like the knowledge of a high-level concept or a chapter of a course. The biggest problem with adaptation to the user's knowledge is how to determine what the user's knowledge about a concept actually is. The system can register which pages a user visits and use an association between pages and concepts to deduce knowledge about these concepts. But of course the system does not know how much the user actually understands of a visited page. The user may spend little time reading it (going to get coffee instead) or may read the page without understanding it. The user's performance on tests is a much more reliable way to determine the user's knowledge. However, since the purpose of the application is to present information for the user to study, spending a lot of time on tests may not be very helpful, and users may not like it. For adaptation to goals or tasks it is helpful to have a representation of the hyperspace using concepts, so that the goals can be described using an overlay model, just like for knowledge. Unlike with knowledge, however, it is much harder to determine the user's goal by observing the pages the user visits. And unlike knowledge, the user's goal may change almost instantly. Adaptation to goals or tasks works best if these goals or tasks are entered explicitly into the system. As said earlier, workflow systems are good for determining goals and tasks.
  • Other aspects an adaptive hypermedia system can adapt to are not related to the information the system has to offer. The system can adapt to the user's background, for instance, which represents aspects like the user's profession, experience in using hypermedia applications, and possibly also the user's experience with the system, with the hyperspace and how to navigate through it. Preferences are also global aspects of the user, like whether the user prefers video over images, and cognitive style aspects that determine whether the user needs more or less guidance, prefers to see examples before or after definitions, etc. There are ways to deduce preferences from the user's browsing behavior, but these do not always work reliably and they require the user to first work for a while with a system not yet adapted to his or her preferences. Therefore it is easier and better to initialize the preference settings through a form or questionnaire. The context or work environment can also be adapted to, again mostly through things like media and media quality settings. When the user is using a PDA with a GSM connection, high-definition video will have to be replaced by low-resolution images, and text pages must be reasonably small to avoid excessive scrolling. Unlike preferences, these contextual aspects are relatively easy to determine automatically.
  • First we look at the aspect of “user modeling” in adaptive systems. In order to adapt to the needs of the individual user (or perhaps a user group) we need to be able to “classify” users. In a simplistic view, each instance of adaptation can be considered as a decision to do something or not to do it. So the system needs to distinguish the users for which the adaptation action needs to be performed from the users for which it need not be performed. It is thus a matter of decision, based on all the information we have about the user. In this lecture we focus on discovering information about the user. In a later lecture we will focus on using that user model to generate adaptation.
  • Representing the user's knowledge is the most common form of user modeling, but it is also the hardest to work with, because as the user is using an application his/her knowledge is changing. The system thus needs to adapt to a moving target. The scalar model uses a single numeric scale to keep track of the user's knowledge of a whole application. MetaDoc, for instance, offers information on the Unix system. Pages contain some text shown to everyone, and have “hotwords” that can be expanded by inserting a fragment when the hotword is clicked (and collapsed again). Initially the fragments relevant to the user will already be expanded, and others collapsed. For users with low knowledge additional explanations are shown, and for users with high knowledge some additional technical details are shown. “Knowledge about Unix” is considered to be just one thing, without looking at individual topics. More commonly used are structural models, where the subject domain is divided into certain independent fragments and the knowledge is measured for these fragments. Clearly the level of knowledge is important, but for a single topic it may also be interesting to see whether the user just knows the facts or whether (s)he can apply the knowledge. E.g. knowing how something works is different from being able to operate that something. Typically, in Intelligent Tutoring Systems (ITS) declarative knowledge is represented in the form of a network of concepts and procedural knowledge is represented as a set of problem-solving rules. A very common, but maybe too simplistic, model is the overlay model: for every concept of the domain the system keeps track of how much the user knows about it. A bug model, on the other hand, keeps track of what the user does not know (by tracking mistakes the user makes in procedures) to determine what should be studied next.
  • To have a more fine-grained representation (finer than a scalar model), AES use an overlay model, whereby a domain model describes the course or application domain using concepts. (Other terms have been used, such as knowledge items, topics, knowledge elements, learning objectives, learning outcomes.) The simplest overlay model considers just a set of unrelated concepts. Knowledge about one concept does not indicate knowledge about any other concept in the domain. Because there is no structure here, the set model does not offer any hints as to how to perform adaptation. A hierarchy, e.g. using “part-of” relationships, can be used to form an idea about the user's knowledge of high-level concepts by aggregating knowledge values for the lower-level concepts. Depending on a preference of the learner, the system may offer guidance to navigate through the hierarchy in a breadth-first or depth-first manner. A network is a more general structure. Many AES define a structure of prerequisites in addition to part-of relationships. Prerequisites can be useful to ensure that a certain learning order is followed (without reducing the use of the application to a single path). Different types of relationships are possible. An example is the inhibitor, which is the opposite of a prerequisite: once you are advanced you are inhibited from seeing introductory explanations. Systems like AHA! allow authors to create arbitrary types of relationships that have arbitrary effects on the user model, and thus indirectly also on the adaptation. Defining new interesting types of relationships is non-trivial, so authors typically stick to common types such as prerequisites.
  • Early systems (including the first version of AHA! developed at the TU/e) only had Boolean values: you know the concept or you don't. This works for sets of unrelated concepts, but deciding whether you know a whole chapter is difficult (though it can to some extent be expressed as a Boolean expression over individual knowledge values). Numeric values are much better: it is relatively easy to perform knowledge propagation from low- to high-level concepts. Knowledge values can be added and then divided by the number of children of the higher-level concept. However, there exist two interpretations: how much you know versus the probability that you know. Although the interpretation of the probability that you know seems strange for a chapter, when aggregating values of probability from smaller concepts the rules for calculating with the values are the same as with the how-much-you-know interpretation. Furthermore, whether you can simply “add” values depends on whether the subtopics are independent and whether you deal with knowledge values or with probabilities. Not all knowledge is created equal. It may make sense to distinguish what you have read, what you master procedurally, what you proved to know through a test, etc. This can be implemented through a structure of several knowledge values per concept. In generalized overlay models knowledge (and other things like interest) is modeled not as concept/value pairs but as concept/aspect/value triples.
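The low-to-high propagation described above can be sketched in a few lines, assuming equally weighted, independent subtopics (the concept names and values are made up for the example):

```python
# Part-of hierarchy: a chapter is composed of its sections.
hierarchy = {"chapter1": ["sec1", "sec2", "sec3"]}

# Leaf knowledge values in [0, 1] for the low-level concepts.
knowledge = {"sec1": 1.0, "sec2": 0.5, "sec3": 0.0}

def propagate(parent):
    """Knowledge of a high-level concept = average of its children's values."""
    children = hierarchy[parent]
    return sum(knowledge.get(c, 0.0) for c in children) / len(children)

print(propagate("chapter1"))  # 0.5
```

With a probability interpretation the same arithmetic applies; what changes is only how the number is read, and a weighted average would be used when subtopics are not equally important.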
  • Traditional information retrieval is based on finding documents that match a set of keywords. By representing user interest using keywords, and perhaps assigning weights (importance) to the keywords, automatic selection of documents becomes possible without the user having to search for something explicitly, and search can be enhanced by adding the keywords describing the user's interest to the search terms. Replacing keywords by concepts results in a more powerful representation of interests. It doesn't matter which words are used to refer to a concept; if the match is made, then any document about the concept will be found. Semantic links can help identify relevant documents. When the interest is in a general topic, then every more specific topic (below the general one in a concept hierarchy) also matches. In this way it doesn't matter that there are few documents that match very specific topic interests, because a more general interest can be matched with many of these more specific topics. Initially documents were classified by hand, but nowadays automatically matching them to given ontologies, like e.g. the Yahoo directory, and then describing user interest using the concepts from that directory, has greater potential.
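One way such concept-based matching could look: a general interest is expanded downwards through the hierarchy so that documents about more specific topics also match. The hierarchy, weights and documents here are hypothetical; a real system would draw them from an ontology and use graded relevance rather than a simple sum.

```python
narrower = {"sports": ["tennis", "cycling"]}       # general -> more specific
interest = {"sports": 0.8}                         # user interest weights
docs = {"doc1": {"tennis"}, "doc2": {"politics"}}  # document -> its concepts

def expand(concepts):
    """An interest in a general topic also matches all topics below it."""
    result = set(concepts)
    for c in concepts:
        result |= expand(narrower.get(c, []))
    return result

def score(doc_concepts):
    """Sum the weights of the user's interests that match the document."""
    return sum(w for c, w in interest.items() if expand({c}) & doc_concepts)

print(score(docs["doc1"]), score(docs["doc2"]))  # 0.8 0
```

Note how doc1, which mentions only the specific concept "tennis", still matches the general interest "sports"; with plain keywords that match would be missed.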
  • Adapting to the user's goals or tasks requires that the system know what these goals or tasks are. Unlike knowledge and interests, the user's goals and tasks are much less stable, and thus more difficult to determine automatically. A user's actions may be indicative of a goal, and then the user (perhaps having reached that goal) suddenly starts working on another goal or task. When a user is working on two goals simultaneously, every system trying to determine the goal gets lost. The “glass box” approach means that the system shows what it thinks the user is trying to achieve. Such systems, and many others, let users select a goal or task from a list. This eliminates the problem of discovering what the goal is. A typical way to describe the goal is through the equivalent of an overlay model. The goal is described in terms of concepts from the application domain. These concepts can be matched with the document-concept relationships in the domain model in order to determine which documents are relevant. Prerequisite relationships can then be used to guide the user through these documents in a meaningful order.
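A toy sketch of the last two steps: select the documents relevant to a goal described as a set of concepts, then order them by prerequisites. The concept names, document names and the depth heuristic are illustrative assumptions, not from any particular system.

```python
doc_concepts = {"intro.html": {"basics"}, "adv.html": {"recursion"}}
prereq = {"recursion": {"basics"}}   # concept -> its prerequisite concepts
goal = {"basics", "recursion"}       # the goal as an overlay of concepts

# A document is relevant if it covers at least one goal concept.
relevant = [d for d, cs in doc_concepts.items() if cs & goal]

def depth(concept):
    """Prerequisite depth: concepts without prerequisites come first."""
    ps = prereq.get(concept, set())
    return 1 + max(map(depth, ps)) if ps else 0

# Guide the user through the relevant documents in prerequisite order.
ordered = sorted(relevant, key=lambda d: max(map(depth, doc_concepts[d])))
print(ordered)  # ['intro.html', 'adv.html']
```

The ordering is guidance, not enforcement: the list can be rendered as link annotations while the learner remains free to navigate differently, as the slide on adaptive educational hypermedia emphasizes.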
  • The user's background is essentially everything the system may know about the user that has no immediate relation to the application domain (or at least to the representation of that domain within the system). Aspects like the user's previous education or job are virtually impossible to determine just by observing how the user uses the application. Background is therefore typically initialized when the user first registers with the system, and never changes.
In adaptive systems we follow the sound information-systems principle of "only record what you wish to use": only aspects of the user's background that can be (and are) used by the system to perform adaptation are asked for and stored.
An example of adaptation to the user's background would be a medical database that changes its presentation depending on the user's general knowledge of the field of medicine. Doctors, nurses and non-medically-trained people all get different explanations, with or without proper medical terminology, and with more or less detail.
  • Whereas background says something about what the user knows, which skills (s)he has, etc., the "individual traits" have everything to do with personality. A person's personality and cognitive abilities are largely stable, and very hard to change. Approaching a person in different ways may greatly affect how pleasant and how effective the interaction is, so it is important for adaptive systems to be aware of and adapt to such characteristics.
In adaptive systems the most used traits are cognitive styles and, in the case of AES, also learning styles. We will deal with these topics in more detail in a later lecture. Two well-known examples of cognitive/learning styles are:
- Visualizer versus verbalizer: some users prefer graphical representations and are best at remembering images or video, perhaps even just sound, whereas other users like text (written or spoken) and remember things through text. Example: some people remember telephone numbers as a string of digits, whereas others remember numbers by the way they sound; as a result, the latter have trouble remembering and saying a phone number in a different language from the one they used when memorizing it.
- Field dependent versus field independent: there are different terms for this as well, but essentially some users need to see something in context in order to learn it, whereas others can easily study details without knowing what they will lead up to. (Think of proving a theorem using lemmas: do you just start with the lemmas, or do you first need to know what the final theorem will be?)
  • Only in recent research is context being considered. Several aspects of users, or factors of the environment in which applications are used, are considered as user model aspects:
- Platform is becoming important, especially in Web-based applications, because applications from the same server can be used on large screens as well as mobile devices, with fast or slow networks, different browsers, etc.
- Location is becoming important in mobile applications and other applications that can automatically deliver information adapted to where the user is.
- Affective computing is a new research field in which, by measuring heartbeat, blood pressure, perspiration and other factors, a system may try to detect emotions like anxiety, anger, happiness or frustration, the degree of motivation of the user, etc. An example application is a system that detects how users react to TV programs.
  • Early adaptive systems always used stereotype user modeling: variations of content were prepared carefully for a small number of types of users. In feature-based user models many details are recorded, each of which can be used for adapting something; there is no guarantee that the resulting adaptation makes sense. Still, most current research on adaptive systems considers detailed user models, most commonly using an overlay model.
Stereotypes continue to be used: they are useful to bootstrap the user modeling and adaptation process by letting new users indicate to which stereotype they belong. From that initial model, updates lead to a detailed and continuously updated feature-based user model.
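The bootstrap step can be pictured as copying a stereotype's defaults into a per-user feature model that is then updated per feature. The stereotype names and knowledge attributes below are invented for the sketch.

```python
# Illustrative stereotypes: each maps feature names to initial values.
STEREOTYPES = {
    "novice": {"knowledge.html": 0.0, "knowledge.css": 0.0},
    "webdev": {"knowledge.html": 0.8, "knowledge.css": 0.6},
}

def new_user_model(stereotype):
    """Initialize a feature-based overlay model from a stereotype's defaults."""
    return dict(STEREOTYPES[stereotype])  # copy, so updates stay per-user

um = new_user_model("webdev")
um["knowledge.css"] = 0.9  # later fine-grained update overrides the default
```

The copy matters: subsequent per-user updates must not change the stereotype itself, which keeps serving as the starting point for new users.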
  • Few researchers have considered fuzzy logic, but more have considered Bayesian networks. The idea is that user actions provide evidence that the user has knowledge of a concept (or failing a test provides evidence that the user does not have that knowledge), and when the user has certain knowledge this should lead to evidence of it. The difficulty in developing a qualitative model for the domain is to define a structure of "random variables" with clear sources of evidence and independence between the variables.
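For a single "knows concept" variable, the evidence idea reduces to a Bayes update. The two conditional probabilities below (how likely a knowing vs. a non-knowing user is to pass a test item) are made-up numbers, not values from any real system.

```python
# Assumed test reliabilities (illustrative):
P_PASS_GIVEN_KNOWS = 0.9   # a user who knows the concept usually passes
P_PASS_GIVEN_NOT = 0.2     # a user who doesn't may still guess right

def update(p_knows, passed):
    """Bayes update of the belief that the user knows the concept."""
    if passed:
        num = P_PASS_GIVEN_KNOWS * p_knows
        den = num + P_PASS_GIVEN_NOT * (1 - p_knows)
    else:
        num = (1 - P_PASS_GIVEN_KNOWS) * p_knows
        den = num + (1 - P_PASS_GIVEN_NOT) * (1 - p_knows)
    return num / den

p = 0.5                     # start undecided
p = update(p, passed=True)  # passing a test raises the belief
```

A full Bayesian network would link many such variables; the qualitative modelling difficulty mentioned above is exactly deciding which variables exist and which can be treated as independent.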
  • Most adaptive systems have a built-in user model, mainly because until recently generic user modeling systems or services were not available. But a built-in UM has an important advantage: performance. As most user modeling activity happens between a user-generated event and the system's response, a very short response time is imperative.
A UM built into an AS is typically not accessible by other applications. Because of this, a user who starts using a second AS faces the same cold-start problem as with the first AS.
Using a generic UM system is advantageous for everything except performance. A performance hit is caused by:
- communication overhead between the AS and the UM;
- possibly suboptimal performance of the UM, because it may have loads of features the AS does not need.
Performance problems can be partially solved through caching within the AS. We will see how GRAPPLE deals with UM in a generic way.
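The caching remedy mentioned above amounts to a read-through cache in front of the remote UM service. In this sketch, `fetch_from_um_service` is a placeholder standing in for the real network call, and the call log only exists to show the cache working.

```python
calls = []  # records remote requests, just to demonstrate the cache

def fetch_from_um_service(user, attr):
    """Placeholder for a (slow) network request to a generic UM service."""
    calls.append((user, attr))
    return 0.7  # dummy value

cache = {}

def get_um_value(user, attr):
    """Read-through cache: only hit the remote service on a cache miss."""
    key = (user, attr)
    if key not in cache:
        cache[key] = fetch_from_um_service(user, attr)
    return cache[key]

get_um_value("peter", "knowledge.xml")  # remote call
get_um_value("peter", "knowledge.xml")  # served from the cache
```

The hard part in practice, not shown here, is invalidating cached values when another application updates the shared model.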
  • Generality is nowadays mostly limited to domain independence: a "student model" service is domain independent but only needs to be suited to learning applications.
- Systems need to be able to reason over UM data, e.g. infer high-level knowledge from low-level evidence.
- Quick adaptation deals with the cold-start problem: quickly start forming an idea about the user in broad terms, and fill in details later.
- APIs for accessing the user model should allow integration with other applications.
- Communication between services that all contain some data about the same user is needed, but this should not result in a big performance hit.
- Integrating information from different sources increases the amount of information about a user that can be used; the authority of these data should of course be weighed.
- Standards are emerging for different application areas, as are standard descriptions of certain domains (through ontologies).
- When doing personalization for large user bases, performance becomes a real problem; the load on the Web is very unpredictable, and quick migration of user models between servers may help balance it.
- When many AS start depending on a UM service, the service should become more fault tolerant.
- Many AS working on the same UM should be allowed to coexist; caching then becomes a problem.
- Privacy is still largely an unsolved issue; legislation currently poses a serious threat to the whole field of user modeling.
  • The idea of having shared, available UM services is appealing but troublesome:
- A first problem is to ensure that different AS using a UM service can identify users correctly.
- Shared data about the person must match. If two AS disagree on the user's date of birth, who is allowed to override the other?
- Applications dealing with the same topic want to store concept/aspect/value information, but the concepts must be matched. Using shared ontologies is best; if that is not possible, a mapping between ontologies must be defined. This is called ontology alignment.
- Even when two AS agree on the meaning of a concept, and perhaps an aspect like knowledge, one system may wish to store a value of "75" and the other "well learned". How to map one to the other? And if both say "75", do they mean the same?
- Not only must systems agree on values; users must (by law) be allowed to inspect and possibly correct their user model. To do so, the data must be understandable. Neural networks are disliked in UM because they lead to incomprehensible user models: when a user is represented by an array of parameters for the neurons of the network, this may mean nothing to the user or even to an expert.
  • In GRAPPLE we make no assumptions as to where which information about each user is located and stored/maintained. The idea in life-long learning is that different UM services may have information about a user, and UM data may also need to be migrated as the learner completes courses, changes schools or employers, etc.
In any case we consider that information about a user exists in at least two places: the LMS and the ALE. We can assume that information in the LMS is somewhat more stable than in the ALE: every action within the ALE causes user model updates used for adaptation, whereas in an LMS we only store things like test results and completed courses.
Especially when there are different LMSs and ALEs, there may be complementary but also contradictory information about the same user. Conflict resolution is needed in order to deduce the information an ALE can use; some UM services may be more "trusted" than others. Besides trust in correctness there is also the issue of trust regarding who is allowed to access and update what information. When lots of UM services can collaborate, it is important that end-users can control which service has access to which information, and also that organizations can protect information they gathered about users and are not allowed or willing to share with others.
  • Within GRAPPLE, applications communicate user model data using GRAPPLE statements, which we will explain later. For now these GRAPPLE statements can be thought of as "tuples" in a relational database.
The GRAPPLE User Model Framework (GUMF) offers each application its own dataspace, containing GRAPPLE statements but also derivation rules, so that information can be calculated from stored data. Data stored in GUMF is not necessarily accessible by other applications. It typically will be, because applications have their own storage for private data, but new applications can be developed without their own UM data store and use GUMF as their general-purpose UM database.
  • Applications can query GUMF for specific data items. For instance, when a student goes to an adaptive course text, the adaptation engine (GALE) may query GUMF for results the student obtained in multiple-choice tests performed on the LMS. This is thus a pull request. GALE need not know which LMS the student was using, as the derivation rules in GUMF hide that from GALE.
Once the student is studying the adaptive course text, GALE may wish to be informed when the student completes a test on the LMS. This is a push request: GALE subscribes to statements about the test(s) so that it is informed whenever test results become available.
A browsing interface for GUMF allows users to inspect the information GUMF stores about them. This allows for scrutability, and is required by law in a number of countries.
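The pull/push distinction can be sketched with a toy statement store. This is not GUMF's real API; the class, the dictionary-shaped statements, and the predicate names are all invented to show the two access styles side by side.

```python
class UMStore:
    """Toy user-model store supporting pull (query) and push (subscribe)."""

    def __init__(self):
        self.statements = []
        self.subscribers = []              # (predicate, callback) pairs

    def query(self, pred):
        """Pull: return all stored statements matching the predicate."""
        return [s for s in self.statements if pred(s)]

    def subscribe(self, pred, callback):
        """Push: register to be notified of future matching statements."""
        self.subscribers.append((pred, callback))

    def add(self, stmt):
        self.statements.append(stmt)
        for pred, callback in self.subscribers:
            if pred(stmt):
                callback(stmt)             # push notification

store = UMStore()
seen = []
is_test_result = lambda s: s.get("predicate") == "test-result"
store.subscribe(is_test_result, seen.append)          # push request
store.add({"user": "peter", "predicate": "test-result", "value": 0.8})
pulled = store.query(is_test_result)                  # pull request
```

With this shape, GALE-style behaviour is: pull once when the student arrives, then rely on push notifications while the session lasts.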
  • This slide illustrates the main data elements contained in a GRAPPLE statement. It has a "main" part with the real data and a "meta" part with data about the data: who created it and when, under which conditions it is accessible and/or valid, etc.
  • Here we show part of a GRAPPLE statement (in some arbitrary syntax). It expresses that "Peter is interested in Sweden". The statement has an id, refers to a user, and defines "interest" for an object which is also a reference.
  • Here is an RDF serialization of the previous GRAPPLE statement. It actually contains some extra information like the "creator" of the information and the creation date.
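To make the "Peter is interested in Sweden" statement concrete without the slide image, here it is modelled as a plain dictionary and round-tripped through JSON. The field names and URIs are illustrative (not the actual GRAPPLE schema); only the FOAF `interest` property URI is the real vocabulary term discussed on the next slide.

```python
import json

# Illustrative shape of a GRAPPLE statement: a "main" part plus metadata.
statement = {
    "id": "stmt-001",
    "user": "http://example.org/users/peter",          # made-up URI
    "predicate": "http://xmlns.com/foaf/0.1/interest",  # FOAF property
    "object": "http://example.org/topics/sweden",       # made-up URI
    "meta": {"creator": "some-application", "created": "2009-01-01"},
}

serialized = json.dumps(statement, sort_keys=True)  # wire format stand-in
restored = json.loads(serialized)                   # lossless round trip
```

The actual GRAPPLE serialization is RDF, as the slide shows; the point here is just that a statement is structured data with a main part and a meta part.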
  • What is this "interest" in Sweden? We have referred to the FOAF ontology for this, but any ontology could be used. The FOAF ontology says that "interest" means that a person is interested in a topic.
  • Now that we have some idea of what may be available in a user model, we can look into the issue of using UM data for adaptation. Typically we will not use GUMF for adaptation, as the overhead of an adaptation engine (GALE) communicating with a general-purpose user model framework (GUMF) would slow things down dramatically. However, the principle of user model storage is the same whether we use GALE's own database or GUMF.
  • In this lecture we study adaptive educational hypermedia. We consider applications that mostly present information, and that offer navigation through links. We distinguish two categories of adaptation: adapting the presentation (what is shown to the user) and adapting the navigation (where the user can go). We will describe these categories in more detail later.
With adaptive presentation we refer to the information that is shown. The system may change that information, and we do not just mean the text, but really the meaning of the text. In a learning application, for instance, we may add information to a page to compensate for knowledge the learner does not yet have. Because of the navigational freedom, the user may navigate to a page where a technical term is used that the user has not seen before. The system may compensate by inserting a short definition of the term; in this way the page actually contains more information for this user than for users who already know the term and do not need the extra explanation. The system may also change the presentation without the intention to change the meaning: it can tell a story through different narrators, it can replace an unknown technical term by a known non-technical term with more or less the same meaning, and some systems may even be capable of summarizing the text so it fits on a smaller screen.
A more drastic form of adaptation occurs when the system decides which media to use for the presentation. A video can be replaced by an image or a slide show when there is not enough network bandwidth or processing power to show the video. For images and video, the quality can also be adapted to the context, to save bandwidth and/or space on the screen.
Adaptive navigation refers to manipulation of the links, and thereby the possible ways in which the user can navigate through the hyperspace. A subtle way of adapting links is to change the destination of a link: the user may not be aware that the same visible link may lead to different destinations depending on the context. It is different when the link anchor is changed, for instance to indicate the relevance of the link destination. Link adaptation is also often used to help the user find his or her way through the hyperspace. This can be done through "suggestions" like a "next" button in a guided tour, or through a graphical overview of the hyperspace, with links to jump directly to a different part of that space. Adaptation is then used to decide which part of the hyperspace to show, and how to arrange the nodes in the graphical overview.
In the following viewgraphs we give an overview of the most common techniques used to perform adaptation. We will then give some examples of their use.
  • This diagram is taken from Peter Brusilovsky's Adaptive Hypermedia paper. It shows three categories of adaptive presentation: adaptation of multimedia, which means manipulation of multimedia content like video or image quality and size; adaptation of modality, which means the selection of which media to use; and adaptive text presentation, which receives most of the attention in adaptive hypermedia.
Real manipulation of the text, based on natural language understanding, is difficult. Research prototypes of, for instance, automatic summarization software have been realized already. It is difficult mostly because in a normal story sentences may refer back to topics, objects or actions that appear in earlier or later sentences, and all these references have to be kept intact or removed. Sentences one and two may name two different persons, and when sentence three refers to "he", one must make sure to keep the reference correct when sentence two is removed. We will not go into details of natural language adaptation any further in this course, but remember that it is an important challenge for the future, especially when we wish to interact with systems using natural language, be it written or spoken.
Most adaptive hypermedia examples perform some form of canned text adaptation: they change the presentation of a page by adding or removing fragments, changing their order, hiding them in different ways, etc. We will see details and examples of each of these techniques later, and even some more.
  • Among the many ways to adapt text, the technique of inserting or removing fragments is the most popular, probably because it is easy to implement. A condition (a Boolean expression on information from the user model) can be associated with a fragment, and this condition determines whether the fragment will be shown or not. We distinguish three areas in which this technique is often used:
- In prerequisite explanations an extra explanation is added for users who need it. A page that uses a technical term or a name the user has not yet seen may conditionally include a short introduction or explanation for that term or name. In our hypermedia course (2L690), for instance, we sometimes refer to the system "Xanadu". For students who have not yet read the page describing Xanadu, a one-line explanation of Xanadu is added to pages referring to it. That one-liner disappears after reading the full page about Xanadu.
- Additional explanations can be given to users who are ready for them. Whereas prerequisite explanations try to compensate for missing foreknowledge, additional explanations take advantage of users' knowledge to offer more in-depth information to users who can understand it. A special kind of additional explanation is the comparative explanation: a comparison between topics described on different pages. The comparison can only be understood by users who have read both pages, so when visiting one of these pages first the comparison is not made, but when visiting the other page the comparison appears.
- The altering fragments technique can be used to select between a number of alternative explanations, perhaps using different wording to suit different types of users. But it can also be used to replace a technical term by a non-technical one for users who have not seen the definition of the term.
Sorting fragments is most useful when a number of more or less independent fragments can be presented in any order. The sorting can be done for relevance ranking, as in a list of search results, but also to show an example before an explanation or the other way around; learners with different cognitive styles may prefer a different order of descriptions or explanations of a concept.
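Conditional fragment inclusion is easy to picture in code. The fragments, the `knowledge.xanadu` attribute, and the 0.5 threshold below are illustrative, echoing the Xanadu example: a prerequisite one-liner is shown only while the user's knowledge of the concept is low.

```python
def render(fragments, um):
    """Keep only the fragments whose condition holds for this user model."""
    return " ".join(text for cond, text in fragments if cond(um))

# A page as a list of (condition, text) pairs (content is illustrative).
page = [
    # Prerequisite explanation: shown while knowledge of Xanadu is low.
    (lambda um: um.get("knowledge.xanadu", 0) < 0.5,
     "Xanadu is an early hypertext design by Ted Nelson."),
    # Unconditional fragment: shown to everyone.
    (lambda um: True,
     "Xanadu's links are bidirectional."),
]

novice = {"knowledge.xanadu": 0.0}
expert = {"knowledge.xanadu": 1.0}
```

`render(page, novice)` includes the one-line explanation; `render(page, expert)` omits it, exactly the disappearing-one-liner behaviour described above.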
  • Adaptive stretchtext is essentially an adaptive version of the replacement links in the Guide hypertext system, which was developed by Peter Brown at the University of Kent at Canterbury and later commercialized by the OWL company. The Metadoc system is the best known system offering adaptive stretchtext. The idea is that items or paragraphs can be displayed or hidden, and that the system decides adaptively which items to open when the page is first displayed. The user can always open or close items at will, and this may give feedback that changes the user model, so the decisions about which items to open automatically may change.
With the dimming fragments technique the relevance of a text fragment is not used to hide it, as with stretchtext, but to make it less visible. The idea is that the user is stimulated not to read the text, although it is still there and can be read when desired. Shading or graying out can be used to indicate optional reading material; in magazines one often sees so-called "side bars" with such optional text.
It is also possible to imagine a combination of stretchtext with dimming: small pieces of text can be shown like a tooltip, through a small icon which, when clicked on or hovered over, shows the text temporarily.
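The open/close feedback loop of adaptive stretchtext can be sketched as two functions: one deciding the initial state from the user model, and one feeding manual actions back into it. The knowledge attributes, the 0.5 threshold, and the 0.2 feedback increment are all invented for the sketch, not values from Metadoc.

```python
def initially_open(item, um):
    """Open a stretchtext item if knowledge of its concept is still low."""
    return um.get("knowledge." + item["concept"], 0) < 0.5

def user_closed(item, um):
    """Treat manual closing as weak evidence the user knows the concept."""
    key = "knowledge." + item["concept"]
    um[key] = min(1.0, um.get(key, 0) + 0.2)

item = {"concept": "xml", "text": "XML is a markup language..."}
um = {"knowledge.xml": 0.1}

opened = initially_open(item, um)  # opened for this low-knowledge user
user_closed(item, um)              # user closes it; knowledge estimate rises
```

After enough such feedback the item crosses the threshold and no longer opens automatically, which is the adaptive behaviour described above.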
  • This example shows a prerequisite explanation in our hypermedia course. The course contains several of these conditionally included fragments. In general, users may find it disturbing when the same page does not always contain the same contents. However, in the hypermedia course the changes are so subtle that until now no one has complained about the content adaptation.
  • This example shows that once an object is chosen, arguments can be presented depending on how well they match the user's preferences. The user can set a relevance threshold to see only the arguments of at least a certain relevance. Other systems, like the Intrigue adaptive tourist guide, can present arguments in favor of and against objects; see the next slide.
  • Intrigue uses profiles of individual users to generate advice as to which tourist attractions to visit, depending on the preferences of individuals and of the group as a whole. It can explain that a suggestion for the group has good and bad properties.
  • Scaling as shown here still gives access to all information. It has been taken to the extreme in the sense that you really need to expand the scaled sections in order to read them. It is thus a relative of stretchtext, but gives you just a bit more information to go on when deciding whether to open an item or not.
  • This is the second part of the diagram from Peter Brusilovsky's adaptive hypermedia paper. It shows the different ways in which the links or link structure can be adapted; we will describe each of these in detail later. We see essentially two different types of adaptation here: direct guidance and adaptive link generation are techniques to generate links that enable the user to go to places to which there is no link in the "standard" presentation of the page. Adaptive link sorting, hiding in all its forms, annotation, and also map adaptation are all ways to manipulate the links that are already present on a page. They influence the navigation by steering the user towards some links and away from other links.
  • Direct guidance is a technique to offer users the possibility to be guided as in a guided tour. Typically a "next" button invites the user to go to the "next" page, but unlike in a static guided tour the adaptive system determines the destination of that "next" button, so different users may go to a different page when clicking "next" on the same page. Of course direct guidance can also be more subtle: apart from buttons that clearly lead to a tour, other links on a page may also have adaptively determined destinations. The user may have the impression that there is a lot more navigational freedom than is actually the case, because links may not lead to where the user thinks.
Adaptive link generation goes one step further and generates not only link destinations but the link anchors as well. There are many ways in which the system can decide to create new links. In open hypermedia all links are always generated, by matching text on a page with a database of links. Adaptive link generation can also be based on the discovery of similarities between (the topics of) pages; this is certainly adaptive if it is done on pages from an open corpus of documents. The list of links that results from a search request in information retrieval or filtering systems is also adaptively generated.
  • Adaptive link annotation is the most popular link adaptation technique, and the least restrictive: all the links remain accessible. Annotations are used to indicate how interesting the link is for the user. Many systems use some kind of icon in front of or behind the link anchor to indicate the relevance of the link. Since the Web has been extended with style sheets it has also become possible to use the color of the link anchor as an annotation. This is not without drawbacks: some users are so used to links on the Web being blue or purple that they do not recognize words in other colors as link anchors.
Adaptive link hiding means that links not considered relevant for this user are hidden, disabled or removed in some way. Link hiding means the link anchor cannot be seen as being a link anchor: when the text on a page is black, a black, non-underlined link anchor looks just like plain text. If the link is still there, many browsers will show a special cursor when the mouse pointer moves over the anchor. The link can also be disabled, meaning that the anchor text is no longer a link anchor. On the Web this is easy to realize by removing the anchor tag; however, that performs hiding as well as disabling. It is possible to use font color and optionally underlining to make the anchor still look like a link anchor, but this is seldom done because it is frustrating for users to see link anchors that do not work as links.
Link removal means that the anchor text is removed, thereby automatically disabling the link as well. Link removal can easily be done in a list of links, but not in running text, because the user needs to be able to read the text. When asked in an informal setting, a large majority of users indicated that they preferred links in a list to be annotated or "hidden", but not removed.
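Annotation and hiding differ only in what is done with the suitability judgment, which a short sketch makes explicit. The prerequisite-based suitability rule, the green/red scheme, and the link data are illustrative (the coloring echoes systems like ELM-ART discussed below).

```python
def suitability(link, um):
    """A link is suitable once all its prerequisite concepts are known enough."""
    return all(um.get("knowledge." + p, 0) >= 0.5 for p in link["prereqs"])

def annotate(links, um):
    """Annotation: keep every link, attach a traffic-light colour."""
    return [(l["anchor"], "green" if suitability(l, um) else "red")
            for l in links]

def hide(links, um):
    """Hiding/removal: only suitable links survive."""
    return [l["anchor"] for l in links if suitability(l, um)]

links = [
    {"anchor": "XML basics", "prereqs": []},
    {"anchor": "XSLT", "prereqs": ["xml"]},   # needs XML knowledge first
]
um = {"knowledge.xml": 0.2}
```

With this user model, annotation shows both links (one green, one red) while hiding leaves only "XML basics", illustrating why annotation is the less restrictive technique.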
  • The final form of adaptive navigation support is map adaptation. In order to give users an idea of the whole hyperspace, and some orientation support regarding where they are in this space, many applications offer some kind of map. Websites often offer a textual sitemap, mostly because this is easy to generate; a graphical map, preferably based on conceptual relationships rather than link relationships, is a better tool for giving insight into the application's structure.
However, maps are often too large to be insightful. A map can adaptively be reduced so that the user can still grasp the overall picture. Nodes on the map can also be annotated to indicate relevance, to indicate where the user has gone before, and perhaps even to indicate where other users have gone. We leave a detailed discussion of how best to visualize large information spaces to specific visualization courses. We continue with a set of examples.
  • Direct guidance is the simplest technique for adaptive navigation support: simply indicate the next best node to go to. The example here is from the WebWatcher project: the recommended link is presented in bold font and surrounded by "a pair of curious eyes". Note that this is a nice example of "pure" direct guidance, adapting the recommended links only; more typically there would be a "next" button, or the recommended link would be placed in a specific location, but here none of that is done.
Direct guidance is popular in adaptive educational hypermedia systems that are based on an Intelligent Tutoring Systems approach; the recommendation leads to a single recommended sequence for learning. Unfortunately, when the user does not wish to follow the system's recommendation, there is no support whatsoever. Nowadays other adaptive navigation support techniques, which provide recommendations on more than one link, are more popular.
  • The idea of adaptive sorting is to prioritize all the links of a particular page according to the user model. Many systems employing this technique exist; Hypadapter (see the figure) was the first one we know of. Newer systems like HYPERFLEX allow the user to move items around (by dragging) to indicate that (s)he does not agree with the suggested order. This feedback can be used to update the user model.
Note that link sorting is different from link generation (which we discuss later), which is what search engines perform. The example here shows a fixed number of topics to be mentioned (and presented as links), and that fixed list is sorted. As can be seen, Hypadapter not only sorts the links but also uses decreasing font sizes to emphasize the best and deemphasize the worst suggestions. Obviously, link sorting is something you can only do to a list of links; you cannot use this technique to indicate the suitability of links that appear in running text.
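Link sorting reduces to ranking a fixed list by a per-user relevance estimate. The topics and scores below are made up; in a real system the scores would come from the user model (and dragging feedback, as in HYPERFLEX, would adjust them).

```python
def sort_links(links, relevance):
    """Rank a fixed list of link anchors by estimated relevance (highest first)."""
    return sorted(links, key=lambda l: relevance.get(l, 0), reverse=True)

# Fixed list of topics presented as links (illustrative).
links = ["Common Lisp", "Prolog", "Smalltalk"]

# Per-user relevance estimates from the user model (made-up values).
relevance = {"Prolog": 0.9, "Smalltalk": 0.4, "Common Lisp": 0.7}

ranked = sort_links(links, relevance)
```

Note that the set of links never changes, only their order; that is exactly what distinguishes sorting from link generation.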
  • Link annotation is currently the most common technique, and in fact we already saw it to some extent in the previous examples: the "curious eyes" in WebWatcher were link annotations, as were the font changes in Hypadapter and the blue versus purple links in the link hiding examples of AHA!.
We start with a screendump from ELM-ART, one of the most influential adaptive educational hypermedia systems ever. ELM-ART is a Lisp tutor (and the system itself is written in Lisp as well). Link annotation is a term that covers every possible way in which a link (anchor) is augmented with some form of annotation, be it font, color, additional icons, mouse-over annotations or anything else. Current systems can distinguish up to six states indicated using colors and icons (except AHA!, which can distinguish an unlimited number of states), whereas with link hiding you can only distinguish two states (hidden or not hidden).
In ELM-ART we see orange, green, red and white balls, and a black arrow (which indicates where you are). Green is recommended, red is not recommended, white means completely studied, and orange means partly done. The bar also indicates the progress.
  • This example shows adaptive link annotation in Interbook. Links in the partial table of contents are annotated with colored balls and checkmarks. Green means the link is recommended, red means not recommended, and white means that the page contains no new knowledge. The checkmarks indicate how much the user already knows about the concepts of the page: no checkmark means no knowledge, a small checkmark some knowledge, and a large checkmark full knowledge.
Note that in the example there is something strange: the links to subsections 0.1.1 through 0.1.5 are not recommended even though the user already has a lot of knowledge about them. The recommendation, however, depends on the knowledge of prerequisites, not of the concepts treated in the pages themselves.
In the page the links are annotated with arrows, which indicate whether the link goes down or up in the hierarchical structure of the course; these are not adaptive. The only adaptive part in the page is the red bar, which indicates that the page the user is reading is not recommended.
  • The "strangest" color and annotation schemes have been used. This figure shows ISIS-Tutor, a tutor system for the ISIS library information system. Links leading to the "current" goal of the lesson are marked with a minus sign; links leading to pages that are already known are marked with a plus sign. Partly or fully learned pages are marked in green, and recommended links are marked in red; all other non-recommended links are blueish-grey. What is a bit strange here (for us Western people) is that red means recommended. This version of ISIS-Tutor was hindered a bit by the fact that not all links fit on one screen, so sometimes you had to page up or down.
  • A second version of ISIS-Tutor was developed in which the non-recommended links are removed. This implies that the user cannot follow non-recommended links (and as a result experiments showed that users read fewer pages to reach the same learning goal), and that all links fit on a single screen.
  • We see here a screenshot from ALICE, an electronic textbook about Java. Instead of annotating fixed links, the main navigation support in ALICE consists of link generation. The typical green/yellow/red color annotation scheme is used. The section on “Pointers in Java” that is shown was not recommended, and therefore no next sections are suggested either. There is a list of background knowledge that needs to be studied, and this is automatically generated from a structure of prerequisite relationships.
  • In GRAPPLE adaptation needs to be applied to content of very different types: textual information as well as 3D environments. As long as the information is represented in XML it can be adapted. GALE has a separate UM server that can communicate with any number of applications through GALE’s event bus. In GALE the domain model (DM), consisting of concepts, is also served by a server, together with the adaptation model (AM) that contains the adaptation rules. When a user logs on, the adaptation engine (AE, listening to login events) asks UM for the user model, and asks DM/AM for the domain model of the application the user logs on to. Any later updates to that user model or to the domain/adaptation model can be signaled to AE. AE holds everything in its cache to avoid unnecessary or repeated communication through the event bus. GALE is mainly concerned with a conceptual structure. Adaptation rules assign resources (actually URLs) to concepts, and GALE can adapt the content of these resources. The adaptation rules can be simple expressions and statements, written in a special, simple rule language, but the rules can also contain calls to arbitrary Java code. In fact, when a rule refers to an attribute of a concept, GALE translates this into Java code for accessing that data. In order to accommodate any type of adaptation currently in common use, GALE supports forward and backward reasoning: UM values can be calculated by forward reasoning through rules, but they can also be declared volatile and calculated only when they are queried. The use of the event bus enables other components to be added to GALE. However, within GRAPPLE the GALE event bus is mainly used for communication between GALE components, not for communicating with other applications.
  • Creating GALE applications is a lot like creating a “normal” web-based course text: it’s a matter of coming up with a structure and then writing the content. A good author always draws up a structure for any book, and creating an adaptive course text is no different. First you define a domain model (DM) which structures the course domain. It’s much like creating an ontology: there are topics with subtopics, and conceptual relationships between concepts. We will illustrate this with a “Milky Way” example in which the “moon” is an instance of a “satellite” and is a satellite of the earth, etc. We also create a conceptual adaptation model (CAM) which also links concepts through relationships, but this time relationships that have a pedagogical meaning. When A is a “prerequisite” for B it means the learner should study A before B; it says nothing about a possible relationship between A and B in the domain model. Then you have to create the actual course material. GALE can serve and adapt data in any XML format, although for a course text you will most likely use (X)HTML. If parts of a page need to be adaptive, for example to conditionally include a text fragment or to annotate links to indicate whether they are suitable or not, you must use tags that belong to the GALE namespace. Pages can thus mix HTML and GALE tags, but if HTML is the default namespace the GALE tags need to refer to the GALE namespace explicitly.
  • Creating GALE course pages is just like creating XHTML pages. Standard XHTML can be used as long as nothing within the page is adaptive. To use adaptation, the HTML header must include a reference to the GALE namespace.
  • In GALE pages you can define expressions that are evaluated over the current user model. Such expressions are used, for instance, in a <gale:if> tag to conditionally include a piece of text, or in a <gale:variable> tag to print the result of the expression on the page. The expressions are Java expressions that can call methods in order to retrieve values. There is a ${…} shorthand notation for the most common request: retrieving the value of an attribute of a concept. Using GALE expressions, the same page can show very different content depending on the user model and on what the current concept is. In the “template-based” authoring approach a single page is used for many concepts; the actual pieces of content to be shown are retrieved from attributes of the current concept.
  • We are going to look at a (not so) small example of an application with content taken from Wikipedia. The application is about the solar system. There are pages about each type of celestial object and pages about the instances of such objects. Objects are also related to each other; moons, for example, are related to the corresponding planet. Many pages can be expected to look very similar in terms of structure. This means that instead of creating each page separately it makes sense to use templates that retrieve content based on the current concept.
  • Here is a sketch of some example pages from the MilkyWay application. These pages all show more or less the same information, as listed on the slide. An interesting element is the “list of child concepts”. This requires some form of “for loop” that generates more or less HTML depending on how many child concepts there are.
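  • The “for loop” over child concepts mentioned above can be sketched as a toy template function. The dict and the generated markup below are illustrative only, not GALE’s actual data model or tag syntax.

```python
# Toy sketch of the "list of child concepts" template loop: emit one <li>
# per child of the current concept. Concept names are illustrative.

children = {"sun": ["mercury", "venus", "earth"], "earth": ["moon"], "moon": []}

def child_list(concept):
    items = "".join(f'<li><a href="{c}">{c}</a></li>' for c in children[concept])
    # Leaf concepts get no list at all, so the template shrinks to nothing.
    return f"<ul>{items}</ul>" if items else ""

print(child_list("earth"))   # <ul><li><a href="moon">moon</a></li></ul>
print(child_list("moon"))    # (empty string: no children)
```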
  • Adaptive Learning Environments

    1. Adaptive Learning Environments — Prof. dr. Paul De Bra, Eindhoven University of Technology. Terchova, June 2, 2009, JESS Summer School 2009.
    2. Topics • The need for adaptation – personalized: adaptable / adaptive • User Modeling • Adaptation – adaptive presentation – adaptive navigation • The GRAPPLE architecture • Authoring • Examples (if we have time)
    3. We live in a “one size fits all” world. But we are not all the same size (physically or mentally).
    4. What’s the main difference between these pictures?
    5. Automatic ≠ Adaptive • Automatic systems = automatic fixed behavior (according to fixed rules) • Adaptive systems = automatic behavior that depends on environmental factors – first-order adaptation: the change in the automatic behavior follows fixed rules – second-order adaptation: the change in the automatic behavior is itself also adaptive – etc.: there is no limit to how adaptive systems can be • In this lecture we deal with user-adaptive systems: they adapt to users and the users’ environment
    6. Adaptation in any type of Information System • Adaptation of the Information – information adapted to who/where/when you are – information adapted to what you are doing and what you have done before (e.g. learning) – presentation adapted to circumstances (e.g. the device you use, the network, etc.) • Adaptation of the Process – adaptation of interaction and/or dialog – adaptation of navigation structures – adaptation of the order of tasks and steps
    7. Advantages of Adaptive Systems • Increased efficiency: – optimal process (of navigation, dialog, study order, etc.) – minimum number of steps – maximum benefit (of relevant information) • Increased satisfaction: – system gives good advice and relevant information – interactive applications do not make stupid moves • Return on investment: – recommending products the user needs is a form of advertising that really works – adaptive (non-IS) systems have better technical performance
    8. Disadvantages of Adaptive Systems • Adaptive Systems may learn the wrong behavior – adaptive games learn badly from bad players – generally: adaptation good for one user may be bad for another user; it is personal after all • Adaptive Systems may outsmart the users – all doomsday movies in which machines take over the world blame second-order adaptive systems – a game that learns how to always win is no fun – an adaptive information system may effectively perform censorship – it may be hard to tell an adaptive system that it is wrong
    9. User-Adaptive Systems
    10. Main issues in Adaptive Systems • Questions to ask when designing an adaptive application: – Why do we want adaptation? – What can be adapted? – What can we adapt to? – How can we collect the right information? – How can we process/use that information? • Exercise: answer these questions for: – a presentation (lectures, talks at conferences) – an on-line textbook – a newspaper site or an on-line TV-guide – a (book, cd, computer, etc.) store – a (computer) help system
    11. Forward and Backward Reasoning • Two opposite approaches for adaptation: • forward reasoning: 1. register events 2. translate events to user model information 3. store the user model information 4. adaptation based directly on user model information • backward reasoning: 1. register events 2. store rules to deduce user model information from events 3. store rules to deduce adaptation from user model information 4. performing adaptation requires backward reasoning: decide which user model information is needed and then deduce which event information is needed for that
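    The two reasoning styles above can be contrasted in a toy user model (all names illustrative, not the GRAPPLE API): one attribute is kept up to date by event rules (forward), while another is declared volatile and computed only when it is queried (backward).

```python
# Minimal sketch contrasting forward and backward reasoning over
# user-model (UM) attributes. Attribute names are made up.

class UserModel:
    def __init__(self):
        self.stored = {"page_visits": 0, "knowledge": 0}  # forward: eagerly updated
        self.volatile = {}                                # backward: rules on demand

    # Forward reasoning: every event immediately updates stored UM values.
    def on_page_read(self, difficulty):
        self.stored["page_visits"] += 1
        self.stored["knowledge"] += difficulty

    # Backward reasoning: register a rule now, evaluate only when queried.
    def define_volatile(self, name, rule):
        self.volatile[name] = rule

    def get(self, name):
        if name in self.stored:
            return self.stored[name]
        return self.volatile[name](self)   # deduce the value from other UM data

um = UserModel()
um.define_volatile("recommended", lambda m: m.get("knowledge") >= 10)
um.on_page_read(difficulty=6)
um.on_page_read(difficulty=6)
print(um.get("page_visits"))   # 2     (stored by forward rules)
print(um.get("recommended"))   # True  (computed backward, only when asked)
```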
    12. Application Areas of AS • Educational hypermedia systems – on-line course text, with on-line multiple-choice or other machine-interpretable tests – we use AEH, AES and ALE as near-synonyms • On-line information systems – information “kiosk”, documentation systems, encyclopedias, etc. • On-line help systems – context-sensitive help (think of “Clippy”) • Information retrieval and filtering – adaptive recommender systems • etc.
    13. Adaptive Educational Hypermedia • Origin: Intelligent Tutoring Systems – combination of reading material and tests – adaptive course sequencing, depending on test results • In Adaptive Educational Hypermedia: – more freedom for the learner: guidance instead of enforced sequence – adaptive content of the course material to solve comprehension problems when pages or chapters are read out of sequence – adaptation based on reading as well as tests
    14. Learning Management Systems • LMSs offer a “personal” learning environment: – registration for courses – personalization of the “workspace” – access to course material – assignments, tests, group work – communication tools: messages, discussion forums, chat – no built-in adaptive learning functionality
    15. The GRAPPLE Project • Glues an ALE and an LMS together, offering an adaptive learning environment within the LMS • LMS and ALE talk with each other through a shared event bus • User Model data can be exchanged through the Grapple User Model Framework (GUMF) • Authoring is done mostly through graphical interfaces to create a domain model (DM) and a conceptual adaptation model (CAM)
    16. The GRAPPLE Learner View — [architecture diagram: the learner enters a learning scenario through the LMS, with Shibboleth authentication; the GRAPPLE Event Bus connects the LMS, the GRAPPLE User Model Framework, student visualizations, device adaptation and GALE (the adaptation engine), each with their repositories]
    17. The GRAPPLE Author View — [architecture diagram: the author uses the authoring tool (CAM, DM, CRT) connected to the LMS; the DM and CAM repositories feed the GALE compiler; the GRAPPLE User Model Framework, content repositories and the GALE repository complete the picture]
    18. What can we Adapt to? • Knowledge of the user – initialization using stereotypes (beginner, intermediate, expert) – represented in an overlay model of the concept structure of the application – fine-grained or coarse-grained – based on browsing and on tests • Goals, tasks or interest – mapped onto the application’s concept structure – difficult to determine unless it is preset by the user or a workflow system – goals may change often and more radically than knowledge
    19. What can we Adapt to? (cont.) • Background and experience – background = user’s experience outside the application – experience = user’s experience with the application’s hyperspace • Preferences – any explicitly entered aspect of the user that can be used for adaptation – examples: media preferences, cognitive style, etc. • Context / environment – aspects of the user’s environment, like browsing device, window size, network bandwidth, processing power, etc.
    20. User Modeling
    21. Modeling “Knowledge” in AES • Moving target: knowledge changes while using the application – scalar model: knowledge of the whole course measured on one scale (used e.g. in MetaDoc) – structural model: domain knowledge divided into independent fragments; knowledge measured per fragment • type of knowledge (declarative vs. procedural) • level of knowledge (compared to some “ideal”) – positive (overlay) or negative information (bug model) can be used
    22. Overlay Modeling of User Knowledge • Domain of an application modeled through a structure (set, hierarchy, network) of concepts – concepts can be large chunks (like book chapters) – concepts can be tiny (like paragraphs or fragments of text, rules or constraints) – relationships between concepts may include: • part-of: defines a hierarchy from large learning objectives down to small (atomic) items to be learned • is-a: semantic relationship between concepts • prerequisite: study this before that • some systems (e.g. AHA!) allow the definition of …
    23. Which types of knowledge values? • Early systems: Boolean value (known/not known) – works for sets of concepts, but not for hierarchies (not possible to propagate knowledge up the hierarchy) • Numeric value (e.g. percentage) – how much you know about a concept – what is the probability that you know the concept • Several values per concept – e.g. to distinguish sources of the information – knowledge from reading is different from …
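    The advantage of numeric values over Booleans on this slide is that knowledge can be propagated up a part-of hierarchy. A sketch of that, with an assumed propagation policy (average of the children) and an invented concept hierarchy:

```python
# Sketch of a numeric overlay model: each leaf concept stores a 0-100
# knowledge value, and knowledge propagates up the part-of hierarchy as
# the average of the children. Hierarchy and values are illustrative.

hierarchy = {"course": ["chapter1", "chapter2"],
             "chapter1": ["sec1.1", "sec1.2"],
             "chapter2": []}
knowledge = {"sec1.1": 100, "sec1.2": 50, "chapter2": 0}

def level(concept):
    children = hierarchy.get(concept, [])
    if not children:
        return knowledge.get(concept, 0)   # leaf: stored value
    return sum(level(c) for c in children) / len(children)

print(level("chapter1"))  # 75.0
print(level("course"))    # 37.5 = (75 + 0) / 2
```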
    24. Modeling Users’ Interest • Initially: weighted vector of keywords – this mimics how early IR systems worked • More recently: weighted overlay of domain model – more accurate representation of interest – able to deal with synonyms (since terms are matched to concepts) – semantic links (as used in ontologies) allow to compensate for sparsity – move from manual classification of documents to automatic matching between documents and an ontology
    25. Modeling Goals and Tasks • Representation of the user's purpose – goal typically represented using a goal catalog (in fact an overlay model) – systems typically assume the user has one goal – automatic determination of the goal is difficult; glass-box approach: show the goal, let the user change it – the goal can change much more rapidly than knowledge or interest • Determining the user's goal/task is much easier when adaptation is done within a workflow management system
    26. Modeling Users’ Background • User's previous experience outside the core domain of the application – e.g. (prior) education, profession, job responsibilities, experience in related areas, ... – system can typically deal with only a few possibilities, leading to a stereotype model – background is typically very stable – background is hard to determine automatically
    27. Modeling Individual Traits • Features that together define the user as an individual: – personality traits (e.g. introvert/extrovert) – cognitive styles (e.g. holist/serialist) – cognitive factors (e.g. working memory capacity) – learning styles (like cognitive styles but specific to how the user likes to learn)
    28. Modeling Users’ Context of Work • User models contain context features, although these are not really all “user” features – platform: screen dimensions, browser software and network bandwidth may vary a lot – location: important for mobile applications – affective state: motivation, frustration, engagement
    29. Feature-Based vs. Stereotype Modeling • Stereotypes: simple, can be designed carefully, very useful for bootstrapping adaptive applications • Feature-Based: allows for many more variations – each feature considered can be used to adapt something – detailed features leading to micro-adaptation do not necessarily lead to overall adaptation that makes sense
    30. Uncertainty-Based User Modeling • Most used techniques: Bayesian Networks and Fuzzy Logic – user actions provide “evidence” that the user has (or does not have) knowledge of a concept – an expert needs to develop a qualitative model: • each concept becomes a “random variable” (node in a BN) • sources of evidence: reading time, answers to tests, etc. • consider the direction between evidential nodes E and knowledge nodes K – causal direction: K → E (knowledge leads to evidence) – diagnostic direction: E → K (evidence leads to knowledge) • independence of variables influences the validity of the model
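    The diagnostic direction E → K on this slide is, for a single concept node, just Bayes' rule. A one-node sketch with invented probabilities: a correct test answer E is likely if concept K is known (0.9) and a lucky guess otherwise (0.2).

```python
# One-node sketch of evidence-based updating via Bayes' rule.
# The probabilities are illustrative, not from any real model.

def update(p_k, p_e_given_k, p_e_given_not_k):
    # Diagnostic direction E -> K: compute P(K | E).
    p_e = p_e_given_k * p_k + p_e_given_not_k * (1 - p_k)
    return p_e_given_k * p_k / p_e

belief = 0.5                        # prior: no idea whether K is known
belief = update(belief, 0.9, 0.2)   # observe one correct test answer
print(round(belief, 3))             # 0.818
```

Each further piece of evidence simply uses the previous posterior as the new prior, which is how such a node accumulates evidence from reading time, tests, etc.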
    31. Generic User Modeling Systems • Adaptive Systems with built-in UM: – close match between UM structure and AS needs – high performance possible (no communication overhead) – UM not easily exchangeable with other AS • AS using a generic User Modeling System: – cuts down on AS development cost – communication overhead – unneeded features may involve a performance penalty – UM can be shared between AS
    32. Requirements for Generic UM Systems • Generality, including domain independence • Expressiveness and strong inferential capabilities • Support for quick adaptation • Extensibility • Import of External User-Related Information • Management of Distributed Information • Support for Open Standards • Load Balancing • Failover Strategies • Transactional Consistency • Privacy Support
    33. Requirements for Sharing UM Data • Sharing a technical API is not enough: – the AS must translate its internal user identities to the UM's user identities (and vice versa) – data about users need to be standardized – shared ontologies are needed for different AS dealing with the same domain (ontology alignment) – agreement on who can update what – agreement on the meaning of “values” in the UM • “Scrutability” of UM: – UM data must be understandable for the user – users must have control over their UM data
    34. User Modeling in GRAPPLE • User model is inherently distributed: – The LMS contains fairly stable information about the user (and also some assessment results) – The ALE contains mainly dynamically changing information about the user – There may be several components of each type • Different UM services may contradict each other – conflict resolution needed • Not every application is allowed to access/update UM data on every server
    35. Adding UM data to GUMF • GRAPPLE applications use GRAPPLE statements to communicate UM data • Registered clients have their own dataspace: subset of ‘own’ statements, derivation rules and schema extensions • Derivation rules generate new GRAPPLE statements • Data can be declared public or private
    36. Retrieving GRAPPLE Statements • Three ways to retrieve statements (plus combinations): – Pull: simple query interface to retrieve statements that match a certain pattern – Push: subscribing to a stream of statements; activated upon an event – Manual: browsing interface (for admin usage or scrutability)
    37. GRAPPLE Statement Structure • Main Part – Subject: the user – Predicate: property (specified in an ontology) – Object: value of the statement – Level: qualification/level (if applicable) – Origin: the statement in its original form (if applicable) • Meta Part – ID: globally unique – Creator: entity that created the statement – Created: time of creation/submission of the statement – Access: data for any kind of access control mechanism – Temporal: constraints on validity of the statement – Spatial: in which contexts the statement is valid – Evidence: refers to or embodies formal evidence
    38. Example GRAPPLE Statement — “Peter is interested in Sweden”:
       gc.Statement {
         gc:id gc:statement-peter-2009-01-01-3234190;
         gc:user …;
         gc:predicate foaf:interest;
         gc:object …;
       }
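    The statement structure of slide 37 can be mirrored in a small record. Field names follow the slide; the values and the helper function are made up for the “Peter is interested in Sweden” example, not the GUMF API.

```python
# Toy record mirroring the main/meta parts of a GRAPPLE statement.
import uuid
import datetime

def make_statement(user, predicate, obj, creator, level=None):
    return {
        # main part
        "subject": user, "predicate": predicate, "object": obj, "level": level,
        # meta part
        "id": f"gc:statement-{uuid.uuid4()}",         # globally unique
        "creator": creator,
        "created": datetime.date.today().isoformat(),
        "access": "private",                          # until declared public
    }

s = make_statement("peter", "foaf:interest", "Sweden", creator="gumf-demo")
print(s["predicate"], s["object"])   # foaf:interest Sweden
```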
    39. RDF/XML Serialization — “Peter is interested in Sweden”:
       <rdf:RDF xmlns:rdf="…" xml:base="…" xmlns:gc="…" xmlns:foaf="…">
         <rdf:Description rdf:ID="gc:statement-peter-2009-01-01-3234190">
           <user>…</user>
           <predicate>foaf:interest</predicate>
           <object>…</object>
           <creator>…</creator>
           <created>2009.01.01</created>
           …
         </rdf:Description>
       </rdf:RDF>
    40. What does ‘interest’ mean? This is defined in the FOAF ontology (any kind of ontology can be used):
       <rdf:Property rdf:about="…" vs:term_status="testing" rdfs:label="interest" rdfs:comment="A page about a topic of interest to this person.">
         <rdf:type rdf:resource="…"/>
         <rdfs:domain rdf:resource="…"/>
         <rdfs:range rdf:resource="…"/>
         <rdfs:isDefinedBy rdf:resource="…"/>
       </rdf:Property>
    41. Adaptation
    42. What Do We Adapt in AEH? • Adaptive presentation: – adapting the information – adapting the presentation of that information – selecting the media and media-related factors such as image or video quality and size • Adaptive navigation: – adapting the link anchors that are shown – adapting the link destinations – giving “overviews” for navigation support and for orientation support
    43. Adaptive Content/Presentation
    44. Canned Text Adaptation • Inserting/removing fragments – prerequisite explanations: inserted when the user appears to need them – additional explanations: additional details or examples for some users – comparative explanations: only shown to users who can make the comparison • Altering fragments – most useful for selecting among a number of alternatives – can be done to choose explanations or examples, but also to choose a single term • Sorting fragments – can be done to perform relevance ranking, for instance
    45. Canned Text Adaptation (cont.) • Stretchtext – similar to replacement links in the Guide hypertext system – items can be open or closed; the system decides adaptively which items to open when a page is accessed • Dimming fragments – text not intended for this user is de-emphasized (greyed out, smaller font, etc.) – can be combined with stretchtext to create de-emphasized text that conditionally appears, or only appears after some event (like clicking on a tooltip icon)
    46. Example of inserting/removing fragments, course “2L690” • Before reading about Xanadu the URL page shows: – … In Xanadu (a fully distributed hypertext system, developed by Ted Nelson at Brown University, from 1965 on) there was only one protocol, so that part could be missing. … • After reading about Xanadu this becomes: – … In Xanadu there was only one protocol, so that part could be missing. …
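    The Xanadu example above can be sketched as a list of fragments, some of which carry an inclusion condition over the user model (a toy renderer, not any system's actual fragment syntax; the knowledge threshold is an assumption):

```python
# Sketch of fragment insertion/removal: a prerequisite explanation is
# included only while the prerequisite concept is still unknown.

fragments = [
    ("In Xanadu", None),                                   # always shown
    (" (a fully distributed hypertext system, developed by Ted Nelson)",
     lambda um: um["xanadu"] == 0),                        # prerequisite explanation
    (" there was only one protocol.", None),
]

def render(fragments, um):
    return "".join(text for text, cond in fragments if cond is None or cond(um))

print(render(fragments, {"xanadu": 0}))     # long version, with explanation
print(render(fragments, {"xanadu": 100}))   # short version, after reading
```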
    47. Example of inserting/removing fragments: the GEA system • selects objects based on matching attributes (arguments) to user preferences • presents arguments with relevance greater than a (customizable) threshold
    48. Example with group adaptation: Intrigue (adaptive tourist guide)
    49. Stretchtext example: the Push system
    50. Scaling-based Adaptation
    51. Adaptive Navigation Support
    52. Adaptive Navigation Support • Direct guidance – like an adaptive guided tour – “next” button with adaptively determined link destination • Adaptive link generation – the system may discover new useful links between pages and add them – the system may use previous navigation or page similarity to add links – generating a list of links is typical in information retrieval and filtering systems • Variant: adaptive link destinations – the link anchor is fixed (or at least always present) but the destination is determined adaptively
    53. Adaptive Navigation Support (cont.) • Adaptive link annotation – all links are visible, but an “annotation” indicates relevance – the link anchor may be changed (e.g. in color) or additional annotation symbols can be used • Adaptive link hiding – pure hiding means the link anchor is shown as normal text (the user cannot see there is a link) – link disabling means the link does not work; it may or may not still be shown as if it were a link – link removal means the link anchor is removed (and as a consequence the link cannot be used) – a combination is possible: hiding+disabling means the link anchor text is just plain text
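    The annotation and hiding variants above can be sketched for a single HTML anchor (the markup and class names are illustrative, not a specific system's output):

```python
# Toy renderer for the link annotation/hiding variants of slide 53.

def render_link(text, href, suitable, method):
    if suitable:
        return f'<a href="{href}" class="good">{text}</a>'
    if method == "annotation":
        return f'<a href="{href}" class="bad">{text}</a>'  # visible, but marked
    if method == "disabling":
        return f'<a class="bad">{text}</a>'                # looks like a link, inert
    if method == "hiding":
        return text                                        # plain text, link invisible
    if method == "removal":
        return ""                                          # anchor removed entirely

print(render_link("Xanadu", "/xanadu", False, "hiding"))   # Xanadu
```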
    54. Adaptive Navigation Support (cont.) • Map adaptation – complete (site)maps are not feasible for a non-trivial hyperspace – a “local” or “global” map can be adapted by annotating or removing nodes or larger parts – a map can also be adapted by moving nodes around – maps can be graphical or textual – adaptation can be based on relevance, but also on group presence
    55. Example of Direct Guidance • Simple: suggest one best page to go to – WebWatcher: curious eyes – sometimes a “next” button – popular in ITS (sequencing)
    56. Example: Link Ordering/Sorting • Sorting links from most to least relevant – first introduced in Hypadapter (a Lisp tutor) – manual reordering by the user (if supported) can be used as feedback to update the user model
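    Relevance-based link sorting is a plain sort over per-link scores; the titles and scores below are invented, standing in for values a user-model query would produce:

```python
# Sort links from most to least relevant (slide 56).

links = [("History of HTTP", 0.4), ("URLs", 0.9), ("Xanadu", 0.7)]
ranked = sorted(links, key=lambda link: link[1], reverse=True)
print([title for title, _ in ranked])   # ['URLs', 'Xanadu', 'History of HTTP']
```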
    57. Example: Link Annotation in ELM-ART
    58. Example: link annotation in Interbook — [annotated screenshot: 1. concept role, 2. current concept state, 3. current section state, 4. linked sections state]
    59. Example: Link Annotation in ISIS-Tutor
    60. Example: Link Annotation and Hiding in ISIS-Tutor
    61. Example: Link Generation in ALICE
    62. Adaptation in GRAPPLE: GALE • The GRAPPLE Adaptive Learning Environment has the following main properties: – three separate components: UM server, DM/AM server, adaptation engine (AE) – linked through an internal event bus – separation between concepts and content – adaptation rules can call arbitrary (Java) code – supports forward and backward reasoning – adaptation to arbitrary XML formats (not just HTML) – works stand-alone or within the GRAPPLE framework
    63. Creating GALE Applications • Creating a conceptual structure: – domain model (concepts, conceptual relationships like “is-a”, “part-of”, etc.) – conceptual adaptation model (pedagogical relationships like “prerequisite”) • Creating content as a “website”: – any XML format is supported – use the “gale” namespace for adaptive elements
    64. Three Types of Authoring • A concept can be associated with one resource (page); each page is authored separately. • A concept can be associated with a template resource (shared between many concepts); the template “includes” content fragments (with URLs from the concept’s attributes). • A concept may rely on a presentation engine to generate a layout and “include” content fragments (from the concept’s attributes).
    65. Creating a GALE Page • It’s “mostly” like XHTML but needs namespaces:
       <html xmlns="…" xmlns:xsi="…" xmlns:gale="…" xsi:schemaLocation="…">
       • HTML tags are used without a namespace prefix, GALE tags with one: – adaptive link anchor: <gale:a href=“newconcept”>anchor text</gale:a> – conditionally included object: <gale:object name=“conceptname” /> – conditionally included in-line fragment: <gale:if expr=“${someconcept#someattribute}&gt;0”> …
    66. GALE Expressions • References to concepts/attributes using URIs: – ${#attribute} refers to an attribute of the current concept – ${concept#attribute} refers to an attribute of the named concept of the current course – ${gale://server.where:port/gale/course/concept#attribute} refers to an attribute of a concept of some course somewhere on another server • Java expressions, escaping reserved characters (<, >): – ${concept#knowledge} &gt; 50 (is the knowledge of the concept greater than 50?) – gale.concept().getApplication() (gives the name of the course of the current concept)
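    The ${concept#attribute} shorthand above can be illustrated with a toy resolver over a dict-based user model. This is purely illustrative: GALE itself translates such references into Java code, and the concept names and values below are invented.

```python
# Toy resolver for the ${concept#attribute} shorthand of slide 66.
import re

um = {"xanadu": {"knowledge": 60}, "http": {"knowledge": 10}}
current_concept = "xanadu"

def resolve(expr):
    # Substitute every ${concept#attribute} with its user-model value;
    # an empty concept name means "the current concept".
    def lookup(m):
        concept = m.group(1) or current_concept
        return str(um[concept][m.group(2)])
    return re.sub(r"\$\{([\w.]*)#(\w+)\}", lookup, expr)

print(resolve("${#knowledge} > 50"))       # 60 > 50
print(resolve("${http#knowledge} > 50"))   # 10 > 50
```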
    67. Milky Way • Example with an “interesting” domain model • Similar concepts can be presented in a similar way (hence template-based authoring)
    68. Example Pages • Page template shows: – title (Sun, Earth, Moon) – reference to parent – image (with caption) – information paragraph – list of child concepts
    69. Acknowledgements • GALE is based on earlier work on AHA! that was partly developed with a grant from the NLnet Foundation • Part of this work was performed as part of the EU FP7 STREP project GRAPPLE (215434)
    70. Prerequisites for Workshop • In order to do a hands-on workshop we need: – JDK 1.5 or 1.6 – Maven 2 – Tomcat 6 – MySQL 5.1 • You also need: – permission to run services, and to create tables in MySQL – a working network connection, at least during setup