Example research article (APA standards template)
Call 452/08: Funding of research projects, year 2008. Co-funded by Colciencias and UNAD under contract number 503 of 2008.

Supporting Cognitive Competence Development in Virtual Classrooms: Personal Learning Management and Evaluation Using Pedagogical Agents

Stefan Weinbrenner, Ulrich Hoppe
Collide Research Group, University of Duisburg-Essen, Duisburg, Germany
{weinbrenner, hoppe}@collide.info

Linda Leal, Milcon Montenegro, William Vargas, Luis Maldonado
Universidad Nacional Abierta y a Distancia (UNAD), Bogotá, Colombia
{linda.leal, milcon.montenegro, luis.maldonado}@unad.edu.co, william.vargas@colombia.com

Abstract— Self-directed learning with learning management systems can be intelligently supported and enriched by combining the available log data with detailed competence ontologies. The approach described here implements a mechanism that adaptively generates quizzes for self-assessment in a Moodle environment. The underlying agent architecture uses a tuple space as a blackboard. The approach has been worked out for a distance course on genetics.

Keywords: LMS; Moodle; self-directed learning; pedagogical agents; assessment

I. INTRODUCTION

Nowadays, many learning environments, ranging from support infrastructures for face-to-face courses to blended and fully virtual scenarios, use learning (content) management systems (LMS) such as Moodle (http://moodle.org) or Blackboard (http://blackboard.com) [8] to provide learning materials and to receive teacher and student uploads. Typically, the process of using the LMS is structured by the students themselves, relying on their own planning and initiative. The LMS is able to capture all of the students' online activities in log files. Depending on local policies, teachers may have access to these data at a more or less detailed level. However, this information can also be used to automatically generate intelligent feedback for the learner, something that has rarely been realized so far.

The approach presented here enriches a virtual learning platform (Moodle) with intelligent agents to allow for adaptive assessment using adaptively generated quizzes under the initiative of the learners themselves. The questions are derived from a competence ontology, which is also used to index the learning materials. The students' traces through the learning material are used to determine the current state of "expected knowledge" or competences.

A big challenge for this type of work is the choice of an adequate architecture and processing paradigm for the "intelligent add-ons". We have decided to use an agent environment (for trace analysis and feedback generation), which we have separated from the PHP-based Moodle system. The information exchange is based on a blackboard in the form of a tuple space. In this blackboard architecture, agent programming is not compromised by a programming language (PHP) that was never designed for this purpose. Our tuple space implementation (SQLSpaces, [7]) even allows clients in different languages (including, e.g., Prolog) to be connected.

This novel approach has been implemented and used for a distance course in genetics at the Open University of Colombia (UNAD). Since this is a standard course, user data will soon become available.

II. RELATED WORK AND ADDED VALUE

Although LMS (or web-based learning systems, WBLS) can provide explicit access to many data that are relevant for adaptive learner feedback, such data are not yet widely used for this purpose.
One example of this kind of work is being developed in the European GRAPPLE project (http://www.grapple-project.org/) [4]. This project aims at creating a technology-enhanced learning environment that, among other things, focuses on generating adaptive, individualised feedback and on supporting assessment. Technically, GRAPPLE aims at providing components that could be adapted to any LMS. The main difference to GRAPPLE is the architecture: GRAPPLE uses a traditional direct message-passing paradigm rather than a loosely coupled blackboard.

Another similar approach has been pursued by Sodoké et al. [5]. Here, Moodle is extended to adaptively recommend learning resources. This extension also uses the feedback of the Moodle quiz module to calculate the student model and to generate recommendations. However, it does not focus on automatically generating questions from a knowledge base; instead it requires an authoring component to explicitly define the assessment items (based on item-response theory). Its only purpose is to assess students, so it does not interactively support the learning process itself.

The work presented here is indeed most similar to the far bigger GRAPPLE project. So far our work has been pursued in a bilateral cooperation in which only one side (UNAD) receives specific funding. However, we believe that the originality of our approach lies, on the one hand, in the specific architecture, which supports easy information transfer while keeping the agents maximally independent of the user environment. The other specific contribution is the use
of a competence ontology from which assessment questions can be adaptively generated.

III. SCENARIO

A student resumes her work with the Moodle system and logs in. On the start screen she chooses the course on genetics, which she started a week ago. The course's main screen lets her choose between eight chapters, which cover different aspects of genetics such as DNA, chromosomal structures or mutations. The student chooses the chapter about cytology and is directed to a list of activities contained in this chapter. In this list she can also see the pedagogical goals of each activity. She chooses the activity named "Introduction to the Cell", which she did not finish last time. First she wants to check her previous achievements and gets an overview of her quiz results, which shows that she has answered 5 of the 8 questions of this chapter and that only 40% of her answers were completely correct. She then returns to the activity overview and starts "Introduction to the Cell". In this activity a resource is presented to her, which she works through. Afterwards she returns to the activity overview and takes the quiz for this activity to rate her progress and assess herself. The quiz result says that she has answered all questions correctly and has therefore successfully finished this activity. Next she looks at the goals overview, divided into conceptual, contextual and analytical progress. There she notices that she is still missing some contextual competences. She therefore returns to the activities list and chooses the activity "Types of Cells", because it is marked as an activity that enhances the contextual competences.

IV. DISCREPANCY REDUCTION MODEL

The pedagogical idea behind this scenario is the so-called Discrepancy Reduction (DR) Model [6]. In a DR model, the information that comes from a goal and the information that comes from the actual learning state are merged into a mental model, or metamemory judgement, that shows how close the current state is to the ideal state expressed by the goal. As a consequence, the student chooses an action intended to change the current state so as to "reduce the differences" to the goal. This is a dynamic model in which every cycle modifies the values of the model components. Information about the goal affects the clarity of its mental representation. The goal can be evaluated through some kind of test, so information about the actual learning state depends on two sources: information about the goal itself and information about the performance of the subject when solving problems related to the goal. These two sources are integrated in the metamemory judgement, but the student also has to know which actions he or she could take to modify the variables "goal comprehension", "current learning status" or "competence to solve the problem related to the goal". The model assumes that the more detailed the feedback on the learning status, the more precise the students' metamemory judgements can be. Finally, if study strategies are better refined and metamemory judgements are more precise, the overall learning process is enhanced.
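The following minimal Java sketch is not part of the system described in this paper; it merely illustrates one iteration of the discrepancy-reduction cycle under simplifying assumptions (competence levels expressed as fractions and a fixed, assumed goal level). Class, variable names and values are hypothetical.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: one iteration of a discrepancy-reduction cycle.
public class DiscrepancyReductionSketch {

    public static void main(String[] args) {
        double goalLevel = 0.8; // assumed mastery goal per competence (hypothetical value)

        // Hypothetical "metamemory judgement": current self-assessed levels per competence.
        Map<String, Double> currentLevel = new LinkedHashMap<>();
        currentLevel.put("conceptual", 0.9);
        currentLevel.put("contextual", 0.4);
        currentLevel.put("analytical", 0.7);

        // Choose the action (competence to work on) with the largest discrepancy to the goal.
        String target = null;
        double largestGap = 0.0;
        for (Map.Entry<String, Double> e : currentLevel.entrySet()) {
            double gap = goalLevel - e.getValue();
            if (gap > largestGap) {
                largestGap = gap;
                target = e.getKey();
            }
        }

        if (target == null) {
            System.out.println("All competences meet the goal; no further action needed.");
        } else {
            System.out.printf("Largest discrepancy: %s (gap %.2f) -> choose an activity "
                    + "that addresses this competence.%n", target, largestGap);
        }
    }
}

In the scenario above, this corresponds to the student selecting "Types of Cells" after noticing that her contextual competences lag behind.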
V. ARCHITECTURE

The features mentioned in the scenario above impose several requirements on an architecture that is supposed to support them. One possible realization would have been to write a new learning management system in order to achieve a tight integration between the traditional functions of an LMS (student and course management, provision of resources, etc.) and our envisioned new features. However, the unnecessary redundant work on such a system on the one hand, and the wide distribution of LMS like Moodle, which are already quite sophisticated, on the other were clear arguments against this approach. The technical realization should therefore reuse as many aspects of Moodle as possible, while integrating the new features in a proper and adequate way.

In order to develop a system that integrates well into Moodle, the specific interfaces and the general technical context of Moodle have to be considered. This technical context has to be examined particularly with respect to the complexity of the personal learning management, i.e. the degree of integration into Moodle depends on how adequate the technical base of Moodle is for implementing our goals. Finally, the architecture should meet some general requirements: it should be flexible, robust and easy to extend later on, and its interfaces should be powerful, expressive, and yet convenient.

A. Moodle

Moodle is a free and open-source LMS that is attractive to developers as well as to students and teachers because of its many features (in Moodle itself and in plugins) and its extensibility. The decision to use Moodle as the basis of this system was made with its wide distribution in schools and other educational institutions in mind. Since Moodle is implemented in PHP and provides corresponding interfaces for plugin developers, implementing our system as a plugin would be a natural choice. However, in our opinion PHP does not meet the requirements mentioned above: due to its origins as a scripting language for dynamic webpages, it is not very well suited for complex backend solutions (although it might be technically possible). Moreover, an approach using PHP would hardly be as robust and flexible (with later additions in mind) as we would like. Yet any technical extension to the Moodle system has, of course, to be based on PHP, so we needed a platform that supports multiple programming languages and provides the flexibility and robustness we need.

B. Applets

The system incorporates two Java applets, through which the feedback of the agents is given: the Goals applet and the Contents applet. The integration of the Goals applet into a Moodle course is shown in figure 1, which depicts the cytology chapter mentioned in section III.
The Goals applet allows the student to see the structure of the course, to configure his or her activity plan and to launch the Moodle resources and the quiz for the chapter.

Figure 1. Integration of applets in Moodle courses

Figure 2. The plan configuration view

The Contents applet, on the other hand, provides access to the question generator and to an assessment status view in which the student can see his or her results.

C. TupleSpaces

Our solution was to follow the blackboard design principle [2] and to implement the several components of the system as agents. In a blackboard system, all participating units communicate only via the blackboard, which is a central place for storing and reading information. The main advantage of a blackboard system is its inherent flexibility and robustness due to its loose coupling. Since the components have only minimal knowledge of each other (they only know the central hub), a failure of one component will not directly affect the functioning of another. Such a system is naturally also more flexible, because a new agent with new features can easily be integrated, often without modifying other agents.

To realize a blackboard system, we chose to use tuple spaces [3]. In a tuple space system there is a central server and several clients that only send messages to the server. These messages have a tuple structure, i.e. they consist of lists of typed data. As an implementation of the tuple space idea we chose the SQLSpaces (http://sqlspaces.collide.info) [7], since they offer a rich feature set and are multi-lingual and can therefore be used as a language switchboard [1]. In our case, we adapted the PHP code of Moodle slightly to write notifications of certain actions (such as quiz answers) into the tuple space, where they are processed further by the agents.

Another way of connecting these two worlds is the use of Java applets, which provide the more advanced interaction methods, especially for navigating through the chapters. Since an applet is technically a Java application of its own, there is no problem in sending messages to and receiving messages from the tuple space. As yet another way back from the tuple space server to Moodle, the local file system is used. The quiz module of Moodle uses a certain file format (GIFT, see below) to store its quizzes. These files also represent an interface for feeding new information (in this case questions) into the system.
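To illustrate the loose coupling described above, the following self-contained Java sketch implements a deliberately minimal in-memory tuple space with blocking take semantics. It is a simplified stand-in for the SQLSpaces server and client library, whose actual API differs; all class names, method names and tuple contents here are illustrative only.

import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

// Simplified stand-in for a tuple space blackboard (not the SQLSpaces API).
// Tuples are lists of values; null fields in a template act as wildcards.
class MiniTupleSpace {
    private final List<Object[]> tuples = new LinkedList<>();

    public synchronized void write(Object... tuple) {
        tuples.add(tuple);
        notifyAll(); // wake up agents blocked in take()
    }

    // Blocks until a tuple matching the template is available, then removes and returns it.
    public synchronized Object[] take(Object... template) throws InterruptedException {
        while (true) {
            for (Iterator<Object[]> it = tuples.iterator(); it.hasNext(); ) {
                Object[] t = it.next();
                if (matches(template, t)) {
                    it.remove();
                    return t;
                }
            }
            wait();
        }
    }

    private boolean matches(Object[] template, Object[] tuple) {
        if (template.length != tuple.length) return false;
        for (int i = 0; i < template.length; i++) {
            if (template[i] != null && !template[i].equals(tuple[i])) return false;
        }
        return true;
    }
}

public class BlackboardDemo {
    public static void main(String[] args) throws Exception {
        MiniTupleSpace space = new MiniTupleSpace();

        // A consumer "agent" waiting for quiz-answer notifications.
        Thread goalsAgent = new Thread(() -> {
            try {
                Object[] t = space.take("quizAnswer", null, null); // user and score are wildcards
                System.out.println("Goals agent saw: " + Arrays.toString(t));
            } catch (InterruptedException ignored) { }
        });
        goalsAgent.start();

        // The Moodle side would write such a notification; the values here are made up.
        space.write("quizAnswer", "student42", 0.75);
        goalsAgent.join();
    }
}

In the real system this toy class is replaced by a networked SQLSpaces server, so that agents written in Java, PHP or Prolog can share the same blackboard.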
D. Agents

The system includes two principal agents, the Goals Management Agent and the Contents Management Agent. The Goals Management Agent is responsible for guiding the student in the comprehension of the learning objectives by formulating questions that activate metamemory judgements. This agent also guides the student in the selection of learning goals, the planning of activities, and the organization of time and resources to achieve those goals. Finally, it monitors the review of the resources on the planned dates and times, verifying the competence levels reached in the goal test and notifying the student about his or her status with respect to the plan.

The Contents Management Agent, on the other hand, is responsible for helping the student to identify the ontological structure required for the development of competences by generating questions about the resource he or she is studying. It also suggests contents to be studied, based on the inferences the agents draw about the student's comprehension of the topics.

Following the multi-agent system concept, two further agents participate: the Ontology Loader Agent and the GIFT Agent. The Ontology Loader Agent is responsible for loading the ontologies from OWL files into an object graph used by the system. The GIFT Agent is responsible for generating multiple-choice questions based on this object graph. Agent behaviour is implemented using the Java-based JADE platform; each agent has one or more behaviours attached.
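As an illustration of this structure, the skeleton below shows how an agent with one attached cyclic behaviour looks in JADE. Only the JADE base classes (jade.core.Agent, jade.core.behaviours.CyclicBehaviour) correspond to the framework actually used; the class name, the polling interval and the comments describing the agent's task are hypothetical.

import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;

// Hypothetical sketch of a JADE agent with one attached behaviour.
// The real GIFT Agent additionally talks to the SQLSpaces server and writes GIFT files.
public class GiftAgentSketch extends Agent {

    @Override
    protected void setup() {
        System.out.println(getLocalName() + " started.");
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                // In the real system: take newly generated question tuples from the
                // tuple space, serialize them to GIFT syntax and store the file where
                // Moodle's quiz module can pick it up.
                block(1000); // wait before polling again (placeholder)
            }
        });
    }
}

Such an agent runs inside a JADE container and interacts with the rest of the system only through the blackboard, which keeps it independent of Moodle's PHP code.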
VI. OVERVIEW

The overall architecture is shown in figure 3, which depicts the browser with the Moodle HTML pages and the embedded Java applets, the Moodle server with its GIFT files, the SQLSpaces server, and the Content, Goals, Ontology and GIFT agents.

Figure 3. Architectural overview

In this figure the different colours represent the different programming languages: the brown Moodle box stands for PHP, the orange boxes stand for "some other language", and the remaining blue boxes stand for Java applications. In the centre the SQLSpaces server is shown, which acts as an agent communication and storage facility as well as a language switchboard to the PHP world. In the upper right corner are the four agents implemented in Java, which are connected only to the tuple space server. The only exception is the GIFT agent, which communicates indirectly with Moodle via the GIFT files on disk. At the bottom is the Moodle server, which writes quiz results to the tuple space server and delivers the web pages to the requesting browser. The browser on the left renders the pages and in some cases starts applets inside them, which in turn may communicate with the SQLSpaces server.

VII. ONTOLOGY

The system uses different ontologies as sources of knowledge: one ontology describes the course structure, and there is one ontology for each chapter of the course. The ontology that describes the course structure contains information about chapters, competences and resources. The ontology of each chapter describes the knowledge domain of that particular chapter.

The first ontology (the structural ontology) represents general information (course name and authors), the chapters of the course (name and subject of study), the competences for each chapter (type and description), the questions (for activating metamemory judgements for each competence) and the study resources (name, type and description; each resource is associated with a competence). The other ontologies (the knowledge domain ontologies) represent knowledge about conceptual structures (classes, instances and properties), systemic structure (relations between instances, such as component relations) and contextual structure (historical events and their authors).

From ontologies of the latter type, agents can generate questions and answers using templates that depend on the competence:

• Conceptual: The [property] of the [instance] is: [value]
• Analytic: Components of the [instance] are: [instance], [instance], ...
• Contextual: [Author] in [year] proposed: [theory]
• Observational: [image] corresponds to: [instance]

Distractors (i.e. wrong answers) are generated from the correct values of the other instances. The generated questions are dumped into a GIFT file, which is passed to Moodle by the corresponding agent and used in the goal test (a Moodle quiz).

Figure 4. The workflow of question generation (ontology, Ontology agent, SQLSpaces, GIFT agent, Moodle server, browser)

The single steps of the creation of such questions, beginning in the ontology and ending with a Moodle quiz webpage, are illustrated in figure 4: first, the Ontology agent retrieves the relevant concepts from the ontology. After these concepts have been matched to the question patterns described above, the questions are written into the tuple space. This triggers the GIFT agent, which fetches the questions, transforms them into a GIFT file and stores it in the corresponding folder, so that Moodle can import the new questions. Finally, the questions are presented to a student in a quiz.
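To make the GIFT step concrete, the sketch below shows how conceptual-template questions with distractors could be serialized in Moodle's GIFT syntax, in which each question's answers are wrapped in braces, the correct answer is prefixed with "=" and wrong answers with "~". The class name, the hard-coded sample data and the output path are illustrative; the real GIFT agent reads its questions from the tuple space rather than from a map. The worked example that follows shows the OWL assertions from which such questions originate.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative GIFT serialization for conceptual-template questions.
public class GiftWriterSketch {

    public static void main(String[] args) throws IOException {
        // instance -> value of the property "Function" (sample data, not read from the ontology files)
        Map<String, String> function = new LinkedHashMap<>();
        function.put("chloroplast", "Photosynthesis");
        function.put("nuclear membrane", "Protect genetic material");
        function.put("ribosome", "Protein synthesis");

        StringBuilder gift = new StringBuilder();
        for (Map.Entry<String, String> q : function.entrySet()) {
            gift.append("What is the function of the ").append(q.getKey()).append("? {\n");
            gift.append("  =").append(q.getValue()).append("\n"); // correct answer
            // Distractors: correct values of the other instances.
            for (Map.Entry<String, String> other : function.entrySet()) {
                if (!other.getKey().equals(q.getKey())) {
                    gift.append("  ~").append(other.getValue()).append("\n");
                }
            }
            gift.append("}\n\n");
        }

        // The GIFT agent would store the file where Moodle's quiz module can import it;
        // the path used here is a placeholder.
        Files.write(Paths.get("cytology-quiz.gift"),
                gift.toString().getBytes(StandardCharsets.UTF_8));
        System.out.print(gift);
    }
}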
For example, consider assertions in OWL functional syntax like the following:

DataPropertyAssertion(Genetics:Function Genetics:Chloroplast "Photosynthesis")
DataPropertyAssertion(Genetics:Function Genetics:Nuclear_membrane "Protect genetic material")
DataPropertyAssertion(Genetics:Function Genetics:Ribosome "Protein synthesis")

From these three facts, questions like the following can be generated:

What is the function of the chloroplast?
a) Photosynthesis
b) Protein synthesis
c) Protect genetic material

What is the function of the nuclear membrane?
a) Protein synthesis
b) Protect genetic material
c) Photosynthesis

VIII. FEEDBACK

The Goals Management Agent guides the student in planning his or her activities. These activities are configured in the activity plan view, which includes the resources, a description, the planned finish date and the estimated dedication time. Based on the goal test result, the Goals Management Agent provides feedback in several ways. In general, competences are reported back: as each question is associated with a particular competence, the agent can show the results grouped by competence, expressed as a percentage, i.e. the number of correct answers divided by the number of questions for that competence.
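A minimal sketch of this grouping follows, under the assumption that each answer record carries its competence and a correctness flag; the sample answers and the class name are made up, and the 80% threshold used below is the one described in the remainder of this section. In the system the results come from Moodle's quiz module via the tuple space.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative grouping of quiz answers by competence.
public class CompetenceFeedbackSketch {

    // Simple answer record: which competence the question belonged to and whether it was correct.
    static class Answer {
        final String competence;
        final boolean correct;
        Answer(String competence, boolean correct) {
            this.competence = competence;
            this.correct = correct;
        }
    }

    public static void main(String[] args) {
        List<Answer> answers = new ArrayList<>();
        answers.add(new Answer("conceptual", true));
        answers.add(new Answer("conceptual", false));
        answers.add(new Answer("contextual", false));
        answers.add(new Answer("contextual", false));
        answers.add(new Answer("analytical", true));

        Map<String, int[]> counts = new LinkedHashMap<>(); // competence -> {correct, total}
        for (Answer a : answers) {
            int[] c = counts.computeIfAbsent(a.competence, k -> new int[2]);
            if (a.correct) c[0]++;
            c[1]++;
        }

        for (Map.Entry<String, int[]> e : counts.entrySet()) {
            double percent = 100.0 * e.getValue()[0] / e.getValue()[1];
            String advice = percent < 80.0 ? "review the associated resources" : "goal reached";
            System.out.printf("%s: %.0f%% correct -> %s%n", e.getKey(), percent, advice);
        }
    }
}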
The agent suggests studying again those resources associated with competences that have a low percentage (a competence is considered low below 80%). Resources attached to low-percentage competences are also displayed and marked in red, indicating that they have to be reviewed. If a resource has not been visited before the planned date (defined by the student), it is marked in red, indicating that it has to be visited as soon as possible or that the finish date for the resource has to be postponed. Furthermore, the dedication time registered in the Contents applet is shown next to the accumulated study time for each resource, so that the student can compare the estimated against the actually spent time.

Figure 5. The Contents applet showing the user assessment

The Contents Management Agent guides the student during the review of the resources. This guidance is provided by generating questions related to the resource: the student asks for new questions, answers them and later checks how close the answers were to the correct answers proposed by the agent. Based on these results, the Contents Management Agent provides three types of feedback. First, it produces questions related to the topics of the resource being studied. Then the corresponding answers are determined based on the knowledge domain ontology. Finally, the agent calculates the assessment, i.e. it presents an accumulated result for the answers, namely the number of correct answers compared to the number of questions generated. The agent suggests continuing to study the resource when the share of correct answers is below 80%. Such an assessment result is shown to the user by the Contents applet, as in figure 5.

IX. CONCLUSION

In this paper, an extension of the LMS Moodle has been presented in which ontologies are used to structure the learning process, to provide resources and to automatically generate tests for students' self-assessment. On the basis of these assessment data, suggestions on how to proceed are generated. The architectural design of this Moodle extension allows for flexibility, robustness and language heterogeneity, so that each component can be implemented in a language that is appropriate for its particular task.

An extension of this work is in progress that focuses on the inclusion of another agent, the Strategies Management Agent, which will support the student during the review of the resources. This agent will offer different learning strategies:

• Vocabulary: use of a glossary and web searches.
• Conceptual networks: relations between concepts.
• Tetrahedron modification: guiding questions such as what, what for, who, where, when, and components of.

The Strategies Management Agent will monitor the use of these strategies and will give recommendations based on goal tests and student self-assessments.

REFERENCES

[1] L. Bollen, A. Giemza, and H. U. Hoppe, "Flexible Analysis of User Actions in Heterogeneous Distributed Learning Environments," Proceedings of the European Conference on Technology Enhanced Learning 2008 (EC-TEL 2008), Maastricht, NL, 2008.
[2] L. D. Erman, F. Hayes-Roth, V. R. Lesser, and D. R. Reddy, "The Hearsay-II speech understanding system: integrating knowledge to resolve uncertainty," ACM Computing Surveys, 12(2), 1980, pp. 213-253.
[3] D. Gelernter, "Generative Communication in Linda," ACM Transactions on Programming Languages and Systems, 7(1), 1985, pp. 80-112.
[4] M. Hendrix, P. de Bra, M. Pechenizkiy, D. Smits, and A. Cristea, "Defining Adaptation in a Generic Multi Layer Model: CAM: The GRAPPLE Conceptual Adaptation Model," Proceedings of EC-TEL 2008, Maastricht, NL, 2008.
[5] K. Sodoke, M. Riopel, G. Raîche, R. Nkambou, and M. Lesage, "Extending Moodle functionalities to adaptive testing framework," World Conference on E-Learning in Corporate, Government, Healthcare & Higher Education (E-Learn), 2007.
[6] K. Thiede, M. C. Anderson, and D. Therriault, "Accuracy of metacognitive monitoring affects learning of texts," Journal of Educational Psychology, 93(1), 2003, pp. 66-73.
[7] S. Weinbrenner, A. Giemza, and H. U. Hoppe, "Engineering heterogeneous distributed learning environments using TupleSpaces as an architectural platform," Proceedings of the 7th IEEE International Conference on Advanced Learning Technologies (ICALT 2007), Los Alamitos, USA: IEEE Computer Society, 2007.
[8] D. Yaskin and D. Everhart, Blackboard Learning System (Release 6), Blackboard Inc., 2002.