This document discusses ways to make computer-assisted language learning (CALL) environments more intelligent by bridging the gap between closed and open item types. It defines classical closed and open item types and their limitations, and then proposes several new item types: half-closed items, such as select text and dictation, which give the learner more freedom of output while still allowing automated correction; half-open items, such as translation and reformulation, which offer more freedom than closed items while keeping the set of answers limited and predictable; and supported open items, which are fully open-ended but provide half-automated feedback and correction based on model answers, keywords, and black and white lists of acceptable answers. It closes with challenges for future intelligent language learning environments.
Bridging the gap between closed and open items, or how to make CALL more intelligent
Since its very beginning, CALL has often been identified with closed exercises such as multiple choice, fill-in-the-blank or drag-and-drop, allowing for one perfectly predictable and automatically gradable answer. Beatty (2003: 11) still argues that “many programs being produced today feature little more than visually stimulating variations on the same gap-filling exercises used 40 years ago”. Meanwhile, the rise of CMC, serious gaming and social media has radically altered the type of communicative activities and tasks digital learning environments can offer. In most cases, we are now dealing with completely open activities allowing for unpredictable and spontaneous production.
However important the recent possibilities offered by computer-augmented interaction with real-world environments or by communication in immersive virtual worlds may be, one cannot deny that item-based exercise and test platforms, which allow among other things for focus-on-form activities, have lost nothing of their relevance.
One of the main current challenges is to make these item-based language learning environments more effective and attractive. This explains, for instance, the growing interest in adaptivity, which adjusts one or more characteristics of the environment to the learner's needs and preferences and/or the context.
Another challenging approach is to examine to what extent we can further diversify the types of exercises we offer. This presentation first offers a consistent typology of all possible exercise types, based on parameters such as the degree of freedom of the input, the number of correct answers and the type of correction offered.
We then focus on three exercise types we designed, implemented and evaluated in order to move beyond the closed exercises. We first present “select text” as an example of a half-closed exercise type, characterized by a limited degree of freedom of input and a limited number of correct answers, but where the possible answers are not given beforehand. Next, we deal with half-open exercises such as “translate” or “reformulate”, which allow for many answers but can still be automatically graded. We examine to what extent the analysis of learner output using NLP approaches makes it possible to go beyond (more limited) approximate string matching techniques. We finally tackle the supported open exercise type, which combines complete freedom of input with half-automated correction.
FLEAT VI - Harvard University - Piet Desmet & Bert Wylin
1. Bridging the gap between closed and open items, or how to make CALL more intelligent
Piet Desmet & Bert Wylin
Fleat VI Harvard University
August 11-15, 2015
2. 1. Item-based learning & testing environments (ILTE): definition
2. CALL, SLA & LT: different views on a “classical” ILTE
3. Beyond the closed & open items in an ILTE
4. Half-closed items
5. Half-open items
6. Supported open items
7. Challenges for ILTEs
8. Conclusion
3. 1. Item-based learning & testing environments (ILTE): definition
1.1. Definition of an item
“A digital item asks the learner to react to a given input, leading to an output that is treated by the system.”
Typically, items are
• part of a series (or stand alone),
• structured (organized),
• (minimally) provided with metadata,
• reusable,
• multimedia-enriched,
• stored in an item bank.
4. 1.2. “Classical” items: closed or open

                              CLOSED                   OPEN
Learner output
  level of freedom            limited                  totally free
  # correct answers           limited to 1 or a few    many
  predictability of answers   maximal                  very limited
Output treatment
  correction type             automated                manual
  reliability                 high

Examples
  closed: multiple choice, multiple answer, drag & drop, order, fill gaps, etc.
  open: upload of a text file, audio or video recording (without correction)
5. 2. CALL, SLA & LT: different views on “classical” ILTEs
2.1. Within CALL: tutor vs tool
Computer as a tutor (tutorial CALL):
ILTEs still crucial today although need for improvement
“Many programs being produced today feature little more than visually
stimulating variations on the same gap-filling exercises used 40 years ago”
(Beatty 2003: 11)
vs
Computer as a tool (multimedia, CMC, web 2.0, etc.):
ILTEs less important since main focus is on
CMC, social media, immersive virtual worlds, etc.
allowing for communicative activities and tasks
6. Tutorial CAL is not even on the Hype Cycle for Education (Gartner, 2013)
7. 2.2. Within SLA: cognitive vs socio-cultural
Different perspectives on SLA:
cognitive perspective: cognitive processing by the learner
(noticing, motivation, etc.)
socio-cultural perspective: impact of social environment of the learner
(collaboration between learners, scaffolding by interlocutor, etc.)
-> ILTEs are more crucial within a cognitive framework
8. 2.3. Within language teaching:
behavioral vs communicative/task-based
° Different methods:
grammar-translation
direct methods
communicative approach
task-based language teaching (TBLT)
etc.
-> ILTEs are considered to be less crucial in TBLT than before (cf. “drill & kill”)
° Different focus:
focus on form vs focus on meaning
rule-based vs usage-based
knowledge-oriented vs skills-oriented
teacher-centered vs learner-centered
-> ILTEs are mainly associated with the left-hand side of each pair
9. 3. Beyond the closed & open items in an ILTE
3.1. Limitations of “classical” closed items
(a) too limited freedom at the level of the learner output
(b) too limited cognitive complexity
(c) limited number of item types
(d) less suited for advanced learners
-> need for more “intelligent” CALL
10. 3.2. Old wine in new bottles…
Until recently, innovation was mainly technological:
• floppy disk (DOS only)
• cd-rom (Windows)
• website
• platforms: CMS, LMS, learning platform, testing platform, SPOC, MOOC
11. 3.3. “Our” solution:
bridging the gap between closed and open items
= pedagogical innovation
still automated correction with high reliability
BUT:
Learner output: more freedom
more correct answers
less predictability
www.edumatic.com
13. 4. Half-closed items
4.1. Definition

                              CLOSED                   HALF-CLOSED
Learner output
  level of freedom            limited                  more free
  # correct answers           limited to 1 or a few    limited
  predictability of answers   maximal                  maximal
Output treatment
  correction type             automated                automated
  reliability                 high                     high

Examples
  (1) select text
  (2) dictation
14. 4.2. Select text
Learner output: selection of relevant passage in a text
The locus of the points of interest is not given beforehand
-> more freedom at the level of the learner output
Mechanism behind these items:
° mark the keyword(s) in a given text (sentence or paragraph)
& link/group these keywords
° define ranges for selection
(ranges as such don’t influence the score)
° prepare feedback for correct and wrong keywords
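As an illustration, the select-text mechanism can be sketched in a few lines of Python. The function name, the offset-based representation of selections, and the sample item are assumptions made for illustration, not the actual Edumatic implementation; scoring here simply checks which keywords fall inside the learner's selected character ranges.

```python
# Hypothetical sketch of a "select text" correction step: the item author
# defines keywords in the text, and the learner submits one or more
# (start, end) character ranges. Simplified: only the first occurrence of
# each keyword is considered.

def grade_selection(selection, keywords, text):
    """Return (hits, misses): keywords covered vs. not covered by the
    learner's selected (start, end) character ranges."""
    hits, misses = [], []
    for kw in keywords:
        pos = text.find(kw)
        covered = any(start <= pos and pos + len(kw) <= end
                      for start, end in selection)
        (hits if covered else misses).append(kw)
    return hits, misses

text = "The committee has postponed the meeting until further notice."
keywords = ["postponed", "until further notice"]
# The learner selects "has postponed the meeting" (characters 14-40)
hits, misses = grade_selection([(14, 40)], keywords, text)
```

Feedback prepared per keyword (correct vs. wrong) would then be attached to the entries in `hits` and `misses`.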
17. 4.3. Dictation
Learner output: transcription of a (bookmarked) audio file
The learner doesn't know what the possible points of interest are
The learner can decide not to transcribe certain parts (without impact on the correction mechanism)
-> more freedom at the level of the learner output
Mechanism behind these items: approximate string matching
18. Approximate String Matching @ Edumatic
• Normalization of the input (or not):
  • caps
  • punctuation
  • accents
• algorithm based on the best match with the input
  I inform you to XXX the (…) tomorrow (XXX).
• 3 codes: delete, insert, substitute (error)
• Attempts model: attempt – feedback – attempt – (…) – solution
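The three codes above can be illustrated with a word-level edit-distance alignment. This is a generic sketch of approximate string matching, not Edumatic's actual algorithm; the normalization step mirrors the caps/punctuation/accents options listed above, and the example sentence is invented (the slide's own example keeps its XXX placeholders).

```python
# Generic approximate-string-matching sketch: normalize the input, align
# it with the model answer word by word, and label the differences with
# the three codes: delete, insert, substitute.
import unicodedata

def normalize(s):
    # optional normalization: strip accents and punctuation, lowercase
    s = ''.join(c for c in unicodedata.normalize('NFD', s)
                if unicodedata.category(c) != 'Mn')
    return ''.join(c for c in s.lower() if c.isalnum() or c.isspace())

def edit_ops(answer, attempt):
    a, b = answer.split(), attempt.split()
    # dynamic-programming table of word-level edit distances
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1): d[i][0] = i
    for j in range(len(b) + 1): d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i-1] == b[j-1] else 1
            d[i][j] = min(d[i-1][j] + 1,      # delete (answer word missing)
                          d[i][j-1] + 1,      # insert (extra learner word)
                          d[i-1][j-1] + cost) # substitute / exact match
    # trace back through the table to recover the operations
    ops, i, j = [], len(a), len(b)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i-1][j-1] + (a[i-1] != b[j-1]):
            if a[i-1] != b[j-1]:
                ops.append(('substitute', a[i-1], b[j-1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i-1][j] + 1:
            ops.append(('delete', a[i-1], None)); i -= 1
        else:
            ops.append(('insert', None, b[j-1])); j -= 1
    return list(reversed(ops))

ops = edit_ops(normalize("I inform you to cancel the meeting tomorrow"),
               normalize("I inform you to cansel the meeting"))
```

Each resulting operation (here a substitution and a deletion) can then drive the attempt-feedback-attempt loop described above.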
19. Approximate String Matching @ Edumatic
• “Brackets” model:
  [[In the/Every] morning, Mary listens to the radio./Mary listens to the radio [in the/every] morning.]
• not only feedback, but also showing solutions based on the best match with the student's input
  • showing non-matching solutions is an option
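The brackets notation can be made concrete with a small expander that turns one pattern into the full set of accepted solutions. The parser below is a hypothetical reading of the notation (alternatives separated by "/", groups delimited by "[…]", nesting allowed), not Edumatic's implementation.

```python
# Hypothetical expander for the "brackets" model: a bracketed group lists
# alternatives separated by "/", and groups may nest.

def expand(pattern):
    """Return every concrete solution encoded by a bracket pattern."""
    def parse_seq(s, i, stop):
        # parse a sequence of literal characters and groups until a stop char
        results = ['']
        while i < len(s) and s[i] not in stop:
            if s[i] == '[':
                alts, i = parse_group(s, i + 1)
                results = [r + a for r in results for a in alts]
            else:
                results = [r + s[i] for r in results]
                i += 1
        return results, i

    def parse_group(s, i):
        # alternatives separated by '/' until the matching ']'
        alts = []
        while True:
            seq, i = parse_seq(s, i, '/]')
            alts.extend(seq)
            if i >= len(s) or s[i] == ']':
                return alts, i + 1
            i += 1  # skip the '/'

    return parse_seq(pattern, 0, '')[0]

solutions = expand("[[In the/Every] morning, Mary listens to the radio."
                   "/Mary listens to the radio [in the/every] morning.]")
```

For the slide's example this yields four accepted sentences, against which the learner's input can then be matched.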
21. 5. Half-open items
5.1. Definition

                              HALF-CLOSED              HALF-OPEN
Learner output
  level of freedom            more free                more free
  # correct answers           limited to 1 or a few    many
  predictability of answers   maximal                  limited (but feasible, with a progressive build-up)
Output treatment
  correction type             automated                automated
  reliability                 high                     average to high

Examples
  (1) translate
  (2) reformulate
  (3) correct
26. 6. Supported open items
6.1. Definition

                              HALF-OPEN                SUPPORTED OPEN
Learner output
  level of freedom            more free                free
  # correct answers           many                     many
  predictability of answers   limited                  even more limited
Output treatment
  correction type             automated                automated
  reliability                 average to high          average to high
27. 6.2. Mechanism
• open question with free learner input
• with due date
• generation of feedback on the basis of:
  • the model answer
  • keyword matching:
    • white list (+ score), with operators: and / if / if … then
    • black list (0 or – score)
    • negations (and their range)
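A minimal sketch of this feedback-generation step is shown below, with illustrative white and black lists, scores, and a naive fixed-size negation window; none of these names or values come from the actual system.

```python
# Sketch of supported-open scoring: the learner's free text is scored
# against a white list (keywords that earn points) and a black list
# (keywords that cost points), with a crude check for negation words in a
# small window before each white-list keyword. All lists, scores, and the
# window size are illustrative assumptions.

NEGATIONS = {"not", "no", "never"}

def score_answer(answer, white_list, black_list, neg_window=3):
    tokens = answer.lower().replace(",", " ").replace(".", " ").split()
    score, matched = 0, []
    for kw, points in white_list.items():
        if kw in tokens:
            idx = tokens.index(kw)
            window = tokens[max(0, idx - neg_window):idx]
            if NEGATIONS & set(window):
                continue  # a negated keyword earns nothing
            score += points
            matched.append(kw)
    for kw, penalty in black_list.items():
        if kw in tokens:
            score -= penalty
    return score, matched

white = {"photosynthesis": 2, "chlorophyll": 1}
black = {"respiration": 1}
score, matched = score_answer(
    "The plant performs photosynthesis and contains chlorophyll.",
    white, black)
```

In the real workflow the proposed score would only be a correction proposal, to be verified and adjusted by the teacher.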
28. 4 functions of the supported open item type:
1) Creation of the open question, with model answer, black list, white list, elaborated feedback, etc.
2) Publication of the item: fix the due date, select student groups, follow up received answers, etc.
3) Half-automated correction of the answers: a correction proposal on the basis of the available info; manual correction of scores and adaptation of the black & white lists (-> update of the automatic scores)
4) Generation of the feedback report: individualised feedback, fix scores, add personal comments, notify all users by automatically generated mail
30. Item input: create a new item
• Add the original text in “logical units” (paragraphs or sentences)
• Add the instruction
31. Students make translations
• Use quick codes to allow alternative correct solutions
  • e.g. [on passe/on passera/on fera/on effectuera/sera passée/sera prise/l'infirmière glissera]
• Decide about keyphrases
• Add a score per keyphrase
• Add feedback per keyphrase, including error-specific feedback
33. Students make translations
• While correcting student input:
  • add more options
  • constantly update all existing corrections
• See the effect of the updates on new student input:
  • fewer and fewer corrections to make
  • more and more keyphrases recognized (both correct and wrong answers)
39. Use of supported open exercises in three steps
• Step 1 (try out): used by teaching staff as a marking and feedback aid
  -> human verification and improvement of the black & white lists is necessary
• Step 2 (learning): the result of step 1 can be used as an exercise with fully automatic corrective and elaborated feedback (with human intervention!)
  -> human verification and e-mail feedback
• Step 3 (exam simulation): the result of step 2 can be used as an exercise with full, immediate automatic corrective and elaborated feedback (without human intervention!)
40. Supported open exercises are not limited to languages
• Excellent experiences in
  • the Law faculty
  • the Medical faculty
41. 7. Challenges for ILTEs
7.1. Adaptivity
  -> frontend: e.g. adaptive item sequencing, adaptive feedback
7.2. Gamification
  -> frontend: e.g. badges & rankings, collaboration & competition
7.3. Flexible delivery mode
  -> frontend: e.g. integration in an app or a digital textbook, integration in a skills-oriented learning environment
7.4. Output correction through NLP
  -> from backend to frontend: e.g. parsing half-open input
7.5. Analysis of tracking & logging data
  -> from backend to frontend: e.g. reporting
43. 4D model of adaptive instruction
(Vandewaetere, Desmet & Clarebout, 2011; Vandewaetere & Clarebout, 2012)
• Cognition (e.g. prior knowledge), affect (e.g. motivation), behavior (e.g. need for help)
• What elements in the environment to adapt?
• Adapt during interaction, between interactions, or prior to interaction?
• Who's in control? Does the learner or the instructor decide what/when/how to adapt, or both?
46. Using gameplay mechanics for non-game applications
- Challenges embedded in a compelling story
- Various layers or levels & character upgrades
- Rewards (scores & badges)
- Social interaction & peer motivation through competition
http://www.playwarestudios.com/wp-content/uploads/2013/07/gbl-cartoon.jpg
47. 7.3. Flexible delivery mode
“Classical” delivery mode: items (in activities)
(from: Horton, William, E-Learning by Design, Wiley, 2011)
48. (a) From a technological point of view
ILTE as a
- smartphone app
- daily small interactive e-mail or SMS
- micro-series of items embedded in a digital textbook
- etc.
-> more flexibility
49. (b) From a pedagogical point of view
“Skinning” of item types to be integrated in a skills-oriented environment
e.g. a multimedia learning environment focusing on audio-visual comprehension
e.g. situational judgment tests / inbox exercises
www.franel.eu / Nedbox
50. 7.4. Output correction through NLP or statistical methods

NLP
  - by definition language dependent
  - high R&D effort
  - unequal availability and quality of existing algorithms and tools
  - technologies not easily transferable to new tools/environments
  - slow
  + language-specific intelligent feedback generation by the algorithm (cf. E-Tutor, T. Heift)

ASM
  + by definition language independent
  + lower R&D effort
  + high availability of existing ASM algorithms
  + easily reusable algorithms
  + higher speed
  + better granularity (fineness with which input can be analyzed)
  - highly dependent on the teacher's input (number of correct answers predicted by the teacher)
  - no automatic language-specific feedback generation
51. NLP: lemmatisation -> tagging -> parsing (-> semantic analysis?)
Statistical methods: combine the advantages of ASM & NLP!
Statistical error detection: training a classifier on a corpus of corrected utterances with feedback (cf. the PhD of Ruben Lagatie)
52. 7.5. Analysis of tracking & logging data
From manually entered data to massive online storage
From self-reported data to behavioral data
From single measurements to longitudinal measurements
From inaccessible to accessible everywhere
From big data to rich data…
53. Not the data, but the views on the data make them interesting…
For the user: detailed reporting (from generic to specific!), advice on next steps
For the teacher: reporting at individual and group level, item analysis
54. For the user: detailed reporting (from generic to specific reports)
59.
                              CLOSED       HALF-CLOSED  HALF-OPEN        SUPPORTED OPEN   OPEN
Learner output
  level of freedom            limited      more free    more free        free             totally free
  # correct answers           limited to   limited      many             many             many
                              1 or a few
  predictability of answers   maximal      maximal      limited          very limited     very limited
Output treatment
  correction type             automated    automated    automated        automated        manual
  reliability                 high         high         average to high  average to high
61. More info
Piet Desmet: Piet.Desmet@kuleuven.be, www.linkedin.com/in/pietdesmet, @PietDesmet
Bert Wylin: Bert.Wylin@kuleuven.be, B.Wylin@televic.com, www.linkedin.com/in/bertwylin
ITEC: www.kuleuven.be/itec