Visual representation and organization of knowledge have been used in different ways in tutoring systems to improve their effectiveness. This paper focuses on the use of graphical formalisms such as conceptual graphs, ontologies, and concept maps in tutoring systems, and examines how each formalism is used and what possibilities it offers for supporting students in educational systems.
Ontological Model of Educational Programs in Computer Science (Bachelor and M... — ijsrd.com
This work presents an ontological model of educational programs in computer science for the bachelor and master degrees in Computer Science, and for the master educational program “Computer science as second competence” of the Tempus project PROMIS.
AUTOMATED SHORT ANSWER GRADER USING FRIENDSHIP GRAPHS — csandit
The paper proposes a method to assess short answers written by students using a friendship matrix, the matrix representation of a friendship graph. A short answer is a type of answer based on facts; such answers are quite different from long answers and Multiple Choice Question (MCQ) answers. A friendship graph is a graph satisfying the friendship condition, i.e., every pair of nodes has exactly one common neighbor. The friendship matrix is the matrix form of the friendship graph. The student's answer is stored in one friendship matrix and the teacher's answer in another, and the two matrices are compared. Based on the number of errors found in the student's answer, an error mark is calculated and subtracted from the full marks to obtain the student's grade.
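The comparison step described above can be sketched as follows. This is a minimal illustration: the paper's exact scoring formula is not reproduced here, so deducting one mark per differing edge (`per_error`) is an assumed rule, and the two small matrices are invented examples.

```python
import numpy as np

def grade_short_answer(teacher_m, student_m, full_marks=10, per_error=1):
    """Compare two friendship (adjacency) matrices and grade by error count."""
    teacher_m = np.asarray(teacher_m)
    student_m = np.asarray(student_m)
    # Count cells where the student's graph disagrees with the teacher's.
    # The matrices are symmetric, so each differing edge appears in two
    # cells; halve the count to get the number of differing edges.
    errors = int(np.sum(teacher_m != student_m) // 2)
    return max(full_marks - per_error * errors, 0)

# Teacher encodes the key facts as edges; the student misses one edge.
teacher = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]])
student = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 0]])
print(grade_short_answer(teacher, student))  # one missing edge -> 9
```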
On the benefit of logic-based machine learning to learn pairwise comparisons — journal BEEI
This document describes a study that used a logic-based machine learning approach called APARELL to learn user preferences from pairwise comparisons in a recommender system. APARELL learns description logic rules from examples of users' pairwise item preferences and background knowledge about item properties. It was implemented in a used car recommender system. A user study evaluated the pairwise preference elicitation method compared to a standard list interface. Results showed the pairwise interface was significantly better in recommending items that matched users' preferences.
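As a loose illustration of learning from pairwise comparisons (APARELL itself induces description-logic rules over an ontology, which this sketch does not attempt), one can collect the attribute values that consistently appear on the preferred side of each pair. The car attributes and pairs below are invented.

```python
# Each pair is (preferred car, rejected car), described by toy attributes.
pairs = [
    ({"fuel": "diesel", "gearbox": "manual"},
     {"fuel": "petrol", "gearbox": "manual"}),
    ({"fuel": "diesel", "gearbox": "auto"},
     {"fuel": "petrol", "gearbox": "auto"}),
]

# Keep only attribute values present in every preferred item but absent
# from the rejected item of the same pair.
candidates = set(pairs[0][0].items())
for better, worse in pairs:
    candidates &= set(better.items()) - set(worse.items())

print(candidates)  # → {('fuel', 'diesel')}
```

The surviving value plays the role of a learned preference rule ("diesel is preferred over petrol"); real preference learners generalize far beyond this exact-match intersection.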
Collnet turkey feroz-core_scientific domain — Han Woo PARK
This document discusses mapping and visualizing the core of scientific domains using information systems research as an example. It introduces the concept of a "network of the core" (NC) to represent the theoretical constructs, models, and concepts within a research domain. An NC can be constructed to reveal characteristics like density, centrality, and bridges within a domain. Both causal and non-causal NCs are possible. Causal NCs show theoretical relationships between constructs, while non-causal NCs provide an overall picture. The document demonstrates an NC for an information systems outsourcing model and discusses additional issues like optional/mandatory nodes, directional vs. non-directional NCs, and their potential uses.
This document discusses mapping and visualizing the core of scientific domains using social network analysis techniques. It introduces the concept of a "Network of the Core" (NC) to represent relationships between theoretical constructs, models, and concepts. NCs can be directional, showing causal relationships, or directionless, showing general connections. NCs can reveal hidden characteristics of a research domain like central constructs. The document demonstrates directional and directionless NCs for information systems research domains. NCs help conceptualize domains, identify missing links, and explore research opportunities. Future work should construct more detailed NCs to analyze research domain structures.
PREDICTING STUDENT ACADEMIC PERFORMANCE IN BLENDED LEARNING USING ARTIFICIAL ... — ijaia
Along with the spread of online education, the importance of actively supporting students involved in online learning processes has grown. The application of artificial intelligence in education allows instructors to analyze data extracted from university servers, identify patterns of student behavior, and develop interventions for struggling students. This study used student data stored in a Moodle server and predicted student success in a course based on four learning activities: communication via email, collaborative content creation with a wiki, content interaction measured by files viewed, and self-evaluation through online quizzes. A model based on a multi-layer perceptron neural network was then trained to predict student performance in a blended learning course environment. The model predicted student performance with a correct classification rate (CCR) of 98.3%.
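A minimal numpy version of such a predictor might look like the following. The four features are synthetic stand-ins for the activity counts (the study uses real Moodle log data), the pass/fail labels follow an invented rule, and the tiny network and training schedule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four activity features per student:
# emails sent, wiki edits, files viewed, quizzes taken (scales assumed).
X = rng.normal(size=(200, 4))
# Toy ground truth: above-average overall activity means passing.
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 tanh units: a minimal multi-layer perceptron.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(2000):                     # full-batch gradient descent
    H = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(H @ W2 + b2)              # predicted pass probability
    g_out = (p - y) / len(X)              # gradient of log-loss wrt logits
    g_hid = (g_out @ W2.T) * (1 - H**2)   # back-propagate through tanh
    W2 -= lr * (H.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_hid); b1 -= lr * g_hid.sum(axis=0)

ccr = float(((p > 0.5) == (y > 0.5)).mean())  # correct classification rate
```

On this easy synthetic task the CCR should be high; the paper's 98.3% figure refers to its own real dataset, not to this sketch.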
Visualizer for concept relations in an automatic meaning extraction system — Patricia Tavares Boralli
This document discusses a visualizer interface that has been developed for an automatic meaning extraction (AME) system. The visualizer allows users to view concepts and their relationships extracted from text documents in a graph format. It maps concepts as nodes and relationships as edges. Users can search for concepts, view related concepts, and trace relationships back to the original text passages. The visualizer was created to help users interact with and understand the outputs of the AME system, which automatically extracts concepts and relations from documents across various domains.
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES — ijcax
Semantic annotation of images is an important research topic in both image understanding and database or web image search. Image annotation is a technique for choosing appropriate labels for images by extracting effective and hidden features from pictures. In the feature extraction step of the proposed method, we present a model that combines effective features of visual topics (global features over an image) and regional contexts (relationships between the regions of an image and the regions of other images) for automatic image annotation. In the annotation step, we create a new ontology (based on the WordNet ontology) for the semantic relationships between tags, improving the classification and narrowing the semantic gap that exists in automatic image annotation. Experimental results on the 5K Corel dataset show that the proposed method reduces the complexity of the classification and increases accuracy compared with other methods.
Neural perceptual model to global local vision for the recognition of the log... — ijaia
This paper defines the Transparent Neural Network (TNN) for simulating global-local vision and applies it to the segmentation of administrative document images. We developed and adapted a recognition method that models the contextual effects reported in studies in experimental psychology. We then evaluated and tested the TNN against the multi-layer perceptron (MLP), which has shown its effectiveness in the field of recognition, in order to show that the TNN is clearer for the user and more powerful at recognition. Indeed, the TNN is the only system that makes it possible to recognize both the document and its structure.
This document analyzes a single student learning episode using two theoretical lenses: the instrumental genesis perspective and the onto-semiotic approach. The instrumental genesis perspective focuses on how students develop techniques for using tools or artifacts to solve mathematical tasks, and the relationships between thinking and gestures. The onto-semiotic approach views mathematical knowledge and learning as involving systems of practices within social and institutional contexts. Analyzing the same episode from both perspectives provides complementary insights and a richer understanding of the phenomena, while also helping to identify the strengths and limitations of each theoretical approach. Networking the two theories in this way contributes to theoretical development in mathematics education.
A preliminary survey on optimized multiobjective metaheuristic methods for da... — ijcsit
This survey presents the state of the art of research devoted to evolutionary approaches (EAs) for clustering, exemplified with a diversity of evolutionary computation techniques. The survey provides a nomenclature that highlights aspects that are very important in the context of evolutionary data clustering, and examines the clustering trade-offs addressed by the wide range of multi-objective evolutionary approach (MOEA) methods. Finally, this study addresses the potential challenges of MOEA design and data clustering, along with conclusions and recommendations for novices and researchers, positioning the most promising paths of future research.
Assessment of Programming Language Reliability Utilizing Soft-Computing — ijcsa
The document discusses assessing programming language reliability using soft computing techniques like fuzzy logic and genetic algorithms. It proposes using these methods to model programming language reliability based on linguistic variables like "Reliable", "Moderately Reliable", and "Not Reliable". The key factors examined for determining a programming language's reliability include syntax consistency, semantic consistency, error handling, modularity, and documentation. A soft computing system is simulated to evaluate programming languages based on these reliability criteria.
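A toy fuzzification step consistent with the three linguistic labels might look like the following. The 0-100 reliability scale and the triangle breakpoints are assumptions for illustration, not values from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises on [a, b], falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(score):
    """Map a crisp 0-100 reliability score to the three linguistic labels."""
    return {
        "Not Reliable":        tri(score, -1, 0, 50),
        "Moderately Reliable": tri(score, 25, 50, 75),
        "Reliable":            tri(score, 50, 100, 101),
    }

# A language scoring 60 is mostly "Moderately Reliable", partly "Reliable".
print(fuzzify(60))
```

A full soft-computing evaluator would fuzzify each factor (syntax consistency, error handling, and so on), combine them with fuzzy rules, and defuzzify; this shows only the first step.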
Importance of the neutral category in fuzzy clustering of sentiments — ijfls
This document summarizes a research paper on the importance of including a neutral category in sentiment analysis using fuzzy clustering. It discusses related work on sentiment analysis and clustering algorithms. It then presents a model that uses fuzzy c-means clustering to analyze tweets about the International Criminal Court in Kenya. The model represents sentiments in a 3D plot to capture varying levels of positivity, negativity, and neutrality, rather than just positive or negative. The goal is to gain more information from sentiments than a simple two-category classification.
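The clustering step can be illustrated with a bare-bones fuzzy c-means implementation. The one-dimensional "sentiment scores" below are invented stand-ins for the paper's tweet features; the point is that each item gets graded memberships in all three clusters rather than one hard label.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns memberships U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every point to every center (small epsilon avoids /0).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        # Standard FCM membership update: u_ij proportional to d^(-2/(m-1)).
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Toy 1-D sentiment scores: negative, neutral, and positive groups.
X = np.array([[-0.9], [-0.8], [-0.05], [0.0], [0.1], [0.85], [0.9]])
U, centers = fuzzy_c_means(X, c=3)
```

Plotting the three membership columns would give the kind of soft positive/negative/neutral picture the paper argues for.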
SENTIMENT ANALYSIS FOR MOVIES REVIEWS DATASET USING DEEP LEARNING MODELS — IJDKP
Due to the enormous amount of data and opinions produced, shared, and transferred every day across the internet and other media, sentiment analysis has become vital for developing opinion mining systems. This paper introduces a sentiment classification approach using deep learning networks and presents comparative results for different architectures. A multilayer perceptron (MLP) was developed as a baseline for the other networks' results. A long short-term memory (LSTM) recurrent neural network, a convolutional neural network (CNN), and a hybrid LSTM-CNN model were developed and applied to the IMDB dataset, which consists of 50K movie review files split evenly between positive and negative reviews. The data was initially pre-processed using Word2Vec and word embedding was applied accordingly. The results show that the hybrid CNN_LSTM model outperformed the MLP and the singular CNN and LSTM networks: CNN_LSTM reported an accuracy of 89.2%, CNN 87.7%, and MLP and LSTM 86.74% and 86.64% respectively. Moreover, the results show that the proposed deep learning models also outperformed the SVM, Naïve Bayes, and RNTN results published in other works using English datasets.
Comparison of relational and attribute-IEEE-1999-published ... — butest
1. The document compares relational data mining methods to attribute-based methods for use in intelligent systems and data mining. Relational methods use first-order logic to represent background knowledge and relationships between objects, while attribute-based methods like neural networks are limited to attribute-value representations.
2. Relational methods have advantages over attribute-based methods for applications that require expressing complex logical relationships and background knowledge. They can also better handle sparse data. However, existing inductive logic programming systems for relational data mining are relatively inefficient for numerical data.
3. The paper proposes a hybrid relational data mining technique called MMDR that combines inductive logic programming with probabilistic inference. This allows it to efficiently handle
Machine learning is a branch of artificial intelligence concerned with using algorithms to learn from data and improve automatically through experience without being explicitly programmed. The algorithms build a mathematical model of sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are often categorized as supervised or unsupervised. Supervised learning involves predicting the value of a target variable based on input variables whereas unsupervised learning identifies hidden patterns or grouping in the data.
SEMI-SUPERVISED BOOTSTRAPPING APPROACH FOR NAMED ENTITY RECOGNITION — kevig
The aim of Named Entity Recognition (NER) is to identify references to named entities in unstructured documents and to classify them into pre-defined semantic categories. NER often benefits from added background knowledge in the form of gazetteers. However, using such a collection does not deal with name variants and cannot resolve the ambiguities involved in identifying entities in context and associating them with predefined categories. We present a semi-supervised NER approach that starts by identifying named entities with a small set of training data. Using the identified named entities, the word and context features are used to define a pattern, and the pattern of each named entity category is used as a seed pattern to identify the named entities in the test set. Pattern scoring and tuple value scores enable the generation of new patterns to identify the named entity categories. We evaluated the proposed system for English with tagged (IEER) and untagged (CoNLL 2003) named entity corpora, and for Tamil with documents from the FIRE corpus, and it yields an average f-measure of 75% for both languages.
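One bootstrapping round of the kind described, reduced to its simplest form, could be sketched as follows. The corpus, the seed entities, and the pattern shape (the bigram that follows an entity) are all invented for illustration; the real system scores richer word and context features.

```python
from collections import Counter

# Toy corpus and seed entities for one bootstrapping round.
tokens = ("Paris is lovely . Berlin is lovely . Tokyo is crowded . "
          "Madrid is lovely . cheese was eaten .").split()
seeds = {"Paris", "Berlin", "Tokyo"}

# 1. Score patterns: the bigram following a seed is a candidate pattern.
pattern_scores = Counter()
for i, tok in enumerate(tokens[:-2]):
    if tok in seeds:
        pattern_scores[(tokens[i + 1], tokens[i + 2])] += 1

# 2. Keep patterns matched by at least two seeds, then harvest the new
#    tokens those patterns point at as candidate entities.
good = {p for p, s in pattern_scores.items() if s >= 2}
new_entities = {tokens[i] for i in range(len(tokens) - 2)
                if (tokens[i + 1], tokens[i + 2]) in good
                and tokens[i] not in seeds}
print(new_entities)  # → {'Madrid'}
```

Repeating the two steps with the enlarged entity set is what makes the approach "bootstrapping"; pattern and tuple scoring keep noisy candidates like "cheese" out.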
This document provides an initial specification for a CSE333 project on learning agents. The project aims to investigate current research on software learning agents and implement a simple demonstration system. A team of 4 students will build a distributed learning agent system that finds a policy for navigating a maze using reinforcement learning. The project will involve research on machine learning, agent computing, distributed computing and implementation using UML and Java. The document outlines objectives, topics, an example problem, planned activities and appendices on references, agent definitions and development.
Collaborative semantic annotation of images: ontology-based models — ipij
The document presents a collaborative semantic annotation approach for images based on ontology. It proposes using ontologies to extract multiple meanings from images as different annotators may interpret images differently. Annotators would propose keywords from an ontology and dictionary to describe image contents. The system would then determine the most representative meaning by measuring similarities between meanings based on keyword frequencies and rankings. This emergent semantic approach aims to capture different interpretations of images for more accurate semantic annotation compared to single-annotator methods.
Knowledge maps for e-learning — Jae Hwa Lee, Aviv Segev
Maps such as concept maps and knowledge maps are often used as learning materials. These maps have nodes and links, nodes as key concepts and links as relationships between key concepts. From a map, the user can recognize the important concepts and the relationships between them. To build concept or knowledge maps, domain experts are needed. Therefore, since these experts are hard to obtain, the cost of map creation is high. In this study, an attempt was made to automatically build a domain knowledge map for e-learning using text mining techniques. From a set of documents about a specific topic, keywords are extracted using the TF/IDF algorithm. A domain knowledge map (K-map) is based on ranking pairs of keywords according to the number of appearances in a sentence and the number of words in a sentence. The experiments analyzed the number of relations required to identify the important ideas in the text. In addition, the experiments compared K-map learning to document learning and found that K-map identifies the more important ideas.
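The K-map construction described above — TF/IDF keyword extraction, then ranking keyword pairs by sentence co-occurrence weighted against sentence length — can be sketched as follows. The three toy documents, the top-3 keyword cutoff, and the 1/length pair weight are assumed parameters, not the paper's exact settings.

```python
import math
from collections import Counter
from itertools import combinations

docs = [
    "neural networks learn representations from data",
    "concept maps organize knowledge as nodes and links",
    "knowledge maps support e-learning with key concepts and links",
]

def tfidf(doc_tokens, all_docs_tokens):
    """Plain TF/IDF score for every word of one tokenized document."""
    tf = Counter(doc_tokens)
    n = len(all_docs_tokens)
    return {w: (tf[w] / len(doc_tokens))
               * math.log(n / sum(w in d for d in all_docs_tokens))
            for w in tf}

docs_tokens = [d.split() for d in docs]
keywords = set()
for toks in docs_tokens:
    scores = tfidf(toks, docs_tokens)
    keywords |= {w for w, _ in sorted(scores.items(),
                                      key=lambda x: -x[1])[:3]}

# Rank keyword pairs: +1 per co-occurring sentence, discounted by length.
pair_rank = Counter()
for toks in docs_tokens:
    present = sorted(keywords & set(toks))
    for a, b in combinations(present, 2):
        pair_rank[(a, b)] += 1 / len(toks)

edges = pair_rank.most_common()   # the top pairs form the K-map's edges
```

Each high-ranking pair becomes an edge between two concept nodes in the resulting K-map.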
The document discusses using mind maps as an educational tool for teaching political translation. A study was conducted with students learning about the US political system. Students were split into two groups, with one group creating mind maps for each lesson and the other using traditional methods for the first lesson then mind maps for later lessons. Test scores showed that students who used mind maps scored higher and their motivation and performance improved over time, showing mind maps can enhance learning. The document argues mind maps help students organize information and build cognitive schemas in a similar way to how the brain processes knowledge.
CONSTRAINT-BASED AND FUZZY LOGIC STUDENT MODELING FOR ARABIC GRAMMAR — ijcsit
Computer-Assisted Language Learning (CALL) systems are computer-based tutoring systems that deal with linguistic skills. Adding intelligence to such systems is mainly based on using Natural Language Processing (NLP) tools to diagnose student errors, especially in language grammar. However, most such systems do not consider modeling student competence in linguistic skills, especially for the Arabic language. In this paper, we deal with the basic grammar concepts of the Arabic language taught in the fourth grade of elementary school in Egypt, through the Arabic Grammar Trainer (AGTrainer), an intelligent CALL system. AGTrainer trains students through questions that address the different concepts and have different difficulty levels. The constraint-based student modeling (CBSM) technique is used as a short-term student model: CBSM defines, at a fine-grained level, the different grammar skills through the defined skill structures. The main contribution of this paper is the hierarchical representation of the system's basic grammar skills as domain knowledge. That representation is used as a mechanism for efficiently checking constraints to model the student's knowledge, diagnose the student's errors, and identify their causes. In addition, the satisfied constraints, the number of trials the student takes to answer each question, and a fuzzy logic decision system are used to determine the student's learning level for each lesson as a long-term model. The evaluation results showed the system's effectiveness for learning, as well as the satisfaction of students and teachers with its features and abilities.
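The constraint-checking mechanism can be illustrated with two toy constraints. The relevance/satisfaction split is the standard CBSM formulation, but the specific Arabic-grammar conditions and answer encoding below are invented, not taken from AGTrainer.

```python
# Each constraint pairs a relevance condition ("does this rule apply to
# the answer?") with a satisfaction condition ("did the student obey it?").
constraints = [
    {   # hypothetical toy constraint for verbal sentences
        "skill": "subject follows verb in a verbal sentence",
        "relevant": lambda ans: ans["sentence_type"] == "verbal",
        "satisfied": lambda ans: ans["subject_position"] == "after_verb",
    },
    {   # hypothetical toy agreement constraint
        "skill": "adjective agrees with noun in definiteness",
        "relevant": lambda ans: ans["has_adjective"],
        "satisfied": lambda ans: ans["adjective_definite"] == ans["noun_definite"],
    },
]

def diagnose(answer):
    """Return the skills whose constraints are relevant but violated."""
    return [c["skill"] for c in constraints
            if c["relevant"](answer) and not c["satisfied"](answer)]

student_answer = {
    "sentence_type": "verbal", "subject_position": "before_verb",
    "has_adjective": True, "adjective_definite": True, "noun_definite": True,
}
print(diagnose(student_answer))  # → ['subject follows verb in a verbal sentence']
```

A violated-but-relevant constraint points directly at the missing skill, which is what makes CBSM useful for both error diagnosis and the short-term student model.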
Semantically Enhanced Personalised Adaptive E-Learning for General and Dysle... — Eswar Publications
E-learning plays an important role in providing required and well-formed knowledge to a learner. The medium of e-learning has advanced in various directions, such as adaptive e-learning systems. Enhancing e-learning semantically can improve the retrieval and adaptability of the learning curriculum. This paper provides a semantically enhanced, module-based e-learning system for a computer science programme from a learner-centric perspective. Learners are categorized based on their proficiency in order to provide a personalized learning environment. Learning disorders on the e-learning platform still require much research; therefore, this paper also provides a personalized assessment theoretical model for alphabet learning with learning objects for children who face dyslexia.
This document proposes a model for automatically clustering Thai students' online homework assignments before teachers grade them. The model uses five parts: 1) Thai word segmentation, 2) stop-word removal, 3) term weighting, 4) document clustering using k-means, and 5) performance evaluation. The model was tested on 1,000 student assignments and achieved high accuracy, purity, and F-measure scores similar to human grading, allowing teachers to grade assignments more efficiently.
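The five-part pipeline can be sketched end to end as follows. English sentences stand in for Thai (so whitespace splitting replaces Thai word segmentation), the stop-word list is invented, raw term frequency stands in for the paper's term weighting, and k=2 is an assumed cluster count.

```python
import numpy as np

# Toy "homework answers": two about sorting, two about databases.
assignments = [
    "sorting algorithm compares elements",
    "the sorting algorithm swaps elements",
    "a database stores records in tables",
    "the database indexes records",
]
stopwords = {"the", "a", "in"}

# Steps 1-3: segment, drop stop-words, build length-normalized vectors.
token_lists = [[w for w in doc.split() if w not in stopwords]
               for doc in assignments]
vocab = sorted({w for toks in token_lists for w in toks})
X = np.array([[toks.count(w) for w in vocab] for toks in token_lists], float)
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Step 4: plain k-means with k=2, farthest-point init for determinism.
centers = np.stack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(axis=1))]])
for _ in range(20):
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(axis=2), axis=1)
    centers = np.stack([X[labels == k].mean(axis=0) for k in range(2)])

# Step 5: evaluation -- answers on the same topic should share a cluster.
print(labels)
```

A teacher could then grade one representative answer per cluster and reuse the mark, which is the efficiency gain the model targets.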
Design strategies for mobile language learning effectiveness using hybrid mcd... — Alexander Decker
This document summarizes a research study that aimed to evaluate the design of two mobile learning applications for language learning using a hybrid multi-criteria decision making approach. The study combined the Decision Making Trial and Evaluation Laboratory Model (DEMATEL) and Analytical Network Process (ANP) to examine how learners value different design aspects and to identify the relationships between design attributes. The results could help designers formulate effective design strategies for mobile language learning apps by structuring important design dimensions and identifying how they relate to one another based on learner perceptions.
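The DEMATEL half of the hybrid approach reduces to a small matrix computation: normalize the direct-influence matrix among design attributes, then compute the total-relation matrix T = X(I - X)^-1 and each attribute's prominence and net relation. The three attributes and their expert ratings below are invented; the ANP step is omitted.

```python
import numpy as np

# Assumed expert ratings of direct influence among three design
# attributes (rows influence columns), on a 0-4 scale.
D = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], float)

# Normalize by the largest row/column sum so the series sum converges.
X = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())

# Total-relation matrix: direct plus all indirect influence paths.
T = X @ np.linalg.inv(np.eye(len(D)) - X)

r = T.sum(axis=1)          # total influence given by each attribute
c = T.sum(axis=0)          # total influence received by each attribute
prominence = r + c         # overall importance of the attribute
relation = r - c           # positive: net cause; negative: net effect
```

Ranking attributes by `prominence` and classifying them by the sign of `relation` is what lets designers structure the important design dimensions and see how they drive one another.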
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKING — dannyijwest
Social networks have become one of the most popular platforms allowing users to communicate and share their interests without being at the same geographical location. The great and rapid growth of social media sites such as Facebook, LinkedIn, and Twitter produces a huge amount of user-generated content. Improving information quality and integrity therefore becomes a great challenge for all social media sites, which must allow users to get the desired content or be linked to the best relation using improved search/link techniques. Introducing semantics to social networks widens the representation of the social network. In this paper, a new model of social networks based on semantic tag ranking is introduced, built on the concept of multi-agent systems. In this model, the representation of social links is extended by the semantic relationships found in the vocabularies known as tags in most social networks. The proposed social media engine is based on enhanced Latent Dirichlet Allocation (E-LDA) as a semantic indexing algorithm, combined with TagRank as the social network ranking algorithm. The E-LDA phase improves the LDA algorithm by using optimal parameters, and a filter is introduced to enhance the final indexing output. In the ranking phase, applying TagRank to the indexing output improved the ranking results. Simulation results of the proposed model show improvements in both indexing and ranking output.
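The paper's TagRank details are not reproduced here, but a PageRank-style power iteration over a tag co-occurrence graph conveys the general idea; the tags, co-occurrence counts, and damping factor below are invented for illustration.

```python
import numpy as np

tags = ["music", "rock", "jazz", "travel"]
# co[i, j]: how often tags i and j label the same post (assumed counts).
co = np.array([[0, 5, 4, 1],
               [5, 0, 2, 0],
               [4, 2, 0, 0],
               [1, 0, 0, 0]], float)

# Column-normalize into a transition matrix, then iterate with damping,
# exactly as in PageRank's power-iteration formulation.
M = co / co.sum(axis=0, keepdims=True)
d, n = 0.85, len(tags)
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * (M @ rank)

order = [tags[i] for i in np.argsort(-rank)]
print(order[0])  # the hub tag "music" ranks first
```

In the paper's pipeline, the tags fed into the ranking phase come from the E-LDA indexing output rather than from raw co-occurrence counts as here.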
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKING (IJwest)
The document presents a new model for intelligent social networks based on semantic tag ranking. It uses a multi-agent system approach with agents performing indexing and ranking. For indexing, it uses an enhanced Latent Dirichlet Allocation (E-LDA) model that optimizes LDA parameters. Tags above a threshold from E-LDA output are ranked using Tag Rank. Simulation results showed improvements in indexing and ranking over conventional methods. The model introduces semantics to social networks to improve search and link recommendation.
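The abstracts above name TagRank but do not define it. As a rough illustration, ranking tags by propagating importance over a tag co-occurrence graph can be sketched as a PageRank-style power iteration; the graph, damping factor, and iteration count below are assumptions for illustration, not the paper's actual algorithm:

```python
# Hypothetical sketch: PageRank-style ranking over an undirected tag co-occurrence graph.
def tag_rank(edges, damping=0.85, iters=50):
    """edges: iterable of (tag_a, tag_b) co-occurrence pairs."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    tags = list(graph)
    n = len(tags)
    rank = {t: 1.0 / n for t in tags}
    for _ in range(iters):
        new = {}
        for t in tags:
            # Each neighbour shares its rank equally among its own neighbours.
            incoming = sum(rank[u] / len(graph[u]) for u in graph[t])
            new[t] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

ranks = tag_rank([("python", "ml"), ("python", "web"), ("ml", "data"),
                  ("python", "data")])
best = max(ranks, key=ranks.get)  # the most-connected tag ranks highest
```

In an undirected graph like this one, the stationary ranking is roughly degree-proportional, so well-connected tags surface first; a real tag-ranking phase would run on the semantic index rather than raw co-occurrence.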
Neural perceptual model to global local vision for the recognition of the log... (ijaia)
This paper defines the Transparent Neural Network (TNN) for simulating global-local vision and applies it to the segmentation of administrative document images. We developed and adapted a recognition method that models the contextual effects reported in studies from experimental psychology. We then evaluated and tested the TNN against the multi-layer perceptron (MLP), which is known to be effective for recognition tasks, to show that the TNN is both clearer to the user and more powerful at recognition. Indeed, the TNN is the only one of the two systems that recognizes both the document and its structure.
This document analyzes a single student learning episode using two theoretical lenses: the instrumental genesis perspective and the onto-semiotic approach. The instrumental genesis perspective focuses on how students develop techniques for using tools or artifacts to solve mathematical tasks, and the relationships between thinking and gestures. The onto-semiotic approach views mathematical knowledge and learning as involving systems of practices within social and institutional contexts. Analyzing the same episode from both perspectives provides complementary insights and a richer understanding of the phenomena, while also helping to identify the strengths and limitations of each theoretical approach. Networking the two theories in this way contributes to theoretical development in mathematics education.
A preliminary survey on optimized multiobjective metaheuristic methods for da... (ijcsit)
This survey presents the state of the art in research on evolutionary approaches (EAs) to clustering, illustrated with a diversity of evolutionary computation techniques. It provides a nomenclature that highlights aspects that are particularly important in the context of evolutionary data clustering, and examines the clustering trade-offs addressed by the wide range of Multi-Objective Evolutionary Approach (MOEA) methods. Finally, the study addresses the open challenges of MOEA design and data clustering, closing with conclusions and recommendations for novice and experienced researchers that position the most promising paths for future research.
Assessment of Programming Language Reliability Utilizing Soft-Computing (ijcsa)
The document discusses assessing programming language reliability using soft computing techniques like fuzzy logic and genetic algorithms. It proposes using these methods to model programming language reliability based on linguistic variables like "Reliable", "Moderately Reliable", and "Not Reliable". The key factors examined for determining a programming language's reliability include syntax consistency, semantic consistency, error handling, modularity, and documentation. A soft computing system is simulated to evaluate programming languages based on these reliability criteria.
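The linguistic labels above suggest a fuzzification step that maps a numeric reliability score onto fuzzy categories. A minimal sketch, assuming triangular membership functions over an illustrative 0-10 score; the breakpoints are invented, not the paper's calibration:

```python
# Illustrative sketch: triangular fuzzy membership over an assumed 0-10 reliability score.
def triangular(x, a, b, c):
    """Membership of x in a triangle rising from a to peak b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify(score):
    # Labels follow the abstract; breakpoints are illustrative assumptions.
    return {
        "Not Reliable":        triangular(score, -1, 0, 5),
        "Moderately Reliable": triangular(score, 2.5, 5, 7.5),
        "Reliable":            triangular(score, 5, 10, 11),
    }

memberships = fuzzify(6.0)
label = max(memberships, key=memberships.get)
```

A score of 6.0 is partly "Moderately Reliable" and partly "Reliable" at the same time, which is exactly the graded judgment a crisp threshold cannot express; a full system would combine such memberships for each factor (syntax consistency, error handling, and so on) through fuzzy rules.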
Importance of the neutral category in fuzzy clustering of sentiments (ijfls)
This document summarizes a research paper on the importance of including a neutral category in sentiment analysis using fuzzy clustering. It discusses related work on sentiment analysis and clustering algorithms. It then presents a model that uses fuzzy c-means clustering to analyze tweets about the International Criminal Court in Kenya. The model represents sentiments in a 3D plot to capture varying levels of positivity, negativity, and neutrality, rather than just positive or negative. The goal is to gain more information from sentiments than a simple two-category classification.
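Fuzzy c-means, the algorithm named above, can be sketched in a few lines. The fuzzifier m = 2, one-dimensional sentiment scores, and iteration count are simplifying assumptions; a real system would cluster feature vectors derived from tweets:

```python
import random

# Minimal fuzzy c-means sketch (fuzzifier m = 2 assumed) on 1-D sentiment scores.
def fuzzy_cmeans(points, c, iters=30, m=2.0, seed=0):
    rng = random.Random(seed)
    # Random membership matrix u[i][j]: degree to which point j belongs to cluster i.
    u = [[rng.random() for _ in points] for _ in range(c)]
    for j in range(len(points)):
        s = sum(u[i][j] for i in range(c))
        for i in range(c):
            u[i][j] /= s
    for _ in range(iters):
        # Centers: membership-weighted means of the points.
        centers = []
        for i in range(c):
            w = [u[i][j] ** m for j in range(len(points))]
            centers.append(sum(wj * p for wj, p in zip(w, points)) / sum(w))
        # Memberships: inverse relative distances to each center.
        for j, p in enumerate(points):
            d = [abs(p - ci) or 1e-12 for ci in centers]
            for i in range(c):
                u[i][j] = 1.0 / sum((d[i] / d[k]) ** (2 / (m - 1)) for k in range(c))
    return centers, u

# Three fuzzy clusters for negative, neutral, and positive sentiment scores.
centers, u = fuzzy_cmeans([-1.0, -0.9, 0.0, 0.1, 0.9, 1.0], c=3)
```

Unlike hard k-means, each tweet keeps a membership in every cluster, so a mildly positive tweet can be simultaneously 0.7 positive and 0.3 neutral, which is the extra information the paper argues a two-category classifier throws away.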
SENTIMENT ANALYSIS FOR MOVIES REVIEWS DATASET USING DEEP LEARNING MODELS (IJDKP)
Due to the enormous amount of data and opinions produced, shared, and transferred every day across the internet and other media, sentiment analysis has become vital for developing opinion mining systems. This paper introduces a classification approach to sentiment analysis using deep learning networks and presents comparative results for different architectures. A Multilayer Perceptron (MLP) was developed as a baseline for the other networks' results. A Long Short-Term Memory (LSTM) recurrent neural network, a Convolutional Neural Network (CNN), and a hybrid LSTM-CNN model were developed and applied to the IMDB dataset of 50K movie review files, split into 50% positive and 50% negative reviews. The data was initially pre-processed using Word2Vec and word embedding was applied accordingly. The results show that the hybrid CNN_LSTM model outperformed the MLP and the individual CNN and LSTM networks: CNN_LSTM reached an accuracy of 89.2%, CNN 87.7%, and MLP and LSTM 86.74% and 86.64%, respectively. Moreover, the proposed deep learning models also outperformed SVM, Naïve Bayes, and RNTN results published in other works on English datasets.
Comparison of relational and attribute-IEEE-1999-published ... (butest)
1. The document compares relational data mining methods to attribute-based methods for use in intelligent systems and data mining. Relational methods use first-order logic to represent background knowledge and relationships between objects, while attribute-based methods like neural networks are limited to attribute-value representations.
2. Relational methods have advantages over attribute-based methods for applications that require expressing complex logical relationships and background knowledge. They can also better handle sparse data. However, existing inductive logic programming systems for relational data mining are relatively inefficient for numerical data.
3. The paper proposes a hybrid relational data mining technique called MMDR that combines inductive logic programming with probabilistic inference. This allows it to efficiently handle
Machine learning is a branch of artificial intelligence concerned with algorithms that learn from data and improve automatically through experience. The algorithms build a mathematical model from sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are often categorized as supervised or unsupervised: supervised learning predicts the value of a target variable from input variables, whereas unsupervised learning identifies hidden patterns or groupings in the data.
SEMI-SUPERVISED BOOTSTRAPPING APPROACH FOR NAMED ENTITY RECOGNITION (kevig)
The aim of Named Entity Recognition (NER) is to identify references to named entities in unstructured documents and classify them into pre-defined semantic categories. NER often benefits from added background knowledge in the form of gazetteers. However, using such a collection does not handle name variants and cannot resolve the ambiguities involved in identifying entities in context and associating them with predefined categories. We present a semi-supervised NER approach that starts by identifying named entities with a small set of training data. Using the identified named entities, word and context features are used to define a pattern; the pattern of each named entity category serves as a seed pattern for identifying named entities in the test set. Pattern scoring and tuple value scoring enable the generation of new patterns to identify the named entity categories. We evaluated the proposed system for English with tagged (IEER) and untagged (CoNLL 2003) named-entity corpora, and for Tamil with documents from the FIRE corpus, yielding an average F-measure of 75% for both languages.
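The seed-pattern bootstrapping idea can be illustrated with a toy context-window version. The pattern shape (one token of left/right context) and the data are drastically simplified assumptions, not the paper's word/context features or scoring model:

```python
from collections import Counter

# Toy bootstrapping sketch: learn context patterns around seed entities,
# then apply them to discover new entity candidates.
def learn_patterns(sentences, seeds):
    patterns = Counter()
    for sent in sentences:
        toks = sent.split()
        for i, tok in enumerate(toks):
            if tok in seeds:
                left = toks[i - 1] if i >= 1 else "<s>"
                right = toks[i + 1] if i + 1 < len(toks) else "</s>"
                patterns[(left, right)] += 1
    return patterns

def apply_patterns(sentences, patterns, min_count=1):
    found = set()
    for sent in sentences:
        toks = sent.split()
        for i, tok in enumerate(toks):
            left = toks[i - 1] if i >= 1 else "<s>"
            right = toks[i + 1] if i + 1 < len(toks) else "</s>"
            if patterns.get((left, right), 0) >= min_count:
                found.add(tok)
    return found

train = ["president Obama said hello", "president Putin said goodbye"]
patterns = learn_patterns(train, seeds={"Obama"})
new = apply_patterns(["president Merkel said yes"], patterns)
```

Starting from the single seed "Obama", the learned context ("president", "said") pulls in "Merkel" from unseen text; iterating this loop, with scoring to suppress noisy patterns, is the essence of the semi-supervised bootstrap.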
This document provides an initial specification for a CSE333 project on learning agents. The project aims to investigate current research on software learning agents and implement a simple demonstration system. A team of 4 students will build a distributed learning agent system that finds a policy for navigating a maze using reinforcement learning. The project will involve research on machine learning, agent computing, distributed computing and implementation using UML and Java. The document outlines objectives, topics, an example problem, planned activities and appendices on references, agent definitions and development.
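The reinforcement-learning core of such a maze-navigation project can be sketched with tabular Q-learning. The corridor environment, rewards, and hyperparameters below are invented for illustration (the project itself targets a distributed Java/UML system):

```python
import random

# Tabular Q-learning on an invented 1-D corridor "maze": states 0..4, goal at 4,
# actions 0 = left, 1 = right. Off-policy: behavior is fully random exploration.
def train_q(episodes=300, alpha=0.5, gamma=0.9, seed=1):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(5)]
    for _ in range(episodes):
        s = 0
        for _ in range(200):  # step cap per episode
            a = rng.randrange(2)
            s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
            r = 1.0 if s2 == 4 else 0.0
            # Standard Q-learning update toward r + gamma * max_a' Q(s', a').
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            if s2 == 4:
                break
            s = s2
    return q

q = train_q()
# Greedy policy extracted from the learned Q-table.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]
```

Because Q-learning is off-policy, the agent can learn the optimal "always move right" policy even though its behavior during training is purely random; in a distributed version, each agent could explore independently and share Q-table updates.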
Collaborative semantic annotation of images ontology based models (ipij)
The document presents a collaborative semantic annotation approach for images based on ontology. It proposes using ontologies to extract multiple meanings from images as different annotators may interpret images differently. Annotators would propose keywords from an ontology and dictionary to describe image contents. The system would then determine the most representative meaning by measuring similarities between meanings based on keyword frequencies and rankings. This emergent semantic approach aims to capture different interpretations of images for more accurate semantic annotation compared to single-annotator methods.
Knowledge maps for e-learning. Jae Hwa Lee, Aviv Segev
Maps such as concept maps and knowledge maps are often used as learning materials. These maps have nodes and links, nodes as key concepts and links as relationships between key concepts. From a map, the user can recognize the important concepts and the relationships between them. To build concept or knowledge maps, domain experts are needed. Therefore, since these experts are hard to obtain, the cost of map creation is high. In this study, an attempt was made to automatically build a domain knowledge map for e-learning using text mining techniques. From a set of documents about a specific topic, keywords are extracted using the TF/IDF algorithm. A domain knowledge map (K-map) is based on ranking pairs of keywords according to the number of appearances in a sentence and the number of words in a sentence. The experiments analyzed the number of relations required to identify the important ideas in the text. In addition, the experiments compared K-map learning to document learning and found that K-map identifies the more important ideas.
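The K-map pipeline described above (TF/IDF keyword extraction, then ranking keyword pairs by sentence co-occurrence) can be sketched as follows; the scoring details are simplified assumptions rather than the authors' exact formulas:

```python
import math
from collections import Counter
from itertools import combinations

# Step 1 (simplified): score terms by TF/IDF summed over documents.
def tfidf_keywords(docs, top_k=5):
    scores = Counter()
    n = len(docs)
    for doc in docs:
        toks = doc.lower().split()
        tf = Counter(toks)
        for term, count in tf.items():
            df = sum(1 for d in docs if term in d.lower().split())
            scores[term] += (count / len(toks)) * math.log((1 + n) / (1 + df))
    return {t for t, _ in scores.most_common(top_k)}

# Step 2 (simplified): rank keyword pairs by how often they share a sentence.
def rank_pairs(sentences, keywords):
    pairs = Counter()
    for sent in sentences:
        toks = [t for t in sent.lower().split() if t in keywords]
        for a, b in combinations(sorted(set(toks)), 2):
            pairs[(a, b)] += 1
    return pairs.most_common()

docs = ["merge sort splits the list", "merge sort merges sorted runs",
        "quick sort partitions the list"]
keywords = tfidf_keywords(docs, top_k=4)
edges = rank_pairs(docs, keywords)  # top pairs become K-map links
```

The highest-ranked pairs become the links of the K-map, with the keywords as nodes, which is how the study replaces the hand-built expert map with a mined one.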
The document discusses using mind maps as an educational tool for teaching political translation. A study was conducted with students learning about the US political system. Students were split into two groups, with one group creating mind maps for each lesson and the other using traditional methods for the first lesson then mind maps for later lessons. Test scores showed that students who used mind maps scored higher and their motivation and performance improved over time, showing mind maps can enhance learning. The document argues mind maps help students organize information and build cognitive schemas in a similar way to how the brain processes knowledge.
CONSTRAINT-BASED AND FUZZY LOGIC STUDENT MODELING FOR ARABIC GRAMMAR (ijcsit)
Computer-Assisted Language Learning (CALL) systems are computer-based tutoring systems that address linguistic skills. Intelligence is added to such systems mainly by using Natural Language Processing (NLP) tools to diagnose student errors, especially in language grammar. However, most such systems do not model student competence in linguistic skills, especially for the Arabic language. This paper deals with the basic Arabic grammar concepts taught in the fourth grade of elementary school in Egypt, through the Arabic Grammar Trainer (AGTrainer), an intelligent CALL system. AGTrainer trains students through questions that cover the different concepts at different difficulty levels. A constraint-based student modeling (CBSM) technique serves as the short-term student model; CBSM defines the different grammar skills at a fine-grained level through the defined skill structures. The main contribution of this paper is the hierarchical representation of the system's basic grammar skills as domain knowledge. That representation is used as a mechanism for efficiently checking constraints in order to model student knowledge, diagnose student errors, and identify their causes. In addition, the satisfied constraints, the number of trials a student takes to answer each question, and a fuzzy-logic decision system together determine the student's learning level for each lesson as a long-term model. The evaluation showed the system's effectiveness for learning, as well as the satisfaction of students and teachers with its features and abilities.
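The constraint-checking mechanism at the heart of CBSM can be illustrated with a toy sketch in which each constraint pairs a relevance condition with a satisfaction condition; the example grammar rule and answer encoding are invented, not taken from AGTrainer:

```python
# Toy constraint-based modeling sketch: a constraint relevant to the student's
# answer but not satisfied by it flags an error (and names its cause).
def check_constraints(answer, constraints):
    violations = []
    for name, relevant, satisfied in constraints:
        if relevant(answer) and not satisfied(answer):
            violations.append(name)
    return violations

# Hypothetical grammar-style rule expressed abstractly:
# if the answer marks a noun as subject, it must carry the nominative case tag.
constraints = [
    ("subject-must-be-nominative",
     lambda a: a.get("role") == "subject",
     lambda a: a.get("case") == "nominative"),
]

errors = check_constraints({"role": "subject", "case": "accusative"}, constraints)
```

Because each violated constraint names the rule that was broken, the tutor can diagnose the cause of the error rather than just marking the answer wrong; counting satisfied constraints per question is what feeds the fuzzy long-term model.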
Semantically Enchanced Personalised Adaptive E-Learning for General and Dysle... (Eswar Publications)
E-learning plays an important role in providing required and well-formed knowledge to a learner, and the medium has advanced in various directions such as adaptive e-learning systems. Enhancing e-learning semantically can improve the retrieval and adaptability of the learning curriculum. This paper provides a semantically enhanced, module-based e-learning system for a computer science programme from a learner-centric perspective. Learners are categorized by proficiency to provide a personalized learning environment. Learning disorders on e-learning platforms still require considerable research, so this paper also provides a personalized assessment theoretical model for alphabet learning with learning objects for children who face dyslexia.
This document proposes a model for automatically clustering Thai students' online homework assignments before teachers grade them. The model uses five parts: 1) Thai word segmentation, 2) stop-word removal, 3) term weighting, 4) document clustering using k-means, and 5) performance evaluation. The model was tested on 1,000 student assignments and achieved high accuracy, purity, and F-measure scores similar to human grading, allowing teachers to grade assignments more efficiently.
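Steps 3 and 4 of the pipeline above (term weighting and k-means clustering) can be sketched as below, assuming word segmentation and stop-word removal have already produced whitespace-separated tokens; raw term counts and a deterministic initialization stand in for the paper's actual weighting and setup:

```python
# Sketch of term weighting + k-means over homework texts (simplified assumptions).
def term_vectors(docs, vocab):
    return [[doc.split().count(t) for t in vocab] for doc in docs]

def kmeans(vecs, k, iters=20):
    centers = [list(v) for v in vecs[:k]]  # deterministic init: first k vectors
    assign = [0] * len(vecs)
    for _ in range(iters):
        # Assign each vector to its nearest center (squared Euclidean distance).
        for j, v in enumerate(vecs):
            assign[j] = min(range(k),
                            key=lambda i: sum((a - b) ** 2
                                              for a, b in zip(v, centers[i])))
        # Move each center to the mean of its members.
        for i in range(k):
            members = [vecs[j] for j in range(len(vecs)) if assign[j] == i]
            if members:
                centers[i] = [sum(col) / len(members) for col in zip(*members)]
    return assign

docs = ["sort list sort", "sort array", "graph edge node", "node graph"]
vocab = ["sort", "list", "array", "graph", "edge", "node"]
labels = kmeans(term_vectors(docs, vocab), k=2)
```

Assignments that land similar answers in the same cluster let the teacher grade one representative per cluster and propagate the mark, which is where the efficiency gain reported above comes from.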
Design strategies for mobile language learning effectiveness using hybrid mcd... (Alexander Decker)
This document summarizes a research study that aimed to evaluate the design of two mobile learning applications for language learning using a hybrid multi-criteria decision making approach. The study combined the Decision Making Trial and Evaluation Laboratory Model (DEMATEL) and Analytical Network Process (ANP) to examine how learners value different design aspects and to identify the relationships between design attributes. The results could help designers formulate effective design strategies for mobile language learning apps by structuring important design dimensions and identifying how they relate to one another based on learner perceptions.
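The DEMATEL half of the hybrid approach centers on the total-relation matrix T = X * (I - X)^(-1), computed from a normalized direct-influence matrix X. A sketch using the equivalent convergent series X + X^2 + X^3 + ...; the 3x3 influence scores are invented for illustration, not survey data from the study:

```python
# DEMATEL total-relation sketch: normalize the direct-influence matrix, then
# approximate T = X (I - X)^(-1) by the convergent power series sum of X^k.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def dematel_total(direct, terms=50):
    n = len(direct)
    s = max(sum(row) for row in direct)  # normalize by the largest row sum
    x = [[v / s for v in row] for row in direct]
    total = [row[:] for row in x]
    power = [row[:] for row in x]
    for _ in range(terms - 1):
        power = matmul(power, x)
        total = [[total[i][j] + power[i][j] for j in range(n)] for i in range(n)]
    return total

direct = [[0, 3, 2], [1, 0, 3], [2, 1, 0]]  # invented direct-influence scores
t = dematel_total(direct)
# Row sums (D) = influence given; column sums (R) = influence received.
d = [sum(row) for row in t]
r = [sum(t[i][j] for i in range(3)) for j in range(3)]
```

D + R ranks each design attribute's overall prominence and D - R tells whether it is a net cause or a net effect, which is exactly the cause/effect structuring of design dimensions the study uses before feeding the relations into ANP.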
Development and Evaluation of Concept Maps as Viable Educational Technology t... (paperpublications3)
Abstract: This study developed and evaluated concept maps as a viable educational technology to facilitate learning and assessment. The development process concluded upon establishing validity and reliability. The maps were classified into two kinds: concept maps to facilitate learning, and fill-in-the-maps to facilitate assessment. A one-group pre-test-posttest pre-experimental design was employed. Fill-in-the-maps were used for unit pre-tests and posttests, while complete concept maps were used to facilitate learning. For the midterm examination, students were given a composition as the basis for constructing a concept map; for the final examination, students were provided concept maps from which to write their own composition. Rubrics were used to assess students' outputs. A z-test for correlated means showed significant increases in Mean Percentage Score (MPS) from pre-test to posttest. The overall posttest result was correlated with those of the objective, fill-in-the-map, map-construction, and composition-writing tests, and significant correlations were observed. The results accentuate that concept maps can be developed and evaluated to facilitate learning and assessment.
Honkela.t leinonen.t lonka.k_raike.a_2000: self-organizing maps and construct... (ArchiLab 7)
This document discusses using self-organizing maps (SOMs) to model constructive learning. It presents two key ideas:
1) SOMs provide a more realistic model of human learning than traditional computer memory models, as they are dynamic, associative, and adapt existing knowledge rather than just storing facts.
2) SOMs can be used in computer-supported collaborative learning environments to help visualize complex concepts and support inquiry-based learning processes. Two examples of using SOMs for these purposes are described.
Applying adaptive learning by integrating semantic and machine learning in p... (IJECEIAES)
Adaptive learning is one of the most widely used data-driven approaches to teaching and has received increasing attention over the last decade. It aims to meet students' characteristics by tailoring course materials and assessment methods. To determine those characteristics, we need to detect students' learning styles according to the visual, auditory, or kinaesthetic (VAK) model. In this research, an integrated model that utilizes both semantic and machine learning clustering methods is developed to cluster students, detect their learning styles, and recommend suitable assessment method(s) accordingly. To measure the effectiveness of the proposed model, a set of experiments was conducted on a real dataset (the Open University Learning Analytics Dataset). The experiments showed that the proposed model clusters students according to their different learning activities with an accuracy exceeding 95% and predicts their relative assessment method(s) with an average accuracy of 93%.
This document describes a learner modeling approach built upon an ontology for computing environments for human learning (CEHL). The proposed learner ontology includes concepts for the learner, learner profile, pedagogical activity, and pedagogical assistance. The learner concept includes personal data, type of learner, cognitive level, and degree setting. The learner profile concept includes behavior, knowledge, skills, interactions, and conception. The pedagogical activity concept includes pre-tests to assess learner level and appropriate learning methods. Pedagogical assistance is then provided related to the chosen pedagogical content and activity.
Learner Ontological Model for Intelligent Virtual Collaborative Learning Envi... (ijceronline)
An enacting approach to an intelligent virtual collaborative learning model is explored through the lens of critical ontology. The ontological model enables reuse of the domain knowledge and makes that knowledge explicitly available to an agent acting as an expert system, which uses the operational knowledge in the collaborative learning environment. The agent uses the ontological model to identify the user's preliminary competency level. The environment offers personalized education to each learner in accordance with his or her learning preferences and capabilities; the factors considered in identifying learning capability are demographic profile, age, family profile, basic educational qualification, and a basic competency scale. The agent then applies heuristics to determine the learner's effectiveness by referring to the learner parameters available in the ontological model. To help overcome the remaining limitations, the paper describes experience using an ontological model for collaborative learning that relates and integrates the learner's history by maintaining it in the collaborative learning environment; this history is used by Multi-Objective Grey Situation Decision Making Theory to infer the user's level of understanding and produce conditional content for the user.
Exploring Semantic Question Generation Methodology and a Case Study for Algor... (IJCI JOURNAL)
Assessment of student performance is one of the most important tasks in the educational process, yet formulating questions and creating tests costs instructors a great deal of time and effort that could be better spent on teaching and learning activities. With advances in representing and linking data, ontologies have been used in academic fields to represent the terms of a domain by defining the concepts and categories that classify its subject matter. The emergence of such methods for representing and logically linking data has also enabled methods and tools for generating questions, which can be integrated into existing learning systems to provide effective support for teachers in creating test questions. This paper introduces a semantic methodology for automating question generation in the domain of algorithms. The primary objective of this approach is to support instructors in effectively incorporating automatically generated questions into their instructional practice, thereby enhancing the teaching and learning experience.
AUTOMATED SHORT ANSWER GRADER USING FRIENDSHIP GRAPHS (cscpconf)
The paper proposes a method to assess short answers written by students using a friendship matrix, the matrix representation of a friendship graph. A short answer is a fact-based answer type, quite different from long answers and Multiple Choice Question (MCQ) answers. A friendship graph is a graph satisfying the friendship condition, i.e., every pair of nodes has exactly one common neighbor; the friendship matrix is its matrix form. The student's answer is stored in one friendship matrix and the teacher's answer in another, and the two matrices are compared. Based on the number of errors found in the student's answer, an error mark is calculated and subtracted from the full marks to obtain the student's grade.
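The compare-and-subtract grading scheme described above can be sketched with plain adjacency matrices; note that this simplification ignores the one-common-neighbor friendship condition, and the error-to-marks mapping and example concepts are invented:

```python
# Toy sketch of matrix-based short-answer grading: encode each answer as an
# adjacency matrix over shared concept nodes, count mismatched entries as
# errors, and subtract proportional error marks from the full marks.
def adjacency(concepts, edges):
    idx = {c: i for i, c in enumerate(concepts)}
    m = [[0] * len(concepts) for _ in concepts]
    for a, b in edges:
        m[idx[a]][idx[b]] = m[idx[b]][idx[a]] = 1
    return m

def grade(teacher_edges, student_edges, concepts, full_marks=10):
    t = adjacency(concepts, teacher_edges)
    s = adjacency(concepts, student_edges)
    # Count each differing concept pair once (upper triangle only).
    errors = sum(t[i][j] != s[i][j]
                 for i in range(len(concepts)) for j in range(i + 1, len(concepts)))
    possible = len(concepts) * (len(concepts) - 1) // 2
    return full_marks - full_marks * errors / possible  # subtract error marks

concepts = ["cpu", "ram", "cache", "bus"]
teacher = [("cpu", "cache"), ("cpu", "bus"), ("ram", "bus")]
student = [("cpu", "cache"), ("ram", "bus")]  # missed one relation
score = grade(teacher, student, concepts)
```

Each missing or spurious relation in the student's matrix costs a proportional share of the marks, mirroring the paper's idea of deriving the grade from the number of mismatches between the two matrices.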
This document reviews automated planning and its application to cloud e-learning (CeL). Planning can be used to generate learning paths by selecting appropriate learning resources to bridge a learner's current knowledge state and desired knowledge goals. In CeL, any resource stored in the cloud could potentially be included in a personalized learning path. The document discusses planning approaches and techniques that could be suitable for automatically generating personalized learning paths in CeL using resources from the cloud.
This document discusses applying semantic web technologies to enhance the services of e-learning systems. It proposes developing a semantic learning management system (S-LMS) based on technologies like XML, RDF, OWL and SPARQL to automate and accurately search for information on e-learning systems like Moodle. The S-LMS would add semantic capabilities to allow students to search for learning resources based on semantics and provide personalized, customized content tailored to individual needs. It presents applying ontologies and metadata to Moodle in order to define domains and describe learning content in a way that improves search, interoperability and reusability of educational resources.
Similar to GRAPHICAL REPRESENTATION IN TUTORING SYSTEMS (20)
Home security is of paramount importance in today's technology-reliant world, and using technology to make homes safer and controllable from anywhere matters for occupants' safety. In this paper, we present a low-cost, AI-based home security system. The system has a user-friendly interface that lets users start model training and face detection with simple keyboard commands. Our goal is to introduce an innovative home security system using facial recognition technology. Unlike traditional systems, this system trains on and saves images of friends and family members, scans this folder to recognize familiar faces, and provides real-time monitoring. If an unfamiliar face is detected, it promptly sends an email alert, ensuring a proactive response to potential security threats.
In the era of data-driven warfare, the integration of big data and machine learning (ML) techniques has
become paramount for enhancing defence capabilities. This research report delves into the applications of
big data and ML in the defence sector, exploring their potential to revolutionize intelligence gathering,
strategic decision-making, and operational efficiency. By leveraging vast amounts of data and advanced
algorithms, these technologies offer unprecedented opportunities for threat detection, predictive analysis,
and optimized resource allocation. However, their adoption also raises critical concerns regarding data
privacy, ethical implications, and the potential for misuse. This report aims to provide a comprehensive
understanding of the current state of big data and ML in defence, while examining the challenges and
ethical considerations that must be addressed to ensure responsible and effective implementation.
Cloud Computing, being one of the most recent innovative developments of the IT world, has been
instrumental not just to the success of SMEs but, through their productivity and innovative contribution to
the economy, has even made a remarkable contribution to the economic growth of the United States. To
this end, the study focuses on how cloud computing technology has impacted economic growth through
SMEs in the United States. Relevant literature connected to the variables of interest in this study was
reviewed, and secondary data was generated and utilized in the analysis section of this paper. The findings
of this paper revealed that there have been meaningful contributions that the usage of virtualization has
made in the commercial dealings of small firms in the United States, and this has also been reflected in the
economic growth of the country. This paper further revealed that as important as cloud-based software is,
some SMEs are still skeptical about how it can help improve their business and increase their bottom line
and hence have failed to adopt it. Apart from the SMEs, some notable large firms in different industries,
including information and educational services, have adopted cloud computing technology and hence
contributed to the economic growth of the United States. Lastly, findings from our inferential statistics
revealed that no discernible change has occurred in innovation between small and big businesses in the
adoption of cloud computing. Both categories of businesses adopt cloud computing in the same way, and
their contribution to the American economy has no significant difference in the usage of virtualization.
Energy-constrained Wireless Sensor Networks (WSNs) have garnered significant research interest in
recent years. Multiple-Input Multiple-Output (MIMO), or Cooperative MIMO, represents a specialized
application of MIMO technology within WSNs. This approach operates effectively, especially in
challenging and resource-constrained environments. By facilitating collaboration among sensor nodes,
Cooperative MIMO enhances reliability, coverage, and energy efficiency in WSN deployments.
Consequently, MIMO finds application in diverse WSN scenarios, spanning environmental monitoring,
industrial automation, and healthcare applications.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication. IJCSIT publishes original research papers and review papers, as well as auxiliary material such as: research papers, case studies, technical reports etc.
With growing, Car parking increases with the number of car users. With the increased use of smartphones
and their applications, users prefer mobile phone-based solutions. This paper proposes the Smart Parking
Management System (SPMS) that depends on Arduino parts, Android applications, and based on IoT. This
gave the client the ability to check available parking spaces and reserve a parking spot. IR sensors are
utilized to know if a car park space is allowed. Its area data are transmitted using the WI-FI module to the
server and are recovered by the mobile application which offers many options attractively and with no cost
to users and lets the user check reservation details. With IoT technology, the smart parking system can be
connected wirelessly to easily track available locations.
Welcome to AIRCC's International Journal of Computer Science and Information Technology (IJCSIT), your gateway to the latest advancements in the dynamic fields of Computer Science and Information Systems.
In the realm of computer security, the importance of efficient and reliable user authentication methods has
become increasingly critical. This paper examines the potential of mouse movement dynamics as a
consistent metric for continuous authentication. By analysing user mouse movement patterns in two
contrasting gaming scenarios, "Team Fortress" and "Poly Bridge," we investigate the distinctive
behavioral patterns inherent in high-intensity and low-intensity UI interactions. The study extends beyond
conventional methodologies by employing a range of machine learning models. These models are carefully
selected to assess their effectiveness in capturing and interpreting the subtleties of user behavior as
reflected in their mouse movements. This multifaceted approach allows for a more nuanced and
comprehensive understanding of user interaction patterns. Our findings reveal that mouse movement
dynamics can serve as a reliable indicator for continuous user authentication. The diverse machine
learning models employed in this study demonstrate competent performance in user verification, marking
an improvement over previous methods used in this field. This research contributes to the ongoing efforts to
enhance computer security and highlights the potential of leveraging user behavior, specifically mouse
dynamics, in developing robust authentication systems.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
Image segmentation and classification tasks in computer vision have proven to be highly effective using neural networks, specifically Convolutional Neural Networks (CNNs). These tasks have numerous
practical applications, such as in medical imaging, autonomous driving, and surveillance. CNNs are capable
of learning complex features directly from images and achieving outstanding performance across several
datasets. In this work, we have utilized three different datasets to investigate the efficacy of various preprocessing and classification techniques in accurssedately segmenting and classifying different structures
within the MRI and natural images. We have utilized both sample gradient and Canny Edge Detection
methods for pre-processing, and K-means clustering have been applied to segment the images. Image
augmentation improves the size and diversity of datasets for training the models for image classification
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
This research aims to further understanding in the field of continuous authentication using behavioural
biometrics. We are contributing a novel dataset that encompasses the gesture data of 15 users playing
Minecraft with a Samsung Tablet, each for a duration of 15 minutes. Utilizing this dataset, we employed
machine learning (ML) binary classifiers, being Random Forest (RF), K-Nearest Neighbors (KNN), and
Support Vector Classifier (SVC), to determine the authenticity of specific user actions. Our most robust
model was SVC, which achieved an average accuracy of approximately 90%, demonstrating that touch
dynamics can effectively distinguish users. However, further studies are needed to make it viable option
for authentication systems. You can access our dataset at the following
link:https://github.com/AuthenTech2023/authentech-repo
This paper discusses the capabilities and limitations of GPT-3 (0), a state-of-the-art language model, in the
context of text understanding. We begin by describing the architecture and training process of GPT-3, and
provide an overview of its impressive performance across a wide range of natural language processing
tasks, such as language translation, question-answering, and text completion. Throughout this research
project, a summarizing tool was also created to help us retrieve content from any types of document,
specifically IELTS (0) Reading Test data in this project. We also aimed to improve the accuracy of the
summarizing, as well as question-answering capabilities of GPT-3 (0) via long text
In the realm of computer security, the importance of efficient and reliable user authentication methods has
become increasingly critical. This paper examines the potential of mouse movement dynamics as a
consistent metric for continuous authentication. By analysing user mouse movement patterns in two
contrasting gaming scenarios, "Team Fortress" and "Poly Bridge," we investigate the distinctive
behavioral patterns inherent in high-intensity and low-intensity UI interactions. The study extends beyond
conventional methodologies by employing a range of machine learning models. These models are carefully
selected to assess their effectiveness in capturing and interpreting the subtleties of user behavior as
reflected in their mouse movements. This multifaceted approach allows for a more nuanced and
comprehensive understanding of user interaction patterns. Our findings reveal that mouse movement
dynamics can serve as a reliable indicator for continuous user authentication. The diverse machine
learning models employed in this study demonstrate competent performance in user verification, marking
an improvement over previous methods used in this field. This research contributes to the ongoing efforts to
enhance computer security and highlights the potential of leveraging user behavior, specifically mouse
dynamics, in developing robust authentication systems.
Image segmentation and classification tasks in computer vision have proven to be highly effective using neural networks, specifically Convolutional Neural Networks (CNNs). These tasks have numerous
practical applications, such as in medical imaging, autonomous driving, and surveillance. CNNs are capable
of learning complex features directly from images and achieving outstanding performance across several
datasets. In this work, we have utilized three different datasets to investigate the efficacy of various preprocessing and classification techniques in accurssedately segmenting and classifying different structures
within the MRI and natural images. We have utilized both sample gradient and Canny Edge Detection
methods for pre-processing, and K-means clustering have been applied to segment the images. Image
augmentation improves the size and diversity of datasets for training the models for image classification.
This work highlights transfer learning’s effectiveness in image classification using CNNs and VGG 16 that
provides insights into the selection of pre-trained models and hyper parameters for optimal performance.
We have proposed a comprehensive approach for image segmentation and classification, incorporating preprocessing techniques, the K-means algorithm for segmentation, and employing deep learning models such
as CNN and VGG 16 for classification.
- The document presents 6 different models for defining foot size in Tunisia: 2 statistical models, 2 neural network models using unsupervised learning, and 2 models combining neural networks and fuzzy logic.
- The statistical models (SM and SHM) are based on applying statistical equations to morphological foot data.
- The neural network models (MSK and MHSK) use self-organizing Kohonen maps to cluster foot data and model full and half sizes.
- The fuzzy neural network models (MSFK and MHSFK) incorporate fuzzy logic into the neural network learning process to better account for uncertainty in foot sizes.
The security of Electric Vehicle (EV) charging has gained momentum after the increase in the EV adoption
in the past few years. Mobile applications have been integrated into EV charging systems that mainly use a
cloud-based platform to host their services and data. Like many complex systems, cloud systems are
susceptible to cyberattacks if proper measures are not taken by the organization to secure them. In this
paper, we explore the security of key components in the EV charging infrastructure, including the mobile
application and its cloud service. We conducted an experiment that initiated a Man in the Middle attack
between an EV app and its cloud services. Our results showed that it is possible to launch attacks against
the connected infrastructure by taking advantage of vulnerabilities that may have substantial economic and
operational ramifications on the EV charging ecosystem. We conclude by providing mitigation suggestions
and future research directions.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
This paper describes the outcome of an attempt to implement the same transitive closure (TC) algorithm
for Apache MapReduce running on different Apache Hadoop distributions. Apache MapReduce is a
software framework used with Apache Hadoop, which has become the de facto standard platform for
processing and storing large amounts of data in a distributed computing environment. The research
presented here focuses on the variations observed among the results of an efficient iterative transitive
closure algorithm when run against different distributed environments. The results from these comparisons
were validated against the benchmark results from OYSTER, an open source Entity Resolution system. The
experiment results highlighted the inconsistencies that can occur when using the same codebase with
different implementations of Map Reduce.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
International Journal of Computer Science & Information Technology (IJCSIT) Vol 9, No 3, June 2017
DOI:10.5121/ijcsit.2017.9309

GRAPHICAL REPRESENTATION IN TUTORING SYSTEMS
Nabila Khodeir
Informatics Department, Electronics Research Institute, Cairo, Egypt
ABSTRACT
Visual representation and organization of knowledge have been used in different ways in tutoring
systems to improve their usefulness. This paper focuses on the use of various graphical formalisms,
such as the conceptual graph, the ontology, and the concept map, in tutoring systems. The paper addresses
how each formalism is used and the possibilities it offers for assisting the student in educational
systems.
KEYWORDS
Conceptual graph, ontology, concept map.
1. INTRODUCTION
Graphical representation is based on linking concepts to each other through semantic
relations. The conceptual graph, the ontology, and the concept map are the most common formalisms for
describing such graphical structures. Each formalism has its own characteristics that make it
suited to a particular use in the learning setting.
A conceptual graph is a finite, connected, undirected graph with two types of nodes: the first type is
called concepts and the other conceptual relations. The conceptual graph is composed
of propositions, each defined by two concept nodes and one connecting relation link [1]. The main
application of the conceptual graph in the learning context is to represent the prerequisite relations
between domain concepts, which are used to trace the origin of student errors.
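The proposition structure and its use for tracing error origins can be sketched in a few lines. This is an illustrative sketch, not code from any of the cited systems; the concept names and the `prerequisite_of` relation are hypothetical.

```python
# Sketch: a conceptual graph as propositions of the form
# (concept node, relation node, concept node). Prerequisite links
# let a tutor look up where a student error may originate.
propositions = [
    ("addition", "prerequisite_of", "multiplication"),
    ("multiplication", "prerequisite_of", "exponentiation"),
]

def error_origins(concept, props):
    """Concepts the given concept directly depends on (its prerequisites)."""
    return [a for a, rel, b in props if rel == "prerequisite_of" and b == concept]

print(error_origins("multiplication", propositions))  # ['addition']
```

A tutor diagnosing a mistake on "multiplication" would then probe the returned prerequisite concepts first.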
An ontology is an explicit formal specification of the types and properties of the domain terms and
of the relations among them [2,3]. Developing ontologies in the learning context aims to share a
common understanding of the structure of information among people or software agents [4,2].
This enables the reuse of domain knowledge, which allows building a large ontology by
integrating several existing ontologies.
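The reuse idea can be sketched minimally: two small domain ontologies, each specifying classes and subclass relations, are integrated into a larger one. This is a toy sketch under assumed names (`math_onto`, `algebra_onto`); real ontologies would use a formalism such as OWL rather than Python sets.

```python
# Toy sketch: an ontology as explicit classes plus relations among terms.
# Integrating two existing ontologies (set union of their axioms)
# illustrates how a larger ontology is built through reuse.
math_onto = {
    "classes": {"Number", "Operation"},
    "subclass_of": {("Operation", "Concept")},
}
algebra_onto = {
    "classes": {"Equation", "Operation"},
    "subclass_of": {("Equation", "Concept")},
}

def integrate(a, b):
    """Merge two ontologies into one larger ontology."""
    return {
        "classes": a["classes"] | b["classes"],
        "subclass_of": a["subclass_of"] | b["subclass_of"],
    }

merged = integrate(math_onto, algebra_onto)
print(sorted(merged["classes"]))  # ['Equation', 'Number', 'Operation']
```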
A concept map is a graphical tool for organizing and representing knowledge, first
introduced by Novak and Gowin [5]. It is based on representing the main ideas or concepts as
nodes and linking them by relations. The unit that has a node-link-node connection is
called a proposition, semantic unit, or unit of meaning [5]. A hierarchical representation is
usually used in concept maps, with the most general concepts at the top and more specific concepts
arranged hierarchically below. Representing knowledge using the concept map starts by
defining its context within a specific domain. Then, the key concepts that apply to this domain
are defined and related to each other by links that encode relationships in the domain, constituting
the preliminary concept map [6]. A more advanced step, which requires high levels of cognitive
performance, namely the evaluation and synthesis of knowledge [7], is to define cross-links, which
are links between concepts in different segments of knowledge on the map [6]. The concept map
supports meaningful learning, which aims to relate new knowledge to the relevant concepts
already known to the student [5]. Concept maps have been used in learning contexts as learning
and evaluation tools [8,9].
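The node-link-node propositions, the hierarchy, and the cross-links can be sketched as data. The example domain (energy concepts) and all names are hypothetical, chosen only to illustrate the structure.

```python
# Sketch: a concept map as propositions (node, link, node), with general
# concepts above specific ones, plus a cross-link joining two different
# segments of the map.
propositions = [
    ("energy", "has kinds", "kinetic energy"),
    ("energy", "has kinds", "potential energy"),
    ("potential energy", "depends on", "height"),
]
cross_links = [("kinetic energy", "converts to", "potential energy")]

def children(concept, props):
    """More specific concepts arranged hierarchically below the given one."""
    return [b for a, _, b in props if a == concept]

print(children("energy", propositions))  # ['kinetic energy', 'potential energy']
```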
We separate the use of graphical representation into three classes: i) representation of the domain or
the learning material, ii) a diagnostic tool for student errors, and iii) assessment of the student's
visual expression of a specific part of the domain concepts. These distinctive uses arise from the
different descriptions of the domain concepts and of the semantic relations between these
concepts. The paper investigates the distinctive usages of the graphical representations through
these categories.
The rest of the paper is organized as follows. Section 2 investigates the use of graphical
representation as a domain model. Diagnosing student errors based on historical test scores,
and how to guide the student through the learning process, is addressed in Section 3. Section 4
focuses on the concept mapping technique, which assesses the student's graphical expression of
his/her knowledge. Finally, Section 5 provides the conclusion of this work.
2. GRAPHICAL REPRESENTATION AS A DOMAIN MODEL
The domain model is an integral part of intelligent educational applications. Both the ontology and the
concept map are used to represent the learning material. The concept map is used in a limited
number of intelligent tutoring systems as a domain model that serves as a reference for modeling the
student's knowledge. Other systems automatically extract concept maps from different
resources to be used in automatic question generation or to extract and organize the domain
concepts in a hierarchical order. On the other hand, the ontology formalization is used to represent a
sharable domain on the web, where different features of the learning materials are represented for
use in adaptive hypermedia techniques. The next sections investigate the use of the concept map and
the ontology in domain representation.
2.1. Concept Map as a Domain Model
As mentioned, a limited number of intelligent tutoring systems use the concept map as a domain model to
assess the student's knowledge of the different concepts. For instance, Kordaki and Psomos
[10] present an intelligent concept mapping tool to assess the student's knowledge and to diagnose
and treat misconceptions. An interactive questionnaire is attached to each node of the concept
map to evaluate the student's knowledge. Subsequently, the system automatically provides
appropriate personalized feedback for each learner, diagnosing mistakes according to the
answers given by each student. The system creates an adapted version of the concept map for
each individual student by adding appropriate statistical data to each of its nodes and colors that
indicate the student's knowledge status.
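The adaptation step described above can be sketched as follows. The paper gives no implementation details, so the score scale, thresholds, and color scheme here are all assumptions made for illustration.

```python
# Hedged sketch: each concept-map node carries a questionnaire score;
# the adapted map attaches that statistic plus a color signaling the
# student's knowledge status on the node.
scores = {"fraction": 0.9, "decimal": 0.4}  # assumed: fraction of correct answers

def adapt(scores):
    """Build the per-student adapted map with statistics and status colors."""
    def color(s):
        return "green" if s >= 0.8 else "yellow" if s >= 0.5 else "red"
    return {node: {"score": s, "color": color(s)} for node, s in scores.items()}

print(adapt(scores))
# {'fraction': {'score': 0.9, 'color': 'green'}, 'decimal': {'score': 0.4, 'color': 'red'}}
```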
Another example is presented by Kumar [11], who presents intelligent tutors for programming
that use the concept map as a domain model, in addition to using an overlay of it as a student
model. To obtain a finer-grained student model, pedagogical concepts called learning objectives
are added for each correct concept in the domain. Moreover, the potential errors associated with
each concept are represented as separate learning objectives. Using the concept map representation
as an overlay student model has the advantage of allowing the individual assessment of each
concept to influence related concepts. In addition, it permits focusing on specific concepts in the
student model for more efficient assessment. The student model is used to adapt the selection of
the presented problems: learning objectives in the student model that have not yet been met are
enumerated, and the next problem is selected such that it addresses one or more of these learning
objectives. After the student solves each problem, the tutor updates the credit for all the affected
learning objectives in the student model and recalculates the set of learning objectives that remain
to be met.
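The selection loop just described can be sketched roughly as follows. All names, the mastery threshold, and the data structures are illustrative assumptions, not Kumar's actual implementation [11]:

```python
# Hypothetical sketch of learning-objective-driven problem selection:
# objectives whose credit falls below a mastery threshold are "unmet",
# and the next problem is the one addressing the most unmet objectives.

def unmet_objectives(student_model, mastery_threshold=0.8):
    """Enumerate learning objectives whose credit is below the threshold."""
    return {obj for obj, credit in student_model.items()
            if credit < mastery_threshold}

def select_next_problem(problems, student_model):
    """Pick the problem that addresses the most unmet objectives."""
    unmet = unmet_objectives(student_model)
    return max(problems, key=lambda p: len(p["objectives"] & unmet))

student = {"loops": 0.9, "arrays": 0.4, "off-by-one": 0.2}
problems = [
    {"id": "p1", "objectives": {"loops"}},
    {"id": "p2", "objectives": {"arrays", "off-by-one"}},
]
print(select_next_problem(problems, student)["id"])  # p2
```

After each solved problem, the tutor would update the credit values in `student_model` and rerun the selection.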
International Journal of Computer Science & Information Technology (IJCSIT) Vol 9, No 3, June 2017
Some systems generate the domain-model concept map automatically from different resources.
Olney et al. [12,13] present a methodology for automatically extracting concept maps from
textbooks using term extraction, semantic parsing, and relation classification. The methodology is
based on the SemNet formulation [14], which takes the form of one layer of links radiating out of a
core concept. The generated concept map consists of fragments called triples. Each triple must
start with a node containing a key term (a pedagogically significant term in the domain) and end
with another node that can contain key terms, other words, or complete propositions. Labeled
edges connect the two nodes. Since Olney et al. [12,13] aim to utilize the generated concept map
in generating questions and answers, they use a restricted set of labeled edges to facilitate that
goal. The index and glossary are used as sources of key terms. Automatic extraction of the triples
starts by using the LTH SRL parser to get information about each word token's part of speech,
lemma, head, and relation to the head. After parsing, four triple-extractor algorithms are applied
to each sentence. Each extractor first attempts to identify a key term as a possible start node. After
triples are extracted from the parse, they are filtered to remove triples that are not particularly
useful for generating concept map exercises. The final filter uses likelihood ratios to establish
whether the relation between start and end nodes is meaningful.
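A much-simplified sketch of the filtering step: triples must start with a key term, and weak relations are discarded. The key-term list, triples, and score threshold here are illustrative assumptions; the actual pipeline uses the LTH SRL parser and likelihood-ratio tests [12,13]:

```python
# Hypothetical triple filter: keep only triples whose start node is a key
# term (drawn from the index/glossary) and whose relation score passes a
# threshold standing in for the likelihood-ratio test.

KEY_TERMS = {"mitochondrion", "cell"}

def filter_triples(triples, min_score=0.5):
    """Keep triples that start with a key term and score above threshold."""
    return [(start, rel, end) for (start, rel, end, score) in triples
            if start in KEY_TERMS and score >= min_score]

raw = [
    ("mitochondrion", "produces", "energy", 0.9),
    ("energy", "is", "useful", 0.8),       # start node is not a key term
    ("cell", "contains", "nucleus", 0.3),  # relation too weak
]
print(filter_triples(raw))  # [('mitochondrion', 'produces', 'energy')]
```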
A concept hierarchy is neglected in [12,13], although it is a powerful tool for representing and
organizing domain knowledge, in addition to its use in diagnosing the causes of student errors
and assessing student knowledge. Wang et al. [15] deal with extracting the concept hierarchy
from textbooks using the lexical content and the table of contents (TOC). In addition, they
augment this with Web knowledge, extracting a set of important related Wikipedia concepts for
each book chapter and organizing them as a concept hierarchy using the book's TOC. A
learning-to-rank approach that considers both local relatedness and global coherence is used to
extract the concept hierarchy. They propose three sets of global features, which guarantee less
redundancy, consistency, and an appropriate learning order for a concept hierarchy that captures
the global coherence embedded in a book. They first extract a domain-specific dictionary for a
given book topic and then perform candidate selection for each chapter. Finally, by re-ranking
the candidates based on the local and global features, the method generates a concept hierarchy
that arrives at coherent sets of important concepts for the book.
Wang et al. [16] focus on the construction of prerequisite concept maps to discover students'
learning gaps and work on closing them. They implemented a two-phase model that includes
domain concept extraction and prerequisite relationship identification. They start by
constructing a domain-specific concept dictionary in which each concept is the title of a domain-
related Wikipedia page. Then, given an article, they identify all Wikipedia concepts in the article
using this dictionary and obtain a list of Wikipedia candidates.
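The dictionary-matching step can be sketched as a simple n-gram scan over the article text. The dictionary contents, example text, and maximum phrase length are illustrative assumptions about the approach in [16]:

```python
# Hedged sketch of dictionary-based concept identification: every word
# n-gram in the article that matches the title of a domain-related
# Wikipedia page is collected as a candidate concept.

WIKI_DICT = {"binary search", "sorted array", "recursion"}

def find_candidates(text, dictionary, max_len=3):
    """Scan word n-grams (up to max_len words) and keep dictionary matches."""
    words = text.lower().split()
    found = set()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            if phrase in dictionary:
                found.add(phrase)
    return found

text = "Binary search works on a sorted array without recursion"
print(sorted(find_candidates(text, WIKI_DICT)))
# ['binary search', 'recursion', 'sorted array']
```

A production system would also need tokenization, redirects, and disambiguation, which this sketch omits.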
2.2. Ontology as a Domain Model
The ontology formalism is usually used in web-based systems to describe the learning material or
the learning objects. A learning object is anything digital that can be delivered across the network
on demand, such as text, images, applets, and entire web pages. Gascueña et al. [17] consider two
characteristics to define each learning object: the most appropriate learning style and the most
suitable hardware and software features of the device used. A questionnaire is used to find the
dominant learning style of each student. Based on these features and the student's learning style,
adaptive e-learning environments and reusable educational resources are provided. Chi et al. [18]
present another example that utilizes semantic rules in combination with ontologies to model
curriculum content-sequencing expertise as a knowledge base. The ontology is used to represent
abstract views of content sequencing and course materials, while semantic rules express the
relationships between individuals.
Due to the rapid increase of learning content on the Web, it is time-consuming for learners to
find adequate learning material. Yu et al. [19] present an ontology-based approach for semantic
content recommendation that realizes context-awareness in e-learning. The approach recommends
learning material by considering knowledge about the learner, knowledge about the content, and
knowledge about the domain being learned; ontology is utilized to model and represent these
kinds of knowledge. The recommendation approach proceeds in four steps. First, the Semantic
Relevance Calculation computes the semantic similarity between the learner and the learning
contents to generate a recommendation list. Second, the Recommendation Refining provides an
interactive way for the learner to select one item from the candidates. Third, the Learning Path
Generation guides the learning process by building a study route composed of the prerequisite
contents and the target learning contents. Finally, the Recommendation Augmentation aggregates
additional content related to the main course.
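The first step (relevance calculation and ranking) can be illustrated with a toy similarity function. The attribute-matching measure and the learner/content records below are assumptions for illustration, not the ontology-based semantic similarity of [19]:

```python
# Sketch of Semantic Relevance Calculation: score each content item
# against the learner's context, then rank the items into a
# recommendation list.

def relevance(learner, content):
    """Fraction of learner attributes matched by a content item."""
    shared = sum(1 for k, v in learner.items() if content.get(k) == v)
    return shared / len(learner)

def recommend(learner, contents):
    """Return contents sorted by relevance, highest first."""
    return sorted(contents, key=lambda c: relevance(learner, c),
                  reverse=True)

learner = {"topic": "sql", "level": "beginner", "language": "en"}
contents = [
    {"id": "c1", "topic": "sql", "level": "advanced", "language": "en"},
    {"id": "c2", "topic": "sql", "level": "beginner", "language": "en"},
]
print([c["id"] for c in recommend(learner, contents)])  # ['c2', 'c1']
```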
Vesin et al. [20] present the programming tutoring system "Protus", which relies on Semantic
Web standards and technologies. The implemented architecture utilizes ontologies, with each
component represented by a specific ontology. This allows interoperability and reusability of the
system, in addition to communication among the different components. The system contains a
domain ontology that represents the types of all essential learning materials; the different types
support the different learning styles of the users. The role of each specific resource from the
domain ontology is represented in the task ontology. The learner's personal information, learning
style, and performance constitute the learner model ontology. The teaching-strategy ontology
selects or computes specific navigation sequences among the resources; the decisions are drawn
on the basis of the information in the learner model, task, and domain ontologies, and the most
appropriate learning pattern or resource to recommend to the learner is selected. Finally, the
interface ontology reads a decision from the teaching-strategy ontology and, based on that
decision, creates the navigation sequence of resources recommended for a specific learner and
generates an interface view for the learner.
3. DIAGNOSING
Defining the prerequisite relations between different concepts in the conceptual graph gives the
potential to guide the student toward concepts that need improvement and the path or sequence
of learning. Hsu et al. [21] proposed a concept effect relationship (CER) model which describes
how certain concepts are prerequisites to efficiently learning other concepts. Some systems utilize
the conceptual-graph method to model the prerequisite relationships among the domain concepts
to be learned; the student's test results are then analyzed based on such conceptual graphs. Jong
et al. [22] present an algorithm for diagnosing an individual student's learning situation based on
predefined weighted conceptual graphs. The algorithm can provide the Remedial-Instruction
Decisive path (RID path), which identifies the student's missing concepts. The student's learning
situation, which defines whether the student has mastered a certain concept, is estimated based
on the Sequential Probability Ratio Test (SPRT) [23]. The RID-path algorithm finds a remedial
instruction decisive path based on the missing concepts using SPRT. Once a student is assessed
to have failed to achieve mastery of a certain concept, the prior concepts of that student can be
obtained through the conceptual graphs. These concepts are verified to detect the missing prior
concepts, and the algorithm repeats the diagnosis steps for each missing prior concept. Finally,
all missing concept nodes are obtained, which represent the RID path. Each student has a
distinctive evaluated conceptual graph which maps his or her knowledge structure and can be
considered an overlay model.
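The backward traversal at the heart of the RID-path idea can be sketched as a recursive walk over the prerequisite graph. The graph, concept names, and mastery set are illustrative assumptions; the real algorithm decides mastery with SPRT [22,23]:

```python
# Minimal sketch of the RID path: starting from a concept the student
# failed, walk backwards through the prerequisite (conceptual) graph and
# collect every prior concept the student has not mastered.

PREREQS = {  # concept -> its prerequisite concepts
    "fractions": ["division"],
    "division": ["multiplication"],
    "multiplication": ["addition"],
}

def rid_path(failed_concept, mastered, graph=PREREQS):
    """Collect missing prior concepts, depth-first from the failed concept."""
    missing = []
    for prior in graph.get(failed_concept, []):
        if prior not in mastered:
            missing.append(prior)
            missing.extend(rid_path(prior, mastered, graph))
    return missing

print(rid_path("fractions", mastered={"addition"}))
# ['division', 'multiplication']
```

The returned list is the set of concepts remedial instruction should address, ordered from the failure back toward its roots.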
Since it is time-consuming for teachers to build a conceptual graph that includes the prerequisite
relationships between concepts, some systems automate this process based on a long-term analysis
of the students' test item status or scores. Hwang et al. [24] present an algorithm that starts by
finding the test item that most students failed to answer correctly, then finds the other test items
that were incorrectly answered by those students, and finally uses this information to determine
the relationships among the test items. The relationships among concepts can consequently be
determined based on the relationships among test items, and between test items and concepts.
The resulting concept effect relationships are refined using two thresholds, the support and belief
values. Many noisy relationships may be generated if the thresholds approach 0, while some
important relationships may be missed if they approach 1. The most appropriate support and
belief values for generating concept effect relationships are determined based on the outcomes of
previous applications. Data mining techniques proposed in [25] are used to explore the
relationships among some attributes, and the relationships identified are then employed to
assist in determining belief and support values for future applications.
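The thresholding step can be illustrated with association-rule statistics: a relationship "failing item A implies failing item B" is kept only if its support and belief (confidence) exceed the chosen thresholds. The answer records and threshold values below are illustrative assumptions about [24]:

```python
# Hedged sketch of support/belief computation for a test-item rule
# "students who fail item a also fail item b".

records = [  # per student: the set of items answered incorrectly
    {"q1", "q2"}, {"q1", "q2"}, {"q1"}, {"q3"},
]

def rule_stats(a, b, records):
    """Return (support, belief) for the rule fail(a) -> fail(b)."""
    n = len(records)
    both = sum(1 for r in records if a in r and b in r)
    fail_a = sum(1 for r in records if a in r)
    support = both / n                       # fraction failing both items
    belief = both / fail_a if fail_a else 0  # confidence given a failed
    return support, belief

support, belief = rule_stats("q1", "q2", records)
keep = support >= 0.3 and belief >= 0.6      # illustrative thresholds
print(round(support, 2), round(belief, 2), keep)  # 0.5 0.67 True
```

Lowering the thresholds toward 0 admits noisy rules; raising them toward 1 discards real ones, matching the trade-off described above.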
Hsu et al. and Hwang et al. [21,24] focus on a single rule type, the L–L type, meaning that a
student getting a low grade on one question implies that he or she may also get a low grade on
another question; this may decrease the quality of the concept map. On the other hand, Tseng et
al. [26] propose a Two-Phase Concept Map Construction (TP-CMC) algorithm to automatically
construct the concept map of a course from historical testing records. In the first phase, they
apply fuzzy set theory to transform the learners' numeric testing records into symbolic values.
Then a data mining approach is applied to find grade fuzzy association rules. The mined grade
fuzzy association rules include four rule types, L–L, L–H, H–L, and H–H, which denote the
causal relations between the learning concepts of quizzes for all combinations of grades (Low or
High).
4. CONCEPT MAPPING
Concept mapping is a supportive technique in learning. This stems from the fact that people
understand and remember knowledge after they organize and integrate it [27]. Moreover, it is
easy for people to memorize and recall ideas which are correlated [28]. Concept mapping serves
as a kind of template or scaffold that helps to organize knowledge and to structure it [6]. In
addition, concept mapping has been introduced as an effective visual learning tool that helps
learners memorize and organize their knowledge [5].
In the e-learning context, the concept mapping technique is used by students to express visually
their understanding of the domain concepts in terms of concepts related by hierarchical
relationships. It is used as an assessment tool that is characterized in terms of the directedness
provided for the student to express his or her knowledge structure, the student response, and the
scoring mechanism used to evaluate the student's concept map [29]. It can also be considered a
visual form of the student model that can be utilized to assess the student's knowledge of a
specific domain and to adapt the learning system's resources.
Concept mapping is a challenging task, as it requires the student to reflect his or her
understanding of the concepts and their interrelations [5]. In order to guide the students, concept
mapping assessment tools use a less free-form approach to mapping. Ruiz-Primo and Shavelson
[29] identify a scale from low to high directedness in concept mapping tools based on the
information provided to the students. High-directed concept map tasks support students with a
template of the concept map, and the students are asked to fill in some missing concepts or
relations. On the other hand, in low-directed concept map tasks, students are free to construct
their maps and to select the concepts, relations, and structure [30].
The assessment mechanism is usually based on comparing the learner's map with a predefined
expert map [29]. There are two main methods: the structural method and the relational method.
The structural method [5] is restricted to hierarchical maps and considers the valid map
components, such as propositions and links. On the other hand, the relational method focuses on
the accuracy of each proposition [29], [31].
In the following sections, we explore a number of systems to show their directedness level,
scoring mechanism, and presented guidance, and whether they have an adaptation mechanism.
4.1. STRUCTURAL METHOD BASED ASSESSMENT SYSTEMS
Chang et al. [32] presented a system that provides two learning environments. In the
'construct-by-self' environment, the system provides students with evaluation results and
corresponding hints as feedback; the students construct concept maps by themselves with only
the assistance of that feedback. In the 'construct-on-scaffold' environment, in addition to the
feedback, the students receive an incomplete concept map in which some nodes and links are left
blank as the scaffold. A study comparing the effectiveness of the 'construct-by-self',
'construct-on-scaffold', and 'construct by paper-and-pencil' concept mapping showed that
'construct-on-scaffold' had a better effect on learning biology. Scoring the student's concept
map is based on comparing the numbers of valid propositions, valid hierarchical levels, and valid
cross-links. The score for a student's concept map is divided by the score of an expert concept
map to produce a ratio used as the similarity index. For the 'construct-on-scaffold' version, the
similarity index of a map is estimated as the ratio between the number of correct answers in the
blanks and the total number of blanks on the map. Both indices range from zero to one: zero
indicates that the two maps are completely different, and one indicates that the two maps are
identical.
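The two ratios are simple arithmetic, sketched below. The component scores and blank counts are illustrative; [32] derives the map score from valid propositions, hierarchy levels, and cross-links:

```python
# Sketch of the two similarity indices described above, both in [0, 1].

def similarity_index(student_score, expert_score):
    """'Construct-by-self': ratio of student map score to expert map score."""
    return student_score / expert_score

def scaffold_index(correct_blanks, total_blanks):
    """'Construct-on-scaffold': fraction of correctly filled blanks."""
    return correct_blanks / total_blanks

print(similarity_index(18, 24))  # 0.75
print(scaffold_index(4, 5))      # 0.8
```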
Cimolino et al. [33] present a verified concept mapper in which the students create a map from a
given list of concepts and links predefined by the teacher. The student's map is interpreted by the
teacher to model the student's understanding in terms of saved sentences. These sentences are
shown to the students to indicate what information in their current map is being saved to their
user model. In addition, questions are provided in case of student errors to guide students in
correcting them. For example, the default question associated with a missing concept asks the
student whether they can see how to include it in the map.
Hwang et al. [34] integrate concept mapping into an educational computer game, aiming to help
students organize what they have learned during the game-based learning process. This concept
map-embedded gaming approach made the students highly accepting of the appearance and
assistance of the concept maps during the gaming process. The system includes a concept-
mapping module that assists the students in organizing the collected data following the storyline,
based on the concept map templates provided by the teachers.
Jain et al. [35] present an artificial intelligence-based student learning evaluation tool (AISLE) to
evaluate student learning using concept maps. The student is given a topic to learn and builds a
concept map based on his or her understanding of the topic; that is, the concept maps are
developed by the students from scratch. The implemented scoring system is based on the
structure of the concept map, where numeric scores are assigned to every concept presented in
the concept hierarchy. The hierarchy is included in the scoring to reflect the level of the student's
understanding of the topic under study. A z-score for each concept is used to standardize the
scores, which in turn is used to estimate the probability distribution over all concepts in the
hierarchy. The standard probability distribution curve is used as a reference curve to evaluate the
concept maps drawn by the students. The concept map drawn by each student is verified and
validated by the instructor.
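The z-score standardization mentioned above can be sketched as follows. The concept names and raw scores are illustrative, not AISLE data [35]:

```python
# Hedged sketch: each concept's raw hierarchy score is standardized
# against the mean and standard deviation of all concept scores in the
# map, yielding the distribution used as the reference curve.
import statistics

def z_scores(concept_scores):
    """Map each concept to (score - mean) / population std deviation."""
    mean = statistics.mean(concept_scores.values())
    sd = statistics.pstdev(concept_scores.values())
    return {c: (s - mean) / sd for c, s in concept_scores.items()}

scores = {"force": 4, "mass": 2, "acceleration": 3}
z = z_scores(scores)
print(round(z["force"], 2))  # 1.22
```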
Leelawong et al. [36,37] have designed learning environments where students teach a computer
agent, called a Teachable Agent (TA). The concept map is used as a visual representation that
helps structure domain knowledge. In addition, reasoning through the concept map links is
considered: TAs can show their reasoning based on how they have been taught, which helps
students assess their teaching. The scoring mechanism is based on comparing the student's map with the expert
map. Concepts and links were labeled as "expert" if they were in the expert map, while concepts
and links that were not in the expert map but resulted from a correct understanding of the
domain were graded as "relevant." The number of valid concepts and links in a student's map
was the sum of the expert and relevant concepts and links.
Sormo [38] presents CREEK-Tutor, an exercise-oriented tutoring system that uses a student
modeling technique based on case-based reasoning to find students of similar competence. First,
concepts and link names are extracted from the teacher's concept map for a topic; the student is
then asked to construct his or her map using the same concepts and link names. This approach
differs from the previous ones in that the goal is not to score the student's concept map by its
similarity to the teacher's map but to use it to find students of similar ability. The similarity is
measured by finding the difference between the union and the intersection of the two graphs.
Two main contributions are presented in CREEK-Tutor. First, procedural knowledge is included
alongside factual knowledge in the concept map by introducing many examples of program code
snippets as correct or incorrect examples. Second, many traditional programming exercises are
combined with constructing the concept map. The student answers are recorded and used as an
offline data set that contains, for each student, a concept map and various measures of how the
student performed on each programming task. This data is later used to adapt the selection of the
presented exercise for each student. The exercise selection algorithm considers the student's
concept map of a particular topic, finds similar concept maps drawn by other students, and
predicts the difficulty of exercises based on the performance of the students found to have
similar concept maps. It then suggests an exercise of appropriate difficulty and justifies the
selection by showing which part of the concept map it addresses.
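The union-minus-intersection similarity measure described above amounts to the size of the symmetric difference of the two maps' proposition sets: similar students have small distances. The maps below are illustrative triples, not CREEK-Tutor data [38]:

```python
# Sketch of the graph-distance measure: |union| - |intersection| of the
# two proposition sets; a distance of 0 means identical maps.

def map_distance(map_a, map_b):
    """Size of the union minus size of the intersection."""
    return len(map_a | map_b) - len(map_a & map_b)

alice = {("loop", "repeats", "statements"), ("array", "stores", "values")}
bob = {("loop", "repeats", "statements"), ("array", "holds", "items")}
print(map_distance(alice, bob))  # 2
```

Students whose maps lie within a small distance of each other would be treated as peers of similar ability when predicting exercise difficulty.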
4.2. RELATIONAL METHOD BASED ASSESSMENT SYSTEMS
Wu et al. [39] present an interactive concept map-oriented learning system which enables
learners to construct concept maps on personal computers and share them on servers via the
Internet. The system provides immediate evaluation of concept maps and gives real-time
feedback to the students. The scoring mechanism is based on comparing each of the student's
concept map propositions with the corresponding proposition in the expert's concept map. In
the case of a match, the weighting of the proposition is added to the accumulated score for the
student's concept map; if the two propositions match only partially, half of the weighting is
added. According to the evaluation results, the system provides feedback indicating the student's
errors in the structure of the concept map, such as missing concepts or relations. In addition,
learning materials related to the missing or incorrect concepts/connections are provided to the
student as supplementary information.
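The full-weight/half-weight rule can be sketched directly. The propositions, weights, and partial-match criterion (same concepts, wrong relation) are illustrative assumptions about the scheme in [39]:

```python
# Sketch of proposition scoring: a full match adds the proposition's
# weight, a partial match (concepts linked but relation wrong) adds half.

def score_map(student_props, expert_props):
    """expert_props maps (concept1, relation, concept2) -> weight."""
    total = 0.0
    for (c1, rel, c2), weight in expert_props.items():
        if (c1, rel, c2) in student_props:
            total += weight                # full match
        elif any(s[0] == c1 and s[2] == c2 for s in student_props):
            total += weight / 2            # partial match
    return total

expert = {("plant", "performs", "photosynthesis"): 2.0,
          ("root", "absorbs", "water"): 1.0}
student = {("plant", "performs", "photosynthesis"),
           ("root", "stores", "water")}
print(score_map(student, expert))  # 2.5
```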
Conlon et al. [40] consider the characterization of each proposition in its assessment, where
fully correct means a full match between the student's proposition and the corresponding expert
proposition, and partly correct indicates that the relationship between the two concepts or the
direction of the arrow is incorrect. In addition, the weights of each characterization and the
number of valid concepts included in the learner's map are considered in the evaluation process.
Gouli et al. [41] propose a scheme that has been embedded in COMPASS, an adaptive web-based
concept map assessment tool. The system presents adaptive feedback based on the evaluation of
the student's knowledge level of the concepts in the map. The assessment process is based on
comparing the propositions presented in the student's map and the expert's map. Weights are
assigned by the teacher to reflect the degree of importance of the concepts and propositions. In
addition, different error categories are identified, which enables COMPASS to support adaptive
feedback.
5. CONCLUSION
Different graphical representation formalisms have been used in the learning context. Few
intelligent tutoring systems use the concept map as a domain model to assess student knowledge
across the different concepts. Some systems automatically extract concept maps from textbooks
with the aim of generating questions and answers, while others extract the concepts and the
prerequisite relations between them to discover students' learning gaps and work on closing
them. The concept mapping technique focuses on assessing the student's visual expression of
his or her knowledge. It is a technique for students to express visually their understanding of the
domain concepts in terms of concepts related by hierarchical relationships. It is used as an
assessment tool characterized in terms of the directedness provided for the student to express his
or her knowledge structure. It can also be considered a visual form of the student model that can
be utilized to assess the student's knowledge of a specific domain and to adapt the learning
system's resources.
Some systems utilize the conceptual-graph method to model the prerequisite relationships
among the domain concepts to be learned; the students' test results are then analyzed based on
such conceptual graphs. Since it is time-consuming for teachers to build a conceptual graph that
includes the prerequisite relationships between concepts, some systems automate this process
based on a long-term analysis of the students' test item status or scores.
Ontology is used as a formalism to describe knowledge and information in a way that can be
shared on the web. Adding more description about the learning style or the role of each
knowledge item allows different adaptation techniques to be applied, such as adaptive
presentation and adaptive navigation.
REFERENCES
[1] Sowa J.F. (1976) Conceptual graphs for a data base interface. IBM Journal of Research and
Development 20, 336–357.
[2] Gruber, T.R. (1993). A Translation Approach to Portable Ontology Specification. Knowledge
Acquisition 5: 199-220.
[3] Gascueña, J. M., Fernández-Caballero, A. & González, P. (2006). Domain ontology for personalized
e-learning in educational systems. In The Sixth IEEE international conference on advanced learning
technologies (pp. 456–458).
[4] Musen, M.A. (1992). Dimensions of knowledge sharing and reuse. Computers and Biomedical
Research 25: 435-467.
[5] Novak, J.D. and D.B. Gowin, Learning how to learn. 1984: Cambridge University Press.
[6] Novak, J. D. & A. J. Canas (2008). The Theory Underlying Concept Maps and How to Construct and
Use Them. Technical Report IHMC CmapTools 2006-01 Rev 01-2008, Florida Institute for Human
and Machine Cognition, 2008.
[7] Bloom, B. S. (1956). Taxonomy of educational objectives; the classification of educational goals (1st
ed.). New York: Longmans Green.
Pankratius, W. J. (1990). Building an organized knowledge base: concept mapping and achievement
in secondary school physics. Journal of Research in Science Teaching, 27(4), 315–333.
[8] Mintzes, J. J., Wandersee, J. H., & Novak, J. D. (2000). Assessing science understanding: A human
constructivist view. San Diego: Academic Press.
[9] Novak, J. D. (1990). Concept maps and vee diagrams: Two metacognitive tools for science and
mathematics education. Instructional Science, 19, 29-52
[10] M. Kordaki, P. Psomos (2014). An intelligent concept mapping tool for the diagnosis and treatment of
students' misconceptions. 6th World Conference on Educational Sciences.
[11] Kumar, A. (2006). Using Enhanced Concept Map for Student Modeling in a Model-Based
Programming Tutor, in Proc. of 19th International FLAIRS Conference on Artificial Intelligence,
Melbourne Beach, FL
[12] Olney, A. (2010). Extraction of concept maps from textbooks for domain modeling. In: Kay, J.,
Aleven, V. (Eds.), Proceedings of the 10th International Conference on Intelligent Tutoring Systems.
Springer, Berlin/Heidelberg, pp. 390–392.
[13] Olney, A.M., Cade, W.L., Williams, C.: Generating Concept Map Exercises from Textbooks. In:
Proceedings of the Sixth Workshop on Innovative Use of NLP for Building Educational Applications,
pp. 111–119. Association for Computational Linguistics, Portland, Oregon (2011)
[14] Fisher, K., Wandersee, J., Moody, D.: Mapping biology knowledge. Kluwer Academic Pub.,
Dordrecht (2000)
[15] S. Wang, C. Liang, Z. Wu, K. Williams, B. Pursel, B. Brautigam, S. Saul, H. Williams, K. Bowen,
and C. Giles. Concept hierarchy extraction from textbooks. The ACM Symposium on Document
Engineering, 2015.
[16] S. Wang and L. Liu. Prerequisite concept maps extraction for automatic assessment. In WWW, pages
519–521, 2016
[17] Gascueña, J. M., Fernández-Caballero, A. & González, P. (2006). Domain ontology for personalized
e-learning in educational systems. In The Sixth IEEE international conference on advanced learning
technologies (pp. 456–458).
[18] Chi, Y. L. (2009). Ontology-based curriculum content sequencing system with semantic rules. Expert
Systems with Applications, 36, 7838–7847.
[19] Yu, Z. et al. (2007). "Ontology-Based Semantic Recommendation for Context-Aware E-Learning."
Proc. of the 4th Conference on Ubiquitous Intelligence and Computing, v.4611, Berlin, Heidelberg:
Springer, pp. 898-907.
[20] Vesin B., Ivanović M., Klašnja-Milićević A., Budimac Z., Protus 2.0: Ontology-Based Semantic
Recommendation in Programming Tutoring System, Expert Systems with Applications, (2012), DOI:
10.1016/j.eswa.2012.04.052.
[21] C. S. Hsu, S. F. Tu, and G. J. Hwang, “A concept inheritance method for learning diagnosis of a
network-based testing and evaluation system,” The 7th International Conference on Computer-
Assisted Instructions, 1998, pp. 602-609.
[22] Jong, BinShyan, Lin, TsongWuu, Wu, YuLung, & Chan, Teyi (2004). Diagnostic and remedial
learning strategy based on conceptual graphs. Journal of Computer Assisted Learning, 20(5), 377–
386.
[23] Wald A. (1947) Sequential Analysis. Wiley, New York.
[24] Hwang, G. J. (2003). A conceptual map model for developing intelligent tutoring system. Computers
and Education, 40(3), 217–235.
[25] J. S. Park, M. S. Chen, and P. S. Yu, “Using a hash-based method with transaction trimming and
database scan reduction for mining association rules,” IEEE Transactions on Knowledge and Data
Engineering, Vol. 9, (1997), pp. 813-825.
[26] S. Tseng, P. Sue, J. Su, J. Weng, W. Tsai: A new approach for constructing the concept map,
Computers and Education, Vol. 38, pp. 123-141, 2007.
[27] Brainerd, C.J. & Gordon, L.L. (1994). Development of verbatim and gist memory for numbers.
Developmental Psychology, 30(2), 163–177.
[28] Joseph D. N. & Alberto J. C. (2008). The Theory Underlying Concept Maps and How to Construct
and Use Them. Institute for Human and Machine Cognition, 1.
[29] Ruiz-Primo, M. A. & Shavelson, R. J. (1996). Problems and issues in the use of concept maps in
science assessment. Journal of Research in Science Teaching, 33(6), 569-600.
[30] Ruiz-Primo, M.A.: Examining Concept Maps as an Assessment Tool. Proc. of the First Int.
Conference on Concept Mapping. Pamplona, Spain, http://cmc.ihmc.us/papers/cmc2004-036.pdf (last
access 13.04.05) (2004).
[31] McClure, J., Sonak, B., & Suen, H. (1999). Concept map assessment of classroom learning:
Reliability, validity, and logistical practicality. Journal of Research in Science Teaching, 36, 475-492.
[32] Chang, K., Sung, T., & Chen, S. (2001). Learning through computer-based concept mapping with
scaffolding aid. Journal of Computer Assisted Learning, 17 (1), 21-33.
[33] Cimolino, L., Kay, J., Miller, A. (2003). Incremental student modelling and reflection by verified
concept mapping. In Supplementary Proceedings of the AIED2003: Learner Modelling for Reflection
Workshop, 219-227.
[34] Hwang, G. J., Yang, L. H., & Wang, S. Y. (2013). A concept map-embedded educational computer
game for improving students’ learning performance in natural science courses. Computers &
Education, 69,121–130.
[35] G. P. Jain, V. P. Gurupur, J. L. Schroeder, and E. D. Faulkenberry, “Artificial intelligence-based
student learning evaluation: a concept map-based approach for analyzing a student’s understanding of
a topic,” IEEE Transactions on Learning Technologies, vol. 7, no. 3, pp. 267–279, (2014).
[36] Leelawong, K., & Biswas, G. (2008). Designing learning by teaching agents: The Betty’s Brain
system. International Journal of Artificial Intelligence in Education, 18(3), 181-208.
[37] Biswas, G., Segedy, J.R. & Leelawong, K. (2015). From Design to Implementation to Practice - A
Learning by Teaching System: Betty’s Brain. International Journal of Artificial Intelligence in
Education.
[38] F. Sormo. Case-based student modeling using concept maps. Case-Based Reasoning Research And
Development, Proceedings, 3620:492–506, 2005.
[39] Wu, P. H., Hwang, G. J., Milrad, M., Ke, H. R., & Huang, Y. M. (2011). An innovative concept map
approach for improving students’ learning performance with an instant feedback mechanism. British
Journal of Educational Technology. doi:10.1111/j.1467-8535.2010.01167.x.
[40] Cimolino, L., Kay, J., Miller, A. (2003). Incremental student modelling and reflection by verified
concept mapping. In Supplementary Proceedings of the AIED2003: Learner Modelling for Reflection
Workshop, 219-227.
[41] Gouli, E, Gogoulou, A., Papanikolaou, K. A., & Grigoriadou, M. (2005). Evaluating Learner’s
Knowledge Level on Concept Mapping Tasks. Fifth IEEE International Conference on Advanced
Learning Technologies, 424-428.
AUTHORS
Nabila Khodeir is a researcher in the Informatics Department at the Electronics Research Institute, Cairo, Egypt.
Her research interests include intelligent tutoring systems, user modelling, and natural language processing. She
earned her Ph.D. and M.E. from the Electronics and Communications Department at Cairo University.