SBML FOR OPTIMIZING DECISION SUPPORT'S TOOLS (csandit)
Many theoretical works and tools in the epidemiological field reflect the growing emphasis that both public health authorities and the scientific community place on decision-making tools. Indeed, in epidemiology, modeling tools are proving to be a very important aid to decision making. However, the variety and volume of data, and the nature of epidemics, lead us to seek solutions that lighten the heavy burden imposed on both experts and developers. In this paper, we present a new approach: the translation of an epidemic model written in Bio-PEPA into a narrative language built on the basics of the SBML language. Our goal is to allow epidemiologists, on the one hand, to verify and validate the model and, on the other hand, developers to optimize the model in order to achieve a better decision-making model. We also present some preliminary results and some suggestions to improve the simulated model.
ACL-WMT2013: Quality Estimation for Machine Translation Using the Joint Method... (Lifeng (Aaron) Han)
Proceedings of the ACL 2013 Eighth Workshop on Statistical Machine Translation (ACL-WMT 2013), 8-9 August 2013, Sofia, Bulgaria. Open tool: https://github.com/aaronlifenghan/aaron-project-ebleu (ACM Digital Library, ACL Anthology)
COQUEL: A CONCEPTUAL QUERY LANGUAGE BASED ON THE ENTITY-RELATIONSHIP MODEL (csandit)
As more and more collections of data become available on the Internet, end users who are not experts in Computer Science demand easy solutions for retrieving data from these collections. A good solution for these users is conceptual query languages, which facilitate the composition of queries by means of a graphical interface. In this paper, we present (1) CoQueL, a conceptual query language specified on E/R models, and (2) a translation architecture for translating CoQueL queries into languages such as XQuery or SQL.
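To make the translation idea concrete, here is a minimal, hypothetical sketch: a conceptual query captured by a graphical interface is mapped onto SQL. The ConceptualQuery structure and all names are our own illustration, not the paper's actual architecture.

from dataclasses import dataclass, field

@dataclass
class ConceptualQuery:
    entity: str                 # E/R entity selected in the graphical interface
    attributes: list            # attributes to project
    conditions: list = field(default_factory=list)   # textual predicates

def to_sql(q: ConceptualQuery) -> str:
    # Map the conceptual query onto a SELECT statement over the entity's table.
    sql = f"SELECT {', '.join(q.attributes)} FROM {q.entity}"
    if q.conditions:
        sql += " WHERE " + " AND ".join(q.conditions)
    return sql

# Example: the front end produced this conceptual query.
print(to_sql(ConceptualQuery("Book", ["title", "year"], ["year > 2000"])))
# -> SELECT title, year FROM Book WHERE year > 2000

A real CoQueL translator would of course also handle relationships between entities (joins) and the XQuery target.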
ELABORATE LEXICON EXTENDED LANGUAGE WITH A LOT OF CONCEPTUAL INFORMATION (IJCSEA Journal)
The use of a model such as LEL (Lexicon Extended Language) for natural language is very interesting in Requirements Engineering. But LEL, even though it is derived from the Universe of Discourse (UofD), does not provide further details on the concepts it describes. However, we believe that the elements inherent in the conceptual level of a system are already defined in the Universe of Discourse. Therefore, in this work we propose a more elaborate natural language model called eLEL. It is a model that describes the concepts in a domain in more detail than the conventional LEL. We also propose a process for modeling a domain using an eLEL model.
This paper presents a natural language processing based automated system called DrawPlus for generating UML diagrams, user scenarios and test cases after analyzing a business requirement specification written in natural language. DrawPlus analyzes the natural language text and extracts the relevant and required information from the business requirement specification given by the user. The user writes the requirements specification in simple English, and the system analyzes it using core natural language processing techniques together with our own well-defined algorithms. After thorough analysis and extraction of the associated information, DrawPlus draws the use case diagram, user scenarios and high-level system test case descriptions. DrawPlus provides a more convenient and reliable way of generating use cases, user scenarios and test cases, reducing the time and cost of the software development process while accelerating roughly 70% of the work in the software design and testing phases.
Janani Tharmaseelan, "Cohesive Software Design", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-3, April 2019, URL: https://www.ijtsrd.com/papers/ijtsrd22900.pdf
Paper URL: https://www.ijtsrd.com/computer-science/other/22900/cohesive-software-design/janani-tharmaseelan
BOOLEAN SPECIFICATION BASED TESTING TECHNIQUES: A SURVEY (cscpconf)
Boolean expressions are a major focus of specifications, and they are very prone to the introduction of faults. This survey presents various Boolean specification based testing techniques, covering more than 30 papers. The various Boolean specification based testing techniques, such as cause-effect graphing, Foster's strategy, the meaningful impact strategy, the Branch Operator Strategy (BOR), and Modified Condition/Decision Coverage (MCDC), are compared on the basis of their fault detection effectiveness and the size of the test suite. This collection represents most of the existing work on Boolean specification based testing techniques. The survey describes the basic algorithms used by these strategies and also covers the operator and operand fault categories used to evaluate the performance of the above-mentioned testing techniques. Finally, it contains short summaries of all the papers that use Boolean specification based testing techniques. These techniques have been empirically evaluated by various researchers on a simplified safety-related real-time control system.
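As an aside, the core idea behind MCDC, one of the surveyed techniques, can be shown in a few lines. The brute-force sketch below (our illustration, not taken from any surveyed paper) finds, for each condition, a pair of test vectors that differ only in that condition and flip the decision's outcome:

from itertools import product

def mcdc_pairs(decision, n_conditions):
    # For each condition, return one pair of vectors showing its independent effect.
    pairs = {}
    for v in product([False, True], repeat=n_conditions):
        for i in range(n_conditions):
            w = list(v)
            w[i] = not w[i]
            w = tuple(w)
            if i not in pairs and decision(*v) != decision(*w):
                pairs[i] = (v, w)
    return pairs

# Example decision: (a and b) or c
print(mcdc_pairs(lambda a, b, c: (a and b) or c, 3))

Each returned pair is a candidate for the test suite; strategies like BOR aim to reach similar fault-detection guarantees with fewer vectors.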
Systems Variability Modeling: A Textual Model Mixing Class and Feature Concepts (ijcsit)
System reusability and cost are very important in the software product line design area. Developers' goal is to increase system reusability and to decrease the cost and effort of building components from scratch for each software configuration. This can be achieved by developing a software product line (SPL). To handle the SPL engineering process, several approaches with several techniques have been developed. One of these is called the separated approach. It requires separating the commonalities and variability of a system's components to allow configuration selection based on user-defined features. Textual notation-based approaches have been used for their formal syntax and semantics to represent system features and implementations. But these approaches are still weak at mixing features (conceptual level) and classes (physical level) in a way that guarantees smooth and automatic configuration generation for software releases. The absence of a methodology supporting the mixing process is a real weakness. In this paper, we enhance SPL reusability by introducing some meta-features, classified according to their functionalities. As a first consequence, mixing class and feature concepts is supported in a simple way, using class interfaces and inherent features, for a smooth move from the feature model to the class model. As a second consequence, the mixing process is supported by a textual design and implementation methodology that mixes class and feature models by combining their concepts in a single language. The supported configuration generation process is simple, coherent, and complete.
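A minimal sketch of the underlying idea, with invented feature names, is the following: each feature carries a class binding, and configuration generation enumerates the mandatory features plus every subset of the optional ones.

from itertools import combinations

features = {                     # feature -> (mandatory?, implementing class)
    "engine":   (True,  "EngineImpl"),
    "gps":      (False, "GpsImpl"),
    "autopark": (False, "AutoParkImpl"),
}

def configurations():
    # Enumerate valid configurations: mandatory features plus any optional subset.
    mandatory = [f for f, (m, _) in features.items() if m]
    optional  = [f for f, (m, _) in features.items() if not m]
    for r in range(len(optional) + 1):
        for chosen in combinations(optional, r):
            selected = mandatory + list(chosen)
            yield selected, [features[f][1] for f in selected]

for selected, classes in configurations():
    print(selected, "->", classes)

A full SPL language would add feature relations (requires, excludes) and the class interfaces and meta-features discussed above.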
FUNCTIONAL OVER-RELATED CLASSES BAD SMELL DETECTION AND REFACTORING SUGGESTIONS (ijseajournal)
Bad phenomena such as functional over-related classes and confused inheritance in programs cause difficulty in program comprehension, extension and maintenance. In this paper this is defined as a new bad smell, Functional over-Related Classes. After analysis, the characteristics of this new smell are mapped to a large number of entity dependency relationships between classes. So, after collecting and analyzing entity dependency information, the bad smell is detected in programs, and corresponding refactoring suggestions are provided based on the detection results. Experimental results on open source programs show that the proposed bad smell cannot be detected by current detection methods. The proposed detection method behaves well in refactoring evaluation, and the refactoring suggestions improve the quality of programs.
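As a rough illustration of what counting entity dependency relationships between classes can look like (our own sketch, not the paper's tool), the snippet below scans a module's AST for references from one class body to another class's name:

import ast
from collections import Counter

SRC = '''
class Order:
    def total(self, items):
        return sum(i.price for i in items)

class Invoice:
    def build(self, order):
        return Order.total(order, [])
'''

tree = ast.parse(SRC)
classes = {n.name: n for n in tree.body if isinstance(n, ast.ClassDef)}
deps = Counter()
for name, node in classes.items():
    for ref in ast.walk(node):
        if isinstance(ref, ast.Name) and ref.id in classes and ref.id != name:
            deps[(name, ref.id)] += 1

print(deps)   # Counter({('Invoice', 'Order'): 1})

A detector would flag class pairs whose mutual dependency counts exceed some threshold as candidates for merging or member relocation.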
Sentiment Analysis in Myanmar Language Using Convolutional LSTM Neural Network (kevig)
In recent years, there has been increasing use of social media among people in Myanmar, and writing reviews on social media pages about products, movies, and trips is also popular. Moreover, most people look for review pages about a product they want to buy before deciding whether to buy it or not. Extracting and receiving useful reviews about products of interest is very important and time-consuming for people. Sentiment analysis is one of the important processes for extracting useful reviews of products. In this paper, a Convolutional LSTM neural network architecture is proposed to analyse the sentiment classification of cosmetic reviews written in the Myanmar language. The paper also sets out to build a cosmetic reviews dataset for deep learning and a sentiment lexicon for the Myanmar language.
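A minimal Keras sketch of one common convolution-plus-LSTM text classifier is shown below; the layer sizes are placeholders and this is not necessarily the authors' exact architecture.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(None,)),                        # variable-length token ids
    layers.Embedding(input_dim=20000, output_dim=128),  # token ids -> vectors
    layers.Conv1D(64, 5, activation="relu"),            # local n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                    # sequence summary
    layers.Dense(1, activation="sigmoid"),              # positive/negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

The convolution captures local word or syllable n-gram patterns, while the LSTM models longer-range word order.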
TEXT MINING AND CLASSIFICATION OF PRODUCT REVIEWS USING STRUCTURED SUPPORT VECTOR MACHINE (csandit)
Text mining and text classification are two prominent and challenging tasks in the field of machine learning. Text mining refers to the process of deriving high-quality and relevant information from text, while text classification deals with the categorization of text documents into different classes. The real challenge in these areas is to address problems like handling large text corpora, similarity of words in text documents, and association of text documents with a subset of class categories. The feature extraction and classification of such text documents require an efficient machine learning algorithm which performs automatic text classification. This paper describes the classification of product review documents as a multi-label classification scenario and addresses the problem using a Structured Support Vector Machine. The work also explains the flexibility and performance of the proposed approach for efficient text classification.
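scikit-learn ships no structured SVM, so in the hedged sketch below a one-vs-rest linear SVM stands in for the paper's Structured Support Vector Machine; the data is a toy placeholder.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = ["battery dies fast but screen is great",
        "fast shipping, poor packaging"]
labels = [{"battery", "display"}, {"delivery"}]      # multi-label targets

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                        # label indicator matrix

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
clf.fit(docs, Y)
print(mlb.inverse_transform(clf.predict(["screen cracked on delivery"])))

A structured SVM would instead score whole label subsets jointly, exploiting correlations between labels that one-vs-rest ignores.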
Chunking means splitting sentences into tokens and then grouping them in a meaningful way. When it comes to high-performance chunking systems, transformer models have proved to be the state-of-the-art benchmarks. Performing chunking as a task requires a large-scale, high-quality annotated corpus where each token is attached to a particular tag, similar to Named Entity Recognition tasks. Later these tags are used in conjunction with pointer frameworks to find the final chunk. Solving this for a specific domain becomes a highly costly affair in terms of time and resources when a large, high-quality training set must be manually annotated. When the domain is specific and diverse, cold starting becomes even more difficult because of the expected large number of manually annotated queries needed to cover all aspects. To overcome the problem, we applied a grammar-based text generation mechanism where, instead of annotating individual sentences, we annotate grammar templates. We defined various templates corresponding to different grammar rules. To create a sentence, we used these templates along with the rules, where symbol or terminal values were chosen from the domain data catalog. This allowed us to create a large number of annotated queries. These annotated queries were used to train a machine learning model using an ensemble transformer-based deep neural network model [24]. We found that grammar-based annotation was useful for resolving domain-based chunks in input query sentences without any manual annotation, achieving a token classification F1 score of 96.97% on out-of-template queries.
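A small sketch of the grammar-template idea follows: annotate a template once, then expand slot values from a domain catalog to mass-produce tagged training queries. The template syntax, tag names and catalog here are our own illustration, not the paper's exact grammar.

import itertools

catalog = {"PRODUCT": ["laptop", "phone"], "ATTR": ["price", "weight"]}
template = [("show", "O"), ("me", "O"), ("{ATTR}", "B-ATTR"),
            ("of", "O"), ("{PRODUCT}", "B-PRODUCT")]

def expand(template, catalog):
    # Yield (token, tag) sequences with every slot combination filled in.
    slots = [tok[1:-1] for tok, _ in template if tok.startswith("{")]
    for values in itertools.product(*(catalog[s] for s in slots)):
        vals = iter(values)
        yield [(next(vals) if tok.startswith("{") else tok, tag)
               for tok, tag in template]

for sentence in expand(template, catalog):
    print(sentence)

Four annotated queries come out of this one template; with realistic catalogs and dozens of templates, the corpus grows combinatorially at no annotation cost.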
BIDIRECTIONAL LONG SHORT-TERM MEMORY (BiLSTM) WITH CONDITIONAL RANDOM FIELDS (CRF)... (ijnlc)
This study investigates the effectiveness of knowledge named entity recognition in Online Judges (OJs). OJs lack topic classification and are limited to problem IDs only; therefore a lot of time is consumed in finding programming problems, more specifically in knowledge entities. A Bidirectional Long Short-Term Memory (BiLSTM) with Conditional Random Fields (CRF) model is applied for the recognition of knowledge named entities in solution reports. For the test run, more than 2000 solution reports were crawled from the Online Judges and processed for model output. The stability of the model is also assessed with the higher F1 value. The results obtained through the proposed BiLSTM-CRF model are more effectual (F1: 98.96%) and efficient in lead time.
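For orientation, a minimal PyTorch sketch of the BiLSTM half of such a tagger is given below; in the paper, a CRF layer on top would replace the per-token scores with a jointly scored tag sequence. Sizes and the tag count are placeholders.

import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden=128, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden, num_tags)    # per-token tag scores

    def forward(self, token_ids):                      # (batch, seq_len)
        out, _ = self.lstm(self.embed(token_ids))
        return self.emit(out)                          # (batch, seq_len, num_tags)

tagger = BiLSTMTagger()
scores = tagger(torch.randint(0, 5000, (2, 12)))       # two 12-token sentences
print(scores.shape)                                    # torch.Size([2, 12, 9])

The CRF adds a learned tag-transition matrix and Viterbi decoding, which prevents invalid outputs such as an I- tag directly following an O tag.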
We present an approach to knowledge acquisition of process knowledge for the natural sciences. The work has been conducted within Project Halo, which is creating advanced knowledge authoring and question answering systems for the natural sciences. An analysis of AP®-level questions for Biology, Chemistry and Physics uncovered that process knowledge is the single most frequent type of knowledge required. Thus, we developed means to acquire process knowledge, to formally represent it, and to reason about it in order to answer novel questions about the domains.
All these tasks are supported by an abstract process meta-model. It provides the terminology for user-tailored process diagrams, which are automatically translated into executable FLogic code. The meta-model and the code generation are based on the notion of Problem Solving Methods (PSM), which represent an abstract formalization of the reasoning strategies needed for processes.
A NATURAL LANGUAGE REQUIREMENTS ENGINEERING APPROACH FOR MDA (IJCSEA Journal)
A software system for an information system can be developed following a model-driven paradigm, in particular MDA (Model Driven Architecture). In this way, models that represent the organizational work are used to produce models that represent the information system. Current software development methods are starting to provide guidelines for the construction of conceptual models, taking requirements models as input. In MDA, the CIM (Computation Independent Model) can be used to define the business process model. Though a completely automatic construction of the CIM is not possible, we have proposed in other papers the integration of some natural language requirements models and defined a strategy to derive a CIM from these models. In this paper, we present an improved version of our ATL transformation that implements a strategy to obtain a UML class diagram representing a preliminary CIM from requirements models, allowing traceability between the source and the target models.
May 2024 - Top 10 Cited Articles in Natural Language Computing (kevig)
Natural Language Processing is a programmed approach to analyzing text that is based on both a set of theories and a set of technologies. This forum aims to bring together researchers who have designed and built software that will analyze, understand, and generate the languages that humans use naturally to address computers.
Bio-Inspired Requirements Variability Modeling with Use Case (ijseajournal)
Background. The Feature Model (FM) is the most important technique used to manage variability across products in Software Product Lines (SPLs). Often, SPL requirements variability is handled using a variable use case model, which is a real challenge for current approaches: a large gap between their concepts and those of the real world leads to poor quality and weak FM support, and the variability does not cover all requirements modeling levels. Aims. This paper proposes a bio-inspired use case variability modeling methodology that deals with the above shortcomings.
Method. The methodology is carried out through variable business domain use case meta-modeling, variable applications family use case meta-modeling, and variable specific application use case generation.
Results. The methodology has led to integrated solutions to the above challenges: it decreases the gap between computing concepts and real-world ones. It supports use case variability modeling by introducing version and revision features and related relations. Variability is supported at three meta-levels, covering business domain, applications family, and specific application requirements.
Conclusion. A comparative evaluation with the closest recent works, based on some meaningful criteria in the domain, shows the great conceptual and practical value of the proposed methodology and leads to promising research perspectives.
A Hybrid Composite Features Based Sentence Level Sentiment Analyzer (IAESIJAI)
Current lexicon and machine learning based sentiment analysis approaches still suffer from a two-fold limitation. First, manual lexicon construction and machine training is time-consuming and error-prone. Second, prediction accuracy requires that sentences and their corresponding training text fall under the same domain. In this article, we experimentally evaluate four sentiment classifiers, namely support vector machines (SVM), Naive Bayes (NB), logistic regression (LR) and random forest (RF). We quantify the quality of each of these models using three real-world datasets that comprise 50,000 movie reviews, 10,662 sentences, and 300 generic movie reviews. Specifically, we study the impact of a variety of natural language processing (NLP) pipelines on the quality of the predicted sentiment orientations. Additionally, we measure the impact of incorporating lexical semantic knowledge captured by WordNet on expanding the original words in sentences. Findings demonstrate that utilizing different NLP pipelines and semantic relationships impacts the quality of the sentiment analyzers. In particular, results indicate that coupling lemmatization with knowledge-based n-gram features produces higher accuracy results. With this coupling, the accuracy of the SVM classifier improved to 90.43%, whereas it was 86.83%, 90.11%, and 86.20%, respectively, using the three other classifiers.
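The reported winning combination, lemmatization plus n-gram features feeding an SVM, is easy to sketch; NLTK's WordNetLemmatizer below merely stands in for whatever lemmatizer the authors used, and the two-document corpus is a toy placeholder.

# nltk.download("wordnet") may be needed on first run
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

lemmatizer = WordNetLemmatizer()

def lemmatize(text):
    return " ".join(lemmatizer.lemmatize(w) for w in text.lower().split())

docs = ["I loved this movie", "dreadful plot and worse acting"]
labels = [1, 0]                                   # 1 = positive, 0 = negative

pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
pipe.fit([lemmatize(d) for d in docs], labels)
print(pipe.predict([lemmatize("what a lovely film")]))

Swapping LinearSVC for LogisticRegression or a Naive Bayes classifier reproduces the kind of model comparison the article describes.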
Class Diagram Extraction from Textual Requirements Using NLP Techniques (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR (cscpconf)
The progressive development of Synthetic Aperture Radar (SAR) systems has diversified the exploitation of the images generated by these systems across different geoscience applications. Detection and monitoring of surface deformations produced by various phenomena have benefited from this evolution and have been carried out with interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelation of the interferometric pairs used strongly limits the precision of the results of these techniques. In this context, we propose in this work a methodological approach to surface deformation detection and analysis using differential interferograms, to show the limits of this technique according to noise quality and level. The detectability model is generated from deformation signatures, by simulating a linear fault merged into image pairs from the ERS1/ERS2 sensors acquired over a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFICATION (cscpconf)
A novel trajectory-guided, concatenative approach for synthesizing high-quality video from real image samples is proposed. The automated lip-reading system seeks the real image sample sequence stored in the library that is closest to the HMM-predicted trajectory for the input video data. The object trajectory is estimated by projecting the face patterns into a KDA feature space. For speaker face identification, the identity surface of a subject's face is synthesized from a small set of patterns sparsely covering the view sphere. A KDA algorithm is used to discriminate the lip-reading images; then the dimensionality of the fundamental lip feature vector is reduced using the 2D-DCT. The dimensionality of the mouth area set is further reduced with PCA to obtain the eigen-lips approach, as proposed by [33]. The subjective performance results of the cost function under the automatic lip-reading model did not illustrate superior performance of the method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJECTS (cscpconf)
Universities offer a software engineering capstone course to simulate a real-world working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of this paper is to report on our experience in moving from the Waterfall process to an Agile process in conducting the software engineering capstone project. We present the capstone course designs for both the Waterfall-driven and Agile-driven methodologies, highlighting the structure, deliverables and assessment plans. To evaluate the improvement, we conducted a survey in two different sections taught by two different instructors to evaluate students' experience in moving from the traditional Waterfall model to an Agile-like process. Twenty-eight students filled in the survey, which consisted of eight multiple-choice questions and an open-ended question to collect feedback from students. The survey results show that students were able to attain hands-on experience that simulates a real-world working environment. The results also show that the Agile approach helped students to produce an overall better design and avoid mistakes they had made in the initial design completed in the first phase of the capstone project. In addition, they were able to assess their team capabilities and training needs, and thus learn the required technologies earlier, which is reflected in the final product quality.
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES (cscpconf)
Using social media in education provides learners with an informal way to communicate. Informal communication tends to remove barriers and hence promotes student engagement. This paper presents our experience in using three different social media technologies in teaching a software project management course. We conducted surveys at the end of every semester to evaluate students' satisfaction and engagement. Results show that using social media enhances students' engagement and satisfaction. However, familiarity with the tool is an important factor for student satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC (cscpconf)
In the real-world computing environment, using a computer to answer questions has been a human dream since the beginning of the digital era. Question answering systems are referred to as intelligent systems that can provide responses to questions asked by the user, based on certain facts or rules stored in a knowledge base, and can generate answers to questions asked in natural language. One of the first main ideas of fuzzy logic was to work on the problem of computer understanding of natural language. This survey paper therefore provides an overview of what question answering is, its system architecture, and its possible relationships with and differences from fuzzy logic, as well as previous related research with respect to the approaches followed. At the end, the survey provides an analytical discussion of the proposed QA models, alone or combined with fuzzy logic, and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS (cscpconf)
Human beings generate different speech waveforms while speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words, which facilitates the preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses dynamic programming techniques for global alignment and shortest-distance measurement. The DPW algorithm can be used to enhance the pronunciation dictionaries of well-known languages like English or to build pronunciation dictionaries for lesser-known sparse languages. Precision measurement experiments show 88.9% accuracy.
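The dynamic-programming core that such a global alignment shares with classical edit distance can be sketched in a few lines; the paper's actual phone-distance cost model is not reproduced here, so the unit costs below are an assumption.

def phone_distance(a, b, sub_cost=1, indel_cost=1):
    # Global alignment cost between two phone sequences a and b.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        d[i][0] = i * indel_cost
    for j in range(1, len(b) + 1):
        d[0][j] = j * indel_cost
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i-1][j] + indel_cost,                       # deletion
                          d[i][j-1] + indel_cost,                       # insertion
                          d[i-1][j-1] + (a[i-1] != b[j-1]) * sub_cost)  # match/substitute
    return d[-1][-1]

# "tomato" pronounced in two accents, as phone sequences
print(phone_distance(["t","ah","m","ey","t","ow"],
                     ["t","ah","m","aa","t","ow"]))    # -> 1

Words whose pronunciations fall within a small distance of each other can then be merged or cross-listed in the dictionary.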
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS (cscpconf)
In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. The system assesses answers to subjective questions by finding a matching ratio for the keywords in the instructor's and the student's answers. The matching ratio is computed based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and a case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessment.
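A toy sketch of the matching-ratio idea: score a student answer by the fraction of instructor keywords it covers, with a hand-written synonym table standing in for the system's semantic keyword expansion.

def matching_ratio(instructor_keywords, student_answer, synonyms=None):
    synonyms = synonyms or {}
    words = set(student_answer.lower().split())
    hits = sum(1 for k in instructor_keywords
               if k in words or words & set(synonyms.get(k, [])))
    return hits / len(instructor_keywords)

keys = ["photosynthesis", "chlorophyll", "sunlight"]
answer = "plants use light energy captured by chlorophyll"
print(matching_ratio(keys, answer, synonyms={"sunlight": ["light"]}))  # -> 0.66...

The grading module would then map this ratio onto the question's mark scale.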
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC (cscpconf)
African Buffalo Optimization (ABO) is one of the most recent swarm intelligence based metaheuristics. The ABO algorithm is inspired by the buffalo's behavior and lifestyle. Unfortunately, the standard ABO algorithm is designed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and a probability model to generate binary solutions. In the second version (called LBABO) they use logical operators to operate on the binary solutions. Computational results on two knapsack problem (KP and MKP) instances show the effectiveness of the proposed algorithms and their ability to achieve good and promising solutions.
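The SBABO-style binarization step is simple enough to sketch directly: squash each continuous buffalo position component through a sigmoid and sample a bit from the resulting probability. The ABO update rule that produces the continuous positions is omitted here.

import math
import random

def binarize(position):
    # Map a continuous position vector to a binary solution, component-wise.
    return [1 if random.random() < 1 / (1 + math.exp(-v)) else 0
            for v in position]

random.seed(0)
print(binarize([2.5, -1.0, 0.0, 4.0]))   # mostly 1s where the component is large

LBABO instead applies logical operators (e.g. AND/OR/XOR between solutions) so that the search never leaves the binary space.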
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN (cscpconf)
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure and ensure a persistent presence on a compromised host. Amongst the various DDNS techniques, the Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and weighted scores of the domain names. The approach's feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmically generated domain names. Findings from this study show that domain names made up of English characters "a-z" achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
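A sketch of the frequency-analysis idea is given below: score a domain name by how typical its character distribution is for English text. The letter weights are ordinary English letter frequencies and the x10 scaling is our own assumption to land near the paper's scale; only the < 45 threshold comes from the abstract.

FREQ = dict(zip("etaoinshrdlcumwfgypbvkjxqz",
                [12.7, 9.1, 8.2, 7.5, 7.0, 6.7, 6.3, 6.1, 6.0, 4.3, 4.0, 2.8,
                 2.8, 2.4, 2.4, 2.2, 2.0, 2.0, 1.9, 1.5, 1.0, 0.8, 0.2,
                 0.15, 0.1, 0.07]))

def weighted_score(domain):
    name = domain.split(".")[0]
    letters = [c for c in name if c in FREQ]
    return 10 * sum(FREQ[c] for c in letters) / len(letters) if letters else 0.0

for d in ("google.com", "xjqzkwvp.com"):
    print(d, round(weighted_score(d), 1),
          "-> DGA" if weighted_score(d) < 45 else "-> ok")

Random DGA output draws rare letters (q, x, z, j) far more often than human-chosen names do, which drags its weighted score below the threshold.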
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C... (cscpconf)
The amount of piracy in streaming digital content in general, and in the music industry in particular, poses a real challenge to digital content owners. This paper presents a DRM solution for monetizing, tracking and controlling online streaming content across platforms for IP-enabled devices. The paper benefits from current advances in Blockchain and cryptocurrencies. Specifically, it presents the Global Music Asset Assurance (GoMAA) digital currency and the iMediaStreams Blockchain to enable the secure dissemination and tracking of streamed content. The proposed solution gives the data owner the ability to control the flow of information even after it has been released, by creating a secure, self-installed, cross-platform reader located in the digital content file header. The proposed system provides content owners options to manage their digital information (audio, video, speech, etc.), including tracking of the most consumed segments, once it is released. The system benefits from token distribution between the content owner (music bands), the content distributor (online radio stations) and the content consumer (fans) on the system blockchain.
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
This paper discusses the importance of verb suffix mapping in a discourse translation system. In discourse translation, the crucial step is anaphora resolution and generation. In anaphora resolution, cohesion links such as pronouns are identified between portions of text. These binders make the text cohesive by referring to nouns appearing in previous sentences or nouns appearing in sentences after them. In machine translation systems, to convert source language sentences into meaningful target language sentences, the verb suffixes should be changed as per the cohesion links identified. This step of the translation process is emphasized in the present paper. Specifically, the discussion is on how the verbs change according to the subjects and anaphors. To explain the concept, English is used as the source language (SL) and an Indian language, Telugu, is used as the target language (TL).
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
In this paper, based on the definition of the conformable fractional derivative, the functional variable method (FVM) is proposed to seek the exact traveling wave solutions of two higher-dimensional space-time fractional KdV-type equations in mathematical physics, namely the (3+1)-dimensional space-time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-dimensional space-time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony (GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which include kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave solutions, have many potential applications in mathematical physics and engineering. The simplicity and reliability of the proposed method are verified.
AUTOMATED PENETRATION TESTING: AN OVERVIEWcscpconf
The use of information technology resources is rapidly increasing in organizations, businesses, and even governments, which has given rise to various attacks and vulnerabilities in the field. This makes it essential to run frequent penetration tests (PT) of the environment to see what an attacker could gain and what the current environment's vulnerabilities are. This paper reviews some of the automated penetration testing techniques and presents their improvements over traditional manual approaches. To the best of our knowledge, it is the first study that considers both the concept of penetration testing and the standards in the area. This research compares manual and automated penetration testing and the main tools used in penetration testing, and additionally compares some methodologies used to build an automated penetration testing platform.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Since the mid-1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing attention from neuroscientists and computer scientists, since it opens a new window to explore the functional network of the human brain with relatively high resolution. The BOLD technique provides an almost accurate state of the brain. Past research shows that neurological diseases damage brain network interactions, protein-protein interactions and gene-gene interactions. A number of neurological research papers also analyse the relationships among the damaged parts. Computational methods, especially machine learning techniques, can capture such classifications. In this paper we use the OASIS fMRI dataset, comprising patients affected by Alzheimer's disease and normal subjects. After properly preprocessing the fMRI data, we use the processed data to build classifier models using SVM (Support Vector Machine), KNN (K-nearest neighbour) and Naïve Bayes. We also compare the accuracy of our proposed method with existing methods. In future work, we will explore other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...cscpconf
In order to treat and analyze real datasets, fuzzy association rules have been proposed. Several algorithms have been introduced to extract these rules. However, these algorithms suffer from problems of utility, redundancy and the large number of extracted fuzzy association rules. The expert is then confronted with this huge amount of fuzzy association rules, and the task of validation becomes tedious. In order to solve these problems, we propose a new validation method based on three steps: (i) we extract a generic base of non-redundant fuzzy association rules by applying the EFAR-PN algorithm, based on fuzzy formal concept analysis; (ii) we categorize the extracted rules into groups; and (iii) we evaluate the relevance of these rules using a structural equation model.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAcscpconf
In many applications of data mining, class imbalance is observed when examples in one class are overrepresented. Traditional classifiers result in poor accuracy on the minority class due to this imbalance. Further, the presence of within-class imbalance, where classes are composed of multiple sub-concepts with different numbers of examples, also affects the performance of a classifier. In this paper, we propose an oversampling technique that handles between-class and within-class imbalance simultaneously and also takes into consideration the generalization ability in data space. The proposed method is based on two steps: performing model-based clustering with respect to the classes to identify the sub-concepts, and then computing the separating hyperplane based on equal posterior probability between the classes. The proposed method is tested on 10 publicly available data sets, and the results show that it is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Data collection is an essential but manpower-intensive procedure in ecological research. An algorithm was developed by the author which incorporates two important computer vision techniques to automate data cataloging for butterfly measurements. Optical Character Recognition is used for character recognition and contour detection is used for image processing. Proper pre-processing is first done on the images to improve accuracy. Although there are limitations to Tesseract's detection of certain fonts, overall it can successfully identify words in basic fonts. Contour detection is an advanced technique that can be utilized to measure an image. Shapes and mathematical calculations are crucial in determining the precise location of the points on which to draw the body and forewing lines of the butterfly. Overall, 92% accuracy was achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...cscpconf
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of city services, including energy, transportation, health, and much more. They generate massive volumes of structured and unstructured data on a daily basis. In addition, social networks, such as Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart cities, with social network users acting as social sensors. These datasets are so large and complex that they are difficult to manage with conventional data management tools and methods. To become valuable, this massive amount of data, known as 'big data', needs to be processed and comprehended to hold the promise of supporting a broad range of urban and smart city functions, including, among others, transportation, water and energy consumption, pollution surveillance, and smart city governance. In this work, we investigate how social media analytics help to analyze smart city data collected from various social media sources, such as Twitter and Facebook, to detect various events taking place in a smart city, and to identify the importance of events and the concerns of citizens regarding some events. A case scenario analyses the opinions of users concerning traffic in the three largest cities in the UAE.
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGEcscpconf
The anonymity of social networks makes them attractive to hate speakers, who mask their criminal activities online, posing a challenge to the world and to Ethiopia in particular. With this ever-increasing volume of social media data, hate speech identification becomes a challenge that aggravates conflict between citizens of nations. The high rate of production has made it difficult to collect, store and analyze such big data using traditional detection methods. This paper proposes the application of Apache Spark in hate speech detection to reduce these challenges. The authors developed an Apache Spark based model to classify Amharic Facebook posts and comments into hate and not hate. They employed Random Forest and Naïve Bayes for learning, and Word2Vec and TF-IDF for feature selection. Tested by 10-fold cross-validation, the model based on Word2Vec embeddings performed best, with 79.83% accuracy. The proposed method achieves promising results using the unique features of Spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents part-of-speech tagging for Nepali text using a General Regression Neural Network (GRNN). The corpus is divided into two parts, viz. training and testing. The network is trained and validated on both the training and testing data. It is observed that 96.13% of words are correctly tagged on the training set, whereas 74.38% of words are tagged correctly on the testing set using GRNN. The result is compared with the traditional Viterbi algorithm based on a Hidden Markov Model. The Viterbi algorithm yields 97.2% and 40% classification accuracy on the training and testing data sets respectively. The GRNN-based POS tagger is thus more consistent than the traditional Viterbi decoding technique.
The rest of the paper is organized as follows. Section 2 presents a brief review of epidemiological modeling and explains why we need to translate the simulated model into a narrative language. Section 3 describes our model in SBML (Bio-PEPA) and how to translate it into narrative language. Section 4 describes the details of testing and evaluation. Section 5 summarizes the work done and offers some suggestions to improve the model.
2. FROM NARRATIVE LANGUAGE TO A MODEL
Developing and using a good epidemiological model remains, to this day, a very attractive idea. To achieve it, many researchers struggle between choosing the best tools and methods or undertaking thorough training in the field in question, and they often find themselves caught between the two. Others, however, give little importance to either; they prefer to save their energy and adopt a completely original technique, transforming the context expressed by an expert directly into a simulated model, as presented by Georgoulas and Guerriero in 2012 [3] for translating narrative language into a Bio-PEPA formal model. In 2007 and 2009, Guerriero et al. [4, 5] studied the translation of narrative language into a Beta-binders model and a bio-inspired process calculus. The authors assumed that it would be better to simplify communication between experts and developers by providing a simple interface that allows the expert to insert their information and the developer to manipulate only the code, without worrying too much about understanding everything. This approach has been called the passage from narrative language to a model. Although this work is regarded as a significant opening in the field of modeling, a question arises: what happens to existing models?
The following work is largely inspired by Guerriero's work [5], but we suggest doing the reverse: keeping the existing model and improving it, which means translating the Bio-PEPA model into a narrative language.
3. FROM MODEL TO A NARRATIVE LANGUAGE
In order to respond to the issues raised in the previous section, and based on the principle defined above, we propose an approach whose aim is to preserve existing models and to optimize them by allowing an incremental model implementation.
Through an extensive literature search focused on modeling methods with both analytical and decision support tools, as well as on the translation of models into other specific formats approaching narrative language, we were able to highlight Bio-PEPA [2, 6], a formal language based on process algebras, recommended for biochemical systems and perfectly suited to the epidemiological field. Beyond this definition, Bio-PEPA is equipped with an extension that allows any Bio-PEPA model to be translated into an XML format better known as SBML (Systems Biology Markup Language).
3.1. Bio-PEPA (Biological Performance Evaluation Process Algebra)
Bio-PEPA is a tool, method and language based on process algebra. Process algebras are mathematical formalisms used in the analysis of concurrent systems [2, 7, 8], which consist of a set of processes running in parallel that can be independent or share common tasks.
As defined in [6], a Bio-PEPA system is a 7-tuple (V, N, K, FR, Comp, P, Event), where:
• V is a set of locations;
• N is a set of auxiliary information;
• K is a set of parameters;
• FR is a set of functional rates;
• Comp is the set of species;
• P is the model component;
• Event is the set of events.
3.1.1. Characteristics of Bio-PEPA
The main features provided by Bio-PEPA are:
• It provides a formal abstraction of biochemical systems and, by extension, epidemiological systems.
• It allows expressing any kind of interaction law by means of functional rates.
• It allows expressing the evolution of species and their interactions.
• It defines a syntax and structural semantics based on a formal representation.
• It provides the ability to perform different types of analysis from the model (continuous-time Markov chains, stochastic simulation algorithms, differential equations).
3.1.2. Bio-PEPA Syntax
As defined by [9, 6], the Bio-PEPA syntax is described by:

S ::= (α, k) op S | S + S | C,   where op = ↓ | ↑ | ⊕ | ⊖ | ⊙
P ::= P ⋈ P | S(x)

where S describes the species (the different types of individuals), P is the model component describing the system and the interactions between species, and ⋈ denotes cooperation on shared actions. The term (α, k) op S expresses that the action α, performed by the species S, has rate k; "op" defines the role of S: ↓ reactant, ↑ product, ⊕ activator, ⊖ inhibitor, ⊙ generic modifier.
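To make this notation concrete, here is a minimal illustrative sketch (ours, not taken from [9, 6]) of a susceptible-infected contact written in this syntax; the species names anticipate the chickenpox model of Section 4:

S = (exposition, 1)↓S
I = (exposition, 1)↑I

Read together, these definitions state that each firing of the shared action exposition consumes one S (reactant) and produces one I (product); the model component P then composes S and I so that they cooperate on exposition.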
3.2. Systems Biology Markup Language (SBML)
SBML (the Systems Biology Markup Language) is a markup language based on XML (the eXtensible Markup Language). In essence, an XML document is divided into hierarchically structured elements starting from a root element. Syntactically, the elements of an XML document are marked in the document itself by opening and closing pairs of tags; each element consists of a name that specifies its type, attributes, and content (elements or text).
SBML is a set of construction elements specific to systems biology, defined in an XML schema, and it has been adapted to epidemiological models. The SBML language is divided into hierarchically structured elements forming the syntax tree of the language, as an XML schema.
As defined in Section 3.1, an epidemiological model is defined in Bio-PEPA by a set of compartments, species and reactions described by rates and parameters. SBML does the same by using tags and attributes [10, 14]. Figure 1 shows the general organization of SBML tags, which are described in the following [11]:
Figure 1. General organization of the SBML language.
• model: an SBML model definition consists of lists of SBML components located inside the tags <model id="My_Model"> … </model>.
• listOfFunctionDefinitions: the mathematical functions that can be used in the other parts of the model are defined in this section.
• listOfUnitDefinitions: these units are used to explicitly specify constants, initial conditions, the symbols in formulas and the results of formulas.
• listOfCompartments: a compartment is an enclosed space in which the species are located.
• listOfSpecies: specifies the different entities in the model regardless of their nature; a species type can be declared in listOfSpeciesTypes.
• listOfReactions: any process whereby a species is transferred from one compartment to another.
We specify that the representation and semantics of mathematical expressions are defined in SBML using MathML. The illustrative skeleton below shows how these tags fit together.
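As an illustration only (the attribute values are hypothetical, and the identifiers Age1, S, I and Exposition anticipate the chickenpox model of Section 4), a minimal SBML document using the tags above could look like this:

<sbml xmlns="http://www.sbml.org/sbml/level2" level="2" version="1">
  <model id="My_Model">
    <listOfCompartments>
      <compartment id="Age1" size="1"/>
    </listOfCompartments>
    <listOfSpecies>
      <species id="S" compartment="Age1" initialAmount="990"/>
      <species id="I" compartment="Age1" initialAmount="10"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="Exposition">
        <listOfReactants>
          <speciesReference species="S"/>
        </listOfReactants>
        <listOfProducts>
          <speciesReference species="I"/>
        </listOfProducts>
      </reaction>
    </listOfReactions>
  </model>
</sbml>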
3.3. Relation of Bio-PEPA to SBML
The principal notions relating Bio-PEPA to SBML are summarized in Table 1 (this table was directly extracted from [2]).
Table 1: Summary of the mapping from SBML to Bio-PEPA (taken from [2])

SBML element → Corresponding Bio-PEPA component
• List of Compartments → Bio-PEPA compartments.
• List of Species → Species definitions (name, initial concentration and compartment); step size and level default to 1; also used in species sequential component definitions.
• List of Parameters → Bio-PEPA parameter list; local parameters are renamed to include the reaction name.
• List of Reactions → Species component definitions and model component definition.
• Kinetic laws → Bio-PEPA functional rates.
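To make the last row concrete, here is an illustrative sketch (ours, not the exporter's exact output) of how a Bio-PEPA functional rate from Section 4 would appear as an SBML kinetic law, with the expression encoded in MathML:

Bio-PEPA:   Exposition = lambda * S * I;

SBML:
<kineticLaw>
  <math xmlns="http://www.w3.org/1998/Math/MathML">
    <apply><times/><ci> lambda </ci><ci> S </ci><ci> I </ci></apply>
  </math>
</kineticLaw>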
4. IMPLEMENTATION
To implement our approach, we resumed work we had already started in [6], which reproduced the spread and vaccination protocol of chickenpox in Bio-PEPA, as shown in Figure 2.
Figure 2. Model structure (taken from [12]).
The overall scheme of our approach is defined by three main steps:
• Formulation of the epidemic model in Bio-PEPA: definition of species and reactions.
• Export of the SBML file.
• Representation in narrative language: analysis of the SBML file, display of a detailed report, and validation by the expert.
4.1. Description of model structure
Our approach, as structured, divides our work into two main stages. The first is to develop a model with Bio-PEPA (formulation of the epidemic model in Bio-PEPA), work that has already been done [6] and that demonstrated the importance of using such a tool. The second stage (exporting the SBML file from Bio-PEPA and representing the SBML text in narrative language) consists in developing a module that translates the Bio-PEPA code into a language understood by the expert, who can then easily check whether the contents of the model are adequate for the example and thus validate it.
4.1.1. Chickenpox model in Bio-PEPA
To better understand the modeling process, we take up and explain in this section the most important parts of the Bio-PEPA code of the chickenpox model [12]. (For clarity of the document, we list only a few parts of the model.)
1. Locations: to express the seven age groups of the model, we represent them as compartments.
location Age1 in world : size = sizeAge, type = compartment;
………
location Age7 in world : size = sizeAge, type = compartment;
2. Functional rates: describe the interaction laws between compartments.
Exposition = λ · S · I; describes the contact between susceptible (S) and infected (I) individuals with rate λ.
…….
LostVaccin = W · VP; defines the rate of immunity loss (W) for those protected by vaccination (VP).
3. The species: the system entities, expressed by operations describing their evolution.
S = (Exposition,1)↓S + (Vaccination_1,1)↓S + (Vaccination_2,1)↓S; explains what happens to S when the Exposition, Vaccination_1 or Vaccination_2 function executes (in each case S acts as a reactant).
Some lines of Bio-PEPA code are shown in Figure 3. We note that even if the Bio-PEPA language is simple and easy for the developer, it remains partly ambiguous to the epidemiologist, who cannot verify the validity of the information represented by the developer because he cannot understand the Bio-PEPA code. We can extract two important points from this figure: on the one hand, the representation of the chickenpox model in Bio-PEPA (right of the figure); on the other hand, the results of a simulation graph summarizing the status of the various species (left of the figure).
Figure 3. Global view of the Bio-PEPA model.
4.1.2. Exporting SBML file from Bio-PEPA
Bio-PEPA provides the ability to export the model as an SBML file, as shown in Figure 4. The resulting text describes all the tags and attributes presented in Section 3.2, corresponding to our chickenpox model. It should be remembered that to study an epidemic, we must take into consideration the environment ("space"), time, and various other functions. SBML can perfectly express each section describing the elements defined in Bio-PEPA.
4.1.3. The Chickenpox model in narrative language
To work with SBML, we performed a literature search on tools for analyzing and interpreting this type of descriptor; this search revealed the JDOM tool [13]. A parsing sketch in Java follows the list below.
The main features of DOM are:
• The DOM model (unlike, on this point, another famous API: SAX) is a specification that has its origins in the W3C consortium.
• The DOM model is not only a multi-platform specification, but also a multi-language one: Java, JavaScript, etc.
• DOM presents documents as a hierarchy of objects, from which more specialized interfaces are implemented: Document, Element, Attribute, Text, etc. With this model, we can treat all DOM components either by their generic type, Node, or by their specific type (element, attribute); many navigation methods allow moving through the tree without having to worry about the specific type of the component treated.
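As a minimal sketch of this stage (ours, not the exact code of our application; the file name varicella-sbml.xml and the produced sentences are hypothetical, and JDOM 2 is assumed), the following Java program loads an exported SBML file with JDOM and renders its species and reactions as narrative sentences:

import java.io.File;
import org.jdom2.Document;
import org.jdom2.Element;
import org.jdom2.Namespace;
import org.jdom2.input.SAXBuilder;

public class SbmlToNarrative {
    public static void main(String[] args) throws Exception {
        // Load the SBML file exported from Bio-PEPA.
        Document doc = new SAXBuilder().build(new File("varicella-sbml.xml"));
        Element root = doc.getRootElement();          // <sbml>
        Namespace ns = root.getNamespace();
        Element model = root.getChild("model", ns);

        // One narrative sentence per species.
        Element species = model.getChild("listOfSpecies", ns);
        if (species != null) {
            for (Element s : species.getChildren("species", ns)) {
                System.out.printf("The species '%s' lives in compartment '%s' (initial amount: %s).%n",
                        s.getAttributeValue("id"),
                        s.getAttributeValue("compartment"),
                        s.getAttributeValue("initialAmount"));
            }
        }

        // One narrative sentence per reaction: reactants become products.
        Element reactions = model.getChild("listOfReactions", ns);
        if (reactions != null) {
            for (Element r : reactions.getChildren("reaction", ns)) {
                System.out.printf("The reaction '%s' transforms %s into %s.%n",
                        r.getAttributeValue("id"),
                        names(r.getChild("listOfReactants", ns), ns),
                        names(r.getChild("listOfProducts", ns), ns));
            }
        }
    }

    // Collect the species referenced in a listOfReactants/listOfProducts element.
    private static String names(Element list, Namespace ns) {
        if (list == null) return "nothing";
        StringBuilder sb = new StringBuilder();
        for (Element ref : list.getChildren("speciesReference", ns)) {
            if (sb.length() > 0) sb.append(", ");
            sb.append(ref.getAttributeValue("species"));
        }
        return sb.length() > 0 ? sb.toString() : "nothing";
    }
}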
Figure 5 (b) From SBML code to narrative language.
Figures 5 (a, b) show the interface of our application, based on the JDOM model and thus gathering the steps defined above. The white area in the figure corresponds to the loading of the SBML file, while the black area corresponds to the translation and analysis of the SBML into a narrative language understandable by the expert; in this way, the expert has no difficulty in verifying the validity of the model. The user-friendly interface allows him to navigate the various components of the model (species, interaction functions, locations, etc.).
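By way of illustration only (the exact wording produced by our interface may differ), the narrative report reads along these lines for the chickenpox model:

The species 'S' lives in compartment 'Age1' (initial amount: 990).
The reaction 'Exposition' transforms S into I.

Such sentences are what the epidemiologist checks against his knowledge of the epidemic, without ever reading Bio-PEPA or SBML code.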
To validate our application, we made a change in the initial (Bio-PEPA) code, intentionally introducing an error into our model. As Figure 6 clearly shows, after regeneration the expert could detect the species and reactions that were missing, and could therefore easily report them to us. (The red frame marks the error, caused by the number of missing species.)
Figure 6. Error detection after translating the model into narrative language.
5. CONCLUSIONS
Modeling and simulation are very useful for understanding and predicting the dynamics of various biological phenomena. The Bio-PEPA approach seems to be an interesting and powerful way to address such problems. Through its various features, it allows easy development of the computer model and gives biologists a transparent path between the real system and the built model, which helps achieve a faithful representation of the phenomenon studied. Nevertheless, when a new event occurs that has been badly treated by the developer, and therefore omitted, correcting the model is a tedious task for both parties. This is the reason why we have introduced a new module (interface), in which the expert can easily detect such an omission and bring it back to the developer, who can discern the error and quickly locate it in the Bio-PEPA model.
As a perspective for strengthening this work, it could be attached to the work mentioned in Section 2, thus drifting toward a cyclical pattern that would not even require the presence of the developer; on reflection, however, what would become of the expert, faced with such a multitude of information? After a brief literature review, integrating this work with the world of data mining appears to be a much better idea to bring to fruition.
REFERENCES
[1] Mansoul, A., & Atmani, B. (2009). Fouille de données biologiques : vers une représentation booléenne des règles d'association. In Proceedings of CIIA.
[2] Ciocchetta, F., & Ellavarason, K. (2008). An Automatic Mapping from the Systems Biology Markup Language to the Bio-PEPA Process Algebra.
[3] Georgoulas, A., & Guerriero, M. L. (2012). A Software Interface between the Narrative Language and Bio-PEPA, pp. 1-9.
[4] Guerriero, M. L., Dudka, A., Underhill-Day, N., Heath, J. K., & Priami, C. (2009). Narrative-Based Computational Modelling of the Gp130/JAK/STAT Signalling Pathway. BMC Systems Biology, 3, 40.
[5] Guerriero, M. L., Heath, J. K., & Priami, C. (2007). An Automated Translation from a Narrative Language for Biological Modelling into Process Algebra. In Proceedings of Computational Methods in Systems Biology (CMSB'07), LNCS 4695, pp. 136-151.
[6] Hamami, D., & Atmani, B. (2012). Modeling the Effect of Vaccination on Varicella Using Bio-PEPA. In Proceedings of IASTED, 783-077. ISBN 978-0-88986-926-4.
[7] Milner, R. (1999). Communicating and Mobile Systems: the π-calculus. Cambridge University Press.
[8] Baeten, J. C. M. (2005). A Brief History of Process Algebras. Theoretical Computer Science, 335(2-3), 131-146.
[9] Ciocchetta, F., & Guerriero, M. L. (2009). Modelling Biological Compartments in Bio-PEPA. ENTCS, 227, 77-95.
[10] Hucka, M., Finney, A., Hoops, S., Keating, S., & Le Novère, N. (2007). Systems Biology Markup Language (SBML) Level 2: Structures and Facilities for Model Definitions. Systems Biology Markup Language, Release 2.
[11] Beurton-Aimar, M. (2007). Langage de modélisation des réseaux biochimiques. ECRIN-Biologie systémique, Chap. 07, pp. 1-16.
[12] Bonmarin, I., Santa-Olalla, P., & Lévy-Bruhl, D. (2008). Modélisation de l'impact de la vaccination sur l'épidémiologie de la varicelle et du zona. Revue d'Épidémiologie et de Santé Publique, 56, 323-331.
[13] Hunter, J. (2002). JDOM Makes XML Easy. Sun's 2002 Worldwide Java Developer Conference.
[14] Hucka, M., Finney, A., Bornstein, B. J., Keating, S. M., Shapiro, B. E., Matthews, J., Kovitz, B. L., Schilstra, M. J., Funahashi, A., Doyle, J. C., & Kitano, H. (2004). Evolving a Lingua Franca and Associated Software Infrastructure for Computational Systems Biology: The Systems Biology Markup Language (SBML) Project. Systems Biology, 1, 41-53.