The use of natural language models such as LEL (Lexicon Extended Language) is of great interest in Requirements Engineering. However, LEL, even though it is derived from the Universe of Discourse (UofD), does not provide further detail on the concepts it describes. We believe that the elements inherent in the conceptual level of a system are already defined in the Universe of Discourse. Therefore, in this work we propose a more elaborate natural-language model called eLEL, which describes the concepts of a domain in more detail than the conventional LEL. We also propose a process for modeling a domain using an eLEL model.
Proposal of an Ontology Applied to Technical Debt on PL/SQL Development (Jorge Barreto)
The document proposes an ontology for technical debt in PL/SQL development. It discusses relevant concepts like ontologies, technical debt, and PL/SQL. An initial model is developed in Protégé with five types of technical debt - documentation, requirements, tests, design, and code. The model could be expanded to include more debt types and relationships. The ontology provides a standardized vocabulary to describe technical debt for PL/SQL developers.
Possibility of interdisciplinary research software engineering and natural lan... (Nakul Sharma)
This document discusses the possibility of interdisciplinary research between software engineering and natural language processing. It provides a literature review of research papers from 2003 to 2014 related to applying tools and techniques from one field to the other. Some key areas discussed include generating UML diagrams from natural language text, developing ontologies to clarify meanings, and potential issues with joint research like determining complexity of sentences. The document proposes a flowchart for how artifacts could be analyzed using tasks from either field to enable interdisciplinary research.
The document contains information about entity-relationship (ER) modeling including:
1. It discusses the key components of an ER model including entities, attributes, relationships, and cardinality.
2. It provides examples of one-to-one, one-to-many, and many-to-many relationships between entities.
3. It describes the different types of attributes such as simple, composite, single-valued, multi-valued, and derived attributes.
The document discusses entity-relationship (E-R) modeling concepts for database design. It defines entities, attributes, relationship sets, and keys. It describes how E-R diagrams visually represent entities, relationships, and attributes using shapes and connections. It also covers modeling concepts like cardinality constraints, participation constraints, and the choice between binary and non-binary relationships.
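The cardinalities described above map directly onto SQL schemas. Below is a minimal, hypothetical sketch using Python's built-in sqlite3 module: a foreign key realizes a one-to-many relationship and a junction table realizes a many-to-many one (all table and column names are invented for illustration).

```python
import sqlite3

# Hypothetical schema: one-to-many (department -> employee) via a
# foreign key, many-to-many (employee <-> project) via a junction table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE employee (
    id INTEGER PRIMARY KEY,
    name TEXT,
    dept_id INTEGER REFERENCES department(id)  -- one-to-many
);
CREATE TABLE project (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE works_on (                        -- many-to-many junction
    emp_id INTEGER REFERENCES employee(id),
    proj_id INTEGER REFERENCES project(id),
    PRIMARY KEY (emp_id, proj_id)
);
""")
conn.execute("INSERT INTO department VALUES (1, 'R&D')")
conn.execute("INSERT INTO employee VALUES (1, 'Ada', 1)")
conn.execute("INSERT INTO employee VALUES (2, 'Lin', 1)")
count = conn.execute(
    "SELECT COUNT(*) FROM employee WHERE dept_id = 1").fetchone()[0]
print(count)  # two employees on the 'many' side of one department
```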
Integrating natural language processing and software engineering (Nakul Sharma)
This document summarizes research on integrating natural language processing and software engineering. It provides a literature review of works that have used natural language text as input to generate software engineering artifacts like UML diagrams, test cases, and process models. The paper also discusses how techniques from natural language processing can be applied to different phases of the software development life cycle and how natural language understanding can help automate software engineering tasks.
COQUEL: A CONCEPTUAL QUERY LANGUAGE BASED ON THE ENTITY-RELATIONSHIP MODEL (csandit)
As more and more collections of data become available on the Internet, end users who are not experts in Computer Science demand easy solutions for retrieving data from these collections. A good solution for these users is conceptual query languages, which facilitate the composition of queries by means of a graphical interface. In this paper, we present (1) CoQueL, a conceptual query language specified on E/R models, and (2) a translation architecture for translating CoQueL queries into languages such as XQuery or SQL.
Intelligent query converter: a domain independent interface for conversion (IAEME Publication)
This document describes an "Intelligent Query Converter" (IQC) system that converts natural language queries in English to SQL queries. IQC uses semantic matching techniques and WordNet to match queries to a database schema without requiring domain-specific configuration. The system classifies queries based on the presence of value keywords and conjunctive clauses. Experiments show IQC correctly answered 64.5-87.9% of queries for different databases, outperforming Microsoft's English Query system, which answered correctly 29.0-51.8% of the time. IQC provides a domain-independent natural language interface to databases using semantic analysis techniques.
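The schema-matching idea can be illustrated with a deliberately tiny sketch. This is not the actual IQC algorithm (which relies on WordNet and semantic matching); it is a toy showing how question tokens might be matched against a hypothetical schema's table and column names.

```python
# Toy NL-to-SQL matching over an invented schema (illustration only).
SCHEMA = {"students": ["name", "age", "city"],
          "courses": ["title", "credits"]}

def nl_to_sql(question: str) -> str:
    tokens = [t.strip("?,.").lower() for t in question.split()]
    for table, columns in SCHEMA.items():
        # match the table name (or its singular form) in the question
        if table in tokens or table.rstrip("s") in tokens:
            selected = [c for c in columns if c in tokens] or ["*"]
            return f"SELECT {', '.join(selected)} FROM {table}"
    return ""

sql = nl_to_sql("List the name and city of all students")
print(sql)  # SELECT name, city FROM students
```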
COMPREHENSIVE ANALYSIS OF NATURAL LANGUAGE PROCESSING TECHNIQUE (Journal For Research)
Natural Language Processing (NLP) techniques are among the most widely used techniques in the field of computer applications, and the field has grown vast and advanced. Language is the means of communication among humans, and in the present scenario, when almost everything is computerized, communication between computers and humans has become a necessity. To fulfill this necessity, NLP emerged as the means of interaction that narrows the gap between machines (computers) and humans. It evolved from the study of linguistics and was tested through the Turing test, though early systems were limited to small sets of data. Later, various algorithms were developed along with the concepts of AI (Artificial Intelligence) for the successful execution of NLP. In this paper, the main emphasis is on the different NLP techniques developed so far, their applications, and a comparison of those techniques on different parameters.
Semantic based automatic question generation using artificial immune system (Alexander Decker)
The document describes a system that uses artificial immune systems and natural language processing techniques like semantic role labeling and named entity recognition to automatically generate questions from text. It introduces a model that applies these techniques to extract semantic patterns from sentences, trains a classifier using artificial immune systems to classify question types, and then generates questions by matching patterns. The system was tested on sentences from various sources and showed promising results, correctly determining question types 95% of the time and generating matching questions 87% of the time.
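A heavily simplified sketch of the pattern-matching step: the real system classifies question types with an artificial immune system, but the core idea of mapping an entity's type to a question word can be shown with a hand-written rule table (the entity types and the sentence below are invented examples).

```python
# Map an entity type found in the sentence to a question word,
# then rewrite the sentence into a question (toy illustration).
PATTERNS = {"PERSON": "Who", "LOCATION": "Where", "DATE": "When"}

def generate_question(sentence: str, entity: str, entity_type: str) -> str:
    wh = PATTERNS.get(entity_type, "What")
    # Replace the entity mention with the question word.
    return sentence.replace(entity, wh).rstrip(".") + "?"

q = generate_question("Marie Curie won the Nobel Prize in 1903.",
                      "Marie Curie", "PERSON")
print(q)  # Who won the Nobel Prize in 1903?
```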
IRJET - Voice based Natural Language Query Processing (IRJET Journal)
This document describes a voice-based natural language query processing system that allows non-expert users to interact with a database using natural language queries. The system takes a user's spoken query as input, converts it to text using speech recognition, analyzes the text to generate a SQL query, executes the SQL query against the database, and displays the results in a table. The system addresses challenges like ambiguity through techniques such as tokenization, lexical analysis, syntactic analysis, and semantic analysis to map the natural language query to a valid SQL query.
Microposts Ontology Construction Via Concept Extraction (dannyijwest)
The social networking website Facebook offers its users a feature called "status updates" (or just "status"), which allows users to create microposts directed to all their contacts, or a subset thereof. Readers can respond to microposts or click a "Like" button to show their appreciation for a certain micropost. Adding semantic meaning, in the sense of unambiguous intended ideas, to such microposts is a step toward the semantic web, which begins with adding semantic annotations to web resources. Ontologies are used to specify the meaning of annotations: an ontology provides a vocabulary for representing and communicating knowledge about some topic, together with a set of semantic relationships that hold among the terms in that vocabulary. To increase the efficiency of ontology-based applications, there is a need for a mechanism that reduces the manual work of developing ontologies. In this paper, we present a method for constructing a microposts ontology by extracting meaningful knowledge from microposts shared on social platforms. The process involves several analysis steps: extraction of keywords and named entities and their matching to ontological concepts.
An important problem in Thai word segmentation is the sentential noun phrase. Existing studies try to minimize the problem, but no research has addressed it directly. This study investigates an approach to the problem using conditional random fields, a probabilistic model for segmenting and labeling sequence data. The results show that our technique correctly detected more than 78.61% of noun phrases.
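Conditional random fields pick the best label sequence with dynamic programming (Viterbi decoding). The sketch below decodes a toy sequence with hand-set scores; a trained CRF would learn these emission and transition weights from data, but the decoding idea is the same.

```python
# Minimal Viterbi decoder over toy, hand-set scores (not a trained CRF).
def viterbi(tokens, labels, emit, trans):
    # best[i][y] = best score of any labeling of tokens[:i+1] ending in y
    best = [{y: emit[(tokens[0], y)] for y in labels}]
    back = [{}]
    for i in range(1, len(tokens)):
        best.append({})
        back.append({})
        for y in labels:
            prev = max(labels, key=lambda p: best[i-1][p] + trans[(p, y)])
            best[i][y] = best[i-1][prev] + trans[(prev, y)] + emit[(tokens[i], y)]
            back[i][y] = prev
    # trace the best path backwards
    y = max(labels, key=lambda l: best[-1][l])
    path = [y]
    for i in range(len(tokens) - 1, 0, -1):
        y = back[i][y]
        path.append(y)
    return list(reversed(path))

labels = ["B", "I"]                      # begin / inside a noun phrase
tokens = ["bangkok", "city", "visit"]
emit = {("bangkok", "B"): 2, ("bangkok", "I"): 0,
        ("city", "B"): 0, ("city", "I"): 2,
        ("visit", "B"): 1, ("visit", "I"): 0}
trans = {("B", "B"): 0, ("B", "I"): 1, ("I", "B"): 1, ("I", "I"): 0}
print(viterbi(tokens, labels, emit, trans))  # ['B', 'I', 'B']
```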
The document discusses different types of expressions in programming, including arithmetic, string, and logical expressions. It explains that expressions manipulate data and are composed of values, operators, and functions. Arithmetic expressions perform math operations, string expressions manipulate text, and logical expressions select actions by evaluating to true or false. The document also provides examples of different operators and functions used in each type of expression.
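The three expression kinds are easy to show side by side (values are arbitrary):

```python
# Arithmetic, string, and logical expressions in Python.
price, qty = 4.5, 3

arithmetic = price * qty + 1.5         # arithmetic: math on numbers
string = "total: " + str(arithmetic)   # string: building text
logical = arithmetic > 10 and qty > 0  # logical: evaluates to True/False

print(arithmetic, string, logical)  # 15.0 total: 15.0 True
```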
Resolving the semantics of Vietnamese questions in the VNewsQA/ICT system (ijaia)
Recently we have built a VNewsQA/ICT system which can read the titles of Vietnamese news in the domain of information and communication technology, then process and use them to answer the Vietnamese questions of users. The architecture of the VNewsQA/ICT system has two main components: 1) the first component treats simple Vietnamese sentences as the natural language textual data used to answer the user's questions; 2) the second component resolves the semantics of the Vietnamese questions that query the system. This paper introduces a semantic representation model and a processing model to resolve the Vietnamese questions in the VNewsQA/ICT system. These semantic representation and processing models are able to resolve the semantics of the eight Vietnamese question classes used in our system.
Suitability of naïve bayesian methods for paragraph level text classification... (ijaia)
This document discusses using Naive Bayesian methods for paragraph-level text classification in the Kannada language. It evaluates the performance of the Naive Bayesian and Naive Bayesian Multinomial models on a corpus of 1791 paragraphs from four categories (Commerce, Social Sciences, Natural Sciences, Aesthetics). Dimensionality reduction techniques like removing stop words and words with low term frequency are applied before classification. The results show that the Naive Bayesian Multinomial model outperforms the simple Naive Bayesian approach for paragraph classification in Kannada.
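A minimal multinomial Naive Bayes with Laplace smoothing can be written in a few lines; the toy English data below merely stands in for the Kannada corpus evaluated in the paper.

```python
import math
from collections import Counter, defaultdict

# Minimal multinomial Naive Bayes (illustrative toy data).
def train(docs):  # docs: list of (tokens, label)
    word_counts, class_counts = defaultdict(Counter), Counter()
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def predict(tokens, word_counts, class_counts, vocab):
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label, n in class_counts.items():
        score = math.log(n / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:  # Laplace-smoothed per-word log likelihoods
            score += math.log((word_counts[label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [(["market", "trade", "profit"], "commerce"),
        (["atoms", "energy", "cells"], "science")]
model = train(docs)
print(predict(["profit", "market"], *model))  # commerce
```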
The document provides an overview of object-oriented concepts. It discusses that software development is increasingly relying on object-oriented paradigms due to benefits like improved modeling of real-world problems and reusability. Key concepts discussed include classes and objects, encapsulation, inheritance, polymorphism, and object composition. Various object-oriented methodologies like those proposed by Coad/Yourdon, Booch, Rumbaugh, and Jacobson are also summarized.
Author Credits - Maaz Anwar Nomani
A Semantic Role Labeler (SRL) is a semantic parser that can automatically identify and then classify the arguments of a verb in a natural language sentence, here for Hindi and Urdu. For example, in the sentence "Sara won the competition because of her hard work.", 'won' is the main verb and it has three arguments: 'Sara' (Agent), 'hard work' (Reason) and 'competition' (Theme). The problem an SRL solves is how to make a machine identify and then classify the arguments of a verb in a natural language sentence.
Since there are two sub-problems here (identification and classification), our SRL has a pipeline architecture: a binary classifier (Logistic Regression) is first trained to identify whether a word is an argument of a verb in a sentence (yes or no), and a multi-class classifier (SVM with a linear kernel) is then trained to classify the arguments identified by the binary classifier into one of 20 classes. These 20 classes are the various notions present in a natural language sentence (e.g. Agent, Theme, Location, Time, Purpose, Reason, Cause). These 'notions' are called PropBank labels, semantic labels drawn from a Proposition Bank, which is a collection of hand-annotated sentences.
In essence, SRL facilitates semantic parsing, which is essentially the investigation of identifying WHO did WHAT to WHOM, WHERE, HOW, WHY and WHEN in a natural language sentence.
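The identify-then-classify pipeline can be caricatured with hand-written rules standing in for the trained Logistic Regression and SVM stages (the cue table and sentence below are invented):

```python
# Toy two-stage SRL pipeline: stage 1 identifies argument candidates,
# stage 2 assigns semantic labels from hand-written cue rules.
ROLE_CUES = {"because": "Reason", "in": "Location", "with": "Instrument"}
STOP = {"the", "a", "of"}

def identify(words, verb):
    # stage 1 (stand-in for the binary classifier): keep content words
    return [w for w in words
            if w != verb and w not in STOP and w not in ROLE_CUES]

def classify(words, candidates):
    # stage 2 (stand-in for the multi-class classifier): label each
    # argument from the cue word preceding it
    roles = {}
    for c in candidates:
        i = words.index(c)
        cue = words[i - 1] if i > 0 else ""
        roles[c] = ROLE_CUES.get(cue, "Agent" if i == 0 else "Theme")
    return roles

words = "Sara won the competition with dedication".split()
roles = classify(words, identify(words, "won"))
print(roles)  # {'Sara': 'Agent', 'competition': 'Theme', 'dedication': 'Instrument'}
```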
OOAD - UML - Class and Object Diagrams - Lab (Victer Paul)
The document discusses class diagrams and object diagrams. It explains that a class diagram shows the structure of a system by displaying classes, interfaces, and their relationships, while an object diagram shows specific instances of classes at a point in time. The document provides steps for constructing class diagrams, such as identifying classes and relationships. It also discusses how object diagrams are created based on class diagrams by instantiating classes and depicting their relationships.
This document provides an introduction to object-oriented analysis and design (OOAD) using the Unified Process as an example iterative development process. It discusses OO concepts like objects, classes, attributes, methods, encapsulation, inheritance, polymorphism, and relationships. It also defines analysis as investigating requirements while design emphasizes a conceptual solution that fulfills requirements. Object-oriented analysis focuses on identifying real-world concepts as objects, while object-oriented design defines software objects and how they will collaborate.
This document discusses classes and objects in object-oriented analysis and design. It defines objects as having state, behavior, and identity, with similar objects defined by common classes. Classes represent groups of objects with similar behavior and structures. The relationships between classes and objects are explored, including generalization, aggregation, and association. Identification of classes and objects in analysis is discussed through various approaches like use case analysis and CRC cards. The importance of proper classification and key abstractions and mechanisms are also covered.
A study on the approaches of developing a named entity recognition tool (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
This document summarizes structural modeling techniques used to identify classes in object-oriented analysis and design. It discusses three common approaches: the noun-phrase approach, which identifies classes from nouns in requirements; the common class pattern approach, which leverages known common classes; and the use case driven approach, which analyzes use cases. It also describes how to represent classes, relationships, interfaces and packages using class and object diagrams. Finally, it compares entity-relationship diagrams and class diagrams, noting class diagrams describe system structure without persistence details.
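The noun-phrase approach can be caricatured in a few lines: capitalized nouns in a requirements sentence become candidate classes (a real analysis would use a POS tagger and filter candidates by relevance; the requirement text is invented).

```python
import re

# Crude stand-in for the noun-phrase approach: collect capitalized
# words from a requirements sentence as candidate class names.
requirement = ("A Customer places an Order; each Order contains "
               "one or more Items and is paid by an Invoice.")

candidates = sorted(set(re.findall(r"\b[A-Z][a-z]+\b", requirement)))
print(candidates)  # ['Customer', 'Invoice', 'Items', 'Order']
```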
Imran Sarwar Bajwa, [2010], "Context Based Meaning Extraction by Means of Markov Logic", in International Journal of Computer Theory and Engineering - (IJCTE) 2(1) pp:35-38, February 2010
INFERENCE BASED INTERPRETATION OF KEYWORD QUERIES FOR OWL ONTOLOGY (IJwest)
This paper presents a model for interpreting keyword queries over OWL ontologies that considers OWL axioms and restrictions to provide more precise answers to user queries. The model maps keywords from user queries to ontological elements and identifies phrases to select an appropriate SPARQL query template. The template is populated using inferred results from applying OWL restrictions to generate a formal SPARQL query for execution over the ontology. By addressing OWL features, the model aims to leverage the full capabilities of OWL knowledge bases to better understand users' information needs.
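The keyword-to-template step might look like the following sketch, with a hypothetical vocabulary and a single fixed template; the paper's model additionally applies OWL axioms and inference when filling templates.

```python
# Hypothetical keyword-to-SPARQL templating (vocabulary is invented).
MAPPINGS = {"professor": "ex:Professor", "teaches": "ex:teaches",
            "course": "ex:Course"}
TEMPLATE = ("SELECT ?x WHERE {{ ?x a {cls} . ?x {prop} ?y . "
            "?y a {obj} }}")

def keywords_to_sparql(keywords):
    # map each recognized keyword to its ontological element
    terms = [MAPPINGS[k] for k in keywords if k in MAPPINGS]
    cls, prop, obj = terms  # assumes a class-property-class pattern
    return TEMPLATE.format(cls=cls, prop=prop, obj=obj)

q = keywords_to_sparql(["professor", "teaches", "course"])
print(q)
```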
The document discusses how to convert ER diagrams to relational databases. It explains that each entity set maps to a table, while relationship sets can map to tables or be represented within other tables by adding attributes. It also covers handling special cases like one-to-one/many relationships, composite attributes, and specialization/aggregation. The document provides SQL commands for creating tables, adding constraints, and altering or dropping tables during the conversion process.
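A small sqlite3 sketch of two of these mappings, with an invented schema: a composite attribute flattened into separate columns, and a later schema change via ALTER TABLE (SQLite supports ADD COLUMN).

```python
import sqlite3

# Entity set -> table; composite attribute 'name' flattened into
# first/last columns; ALTER TABLE adds an attribute afterwards.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE author (
    id INTEGER PRIMARY KEY,
    first_name TEXT,   -- composite attribute 'name', flattened
    last_name TEXT
)""")
conn.execute("ALTER TABLE author ADD COLUMN birth_year INTEGER")
cols = [row[1] for row in conn.execute("PRAGMA table_info(author)")]
print(cols)  # ['id', 'first_name', 'last_name', 'birth_year']
```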
Object-oriented programming (OOP) is a new programming paradigm that views computation as objects interacting by sending messages to one another. Key elements of OOP include objects performing computation by making requests of each other through message passing, with every object having its own memory consisting of other objects. Classes group similar objects and define their common behaviors. Classes are organized into an inheritance hierarchy to allow subclasses to inherit and override behaviors. OOP aims to help programmers cope with complexity by providing abstraction and modularity.
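Message passing and inheritance in miniature (names invented): calling a method is the "message", and the subclass inherits from its parent while overriding the behavior.

```python
# A two-class inheritance hierarchy with an overridden method.
class Shape:
    def describe(self):
        return "a shape"

class Circle(Shape):          # Circle inherits from Shape...
    def describe(self):       # ...and overrides the inherited message
        return "a circle"

messages = [s.describe() for s in (Shape(), Circle())]
print(messages)  # ['a shape', 'a circle']
```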
INFERENCE BASED INTERPRETATION OF KEYWORD QUERIES FOR OWL ONTOLOGYIJwest
This paper presents a model for interpreting keyword queries over OWL ontologies that considers OWL axioms and restrictions to provide more precise answers to user queries. The model maps keywords from user queries to ontological elements and identifies phrases to select an appropriate SPARQL query template. The template is populated using inferred results from applying OWL restrictions to generate a formal SPARQL query for execution over the ontology. By addressing OWL features, the model aims to leverage the full capabilities of OWL knowledge bases to better understand users' information needs.
The document discusses how to convert ER diagrams to relational databases. It explains that each entity set maps to a table, while relationship sets can map to tables or be represented within other tables by adding attributes. It also covers handling special cases like one-to-one/many relationships, composite attributes, and specialization/aggregation. The document provides SQL commands for creating tables, adding constraints, and altering or dropping tables during the conversion process.
Object-oriented programming (OOP) is a new programming paradigm that views computation as objects interacting by sending messages to one another. Key elements of OOP include objects performing computation by making requests of each other through message passing, with every object having its own memory consisting of other objects. Classes group similar objects and define their common behaviors. Classes are organized into an inheritance hierarchy to allow subclasses to inherit and override behaviors. OOP aims to help programmers cope with complexity by providing abstraction and modularity.
The Indy 500 is one of the most prestigious auto races in the world held annually in Speedway, Indiana. This year's race will take place on Sunday, May 26th, 2019. Fans can watch the live race broadcast online at www.onlineindycar.com.
49 u.s.c. § 46504 – interference with flight crewMichael Pariente
49 u.s.c. § 46504 – interference with flight crew is a federal crime and this is some information about how the law originated in different bits of case law involving diminished capacity and crimes of intent.
Magnify Network Lifetime in WSN by Reducing Data Aggregation Distance of Weak...ijwmn
Energy efficient protocols have always played a vital role in conservation of energy in Wireless Sensor
Network. One of the major introduced protocols is LEACH a cluster based protocol. To improve its
performance, an algorithm named as Maximizing the Network Lifetime of Clustered-based WSN Using
Probability of Residual Energy is introduced. This protocol improved the Cluster Head selection process of
LEACH by using the concept of residual energy. In the proposed paper, further improvement is done by
enhancing the data transmission process. This process heightens the lifetime of the nodes having very less
energy left by reducing their data transmission distance. Implemented results in MATLAB shows increase
in the stability and lifetime of the network..
This document provides a brief history of rock music from its origins in the 1950s to modern developments in the 2010s. It notes that rock music originated from blending genres like blues, country, and R&B. The 1960s were considered the golden age as rock gained widespread popularity through bands like The Beatles. The 1970s saw the rise of punk rock. In subsequent decades, rock branched out and blended with other genres, such as pop punk, rap rock, and nu metal in the 1990s. The 2000s and 2010s continued breaking down barriers between rock and other styles.
Tom is called into his boss' office on Monday morning, expecting to be fired. However, he overhears that he is actually getting a promotion. Mark, a jealous coworker, convinces Tom he is being fired and gets him drunk. Mark then has the drunk Tom egg his boss' house for revenge. The next day, Tom's boss calls but the call ends before Tom can answer, leaving Tom worried about his job status.
A NATURAL LANGUAGE REQUIREMENTS ENGINEERING APPROACH FOR MDA IJCSEA Journal
A software system for any information system can be developed following a model driven paradigm, in
particular MDA (Model Driven Architecture). In this way, models that represent the organizational work
are used to produce models that represent the information system. Current software development methods
are starting to provide guidelines for the construction of conceptual models, taking as input requirements
models. In MDA the CIM (Computation Independent Model) can be used to define the business process
model. Though a complete automatic construction of the CIM is not possible, we have proposed in other
papers the integration of some natural language requirements models and we have defined a strategy to
derive a CIM from these models. In this paper, we present an improved version of our ATL transformation
that implements a strategy to obtain a UML class diagram representing a preliminary CIM from
requirements models allowing traceability between the source and the target models.
A NATURAL LANGUAGE REQUIREMENTS ENGINEERING APPROACH FOR MDA IJCSEA Journal
A software system for any information system can be developed following a model driven paradigm, in particular MDA (Model Driven Architecture). In this way, models that represent the organizational work are used to produce models that represent the information system. Current software development methods are starting to provide guidelines for the construction of conceptual models, taking as input requirements models. In MDA the CIM (Computation Independent Model) can be used to define the business process model. Though a complete automatic construction of the CIM is not possible, we have proposed in other papers the integration of some natural language requirements models and we have defined a strategy to derive a CIM from these models. In this paper, we present an improved version of our ATL transformation that implements a strategy to obtain a UML class diagram representing a preliminary CIM from requirements models allowing traceability between the source and the target models.
A natural language requirements engineering approach for mdaIJCSEA Journal
A software system for any information system can be developed following a model driven paradigm, in particular MDA (Model Driven Architecture). In this way, models that represent the organizational work are used to produce models that represent the information system. Current software development methods are starting to provide guidelines for the construction of conceptual models, taking as input requirements models. In MDA the CIM (Computation Independent Model) can be used to define the business process model. Though a complete automatic construction of the CIM is not possible, we have proposed in other papers the integration of some natural language requirements models and we have defined a strategy to
derive a CIM from these models. In this paper, we present an improved version of our ATL transformation
that implements a strategy to obtain a UML class diagram representing a preliminary CIM from requirements models allowing traceability between the source and the target models.
A NATURAL LANGUAGE REQUIREMENTS ENGINEERING APPROACH FOR MDA IJCSEA Journal
A software system for any information system can be developed following a model driven paradigm, in particular MDA (Model Driven Architecture). In this way, models that represent the organizational work are used to produce models that represent the information system. Current software development methods are starting to provide guidelines for the construction of conceptual models, taking as input requirements models. In MDA the CIM (Computation Independent Model) can be used to define the business process model. Though a complete automatic construction of the CIM is not possible, we have proposed in other papers the integration of some natural language requirements models and we have defined a strategy to derive a CIM from these models. In this paper, we present an improved version of our ATL transformation that implements a strategy to obtain a UML class diagram representing a preliminary CIM from requirements models allowing traceability between the source and the target models.
A Natural Language Requirements Engineering Approach for MDAIJCSEA Journal
A software system for any information system can be developed following a model driven paradigm, in particular MDA (Model Driven Architecture). In this way, models that represent the organizational work are used to produce models that represent the information system. Current software development methods are starting to provide guidelines for the construction of conceptual models, taking as input requirements models. In MDA the CIM (Computation Independent Model) can be used to define the business process model. Though a complete automatic construction of the CIM is not possible, we have proposed in other papers the integration of some natural language requirements models and we have defined a strategy to derive a CIM from these models. In this paper, we present an improved version of our ATL transformation that implements a strategy to obtain a UML class diagram representing a preliminary CIM from requirements models allowing traceability between the source and the target models
SBML FOR OPTIMIZING DECISION SUPPORT'S TOOLS cscpconf
Many theoretical works and tools on epidemiological field reflect the emphasis on decisionmaking tools by both public health and the scientific community, which continues to increase.
Indeed, in the epidemiological field, modeling tools are proving a very important way in helping to make decision. However, the variety, the large volume of data and the nature of epidemics
lead us to seek solutions to alleviate the heavy burden imposed on both experts and developers. In this paper, we present a new approach: the passage of an epidemic model realized in BioPEPA to a narrative language using the basics of SBML language. Our goal is to allow on one hand, epidemiologists to verify and validate the model, and the other hand, developers to
optimize the model in order to achieve a better model of decision making. We also present some preliminary results and some suggestions to improve the simulated model.
Tools for Ontology Building from Texts: Analysis and Improvement of the Resul...IOSR Journals
Text2Onto is a tool that learns ontologies from textual data by extracting ontology components like concepts, relations, instances, and hierarchies. It analyzes texts through linguistic preprocessing using Gate to tokenize, tag parts of speech, and identify noun and verb phrases. Algorithms then extract ontology components and store them probabilistically in a Preliminary Ontology Model independent of any representation language. The study aimed to understand Text2Onto's architecture, analyze errors in its extractions, and attempt improvements by using a meta-model of the text to better classify concepts under core concepts.
A Proposal For A Web Service-Based Architecture To Support The Enhancement Of...Suzanne Simmons
This document proposes a web services-based architecture to support the enactment of units of learning (UoLs) modeled according to educational modeling languages (EMLs). EMLs model instructional practices as workflows involving actors, data, applications, tasks, and processes. An important part is the learning environments where services and applications are integrated. The proposed architecture uses web services ideas to treat integrated applications and services as web services, facilitating UoL modeling and enactment. Well-defined interfaces would support service invocation, composition, and monitoring during enactment of modeled learning environments.
A Survey of Ontology-based Information Extraction for Social Media Content An...ijcnes
The amount of information generated in the Web has grown enormously over the years. This information is significant to individuals, businesses and organizations. If analyzed, understood and utilized, it will provide a valuable insight to its stakeholders. However, many of these information are semi-structured or unstructured which makes it difficult to draw in-depth understanding of the implications behind those information. This is where Ontology-based Information Extraction (OBIE) and social media content analysis come into play. OBIE has now become a popular way to extract information coming from machine-readable sources. This paper presents a survey of OBIE, Ontology languages and tools and the process to build an ontology model and framework. The author made a comparison of two ontology building frameworks and identified which framework is complete.
SBML FOR OPTIMIZING DECISION SUPPORT'S TOOLScsandit
Many theoretical works and tools on epidemiological field reflect the emphasis on decisionmaking
tools by both public health and the scientific community, which continues to increase.
Indeed, in the epidemiological field, modeling tools are proving a very important way in helping
to make decision. However, the variety, the large volume of data and the nature of epidemics
lead us to seek solutions to alleviate the heavy burden imposed on both experts and developers.
In this paper, we present a new approach: the passage of an epidemic model realized in Bio-
PEPA to a narrative language using the basics of SBML language. Our goal is to allow on one
hand, epidemiologists to verify and validate the model, and the other hand, developers to
optimize the model in order to achieve a better model of decision making. We also present some
preliminary results and some suggestions to improve the simulated model.
Systems variability modeling a textual model mixing class and feature conceptsijcsit
System’s reusability and cost are very important in software product line design area. Developers’ goal is
to increase system reusability and decreasing cost and efforts for building components from scratch for
each software configuration. This can be reached by developing software product line (SPL). To handle
SPL engineering process, several approaches with several techniques were developed. One of these
approaches is called separated approach. It requires separating the commonalities and variability for
system’s components to allow configuration selection based on user defined features. Textual notationbased
approaches have been used for their formal syntax and semantics to represent system features and
implementations. But these approaches are still weak in mixing features (conceptual level) and classes
(physical level) that guarantee smooth and automatic configuration generation for software releases. The
absence of methodology supporting the mixing process is a real weakness. In this paper, we enhanced
SPL’s reusability by introducing some meta-features, classified according to their functionalities. As a first
consequence, mixing class and feature concepts is supported in a simple way using class interfaces and
inherent features for smooth move from feature model to class model. And as a second consequence, the
mixing process is supported by a textual design and implementation methodology, mixing class and feature
models by combining their concepts in a single language. The supported configuration generation process
is simple, coherent, and complete.
The document discusses modeling educational content and learning objects (LOs). It describes several specifications for describing educational materials at different levels, from pedagogical information to sequencing and content packaging. It also discusses issues with current authoring processes and the lack of interoperability between specifications. The use of ontologies and educational modeling languages is proposed to address these issues by providing reusable learning activities and processes defined independently of delivery formats.
Formal treatments of inheritance are rather scarce and those that do exist are often more suited for
analysis of existing systems than as guides to language designers. One problem that adds complexity to
previous efforts is the need to pass a reference to the original invoking object throughout the method call
tree. In this paper, a novel specification of inheritance semantics is given. The approach dispenses with
self-reference, instead using static and dynamic scope to accomplish similar behaviour. The result is a
methodology that is simpler than previous specification attempts, easy to understand, and sufficiently
expressive. Moreover, an inheritance system based on this approach can be implemented with relatively
few lines of code in environment-passing interpreters.
ONTOLOGY VISUALIZATION PROTÉGÉ TOOLS – A REVIEWijait
The document discusses ontology visualization tools in Protégé. It reviews four main visualization methods used in Protégé tools: indented list, node-link and tree, zoomable, and focus+context. It then examines specific Protégé tools that use each method, including their key features and limitations. The tools discussed are Protégé Class Browser (indented list), Protégé OntoViz and OntoSphere (node-link and tree), Jambalaya (zoomable), and Protégé TGVizTab (focus+context). The document aims to categorize the characteristics of existing Protégé visualization tools to assist in method selection and promote future research.
ONTOLOGY VISUALIZATION PROTÉGÉ TOOLS – A REVIEW ijait
The document discusses ontology visualization tools in Protégé. It reviews four main visualization methods used in Protégé tools: indented list, node-link and tree, zoomable, and focus+context. It then examines specific Protégé tools that use each method, including their key features and limitations. The tools assessed are Protégé Class Browser (indented list), Protégé OntoViz and OntoSphere (node-link and tree), Jambalaya (zoomable), and Protégé TGVizTab (focus+context). The document concludes by summarizing and comparing the visualization characteristics of these Protégé tools.
IRJET- An Efficient Way to Querying XML Database using Natural LanguageIRJET Journal
This document discusses an efficient way to query XML databases using natural language. It proposes a framework that can accept English language queries and translate them into XQuery or SQL expressions to retrieve data from an XML database. The system performs linguistic processing to map tokens in the natural language query to XQuery fragments, then executes the translated query against the database. Existing approaches are discussed that typically use semantic and syntactic analysis to represent the query logically before translation, but have limitations in handling ambiguity. The proposed system aims to improve query translation accuracy by leveraging token relationships and classifications determined from natural language parsing.
Detailed description and introduction to UML(Unified Modeling Language).Structural and behavioral modeling.Class Diagram, Object Diagram.Notation for building all kinds of UML diagrams.
EDON: A Method for Building an Ontology as Software ArtefactEmiliano Reynares
The current dynamics of organizations produces frequently
changing business rules, involving changes into the software applications that embed them. The use of ontologies as software artefacts intended to encapsulate business rules is a mean to raise the exibility, extensibility
and ease of maintenance of the software applications. Such ontologies should be developed and maintained in conjunction with other software components, so an ontology building methodology must be considered in the context of the software development process. This paper presents an evolutionary method for building ontologies intended to be used as a structural conceptual model of an information system, encoding business rules in a declarative way and enabling the intertwining of ontology and software development processes.
This document introduces a multilingual access module that translates legal text queries between English and Bulgarian. The module uses two approaches: an ontology-based method and statistical machine translation. The ontology-based method relies on domain ontologies like EuroVoc to map query terms to concepts and expand queries. Statistical machine translation is used to translate out-of-vocabulary terms. The document describes how the module represents the relation between ontologies, lexicons, and text to enable cross-lingual information retrieval over legal documents.
SWSN UNIT-3.pptx we can information about swsn professionalgowthamnaidu0986
Ontology engineering involves constructing ontologies through various methods. It begins with defining the scope and evaluating existing ontologies for reuse. Terms are enumerated and organized in a taxonomy with defined properties, facets, and instances. The ontology is checked for anomalies and refined iteratively. Popular tools for ontology development include Protege and WebOnto. Methods like Meth ontology and On-To-Knowledge methodology provide processes for building ontologies from scratch or reusing existing ones. Ontology sharing requires mapping between ontologies to allow interoperability, and libraries exist for storing and accessing ontologies.
Similar to ELABORATE LEXICON EXTENDED LANGUAGE WITH A LOT OF CONCEPTUAL INFORMATION (20)
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket management is a stand-alone J2EE using Eclipse Juno program.
This project contains all the necessary required information about maintaining
the supermarket billing system.
The core idea of this project to minimize the paper work and centralize the
data. Here all the communication is taken in secure manner. That is, in this
application the information will be stored in client itself. For further security the
data base is stored in the back-end oracle and so no intruders can access it.
Height and depth gauge linear metrology.pdfq30122000
Height gauges may also be used to measure the height of an object by using the underside of the scriber as the datum. The datum may be permanently fixed or the height gauge may have provision to adjust the scale, this is done by sliding the scale vertically along the body of the height gauge by turning a fine feed screw at the top of the gauge; then with the scriber set to the same level as the base, the scale can be matched to it. This adjustment allows different scribers or probes to be used, as well as adjusting for any errors in a damaged or resharpened probe.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Mechatronics is a multidisciplinary field that refers to the skill sets needed in the contemporary, advanced automated manufacturing industry. At the intersection of mechanics, electronics, and computing, mechatronics specialists create simpler, smarter systems. Mechatronics is an essential foundation for the expected growth in automation and manufacturing.
Mechatronics deals with robotics, control systems, and electro-mechanical systems.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
ELABORATE LEXICON EXTENDED LANGUAGE WITH A LOT OF CONCEPTUAL INFORMATION
International Journal of Computer Science, Engineering and Applications (IJCSEA) Vol.5, No.6, December 2015
DOI: 10.5121/ijcsea.2015.5601

ELABORATE LEXICON EXTENDED LANGUAGE WITH A LOT OF CONCEPTUAL INFORMATION

Jean Luc Razafindramintsa¹, Thomas Mahatody² and Josvah Paul Razafimandimby³

¹Research Institute School for Computer Modelisation, Laboratory for Mathematical and Computer Applied to the Development Systems, University of Fianarantsoa, Madagascar
²National School for Computer Engineering, University of Fianarantsoa, Madagascar
³Laboratory for Mathematical and Computer Applied to the Development Systems, University of Fianarantsoa, Madagascar
ABSTRACT
The use of a model such as the LEL (Lexicon Extended Language) in natural language is very interesting in Requirements Engineering. However, even though the LEL is derived from the Universe of Discourse (UofD), it does not provide further details on the concepts it describes. We believe that the elements inherent in the conceptual level of a system are already defined in the Universe of Discourse. Therefore, in this work we propose a more elaborate natural language model called eLEL, a model that describes the concepts of a domain in more detail than the conventional LEL. We also propose a process for modeling a domain using an eLEL model.
KEYWORDS
ATL transformations, Conceptual Information, Lexicon Extended Language, Model in Natural Language,
Requirements Engineering, Universe of Discourse.
1. INTRODUCTION
Elicitation, modeling and analysis are the basic, inherent activities of requirements engineering [9]. In order to obtain the essential information and to understand the problems addressed during the elicitation phase, [15] stipulates that requirements engineers should use several resources, including the analysis and careful reading of source documents. For this analysis and reading, sources of information such as business documents, surveys, interviews, regulations or related texts and other information systems [15] should be assessed. The documents generated during the elicitation phase (investigation reports, interviews, regulations or related texts and other information systems) are written in natural language. These documents contain the terms of the problem domain, which are the terms used by customers and users [11, 13]. They are then used by the relevant professionals, bearing in mind that these professionals sometimes have different roles and contribute their own skills to the process. Technical terms that carry different meanings in different areas of expertise can lead to different interpretations. Hence the need for a lexicon, which makes it possible to share the same understanding of a term in the field [15].
Lexicons may take different forms, from a simple glossary or data dictionary to a more or less elaborate LEL. The lexicon is thus not simply a quality requirement of a process; it is also a source of reference for stakeholders. The authors of [5, 11, 15] found that the lexicon (LEL), like an ontology, represents only the overall view of a system and does not allow stakeholders to represent the detailed conceptual level of the system. In addition, the properties or features of every concept are merged into the notion or the behavioral response of the LEL symbol. Furthermore, the LEL does not provide any information regarding the format and size of each data property. This is why many research projects have proposed to derive or transform the LEL in order to describe or reveal details about the domain concepts. These studies always define the context of the system in a UofD [9], which should be considered the first phase of the application domain model construction process [4]; the models generated during this phase are used as the input of the next phase. Some researchers use the UofD to derive the LEL [9], and the LEL is used in turn to obtain scenarios [1, 9], an ontology [7], a UML class diagram [12, 13] or use cases [2, 8]. To derive or transform the LEL into another conceptual model, an analysis followed by a conversion is conducted, and the LEL becomes the initial stage of this process [14]. We see that the information necessary for the derived models is already in the UofD, which is the origin of the LEL. This results in the need for a more elaborate LEL that exposes the conceptual level and the characteristics of each concept.
In this paper, we therefore propose a specific strategy to build the requirements specification in the form of a lexicon rich in information, characterized by one input and four outputs, called eLEL (section 2). The eLEL makes it possible to display the concepts of an application as a model that is both natural-language and conceptually oriented. In section 3, we propose a construction process based on a series of heuristics to find the symbols, their structures and their meaning, together with the steps undertaken to obtain the complete requirements model. Before concluding and saying a few words about various perspectives (section 6), we present a case study.
2. eLEL, A MORE ELABORATE LEXICON EXTENDED LANGUAGE

A more elaborate Lexicon Extended Language, or eLEL, is a set of symbols (signs) in the sense of [3]. Indeed, according to [3], a sign is something which can be interpreted as a substitute for something else. According to [6], a sign is an entity that has an expression and a content; an eLEL symbol, however, is a simple coding system with five entities: the term, the notion, the behavioral responses, the attributes and the methods. In eLEL, as in LEL [9], the terms are classified into four types: object, subject, verb and state. Each eLEL term can be described according to its type by the heuristics presented in Tables 1, 2, 3 and 4. Table 1 describes the subject type eLEL.
Table 1: Subject type eLEL.

eLEL symbol type: Subject
Description: This is an active entity with relevant roles in the application. The subject can be a person, a software component or another system with which interactions will occur.
Notion: Describes: who is the subject? What are its characteristics? What are the objects it manipulates?
Behavioral response: Describes: what is the definition of the functions performed by the subject?
Attribute: A characteristic of the subject, such as its code, technical wording or name. It is defined by its name, code, size, nature (or type) and description. A subject may thus have several attributes.
Method: An operation that makes it possible to manipulate an attribute.
Table 2 shows the heuristics associated with the symbol of the eLEL object type.
Table 2: Object type eLEL.

eLEL symbol type: Object
Description: This is a passive entity manipulated by a subject type eLEL.
Notion: Describes: what is the object? What are its characteristics? What are the other objects with which it is related?
Behavioral response: Describes: what are the actions applied to this object?
Attribute: A characteristic of the object, such as its code, technical wording or name. It is defined by its name, code, size, nature (or type) and description. An object may thus have several attributes.
Method: The action used to access an object or modify it.
Table 3 shows the heuristics associated with the verb type eLEL symbol.

Table 3: Verb type eLEL.

eLEL symbol type: Verb
Description: Describes a feature performed by the subjects, with its impacts on the operational environment.
Notion: Describes: who intervenes when an event happens? What is the object manipulated by the subject? What is the purpose or objective to be achieved?
Behavioral response: Describes: what are the environmental impact, the resulting state and the conditions for achieving the objective or purpose?
Attribute: Represents the subjects or objects affected by the verb.
Method: The actions to be taken by the subject on the objects participating in the realization of the objective to be achieved.
Table 4 shows the heuristics of the state type eLEL symbol.

Table 4: State type eLEL.

eLEL symbol type: State
Description: Characterized by significant attributes that hold values at different times during the running of the system.
Notion: Describes: what does it represent? What actions led to it?
Behavioral response: Describes: how to identify the other states that can be reached from the current state?
Attribute: Represents the subjects or objects that change the state.
Method: Represents the actions taken to produce this state.
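Before turning to the construction process, the structure just described can be summarized in code. The following Python sketch (the class and field names are our own illustration, not part of any eLEL tooling) models a symbol with its five entities and the four-type typology:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class SymbolType(Enum):
    SUBJECT = "subject"
    OBJECT = "object"
    VERB = "verb"
    STATE = "state"

@dataclass
class Attribute:
    # Each attribute is characterized by a code, a definition,
    # a format and a size (see step 7 of the building process).
    name: str
    code: str
    definition: str
    format: str
    size: int

@dataclass
class ELELSymbol:
    # The five entities of an eLEL symbol: term, notion,
    # behavioral responses, attributes and methods.
    term: str
    type: SymbolType
    notion: List[str] = field(default_factory=list)
    behavioral_responses: List[str] = field(default_factory=list)
    attributes: List[Attribute] = field(default_factory=list)
    methods: List[str] = field(default_factory=list)

# A symbol from the case study of section 5:
declarant = ELELSymbol(
    term="Declarant",
    type=SymbolType.SUBJECT,
    notion=["This is a person who declares the birth."],
    behavioral_responses=["It fills in the birth declaration form."],
)
```

The attribute and method lists start empty here because they are only filled in during the later steps of the building process.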
3. eLEL BUILDING PROCESS

Building an eLEL relies on two principles [7]. The first is to maximize the use of lexicon terms when describing the notion, the behavioral response, the methods and the attributes of a new term; this is the principle of closure, or circularity. The second is to minimize the use of terms from outside the UofD; when this is unavoidable, the vocabulary used must belong to the basic vocabulary of the natural language and, as far as possible, have a clear mathematical representation. This is called the minimal vocabulary principle. [7] mentions that eLEL terms, like LEL terms, are always elicited through a combination of elicitation techniques. During this elicitation, heuristics are used to find the relevant words in the UofD as well as the terms used for a very specific goal. [7] also proposes that the initial terms be listed first; the reading and analysis of detailed documents is then used to assign the notion, the behavioral response, the attributes and the methods of the recorded words. The eLEL construction process consists of thirteen steps, based on the process proposed by [7, 11, 13]:
• Step 1: Identify the main sources of information in the UofD.
• Step 2: Identify the relevant terms in the UofD using a set of elicitation techniques, such as a statistical method based on the occurrence of words or noun phrases. Each identified term that seems to have a special meaning is listed.
• Step 3: Classify each term by typology. Each term must be classified as an object, a subject, a verb or a state (see Tables 1, 2, 3 and 4).
• Step 4: Describe the notion and the behavioral response of each term according to its type:
- For a subject, describe the notion of the term by answering the questions "Who is the subject? What are its characteristics? What are the objects it manipulates?", and describe the behavioral response of the term by answering the question "What is the definition of the functions performed by the subject?" (see Table 1).
- For an object, describe the notion of the term by answering the questions "What is the object? What are its characteristics? What are the related objects?", and describe the behavioral response by answering the question "What are the actions applied to this object?" (see Table 2).
- For a verb, describe the notion of the term by answering the questions "Who intervenes? What is the object manipulated by the subject? What is the purpose or goal to be achieved?", and describe the behavioral response by answering the questions "What are the environmental impact, the resulting state and the conditions for achieving the goal?" (see Table 3).
- For a state, describe the notion of the term by answering the questions "What does it represent? What actions lead to it?", and describe the behavioral response of the term by answering the question "How to identify the other states that can be reached from the current state?" (see Table 4).
In describing the notions and behavioral responses of the terms in the lexicon, we should follow the principles of closure and minimal vocabulary.
• Step 5: For each subject, extract the attributes from the answers to the question "What are its characteristics?" (see Table 1). Among those answers, every name that is not referenced as another term under the closure principle is identified and defined as an attribute or property of the term.
• Step 6: For each object, extract the attributes from the answers to the question "What are its characteristics?" (see Table 2). Here too, every name that is not referenced as another term under the closure principle is identified and defined as an attribute or property of the term. The methods to access or modify each attribute are defined by prefixing the attribute name with GET and SET, respectively.
• Step 7: Each term attribute obtained in steps 5 and 6 must be analyzed: its code is assigned, then its definition, format and size are deduced. The code, the definition, the format and the size characterize each attribute.
• Step 8: The actions or methods of each term classified as subject are deduced from the entries of its behavioral response.
• Step 9: This step finds the parameters of the methods obtained in step 8. Each term classified as a verb originates from an entry of the behavioral response of a subject, and it therefore describes all the data required to complete the behavior. The parameters of a method obtained in step 8 are thus drawn from the entries of the behavioral response from which the symbol originates. These parameters characterize the verb type eLEL symbol: they become its attributes, while the verb itself is the method or action. Each attribute obtained must then be analyzed: its code is assigned, then its definition, format and size are deduced. The code, the definition, the format and the size characterize each attribute.
• Step 10: This step concerns the terms classified as state. A verb type term is associated with the state of the environment before and after its execution: it defines the condition that must hold before the execution and the situation that must be reached after it. The attributes and methods of a state type term are defined as follows: the method is the eLEL verb that triggers the event, and the attributes are the parameters used by that verb to trigger the event. Each attribute obtained must then be analyzed: its code is assigned, then its definition, format and size are deduced. The code, the definition, the format and the size characterize each attribute.
• Step 11: Verify the lexicon using the inspection strategy introduced by [9].
• Step 12: Validate the lexicon [10]; the validation is carried out by the actors of the UofD using a writing technique.
• Step 13: This last step ensures that the eLEL symbols validated in step 12 are linked in pairs through the principle of circularity:
- Each created eLEL symbol is assigned to a concept called "created element". A created element is characterized by two attributes, its name and an eLEL symbol, and is composed of another concept, "number of created elements", which records the minimum and maximum occurrences of the associated eLEL symbol.
- The entries of each eLEL symbol (the source symbol) are then analyzed in order, so as to detect any related eLEL symbol (the target symbol).
- The relationship between symbols is represented by a concept called "circularity". A circularity is characterized by its name, a source symbol, a target symbol and a created element concept containing the pair of names of the two created elements corresponding to the source and target symbols.
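As an illustration of steps 6 and 7, the Python sketch below bundles the code, definition, format and size that characterize an attribute and derives the GET/SET accessor names from the attribute code. The helper names are our own, and the camel-case rendering is an assumption matching the method names shown later in Table 13:

```python
from dataclasses import dataclass

@dataclass
class AttributeSpec:
    # Step 7: each attribute is characterized by a code,
    # a definition, a format and a size.
    name: str
    code: str
    definition: str
    format: str
    size: int

def accessor_names(code):
    """Step 6: derive the access/modify methods by prefixing the
    attribute code with GET and SET (rendered here in camel case)."""
    camel = "".join(part.capitalize() for part in code.split("_"))
    return "get" + camel + "()", "set" + camel + "()"

birth_date = AttributeSpec("Birth date", "birth_date",
                           "Birth date of the newborn", "Date", 8)
getter, setter = accessor_names(birth_date.code)
# yields getBirthDate() and setBirthDate()
```

The same helper applied to every attribute of an object symbol produces the full method list of that symbol.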
4. PROPOSAL OF EXTRACTION RULES

The proposed eLEL model is very rich in information, and it is possible to extract or derive other models from it. As an example, we present here some rules for transforming an eLEL into a class diagram.
Rule 1: Each eLEL symbol classified as a subject or an object corresponds to a UML class.
Rule 2: The attributes of each eLEL symbol obtained in Rule 1 are extracted to constitute the attributes of the corresponding class. The code, format and size of each eLEL symbol attribute respectively describe the name, format and size of the attribute of the corresponding class.
Rule 3: The methods of each eLEL symbol obtained in Rule 1 are extracted to constitute the methods of the corresponding UML class.
Rule 4: The relationships between eLEL symbols are extracted as associations between classes. This transformation is applicable to both subjects and objects: the circularity concepts obtained in step 13 of the eLEL construction process are extracted to form the associations between the UML classes.
Rule 5: The "number of created elements" concepts obtained in step 13 of the eLEL construction process are extracted to form the cardinalities of the classes obtained through rules 1, 2, 3 and 4.
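Rules 1 to 3 can be sketched as a simple mapping. The Python below is a minimal illustration, assuming symbols are stored as dictionaries; the key names are our own, not a published eLEL API:

```python
def elel_to_classes(symbols):
    """Rule 1: subject and object symbols each become a UML class.
    Rules 2-3: their attributes and methods carry over."""
    classes = {}
    for sym in symbols:
        if sym["type"] in ("subject", "object"):
            classes[sym["term"]] = {
                "attributes": list(sym.get("attributes", [])),
                "operations": list(sym.get("methods", [])),
            }
    return classes

symbols = [
    {"term": "Declarant", "type": "subject",
     "attributes": ["name", "firstname"], "methods": ["EnterRegion()"]},
    {"term": "To issue the birth certificate", "type": "verb"},
]
# Only the subject and object symbols yield classes; verb and
# state symbols contribute associations and cardinalities
# through rules 4 and 5 instead.
```

In a fuller implementation, rules 4 and 5 would consume the circularity and "number of created elements" concepts of step 13 to add associations and multiplicities between the generated classes.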
5. CASE STUDY

In this section we instantiate the eLEL construction process presented in section 3 with the process of issuing a birth certificate in a civil registry management system. We then transform the resulting eLEL model into a class diagram.
The process of issuing a birth certificate begins with the declaration of birth, made by the declarant by completing a systematic birth declaration form, followed by the actual issuance of the certificate by the civil status officer. The issuance of a birth certificate is a service offered by the registry office, which delivers the birth certificate of the newborn declared by the declarant. We present below some examples of eLEL symbols that belong to this case study, with a description for each symbol. The process of issuing a birth certificate is a classical administrative application in the field of civil status; in this case, the eLEL is used to represent a symbol in each category of the typology.
In Madagascar, the declaration of birth must be made within 12 days of the date of birth. The civil status officer requests a confirmation of the new birth from the declarant, and it is after this confirmation that the issuance takes place. If the 12-day period is exceeded, the birth certificate will not be issued, or the process has to be repeated.
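The 12-day rule amounts to a simple date computation, which the case-study system performs when it "calculates the number of days". A hypothetical helper (the function name and limit parameter are our own illustration):

```python
from datetime import date

def declaration_accepted(birth_date, declaration_date, limit_days=12):
    """The declaration is accepted only if it is made within
    limit_days of the date of birth (12 days in Madagascar)."""
    return 0 <= (declaration_date - birth_date).days <= limit_days

# Declared 12 days after birth: still within the legal period.
ok = declaration_accepted(date(2015, 6, 1), date(2015, 6, 13))
# Declared 13 days after birth: the certificate is not issued.
late = declaration_accepted(date(2015, 6, 1), date(2015, 6, 14))
```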
5.1. Construction of eLEL
Step 1: We take the UofD as the main source of information, drawing on the "vital events declaration form" and the "newborn identification process, the declaration and registration systemization of vital events".
Example 1 (excerpt from the UofD):
"1- A birth declaration form is made up of the Region, the Municipality, the District, the Neighborhood, the information about the newborn, the information about the parents and the information about the declarant. 2- A declaration sheet contains the place of birth, the date of birth, and the name and signature of the declarant and of the civil status officer. 3- The declarant, the civil status officer, the newborn and the parents of the newborn are persons. 4- Each person may have a name, a first name, a date of birth, etc. 5- The declarant fills in the vital events form. 6- The civil status officer receives the vital events form. 7- The civil status officer prepares the birth certificate. 8- The civil status officer issues the birth certificate. 9- A birth certificate is issued."
Step 2: We obtained a list of candidate terms for the construction of eLEL symbols:
Example 2 (from the list of candidate terms)
1. Municipality.
2. Declarant
3. District.
4. Birth declaration form.
5. The civil status officer issues the birth certificate.
6. The civil status officer prepares the birth certificate.
7. Newborn.
8. Birth certificate issuance process.
9. Neighborhood.
10. Receives the vital events form.
11. Region.
12. Fill in the vital events form.
13. Civil status officer.
14. A birth certificate is issued.
Step 3: We obtained the classification of the terms by eLEL typology.

Table 5: List of terms by classification.

Subject: Declarant, birth certificate issuance process, civil status officer
Object: Municipality, District, Birth declaration form, Newborn, Neighborhood, Region
Verb: To issue the birth certificate, to prepare the birth certificate, to receive the civil status form, to fill in the civil status form
State: The birth certificate issued
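The classification of Table 5 can be encoded directly as data; the mapping below is a sketch mirroring the case-study terms:

```python
# Table 5 as a mapping from eLEL type to the candidate terms
# classified under it (14 terms in total, matching step 2).
classification = {
    "subject": ["Declarant", "Birth certificate issuance process",
                "Civil status officer"],
    "object": ["Municipality", "District", "Birth declaration form",
               "Newborn", "Neighborhood", "Region"],
    "verb": ["To issue the birth certificate",
             "To prepare the birth certificate",
             "To receive the civil status form",
             "To fill in the civil status form"],
    "state": ["The birth certificate issued"],
}
```

Such a mapping is a convenient input for the later steps, which iterate over the terms of each type in turn.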
Step 4: We got the description of the notion and the behavioral response of each term classified
by eLEL type (see Tables 6, 7, 8, 9 and 10).
Table 6: Description of an eLEL symbol of subject type.

eLEL symbol: Birth certificate issuance process
Type: Subject
Notion:
- This is an information system for a birth certificate issuance process.
- It is made up of the birth declaration form.
- It is made up of the acknowledgement of receipt by the civil status officer.
- It is made up of the birth confirmation.
- It is made up of an application calculating the number of days between the date of the declaration of birth and the date of birth.
- It is made up of a request for confirmation for the declarant.
- It contains the number of the certificate.
- It contains the date of registration in the civil register.
- It contains the surname and the first name of the civil status officer.
- It contains the surname and the first name of the declarant.
Behavioral response:
- It makes it possible to declare the birth.
- It makes it possible to receive the birth declaration.
- It makes it possible to calculate the number of days between the date of the birth declaration and the date of birth.
- It makes it possible to request a confirmation from the declarant.
- It makes it possible to confirm the declaration of birth.
- It makes it possible to register the declaration of birth in the civil registry.
- It makes it possible to prepare the birth certificate.
- It makes it possible to issue the birth certificate.
Table 7: Description of the notion and behavioral response of a subject term.

eLEL symbol: Declarant
Type: Subject
Notion:
- This is a person who declares the birth.
- It is an entity characterized by a name, a first name, an address and the quality of the declarant.
- It provides the Region of birth, the District of birth, the municipality of birth, the information about the newborn, the information about the newborn's father, the information about the newborn's mother, the issuance date of the certificate, the date of birth and the place where the certificate was made.
Behavioral response: It fills in the information about the Region of birth, the District of birth, the municipality of birth, the neighborhood of birth, the newborn, the newborn's father and the newborn's mother.
Table 8: Description of the notion and behavioral response of a verb term.

eLEL symbol: To issue a birth certificate
Type: Verb
Notion: The civil status officer must issue the birth certificate following the process of issuing the birth certificate.
Behavioral response: The birth certificate is delivered to the declarant on a date fixed by the civil status officer.
Table 9: Description of the notion and behavioral response of an object term.

eLEL symbol: Birth certificate declaration form
Type: Object
Notion:
- This is a form completed by the declarant to declare a birth.
- It contains the Region of birth, the District of birth, the neighborhood of birth, information about the newborn, information about the father, information about the mother and information about the declarant.
- It contains the number of the certificate, the month, the year, the hour, the minute, the part of the day (morning or evening), the birth date, the birth place, the declaration date and the signatures of the declarant and the civil status officer.
Behavioral response: It makes it possible to declare the birth, to receive the birth declaration, to calculate the number of days, to request a confirmation, to confirm the declaration, to register the declaration, to prepare the birth certificate and to issue the birth certificate.
Table 10: Description of the notion and behavioral response of a state term.

eLEL symbol: Birth certificate issued
Type: State
Notion:
- The situation in which the birth certificate of the newborn is delivered at the end of the process for issuing a birth certificate.
- It is produced by the action "deliver the birth certificate".
Behavioral response:
- The date of issuance of the first birth certificate is fixed.
- The birth certificate is issued.
Step 5: We obtained the attributes of the subject type terms (Tables 11 and 12).

Table 11: Attributes of a subject term.

eLEL symbol: Birth certificate issuance process
Type: Subject
Attributes: Date of the declaration, the civil status officer's name, the civil status officer's first name, the declarant's name, the declarant's first name.

Table 12: Attributes of a subject term.

eLEL symbol: Declarant
Type: Subject
Attributes: The declarant's name, the declarant's first name, the address, the declarant's quality, the date of the birth certificate, the date of birth, the place of birth.
Steps 6 and 7: Applying step 6, we obtained the attributes and methods of the terms classified as objects, and step 7 provides the features of each attribute: its code, definition, format and size. The terms classified as objects are fully defined in this step.
10. International Journal of Computer Science, Engineering and Applications (IJCSEA) Vol.5, No.6, December 2015
10
Table 13: eLEL symbol of object type.

eLEL symbol: Birth certificate declaration form
Type: Object
Notion:
- This is a form completed by the declarant to declare a birth.
- It consists of the Region of birth, the District of birth, the neighborhood of birth, the information about the newborn, the information about the father, the information about the mother and the information about the declarant.
- It contains the number of the certificate, the month, the year, the hour, the minute, the part of the day (morning or evening), the date of birth, the place of birth, the date of the declaration and the signatures of the declarant and the civil status officer.
Behavioral response: It makes it possible to declare the birth, to receive the declaration of birth, to calculate the number of days, to request a confirmation, to confirm the declaration, to register the declaration, to prepare the document and to issue the birth certificate.

Attributes:

Name | Code | Definition | Format | Size
Number of the certificate | num_cert_birth | Number of the birth certificate | Digit | 6
Month | birth_month | Month of birth | Digit | 2
Year | birth_year | Year of birth | Digit | 4
Hour | birth_hour | Hour of birth | Digit | 2
Minute | birth_min | Minute of birth | Digit | 2
Day | birth_day | Day of birth of the newborn | Digit | 2
Birth date | birth_date | Birth date of the newborn | Date | 8

Methods:
getBirthCertNum()
setBirthCertNum()
getBirthMonth()
setBirthMonth()
getBirthYear()
setBirthYear()
getBirthHour()
setBirthHour()
getBirthMin()
setBirthMin()
getBirthDay()
setBirthDay()
getBirthDate()
setBirthDate()
Step 8: We obtained the methods of each term classified as subject (see Tables 14 and 15).
Table 14: Methods of a subject term.

eLEL symbol: Birth certificate issuance process
Type: Subject
Methods:
DeclareBirth()
ReceiveDeclaration()
CalculateNumberDays()
RequestConfirmation()
ConfirmDeclaration()
RegisterDeclaration()
MakeBirthCertificate()
DeliverBirthCertificate()
Table 15: Methods of a subject term.

eLEL symbol: Declarant
Type: Subject
Methods:
EnterRegion()
EnterDistrict()
EnterMunicipality()
EnterNeighborhood()
EnterNewbornInfo()
EnterFatherInfo()
EnterMotherInfo()
EnterDeclarantInfo()
Step 9: We obtained the methods of the terms classified as subject as well as the attributes and methods of the terms classified as verbs. The subject and verb type terms are fully defined in this step.
Table 16: Description of an eLEL symbol of subject type.

eLEL symbol: Birth certificate issuance process
Type: Subject
Notion:
- This is an information system to perform a birth certificate issuance process.
- It contains the birth declaration form.
- It contains the acknowledgement of receipt by the civil status officer.
- It contains the birth confirmation.
- It contains an application calculating the number of days between the date of the declaration of birth and the date of birth.
- It consists of a request for confirmation for the declarant.
- It contains the number of the certificate.
- It contains the date of registration in the civil register.
- It contains the surname and the first name of the civil status officer.
- It contains the surname and the first name of the declarant.
Behavioral response:
- It enables us to declare the birth.
- It enables us to receive the birth declaration.
- It enables us to calculate the number of days between the date of the birth declaration and the date of birth.
- It enables us to request a confirmation from the declarant.
- It enables us to confirm the declaration of birth.
- It enables us to register the declaration of birth in the civil registry.
- It enables us to prepare the birth certificate.
- It enables us to issue the birth certificate.

Attributes:

Name | Code | Definition | Format | Size
Birth declaration date | declaration_date | Date of the birth declaration | Date | 8
Name of the civil status officer | civ_stat_officer_name | Name of the civil status officer | Text | 25
First name of the civil status officer | civ_stat_firstname | First name of the civil status officer | Text | 25
Name of the declarant | declarant_name | Name of the declarant | Text | 25
First name of the declarant | declarant_firstname | First name of the declarant | Text | 25

Methods:
DeclareBirth()
ReceiveDeclaration()
CalculateNumberDays()
RequestConfirmation()
ConfirmDeclaration()
RegisterDeclaration()
EstablishBirthCertificate()
DeliverBirthCertificate()
Table 17: Description of an eLEL symbol type subject.
eLEL Symbol Declarant
Type Subject
Notion This is a person who declares the birth.
It is an entity characterized by a name, a first name, an address, and
the quality of the declarant.
It provides the Region of birth, the district of birth, the
municipality of birth, the information about the newborn, the
information about the father of the newborn, the information
about the newborn’s mother, the issuance date of the certificate, the
date of birth, the place where the certificate was made.
BehavioralResponse It fills in the information about the birth Region, the District of
13. International Journal of Computer Science, Engineering and Applications (IJCSEA) Vol.5, No.6, December 2015
13
birth, the municipality of birth, about the neighborhood where he
was born, about the newborn, the newborn’s father and about the
newborn’s father.
Attributes
Name | Code | Definition | Format | Size
Name | Name | Name of declarant | Text | 25
First name | Firstname | First name of declarant | Text | 25
Address | Adress | Address of declarant | Text | 65
Quality | quality | Quality of declarant | Text | 15
Birth certificate date | birth_cert_date | Date of the birth certificate | Date | 8
Birth date | birth_date | Date of birth | Date | 8
Place of the certificate | cert_place | Place of the certificate | Text | 25
Methods
EnterRegion()
EnterDistrict()
EnterMunicipality()
EnterNeighborhood()
EnterNewbornInfo()
EnterFatherInfo()
EnterMotherInfo()
EnterDeclarantInfo()
Table 18: Description of an eLEL symbol of verb type.
eLEL Symbol Issue the birth certificate
Type Verb
Notion The civil status officer must issue the birth certificate
following the process of issuing birth certificates.
Behavioral response The birth certificate is delivered to the declarant on a date
fixed by the civil status officer.
Attributes
Name | Code | Definition | Format | Size
The birth certificate | copy_birth_cert | Copy of the birth certificate | Complex | 1
Declarant | declarant | Declarant of the birth | Complex | 1
Civil status officer | civ_stat_officer | Civil status officer | Complex | 1
Methods
DeliverBirthCertificate()
Step 10: We have defined the terms of state type.
Table 19: Description of an eLEL symbol type state.
eLEL symbol Copy of the birth certificate issued
Type State
Notion The situation in which the birth certificate of the newborn is
delivered at the end of the process for issuing the birth
certificate.
It is reached through the action "deliver the birth certificate".
Behavioral response The date of the issuance of the first birth certificate is fixed.
The birth certificate is issued.
Attributes
Name | Code | Definition | Format | Size
Birth declaration form | declaration_form | Birth certificate declaration | Complex | 1
Declarant | declarant | Declarant of the birth | Complex | 1
Civil status officer | civ_stat_officer | Civil status officer | Complex | 1
Methods
DeliverBirthCertificate()
Steps 11 and 12: To complete the construction process, an expert study was carried out by
linguists to verify the description of the eLEL (step 11). Finally, all the parties concerned have
also validated the eLEL symbols (step 12).
Step 13: By applying step 13, the relationships between symbols, as well as their occurrences,
were validated.
5.2. Transformation of the obtained eLEL into a class diagram
Rule 1: We obtained the list of UML classes corresponding to each eLEL symbol of subject and
object type.
Table 20: List of obtained UML class candidates.
Type | eLEL object | UML class
Subject | Declarant, Process of issuing birth certificate, Civil Status Officer | Declarant, Process of issuing birth certificate, Civil Status Officer
Object | Municipality, District, Birth certificate declaration form, Newborn, Region, Newborn's mother, Newborn's father | Municipality, District, Birth certificate declaration form, Newborn, Region, Newborn's mother, Newborn's father
Rules 2 and 3: By applying rules 2 and 3 respectively, we obtained the attributes and methods for
each UML class from rule 1.
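Rules 1-3 can be read as a simple filter-and-copy over the symbol set: each subject or object symbol yields a UML class candidate that keeps the symbol's attribute codes and method names. A minimal Python sketch of that reading follows; the dict representation of symbols is our own assumption, not the paper's ATL rules:

```python
def elel_to_class_candidates(symbols):
    """Apply rules 1-3: subject/object symbols become UML class candidates."""
    candidates = {}
    for sym in symbols:
        if sym["type"] not in ("Subject", "Object"):   # Rule 1: filter by type
            continue
        candidates[sym["entity"]] = {
            "attributes": [a["code"] for a in sym.get("attributes", [])],  # Rule 2
            "methods": list(sym.get("methods", [])),                       # Rule 3
        }
    return candidates

# Trimmed symbols from the case study above:
symbols = [
    {"entity": "Declarant", "type": "Subject",
     "attributes": [{"code": "name"}, {"code": "firstname"}],
     "methods": ["EnterRegion()"]},
    {"entity": "Issue the birth certificate", "type": "Verb",
     "attributes": [], "methods": ["DeliverBirthCertificate()"]},
]
candidates = elel_to_class_candidates(symbols)
# Only the Subject symbol "Declarant" yields a class; the Verb symbol does not.
```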
Table 21: Example of class UML, methods and attributes.
UML class | Attributes | Methods
Declarant | name, firstname, adress, quality, date_birth_certif, date_birth | EnterRegion(), EnterDistrict(), EnterMunicipality(), EnterNeighborhood(), EnterNewborn(), EnterFather(), EnterMother(), EnterDeclarant()
Process of issuing the birth certificate | date_declaration, name_civ_stat_officer, firstname_civ_stat_officer | DeclareBirth(), ReceiveDeclaration(), CalculateNumberDays(), RequestConfirmation(), RegisterDeclaration(), EstablishBirthCertificate(), IssueBirthCertificate()
Birth certificate declaration form | birth_certif_numb, birth_month, birth_year, etc. | getNum(), setNum(), etc.
Rules 4 and 5: After the extraction, we obtain the abstract UML class diagram model (Figure 3)
corresponding to the eLEL subject and object symbols (Figure 2), by translating rules 1-5 into
ATL (Atlas Transformation Language) transformations. Figure 1 shows the ATL transformation
of the rule 5 extraction of eLEL into a UML class diagram.
Figure 1. ATL transformation of rule 5: mapping the number of created elements in the eLEL
extraction to UML class cardinalities.
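Rule 5 derives cardinalities from the number of created elements. Without reproducing the ATL rule itself, the idea can be illustrated in Python: attributes whose Format is Complex (see the "Issue the birth certificate" verb symbol above) reference another symbol rather than a scalar value, and their Size column can be reused as the multiplicity of the resulting association. The dict representation and function name here are our own assumptions:

```python
def associations_from_attributes(owner, attributes):
    """Turn Complex-format eLEL attributes into UML association candidates.

    A Complex attribute points at another symbol; its Size column gives the
    number of referenced elements, reused here as the multiplicity.
    """
    return [
        {"from": owner, "to": a["name"], "multiplicity": str(a["size"])}
        for a in attributes
        if a["format"] == "Complex"
    ]

# Two Complex attributes of the "Issue the birth certificate" verb symbol:
attrs = [
    {"name": "Declarant", "format": "Complex", "size": 1},
    {"name": "Civil status officer", "format": "Complex", "size": 1},
]
links = associations_from_attributes("Issue the birth certificate", attrs)
# Each Complex attribute becomes one association candidate with multiplicity "1".
```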
Figure 2. Ecore Model Sample Reflective for the eLEL symbol model.
Figure 3. Ecore Model Sample Reflective for UML class diagram abstract.
6. CONCLUSION AND FUTURE WORK
In this paper, we have proposed a conceptual model enriched in natural language, eLEL, which
enables us to describe domain concepts in more detail. We have then proposed a modeling
process using the eLEL model. An eLEL consists of an entity, attributes, methods, notions, and
behavioral responses. The entity is the term or concept itself; the attributes and methods form the
conceptual view; the notions and behavioral responses define the natural-language semantics of
each symbol. The heuristics and the stages of the eLEL construction process used in this article
allowed us to build an eLEL for the case study we conducted. These construction steps are
important in requirements engineering and must therefore be carried out carefully: they provide a
systematic and logical way to define the conceptual view of an application model in natural
language. The approach presented in this article enables us to describe the different concepts of
the UofD by explicitly defining their structural and behavioral aspects. The eLEL model is a
conceptual model of an application domain expressed in natural language; since it comes directly
from the UofD, it stays close to the original raw model. Thanks to its wealth of information, it can
easily be extracted from the application requirements as well as from the data dictionary of a
domain.
As perspectives of this work, we identified the following:
- The use of eLEL in MDM (Master Data Management) to design and implement an information
system and to map the core business of an enterprise.
- The extraction of an eLEL symbol from different UML diagrams such as the use case diagram,
the activity diagram, the state diagram, etc.
- The derivation of scenarios.
- The computerization of the birth certificate.
- The creation of a website.
ACKNOWLEDGEMENTS
I would like to thank Pr. Josvah Paul RAZAFIMANDIMBY and Dr. Thomas MAHATODY for
their support and encouragement. They kindly read my paper and offered invaluable detailed
advice on grammar, organization, and the theme of the paper.
AUTHORS
Jean Luc RAZAFINDRAMINTSA is a PhD student at the Research Institute School for
Computer Modelisation and the Laboratory of Mathematical and Computer Applied to
the Development Systems at the University of Fianarantsoa, Madagascar. His main
research topics are the modelling of Master Data Management, Service Oriented
Architecture, Enterprise Architecture, and Business Intelligence using an elaborate
natural language model oriented to requirements.
Thomas MAHATODY holds a PhD from the National School for Computer Engineering
and is the Director of TIC at the University of Fianarantsoa, Madagascar.
Josvah Paul RAZAFIMANDIMBY is a full Professor at the University of Madagascar
and the Director of the Laboratory of Mathematical and Computer Applied to the
Development Systems at the University of Fianarantsoa, Madagascar.