The generation of digital content has increased greatly in recent years, driven by new technologies that allow content to be created quickly and easily. A further step in this evolution is the generation of content by automatic systems without human intervention. Thus, for decades, models for Natural Language Generation (NLG) have been developed that transform content into the form of narratives. At present, several systems enable generation in text format. In this paper we present the Narrative system, which generates textual narratives from different data sources, narratives that are indistinguishable to the user from those written by a human being.
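The paper does not reproduce the Narrative system's implementation here; as a minimal illustration of the general data-to-text idea, the sketch below (all field names, values, and templates are invented) renders a structured record as a narrative sentence by filling slot templates and choosing a verb from a value comparison:

```python
# Minimal template-based data-to-text sketch (illustrative only; this is
# not the Narrative system described in the paper).

def generate_weather_sentence(record):
    """Render one structured observation as a narrative sentence."""
    # Pick the verb by comparing today's high to the previous high.
    trend = "rose" if record["high"] > record["prev_high"] else "fell"
    return (f"In {record['city']}, the temperature {trend} to "
            f"{record['high']} degrees, with {record['sky']} skies.")

data = {"city": "Madrid", "high": 31, "prev_high": 27, "sky": "clear"}
print(generate_weather_sentence(data))
```

Production data-to-text systems layer content selection, document planning, and surface realization on top of this basic slot-filling step.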
Game-based Learning as a Suitable Approach for Teaching Digital Ethical Think...Dagmar Monett
Slides of the talk at the 15th annual International Technology, Education and Development Conference, INTED 2021 (a virtual conference), March 8th-9th, 2021.
Context, Causality, and Information Flow: Implications for Privacy Engineerin...Sebastian Benthall
The creators of technical infrastructure are under social and legal pressure to comply with expectations that can be difficult to translate into computational and business logics. The dissertation presented in this talk bridges this gap through three projects that focus on privacy engineering, information security, and data economics, respectively. These projects culminate in a new formal method for evaluating the strategic and tactical value of data. This method relies on a core theoretical contribution building on the work of Shannon, Dretske, Pearl, Koller, and Nissenbaum: a definition of information flow as a channel situated in a context of causal relations.
In recent years the growth of digital data has accelerated dramatically, and knowledge discovery and data mining have attracted immense attention given the need to turn such data into useful information and knowledge. Keyword extraction is an essential task in natural language processing (NLP) that maps documents to a concise set of representative single- and multi-word phrases. This paper investigates the use of Word2Vec and a Decision Tree for keyword extraction from textual documents. The SemEval 2010 dataset is used as the main input for the study. After pre-processing operations on the dataset, the words are represented as vectors with the Word2Vec technique. The method is based on word similarity between candidate keywords, comparing the keywords collected for each label against one sample from the same label. A threshold is determined, and the similarity scores that exceed it are passed to the Decision Tree, which assigns a classification to the text document.
Several similarity measures were used for the classification process, and the efficiency and accuracy of the algorithm were measured using precision, recall and F-score. The results indicate that a vector representation for each keyword is an effective way to identify the most similar words, increasing the chance of recognizing the correct classification of the document. With Word2Vec CBOW, the F-score was 64% using the Gini method and the WordNet lemmatizer; with Word2Vec skip-gram (SG), the F-score reached 82% using the Gini index and English Porter stemming, the highest result across all our experiments.
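The similarity-threshold step described above can be sketched as follows. The toy 4-dimensional vectors and the 0.8 cut-off are invented for illustration; in the paper the vectors come from a trained Word2Vec model and the threshold is determined empirically:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings: one representative vector per label (stand-ins for
# keywords collected per label) and one candidate keyword vector.
label_keywords = {"sports": [1.0, 0.9, 0.1, 0.0],
                  "finance": [0.0, 0.1, 1.0, 0.9]}
candidate = [0.9, 1.0, 0.2, 0.1]

THRESHOLD = 0.8   # assumed cut-off
scores = {lbl: cosine(candidate, vec) for lbl, vec in label_keywords.items()}
# Only labels whose similarity clears the threshold are forwarded to the
# Decision Tree for the final classification decision.
passing = {lbl: s for lbl, s in scores.items() if s > THRESHOLD}
print(passing)
```

The candidate aligns with the "sports" vector and is filtered out for "finance", mirroring how the thresholded scores prune the label set before classification.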
http://sites.google.com/site/ijcsis/
https://google.academia.edu/JournalofComputerScience
https://www.linkedin.com/in/ijcsis-research-publications-8b916516/
http://www.researcherid.com/rid/E-1319-2016
Ontology engineering, along with semantic Web technologies, allows the semantic development and modeling of the operational flow required for blockchain design. The semantic Web, according to the W3C, "provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries" and can be seen as an integrator for various content, applications and information systems. The most widely used approach to blockchain modelling, through abstract representation, description and definition of structure, processes, information and resources, is enterprise modelling. Enterprise modelling employs domain ontologies through model representation languages.
DOI: 10.13140/RG.2.2.19062.24642
Semantic-Driven e-Government: Application of Uschold and King Ontology Buildi...dannyijwest
Electronic government (e-government) has been one of the most active areas of ontology development
during the past six years. In e-government, ontologies are being used to describe and specify e-government
services (e-services) because they enable easy composition, matching, mapping and merging of various e-
government services. More importantly, they also facilitate the semantic integration and interoperability of
e-government services. However, it is still unclear in the current literature how an existing ontology
building methodology can be applied to develop semantic ontology models in a government service
domain. In this paper the Uschold and King ontology building methodology is applied to develop semantic
ontology models in a government service domain. Firstly, the Uschold and King methodology is presented,
discussed and applied to build a government domain ontology.
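The service ontologies discussed above describe e-government services so they can be composed, matched and merged. A toy triple-store sketch gives the flavour; the class and service names (PassportRenewal, CitizenID, and so on) are invented for illustration and do not come from the paper:

```python
# Toy sketch: an e-government service specified as subject-predicate-object
# triples, in the spirit of the ontologies discussed above.
triples = [
    ("PassportRenewal",  "rdf:type",         "GovernmentService"),
    ("PassportRenewal",  "hasInput",         "CitizenID"),
    ("PassportRenewal",  "hasOutput",        "Passport"),
    ("GovernmentService", "rdfs:subClassOf", "Service"),
]

def objects(subject, predicate):
    """Return all objects matching a (subject, predicate) query."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("PassportRenewal", "hasInput"))
```

Matching two services for composition then reduces to comparing, for example, the `hasOutput` objects of one service with the `hasInput` objects of another.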
Data-to-text technologies present an enormous and exciting opportunity to help audiences understand some of the insights present in today's vast and growing amounts of electronic data. In this article we analyze the potential value and benefits of these solutions as well as the risks and limitations holding back their wider penetration. These technologies already bring substantial advantages in cost, time, accuracy and clarity over traditional approaches and formats. On the other hand, important limitations still restrict their broad applicability, most notably the limited quality of their output. Nevertheless, we find that the current state of development is sufficient to apply these solutions across many domains and use cases, and we recommend that businesses of all sectors consider how to deploy them to enhance the value they currently derive from their data. As the availability of data keeps growing exponentially and natural language generation technology keeps improving, we expect data-to-text solutions to take a much bigger role in the production of automated content across many different domains.
USING MACHINE LEARNING TO BUILD A SEMI-INTELLIGENT BOT ecij
Nowadays, real-time systems and intelligent systems offer more and more control interfaces based on voice recognition or human language recognition. Robots and drones will soon be controlled mainly by voice, and other robots will integrate bots to interact with their users; this can be useful both in industry and entertainment. At first, researchers dug into "ontology reasoning". Given the technical constraints of processing ontologies, an interesting alternative has emerged in recent years: building a machine-learning model that connects human language to a knowledge base (based, for example, on RDF). In this paper we present our contribution: a bot that can run on real-time systems and drones/robots, built with recent machine learning technologies.
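The abstract's core idea, mapping a human utterance to a knowledge-base lookup, can be sketched with a naive bag-of-words intent matcher. This is a stand-in for the paper's learned model, and the intents, utterances and knowledge-base entries below are all invented:

```python
# A dict stands in for an RDF store; a word-overlap matcher stands in for
# the machine-learned language model described in the paper.
knowledge_base = {"drone1": {"battery": "82%", "altitude": "12 m"}}

intent_examples = {  # tiny "training set" of example utterances per intent
    "battery":  ["battery level", "how much charge left"],
    "altitude": ["how high", "current altitude"],
}

def classify(utterance):
    """Pick the intent whose examples share the most words with the input."""
    words = set(utterance.lower().split())
    def overlap(intent):
        return max(len(words & set(ex.split())) for ex in intent_examples[intent])
    return max(intent_examples, key=overlap)

def answer(utterance, entity="drone1"):
    return knowledge_base[entity][classify(utterance)]

print(answer("what is the battery charge left"))
```

A real system would replace the overlap score with a trained classifier over sentence embeddings and the dict lookup with a SPARQL query, but the control flow (utterance, intent, knowledge-base query, answer) is the same.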
Temporal Exploration in 2D Visualization of Emotions on Twitter StreamTELKOMNIKA JOURNAL
As people freely express their opinions about a product on Twitter streams without being bound by time, visualizing the temporal pattern of customers' emotional behavior can play a crucial role in decision-making. We analyze how emotions fluctuate over time and demonstrate how to turn them into useful visualizations with an appropriate framework. We customized an existing framework to improve the state of the art in crawling and visualizing Twitter data. The data, posts and status updates on Twitter about the iPhone, were collected from the U.S.A., Japan, Indonesia, and Taiwan using geographical bounding boxes, and visualized as a two-dimensional heat map, an interactive stream graph, and a context-focus view via brushing. The results show that the proposed system can reveal unique temporal patterns in customers' emotional behavior.
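The aggregation behind a time-vs-emotion heat map is a simple bucketed count. The sketch below uses invented timestamps and emotion labels; the paper's pipeline additionally classifies each tweet's emotion before this step:

```python
from collections import Counter

# Toy labelled tweets: (ISO timestamp, emotion label).
tweets = [
    ("2023-05-01T09:14", "joy"), ("2023-05-01T09:40", "anger"),
    ("2023-05-01T09:55", "joy"), ("2023-05-01T10:05", "joy"),
]

grid = Counter()                 # (hour bucket, emotion) -> count
for timestamp, emotion in tweets:
    hour = timestamp[:13]        # truncate "YYYY-MM-DDTHH" to the hour
    grid[(hour, emotion)] += 1

print(grid[("2023-05-01T09", "joy")])   # → 2
```

Each `(hour, emotion)` cell of `grid` becomes one cell of the 2D heat map, with the count mapped to colour intensity.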
Dialectal Arabic sentiment analysis based on tree-based pipeline optimizatio...IJECEIAES
The heavy involvement of Arabic internet users has spread data written in the Arabic language, creating a vast research area for natural language processing (NLP). Sentiment analysis is a growing field of research of great importance, given its high potential for decision-making and for predicting upcoming actions from texts produced in social networks. The Arabic used on microblogging websites, especially Twitter, is highly informal: it complies with neither standards nor spelling conventions, making it quite challenging for automatic machine-learning techniques. In this paper we propose a new approach based on AutoML methods to improve the efficiency of sentiment classification for dialectal Arabic. The approach was validated through benchmark tests on three datasets representing three vernacular forms of Arabic. The results show that the presented framework achieves significantly higher accuracy than similar works in the literature.
A prior case study of natural language processing on different domain IJECEIAES
In the present digital world, computers do not understand ordinary human language, which is a great barrier between humans and digital systems. Researchers have therefore developed technology that delivers information to users from digital machines. Natural language processing (NLP) is a branch of AI with significant implications for how computers and humans interact, and it has become an essential technology for bridging the communication gap between humans and digital data. This study motivates the need for NLP in the current computing world, surveys different approaches and their applications, and highlights the key challenges in developing new NLP models.
Concept integration using edit distance and n gram match ijdms
The rapid growth of information on the World Wide Web (WWW) has made it necessary to make all this information available not only to people but also to machines. Ontologies and tokens are widely used to add semantics to data and information processing. Formally, a concept refers to the meaning of a specification encoded in a logic-based language; "explicit" means that the concepts and properties of the specification are machine readable; and a conceptualization models how people think about things in a particular subject area. In the modern scenario, many ontologies have been developed on a variety of topics, resulting in increased heterogeneity of entities among ontologies. Concept integration has become vital over the last decade as a tool to minimize this heterogeneity and empower data processing. There are various techniques for integrating concepts from different input sources based on semantic or syntactic match values. In this paper, an approach is proposed to integrate concepts (ontologies or tokens) using edit-distance or n-gram match values between pairs of concepts, with concept frequency used to drive the integration process. The performance of the proposed techniques is compared with semantic-similarity-based integration techniques on quality parameters such as recall, precision, F-measure and integration efficiency over different concept-set sizes. The analysis indicates that edit-distance-based integration outperforms both n-gram integration and semantic similarity techniques.
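The two syntactic match scores the abstract compares are standard string measures; a compact sketch of both (classic Levenshtein distance, and a Dice coefficient over character bigrams as one common way to score n-gram overlap):

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def ngram_similarity(a, b, n=2):
    """Dice coefficient over character n-grams, in [0, 1]."""
    grams = lambda s: {s[i:i + n] for i in range(len(s) - n + 1)}
    ga, gb = grams(a), grams(b)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

print(edit_distance("ontology", "ontologies"))   # → 3
print(ngram_similarity("color", "colour"))
```

An integration pass can then merge two concept labels when either score crosses a chosen threshold, with concept frequency deciding which label survives the merge.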
Structured and Unstructured Information Extraction Using Text Mining and Natu...rahulmonikasharma
Information on the web is increasing ad infinitum. The web has become an unstructured global space where information, even when available, cannot be used directly for the desired applications; one is often faced with information overload and a demand for automated help. Information extraction (IE) is the task of automatically extracting structured information from unstructured and/or semi-structured machine-readable documents by means of text mining and natural language processing (NLP) techniques. The extracted structured information can be used for a variety of enterprise- or personal-level tasks of varying complexity. IE can also draw on a body of knowledge to answer user queries posed in natural language; here the system is based on a fuzzy-logic engine, which offers flexibility for managing sets of accumulated knowledge that may be organized in hierarchic levels by a tree structure. Information extraction derives structured data or knowledge from unstructured text by identifying references to named entities as well as stated relationships between them. Data mining research assumes that the information to be "mined" is already in the form of a relational database, so IE can serve as an important enabling technology for text mining. When the knowledge of interest is expressed directly in the documents to be mined, IE alone can serve as an effective approach to text mining. However, if the documents contain concrete data in unstructured form rather than abstract knowledge, it may be useful first to use IE to transform the unstructured data in the document corpus into a structured database, and then to use traditional data mining tools to identify abstract patterns in the extracted data. We propose a method for text mining with natural language processing techniques that extracts information efficiently, measuring extraction time and accuracy in simulation, where the attributes of entities and relationships are drawn from structured and semi-structured information. Results are compared with conventional methods.
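The step of turning unstructured text into a structured database can be illustrated with a minimal pattern-based extractor. The sentence template, names and regex below are invented; real IE systems use full NLP pipelines rather than a single regex:

```python
import re

text = ("Alice Smith joined Acme Corp in 2019. "
        "Bob Jones joined Beta Ltd in 2021.")

# One hand-written pattern: (person) joined (organisation) in (year).
pattern = re.compile(r"([A-Z]\w+ [A-Z]\w+) joined ([A-Z]\w+ \w+) in (\d{4})")

# Each match becomes one structured record, ready for a database table.
records = [{"person": p, "org": o, "year": int(y)}
           for p, o, y in pattern.findall(text)]
print(records)
```

Once the records are in tabular form, conventional data-mining tools can look for patterns across them, exactly the two-stage pipeline the abstract describes.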
This paper introduces topic tracking for the Punjabi language. Text mining is a field that automatically extracts previously unknown and useful information from unstructured textual data, and it has strong connections with natural language processing. NLP has produced technologies that teach computers natural language so that they may analyze, understand and even generate text. Topic tracking is one such technology that can be used in the text mining process. Its main purpose is to identify and follow events presented in multiple news sources, including newswires and radio and TV broadcasts; it collects dispersed information and makes it easy for a user to get a general understanding. Not much work has been done on topic tracking for Indian languages in general and Punjabi in particular. We first survey the approaches available for topic tracking, then present our approach for Punjabi and report experimental results.
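The core of most tracking approaches the survey covers is assigning each incoming story to the known topic whose term profile it most resembles. A toy English sketch (invented topics and story; the paper's system applies this idea to Punjabi text):

```python
from collections import Counter
import math

def tf(text):
    """Term-frequency vector of a whitespace-tokenized text."""
    return Counter(text.lower().split())

def cos(a, b):
    """Cosine similarity between two sparse term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    return dot / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())))

topics = {"cricket":  tf("cricket match team innings score"),
          "election": tf("election vote party seat poll")}

story = tf("the team won the cricket match by a big score")
best = max(topics, key=lambda t: cos(story, topics[t]))
print(best)   # → cricket
```

Tracking then updates the winning topic's profile with the new story's terms, so the topic representation drifts along with the unfolding event.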
A DEVELOPMENT FRAMEWORK FOR A CONVERSATIONAL AGENT TO EXPLORE MACHINE LEARNIN...mlaij
This study introduces a discussion platform and curriculum designed to help people understand how machines learn. The research shows how to train an agent through dialogue and how to understand the way information is represented using visualization. The paper starts by providing a comprehensive definition of AI literacy based on existing research, and integrates a wide range of subject documents into a set of key AI literacy skills for developing user-centered AI. These functional and structural considerations are organized into a conceptual framework grounded in the literature. The contributions of this paper can be used to initiate discussion and guide future research on AI learning within the computer science community.
Temporal Information Processing is a subfield of Natural Language Processing, valuable in many tasks
like Question Answering and Summarization. Temporal Information Processing is broadened, ranging
from classical theories of time and language to current computational approaches for Temporal
Information Extraction. This later trend consists on the automatic extraction of events and temporal
expressions. Such issues have attracted great attention especially with the development of annotated
corpora and annotations schemes mainly TimeBank and TimeML. In this paper, we give a survey of
Temporal Information Extraction from Natural Language texts.
A N E XTENSION OF P ROTÉGÉ FOR AN AUTOMA TIC F UZZY - O NTOLOGY BUILDING U...ijcsit
The process of building ontology is a very
complex and time
-
consuming process
especially when dealing
with huge amount of data. Unfortunately current
marketed
tools are very limited and don’t meet
all
user
needs.
Indeed, t
hese software build the core of the ontology from initial data that generates
a
big number of
information.
In this paper, we
aim to resolve these problems
by adding an extension to the well known
ontology editor Protégé in order to work towards a complete
FCA
-
based framework
which resolves the
limitation of other tools in
building fuzzy
-
ontology
.
W
e will give
, in this paper
, some
details on
our
sem
i
-
automat
ic collaborative tool
called FOD Tab Plug
-
in
which
takes into consideration another degree of
granularity in the process of generation
.
In fact, i
t follows a bottom
-
up strategy based on conceptual
clustering, fuzzy logic and Formal Concept Analysis (FCA) a
nd it defines ontology between classes
resulting from a preliminary classification of data and not from the initial large amount of data
.
Narrative: Text Generation Model from Data
International Journal of Modern Research in Engineering and Technology (IJMRET)
www.ijmret.org, ISSN: 2456-5628, Volume 2, Issue 8, December 2017, Page 20
Narrative: Text Generation Model from Data
Darío Arenas1, Javier Pérez1, David Martínez1, David Llorente1, Eugenio Fernández1, Antonio Moratilla1
1(Computer Science/ University of Alcalá, Spain)
ABSTRACT: The generation of digital content has increased greatly in recent years due to the development of new technologies that allow content to be created quickly and easily. A further step in this evolution is the generation of content by automatic systems without human intervention. Thus, models for Natural Language Generation (NLG) that transform content into the form of narratives have been under development for decades. At present, several systems enable generation in text format. In this paper we present the Narrative system, which generates text narratives from different sources, narratives that are indistinguishable to the user from those written by a human being.
KEYWORDS: Automatic Digital Content Generation, D2T, NLG
I. INTRODUCTION
Nowadays, and in a general way, we understand digital content as any data, information or knowledge, or a set of these, that can be processed or stored by digital devices and transmitted through telecommunication networks. The format of digital content is usually categorized into four basic types: text, image, audio and video. Web pages, blogs, social media or digital newspapers could technically be considered sets of elements drawn from those four basic formats, although they eventually come to be treated as formats in their own right; for other digital content, such as software in general or video games, definition in terms of those basic formats is less evident. Regardless of this kind of classification, the creation of digital content of every type has grown exponentially in recent decades.
This tremendous growth in digital content generation has been driven by different factors. On the one hand, the fast evolution of hardware and software has made content creation ever faster, easier and more efficient. On the other hand, digital content has reached everyone at the end-user level: more people use technology every day, and increasingly complex content has been developed to satisfy that public, such as web generation and the processing of images, audio or video. In addition, social factors have enabled this huge increase in digital content generation, such as the demand for content from users, which has even created moments of saturation around certain content.
From now on we will refer to digital text content as information, because it is the focus of this work: the kind of information that digital media publish for users to consume in digital newspapers, blogs or e-commerce websites. For example, such pieces of information can be a journalistic article, a product description, a brief explanation of an event, a set of rules, or the explanation of a weather forecast. Thus, the subset of digital content we deal with in this work is digital information in text format. However, the automatic generation model we present is perfectly valid for other subsets of digital content, not only textual ones.
II. NATURAL LANGUAGE PROCESSING
Automatic generation of information in text format is usually approached through Natural Language Processing (NLP in what follows), which combines techniques from Artificial Intelligence, such as Logic or Machine Learning, with Statistics, Applied Linguistics and Procedural Programming, in order to develop methods for tasks such as the understanding and computer-assisted processing of information expressed in human languages, for jobs like automatic translation, narrative generation, knowledge extraction from texts, and so on. NLP is divided into
two large areas: Natural Language Generation (NLG hereafter) and Natural Language Understanding (NLU hereafter), which can be considered the inverse task of NLG. On the one hand, the aim of NLG is to generate text messages in human language from structured and complete data about the domain we want to describe. On the other hand, NLU aims to obtain structured, machine-understandable data from texts given in human language. In this work we want to generate narratives automatically, so we are interested in NLG. It is not a new discipline, and it draws support from other disciplines, which implies many techniques from those fields. NLP was born in the 1960s, although it was not widely recognized until the 1980s [1],[2],[3],[4].
Building on the work of different researchers, principally Reiter and Dale (1997), a model for solving NLG problems has been developed in recent years, based on sequential, well-defined stages ([5],[6],[7],[8],[9],[10],[11],[12],[13],[14],[15],[16],[17],[18]), from which specific models have been derived for application to a large number of fields, such as meteorology, health or industrial processes [19].
This model is based on six phases or activities grouped into three stages, each working at a different level of linguistic representation (semantic, lexical, syntactic): Text Planning (activities 1 and 2), Sentence Planning (activities 3, 4 and 5), and Linguistic Realization (activity 6). The six basic activities, represented in Fig. 1, are [20]:
Content determination: in this first activity, from the given data we generate a set of simple messages, expressed in an intermediate language that distinguishes the labels, objects, concepts and relations of interest in the application domain; these messages establish the information that will appear in the text.
Discourse planning: we order and structure the messages to transmit.
Sentence aggregation: we group the messages into sentences to improve the fluency of the text.
Lexicalization: we choose the specific words and expressions that we will use to express the concepts and relations of the domain.
Generation of referring expressions: we select the words or expressions that identify objects of the domain, ensuring that the system provides enough information to distinguish each object from the others.
Linguistic realization: we apply grammatical rules to produce a final text that is correct from the lexical, syntactic and semantic points of view.
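The six activities above can be sketched as a minimal, runnable pipeline for a toy weather domain. All function and data names here are hypothetical illustrations, not part of any system described in the paper.

```python
# A minimal sketch of the six NLG activities, applied to a toy weather domain.

def content_determination(data):
    # Activity 1: turn raw data into simple (label, object, value) messages.
    return [("temperature", data["city"], data["temp"]),
            ("sky", data["city"], data["sky"])]

def discourse_planning(messages):
    # Activity 2: order and structure the messages to transmit.
    order = {"sky": 0, "temperature": 1}
    return sorted(messages, key=lambda m: order[m[0]])

def sentence_aggregation(messages):
    # Activity 3: group messages into sentence plans (here: one sentence).
    return [messages]

def lexicalization_and_reference(sentence_plans):
    # Activities 4 and 5: choose wording and referring expressions.
    templates = {"sky": "the sky over {obj} is {val}",
                 "temperature": "the temperature is {val} degrees"}
    return [[templates[label].format(obj=obj, val=val)
             for (label, obj, val) in plan]
            for plan in sentence_plans]

def linguistic_realization(sentence_phrases):
    # Activity 6: apply surface rules (conjunction, capitalisation, punctuation).
    sentences = []
    for phrases in sentence_phrases:
        s = " and ".join(phrases)
        sentences.append(s[0].upper() + s[1:] + ".")
    return " ".join(sentences)

data = {"city": "Madrid", "temp": 21, "sky": "clear"}
text = linguistic_realization(
    lexicalization_and_reference(
        sentence_aggregation(
            discourse_planning(
                content_determination(data)))))
print(text)  # The sky over Madrid is clear and the temperature is 21 degrees.
```

Real systems replace each stage with far richer logic, but the interfaces between stages (messages, sentence plans, phrase lists) follow the same shape.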
III. TEXT GENERATION FROM DATA
At present, the different systems developed on that model generate two kinds of texts. On the one hand, there are what we may call "objective texts", which present information about a well-defined domain (sports, finance, meteorology, product descriptions, etc.), a domain that can usually be defined in terms of data. On the other hand, there are "subjective texts", which express stories with a high semantic weight and many emotional connotations, in contrast with objective texts, which rarely carry such connotations because they present objective information with no emotional charge.
In this work we focus on objective texts, in which we find a well-defined domain given by a finite set of data. In recent years such approaches have been named D2T (data-to-text) models. Interest in them among NLG researchers has grown [21] due to the explosion in data generation and the ease of accessing and processing data. Researchers have thus found a practical application for the techniques developed over recent years, in trying to create systems that generate natural language
Figure 1. Components of an NLG system: Content determination → Discourse planning → Sentence aggregation → Lexicalization → Generation of referring expressions → Linguistic realization.
successfully, as has been done with more or less success for tasks like automatic language translation. Our goal is to achieve an approximation to the model in Fig. 1 in a context where data constitute the content definition. Several D2T models have been developed in recent years, and there is now a consensus structure followed by the vast majority of practical implementations [22].
This model (Fig. 2) is based on four activities: Signal Analysis (it analyzes the given data looking for patterns and trends); Data Interpretation (it identifies complex messages in the domain from the patterns and trends found in the first activity, identifying relations among the messages); Document Planning (it decides which messages must appear in the final text); and Microplanning and Realization (messages are transformed into several linked expressions, and those expressions are transformed into text strings). This model has later been modified and used by different authors to generate similar approximations; for instance, [18] proposes the activities Data Analysis, Content Definition (select and structure the information, obtained from the data analysis, that we want to transmit), Aggregation and Microplanning (it transforms the information into linked expressions), and Realization (those expressions are transformed into text strings). With that model structure as a base, specific D2T models have been developed, such as WeatherReporter and FoG [5], MultiMeteo [23], SumTime-Mousam [24], the British Met Office system [25], TEMSIS [26] and MARQUIS [27] in the field of meteorology; SUREGEN-2 [28] and BabyTalk [21] in the field of health; Patent Claim Expert [29] and SumTime-Turbine [30] in the field of industry; IDAS [31] for the automatic generation of online documents; ModelExplainer [32] to generate text descriptions of object-oriented software models; PEBA [33] for entity description in a knowledge base; and STOP [34] for the automatic generation of personalized letters to give up smoking. Regarding generation systems for subjective texts, there are several systems such as TALE-SPIN [35], GESTER [36], MINSTREL [37], MEXICA [38], BRUTUS [39] and Cast [40].
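The four-stage D2T structure can be sketched end to end on a toy temperature series. The thresholds and wording below are illustrative assumptions, not taken from any of the systems cited.

```python
# A hypothetical sketch of the four D2T stages on a toy temperature series.

def signal_analysis(series):
    # Stage 1: look for simple patterns and trends in the raw data.
    return {"start": series[0], "end": series[-1],
            "delta": series[-1] - series[0]}

def data_interpretation(patterns):
    # Stage 2: build domain messages from the detected patterns.
    messages = []
    if abs(patterns["delta"]) >= 5:  # illustrative significance threshold
        direction = "rose" if patterns["delta"] > 0 else "fell"
        messages.append(("trend", direction, patterns["start"], patterns["end"]))
    messages.append(("latest", patterns["end"]))
    return messages

def document_planning(messages):
    # Stage 3: decide which messages must appear in the final text.
    trends = [m for m in messages if m[0] == "trend"]
    return trends or messages

def microplanning_and_realization(selected):
    # Stage 4: turn the selected messages into text strings.
    out = []
    for m in selected:
        if m[0] == "trend":
            out.append(f"The temperature {m[1]} from {m[2]} to {m[3]} degrees.")
        else:
            out.append(f"The temperature is {m[1]} degrees.")
    return " ".join(out)

series = [12, 14, 17, 19]
report = microplanning_and_realization(
    document_planning(
        data_interpretation(
            signal_analysis(series))))
print(report)  # The temperature rose from 12 to 19 degrees.
```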
If we analyze those and other implemented systems, beyond the minimal differences with the model in Fig. 2 needed to obtain a specific D2T model, we can classify them into two types: those based on writing scripts, and those that do not use them. From a formal perspective both types should reach solutions of similar quality; in practice, however, some aspects make it very difficult to obtain successful results without writing scripts, such as execution time, aspects related to journalistic style, semantic aspects like ambiguity, and those related to the emotional charge of the messages. Given those aspects, it is rather difficult to produce automatic messages with D2T systems. In any case, authors such as Van Deemter [41] have analyzed and compared the two types of systems and concluded that there is no difference in the quality of results with respect to systems based on writing scripts.
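In its simplest form, a "writing script" is a fixed text skeleton with slots filled from data, as in this illustrative sketch (team names, fields and wording are all invented for the example):

```python
# An illustrative writing script: a template whose slots are filled from data.
import string

SCRIPT = string.Template(
    "$home beat $away $hg-$ag at $venue, with $scorer scoring the decisive goal."
)

match = {"home": "Alcalá", "away": "Madrid B", "hg": 2, "ag": 1,
         "venue": "El Val", "scorer": "García"}
print(SCRIPT.substitute(match))
# Alcalá beat Madrid B 2-1 at El Val, with García scoring the decisive goal.
```

A script-based system maintains many such skeletons, written by domain experts, and selects among them from the data; a non-script system composes the surface text compositionally instead.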
Another relevant aspect in the development of this kind of system is the validation of the obtained results, carried out by using evaluation methods to check whether the results fulfil the a priori requirements. This is not easy, because some requirements are difficult to express and quantify, such as those related to narrative style, and others, such as user or consumer acceptance, cannot be established until we check whether the obtained results satisfy their needs. This remains a subject of study, a view shared across the research community. The vast majority of known evaluation methods are quantitative ([42],[43]) and try to obtain a numeric measure of quality (questionnaires given to human experts, similarity measures between generated texts and representative texts, etc.). There are also qualitative methods, used to identify and correct certain aspects in the content analysis and discourse stages ([44],[45]).
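One quantitative idea mentioned above, similarity between a generated text and a representative human text, can be sketched as unigram precision, a deliberately simplified relative of BLEU-style measures [43]. The example texts are invented:

```python
# Unigram precision: what fraction of generated tokens also appear in the
# reference, counting repetitions at most as often as in the reference.
from collections import Counter

def unigram_precision(generated, reference):
    gen = generated.lower().split()
    ref = Counter(reference.lower().split())
    matched = sum(min(count, ref[word]) for word, count in Counter(gen).items())
    return matched / len(gen)

g = "the temperature rose sharply during the afternoon"
r = "the temperature rose during the afternoon"
score = unigram_precision(g, r)  # 6 of 7 tokens match -> ~0.86
print(round(score, 2))
```

Real evaluations combine n-grams of several lengths and multiple references; this sketch only shows the shape of the idea.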
Figure 2. Components of a "data-to-text" NLG system: Data → Signal Analysis → (Patterns, Events) → Data Interpretation → (Messages, Relations) → Document Planning → (Selected messages) → Microplanning and Realization → Final text.
IV. NARRATIVE MODEL
From an applied perspective, the development of D2T systems is justified by the constant increase in the content demanded by users (information consumers), which eventually leads to situations in which it is impossible to generate that amount of information with sufficient quality because the human resources to make it possible are not available. Thus, we must ask, in a general way, whether it is possible to automate text generation with the techniques developed, and with enough quality (that is, showing complete and accurate information, grammatically correct, with a proper style, and so on) to satisfy consumers' requirements about the content.
To illustrate our problem, we describe three real examples solved by Narrative (www.narrativa.com) in response to user demand, in this case from companies looking for solutions to their digital content generation problems in text format:
The first concerns sports information offered by written media. A digital written medium can, in principle, offer users high-quality information about the most relevant matches in the principal leagues of popular sports. However, for a minority sport or for lower leagues, offering high-quality information to users (understanding high-quality information as complete and accurate information) is not economically viable, because it would require professionals to cover every match.
The second example comes from the need to provide quality descriptions for the huge quantity of products offered by online shopping websites. Here the concept of quality acquires even more importance, because it is not achieved merely by enumerating or describing the characteristics of the products; each product needs an appropriate, specific description, built from its characteristics and other data, so that the generated information identifies each product completely and accurately among similar ones, providing useful information to users or consumers when making decisions. This kind of information is usually written by an expert in the area, but for millions of products it is not viable for human experts to generate that much information in due time and form.
The third deals with situations that require text to be generated so quickly that it is impossible for humans to produce it in time. For instance, task execution orders for people, which are based on data and must be modified in seconds, such as product delivery planning. Simply imagine a group of truck drivers loading products onto their trucks. At headquarters, someone plans the loading and delivery of the products; from that moment, the drivers have to receive text orders in voice format, indicating in real time the route they must follow (modified depending on traffic, business interests, etc.) and other useful information.
These examples, which can be extrapolated to similar situations, illustrate a current situation in which users demand information of varying degrees of quality that is impossible to generate in due time and form, because it requires human participation in the generation process. This affects business models, making them non-viable, and creates a lack of certain useful information that users currently demand. In that situation, the point, if possible, is to automate, that is, to develop automatic systems that generate that information with enough quality to be offered to users and consumers. Narrative's system answers that question by showing that it is possible to generate information in text format, as narratives, from sets of data representative of a specific domain, such as some of those discussed above.
To design our model, we start from the premise that, to generate natural language text from given data, we must develop systems that face three phases (Data Analysis, Content Determination and Realization) of high scientific complexity, but that it is nevertheless possible to develop approximate methods, principally based on Big Data techniques, statistics and Artificial Intelligence, which yield useful and acceptable solutions for practical purposes.
The first phase is related to data treatment, that is, determining, obtaining, analyzing, classifying and categorizing the data needed to obtain the expected narratives with the required quality, that is, complete and accurate narratives [3]. Data must be expressed in databases and knowledge bases that will support the proper treatments later.
As a general rule, the vast majority of systems developed until now feed the model plain input data, that is, they work exclusively with structured, generally numeric, data. However, as shown in Fig. 3, our model extends this concept to other, more complex input data (documents with simple data, documents elaborated in grammatical and emotional aspects, images, audio, video, social media, emails, web pages, etc.) through an analysis and data extraction module, which obtains known and relevant information using Big Data techniques, with the goal of representing the final domain in a specific knowledge base. This involves an intermediate process of database generation that allows the client's own data to be aggregated, a unification process to be carried out, and a format to be maintained that can later be extrapolated to any other knowledge base model.
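The unification step of this phase can be sketched as follows: records from heterogeneous sources are mapped onto one common schema and loaded into a simple knowledge base of attribute-value facts per entity. All source names and field names are hypothetical.

```python
# Illustrative unification of heterogeneous records into one knowledge base.

def unify(record, source):
    # Map each source's own field names onto a common schema.
    mapping = {
        "csv_feed": {"team": "name", "pts": "points"},
        "json_api": {"club": "name", "score": "points"},
    }
    return {common: record[local] for local, common in mapping[source].items()}

def load(knowledge_base, record):
    # Store unified records as atomic facts keyed by entity.
    entity = record["name"]
    for attr, value in record.items():
        if attr != "name":
            knowledge_base.setdefault(entity, {})[attr] = value

kb = {}
load(kb, unify({"team": "Alcalá", "pts": 34}, "csv_feed"))
load(kb, unify({"club": "Getafe", "score": 31}, "json_api"))
print(kb)  # {'Alcalá': {'points': 34}, 'Getafe': {'points': 31}}
```

Once every source is expressed in the common schema, the same extraction and generation machinery can run over the knowledge base regardless of where each fact came from.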
In the second phase (Fig. 4), from the domain defined in the previous phase, and taking into account client requirements and other external factors that can condition the final information, the relevant information expected to appear in the final text is extracted, using analysis and exploration techniques, principally from Artificial Intelligence, over the knowledge base.
Finally, once the relevant information to express in the final text has been obtained, one last phase shapes the final text so that it fulfils all the editorial requirements of the media outlet or the client, such as style or particular aspects of the language (Fig. 5).
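The interplay of these two phases, filtering facts against client requirements and then rendering under an editorial style setting, can be sketched as follows; topics, facts and the style options are illustrative assumptions:

```python
# Hypothetical sketch of Content Determination followed by Realization.

def content_determination(facts, requirements):
    # Keep only the facts the client's requirements mark as relevant.
    return [f for f in facts if f["topic"] in requirements["topics"]]

def realization(selected, style):
    # Render the selected facts respecting the editorial style setting.
    joined = " ".join(f["text"] for f in selected)
    return joined.upper() if style == "headline" else joined

facts = [
    {"topic": "result",  "text": "Alcalá won 2-1."},
    {"topic": "weather", "text": "It rained heavily."},
]
reqs = {"topics": {"result"}}
print(realization(content_determination(facts, reqs), style="plain"))
# Alcalá won 2-1.
```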
In parallel with the second and third phases, the model relies on the use of writing scripts at different levels of granularity. It is known that humans, when generating text information, are capable of managing a huge amount of information and common-sense knowledge, modelling the information and preparing messages that add subjective aspects related to context or mood. Automatically generated text therefore lacks naturalness and emotion, and is poorer in semantic content, when compared with human texts. Researchers hold that, at present, humans are capable of producing more complete and complex content than automatic systems based on NLG techniques, and that it will take years, probably decades, to obtain automatic approximations of similar quality.
Nevertheless, we hold that, from an applied perspective, there is a way to deal with this drawback. In applied fields there are experts capable of writing narratives with the complex aspects mentioned above, which an automatic system is unable to show in the final text. It is possible to build a set of atomic writing scripts, representative of the domain and structured at different levels of granularity, which add all the emotional, subjective and stylistic aspects that an automatic system cannot produce. The module of atomic writing scripts is used as a base to train the system and to adapt the final texts, so that they convey information similar to a text generated by experts.
Figure 3. Components of Data Analysis phase.
Figure 4. Components of Content
Determination phase.
Figure 5. Components of Realization phase.
Narrative's model is based on the three stages shown in Figs. 3, 4 and 5 and on the module for managing the writing scripts. Fig. 6 shows a descriptive text, a summary of a Premier League football match, generated automatically by the system from data provided by the company Opta (www.optasports.es).
V. CONCLUSION
The development of automatic systems for narrative generation in text format has progressed constantly over the last decades, through research on all aspects related to natural language. Nevertheless, we are still far from reproducing subjective aspects, such as emotion and the complex semantics of human texts, in automatic generation systems. However, by exploiting the advances in the field and tools like writing scripts, it is possible to build useful automatic systems that allow the massive generation of content in real time, which is impossible with human intervention alone, and whose output has characteristics that make it indistinguishable from human-written texts. In this work we have presented Narrative's system, which follows that philosophy and is currently used to solve practical problems in different real situations of content production for a target audience that consumes it through web platforms.
REFERENCES
[1] Goldman, N. (1975). Conceptual generation. In R.C. Schank, Conceptual Information Processing.
[2] Davey, A. C. (1978). Discourse production. Edinburgh: Edinburgh UP.
[3] McKeown, K. R. (1985). Discourse strategies for generating natural-language text.
[4] Appelt, D. E. (1985). Planning English sentences. Cambridge University Press.
[5] Goldberg, E., Driedger, N., & Kittredge, R. (1994). Using natural-language processing to produce weather forecasts. IEEE Expert, 9, 45-53.
[6] Dalianis, H. (1996). Aggregation as a subtask of text and sentence planning. Proceedings of the 9th Florida Artificial Intelligence Research Symposium (FLAIRS-96), 1-5. Florida (USA), 20-22.
[7] Not, E. (1996). A computational model for generating referring expressions in a multilingual application domain. Proceedings of the 16th International Conference on Computational Linguistics (COLING'96). 2, 848-853. Copenhagen (Denmark).
[8] Reiter, E. & Dale, R. (1997). Building Applied Natural-
Language Generation Systems. Journal of Natural-
Language Engineering. 3, 57-87.
[9] Bateman, J.A. (1997). Enabling technology for
multilingual natural language generation: the KPML
development environment. Journal of Natural Language
Engineering, 3, 15-55.
[10] Cheng, H. & Mellish, C. (1997). Aggregation based on
text structure for descriptive text generation. Proceedings
of the PhD Workshop on Natural Language Generation,
9th European Summer School in Logic, Language and
Information (ESSLLI97). Aix-en-Provence (France), 18-
22.
[11] Shaw, J. (1998). Clause aggregation using linguistic knowledge. Proceedings of the 9th International Natural Language Generation Workshop (INLG'98). Canada. 138-147.
[12] Cahill, L. & Reape, M. (1999). Component tasks in applied NLG systems. Technical Report ITRI-99-05. Information Technology Research Institute, University of Brighton (United Kingdom).
[13] Teich, E. (1999). Systemic functional grammar in natural language generation: Linguistic description and computational representation. Cassell Academic Publishers, London (United Kingdom).
[14] Theune, M. (2000). From data to speech: language generation in context. Doctoral thesis, Eindhoven University of Technology (The Netherlands).
[15] Reiter, E. & Dale, R. (2000). Building Natural Language Generation Systems. Cambridge University Press, Cambridge.
[16] Bangalore, S. & Rambow, O. (2000). Corpus-based lexical choice in natural language generation. Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL 2000), 464-471. Hong Kong (China).
[17] Díaz-Hermida, F., Ramos-Soto, A. & Bugarín, A. (2011). On the role of fuzzy quantified statements in linguistic summarization. In Proceedings of the 11th International Conference on Intelligent Systems Design and Applications (ISDA), 166-171.
[18] Hunter, J., Freer, Y, Gatt, A., Reiter, E., Sripada, S. &
Sykes, C. (2012). Automatic Generation of Natural
Language Nursing Shift Summaries in Neonatal Intensive
Care: BT-Nurse. Artificial intelligence in medicine.
56(3), 157-172.
Figure 6. Narrative’s text example from given
data.
[19] Bateman, J.A. (2001). Natural language generation: an introduction and open-ended review of the state of the art, http://www.fb10.unibremen.de/.
[20] Bautista, S. (2008). Generación de textos adaptativa a partir de una elección léxica basada en emociones. Master's thesis, Universidad Complutense de Madrid. http://eprints.ucm.es/10061/1/ProyectoFinMaster.pdf.
[21] Portet, F., Reiter, E., Gatt, A., Hunter, J., Sripada, S.,
Freer, Y. & Sykes, C. (2009). Automatic Generation of
Textual Summaries from Neonatal Intensive Care Data.
Artificial Intelligence. 173(7). 789-816.
[22] Reiter, E. (2007). An architecture for data-to-text
systems. Proceedings of the 11th European Workshop on
Natural Language Generation (ENLG). Germany. 97-
104.
[23] Coch, J., Dycker, E.D., García-Moya, J.A., Gmoser, H., Stranart, J.F., & Tardieu, J. (1999). MultiMeteo: adaptable software for interactive production of multilingual weather forecasts. In Proceedings of the 4th European Conference on Applications of Meteorology (ECAM 99), Norrköping (Sweden).
[24] Sripada, S., Reiter, E., & Davy, I. (2003). SumTime-Mousam: Configurable marine weather forecast generator. Expert Update. 6(3), 4-10.
[25] Sripada, S.G., Burnett, N., Turner, R., Mastin, J., & Evans, D. (2014). A case study: NLG meeting weather industry demand for quality and quantity of textual weather forecasts. In INLG-2014 Proceedings.
[26] Busemann, S. & Horacek, H. (1997). Generating air-quality reports from environmental data. In Busemann, S., Becker, T., Finkler, W., eds.: DFKI Workshop on Natural Language Generation, DFKI Document D-97-06.
[27] Wanner, L., Bohnet, B., Bouayad-Agha, N., Lareau, F., & Nicklass, D. (2010). MARQUIS: Generation of user-tailored multilingual air quality bulletins. Applied Artificial Intelligence. 24(10), 914-952.
[28] Hüske-Kraus, D. (2003). SureGen 2: A shell system for the generation of clinical documents. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2003). 215-218.
[29] Sheremetyeva, S., Nirenburg, S., & Nirenburg, I. (1996). Generating patent claims from interactive input. In Proceedings of the 8th International Workshop on Natural Language Generation (INLG'96), Herstmonceux, England. 61-70.
[30] Yu, J., Hunter, J., Reiter, E., & Sripada, S. (2001). An approach to generating summaries of time series data in the gas turbine domain. In Proceedings of the IEEE International Conference on Info-tech & Info-net (ICII 2001). Beijing. 44-51.
[31] Reiter, E., Mellish, C., & Levine, L. (1995). Automatic
generation of technical documentation. Applied Artificial
Intelligence. 9, 259-287.
[32] Lavoie, B. & Owen, R. (1997). A fast and portable
realizer for text generation. Proc. of the Fifth Conference
on Applied Natural-Language Processing. 265-268.
[33] Milosavljevic, M. & Dale, R. (1996). Strategies for
comparison in encyclopedia descriptions. Proceedings of
the Eighth International Workshop on Natural-Language
Generation. 161-170.
[34] Reiter, E., Robertson, R., & Osman, L. (1999). Knowledge acquisition for natural language generation. In Proceedings of the Joint European Conference on Artificial Intelligence in Medicine and Medical Decision Making. 389-399.
[35] Schank, R.C. (1969). A conceptual dependency representation for a computer-oriented semantics. PhD thesis, University of Texas.
[36] Pemberton, L. (1989). A modular approach to story generation. In 4th European Conference of the Association for Computational Linguistics. Manchester.
[37] Turner, S.R. (1992). Minstrel: A computer model of creativity and storytelling. Technical Report UCLA-AI-92-04, Computer Science Department, University of California.
[38] Pérez y Pérez, R. & Sharples, M. (2004). Three computer-based models of storytelling: BRUTUS, MINSTREL and MEXICA. Knowledge Based Systems Journal, 17(1), 15-29.
[39] Bringsjord, S. & Ferrucci, D. (1999). Artificial Intelligence and Literary Creativity. In Inside the mind of Brutus, a StoryTelling Machine. Lawrence Erlbaum Associates, Hillsdale, NJ.
[40] León, C. & Gervás, P. (2008). Cast: Creative storytelling based on transformation of generation rules. In P. Gervás, R. Pérez y Pérez, and T. Veale (eds.), Proceedings of the 5th International Joint Workshop on Computational Creativity.
[41] Van Deemter, K., Krahmer, E. & Theune, M. (2005). Real Versus Template-based Natural Language Generation: a False Opposition. Computational Linguistics. 31(1), 15-24.
[42] Belz, A. & Reiter, E. (2006). Comparing automatic and human evaluation of NLG systems. In Proc. of the European Chapter of the Association for Computational Linguistics (EACL'06). 313-320.
[43] Papineni, K., Roukos, S., Ward, T., & Zhu, W.J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. ACL '02. Stroudsburg, PA, USA, 311-318.
[44] Sambaraju, R., Reiter, E., Logie, R., McKinlay, A., McVittie, C., Gatt, A., & Sykes, C. (2011). What is in a text and what does it do: Qualitative evaluations of an NLG system, the BT-Nurse, using content analysis and discourse analysis. In Proceedings of the 13th European Workshop on Natural Language Generation. 22-31.
[45] Reiter, E. (2011). Task-based evaluation of NLG systems: Control vs real-world context. In Proceedings of the UCNLG+Eval: Language Generation and Evaluation Workshop. UCNLG+EVAL'11, Stroudsburg, PA, USA. 28-32.