Productivity Factors in Software Development for PC Platform (IJERA Editor)
Identifying the most relevant factors influencing project performance is essential for implementing business strategies by selecting and adjusting proper improvement activities. The two major classification algorithms, CRT and ANN, recommended by the Auto Classifier tool in SPSS Modeler were used to determine the most important variables (attributes) of software development in the PC environment. While their classification accuracies for productive versus non-productive cases are relatively close (72% vs 69%), their rankings of important variables differ. CRT ranks Programming Language as the most important variable and Function Points as the least important. ANN, on the other hand, ranks Function Points as the most important, followed by team size and Programming Language.
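Outside SPSS Modeler, the same comparison can be approximated with scikit-learn: a CART-style decision tree exposes Gini-based importances directly, while permutation importance serves as a stand-in ranking for the neural network. The sketch below uses synthetic data and illustrative feature names (function points, team size, programming language); it is not the paper's dataset or exact setup.

```python
# Sketch: comparing feature-importance rankings from a CART-style tree and a
# neural network, analogous to the CRT/ANN comparison in SPSS Modeler.
# The data is synthetic; column names mirror the variables discussed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(50, 2000, n),   # function_points
    rng.integers(2, 20, n),      # team_size
    rng.integers(0, 5, n),       # programming_language (label-encoded)
]).astype(float)
y = (X[:, 0] / X[:, 1] > 150).astype(int)   # synthetic "productive" label
features = ["function_points", "team_size", "programming_language"]

cart = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print("CART accuracy:", cart.score(X, y))
print("CART importances:", dict(zip(features, cart.feature_importances_.round(3))))

ann = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
).fit(X, y)
perm = permutation_importance(ann, X, y, n_repeats=10, random_state=0)
print("ANN accuracy:", ann.score(X, y))
print("ANN importances:", dict(zip(features, perm.importances_mean.round(3))))
```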
To Analyze Conflicts between Software Developer and Software Tester (AM Publications, India)
This document analyzes conflicts between software developers and testers based on responses from 19 companies in different domains. It finds that while most companies aim for a balanced ratio of developers to testers, the optimal ratio depends on factors like the project stage and type. Conflicts sometimes arise due to misunderstandings or different priorities between developers focused on efficiency and testers focused on quality. However, companies resolve conflicts through clear communication, understanding different perspectives, and ensuring all roles are well-defined. While conflicts can impact goals if not managed, most companies feel other factors like motivation have a larger influence, and effective project management can mitigate issues.
CRESUS-T: A COLLABORATIVE REQUIREMENTS ELICITATION SUPPORT TOOL (ijseajournal)
Communicating an organisation's requirements in a semantically consistent and understandable manner, and then reflecting the potential impact of those requirements on the IT infrastructure, presents a major challenge among stakeholders. Initial research findings indicate a desire among business executives for a tool that allows them to communicate organisational changes using natural language and a model of the IT infrastructure that supports those changes. Building on a detailed analysis and evaluation of these findings, the innovative CRESUS-T support tool was designed and implemented. The purpose of this research was to investigate to what extent CRESUS-T both aids communication in the development of a shared understanding and supports collaborative requirements elicitation to bring about organisational, and associated IT infrastructural, change. To determine the extent to which shared understanding was fostered, the support tool was evaluated in a case study of a business process for the roll-out of the IT software image at a third-level educational institution. Statistical analysis showed that the CRESUS-T support tool fostered shared understanding in the case study through increased communication. Shared understanding is also manifested in the creation of two knowledge representation artefacts, namely a requirements model and the IT infrastructure model. The CRESUS-T support tool will be useful to requirements engineers and business analysts who have to gather requirements asynchronously.
This essay contends that rather than a future in which “Models will Run the World,” the route to AI software leads to a focus on intelligent data. To move towards the latter, humans will need to contribute their judgement to how data is organized for machine learning to train algorithms. They will decide what biases may be included in the training data and check for any issues that might arise from these biases once algorithms are run in production.
To achieve success in this “intelligent data” world, humans will play a very different role in the workforce. Jobs will shift to those that support, conserve, and evaluate the results that algorithms provide. They may also expand in “domain expertise” areas, such as where knowledge of regulatory requirements for finance needs to be incorporated into new models that financial institutions want to create and the algorithms they need to run.
The software industry has made significant progress in recent years. The entire life of software includes two phases: production and maintenance. Software maintenance cost is growing steadily, and estimates show that about 90% of software lifecycle cost is related to the maintenance phase. Extracting and considering the factors affecting software maintenance cost helps to estimate the cost and to reduce it by controlling those factors. Cost estimation of the maintenance phase is necessary to predict reliability and to improve the productivity, project planning, controlling, and adaptability of the software. Though there are various models to estimate the maintenance cost of traditional software, such as COCOMO, SLIM, and Function Points, there is as yet no model to estimate maintenance cost in a fourth generation language environment. Software maintenance will continue to exist in the fourth generation environment, as systems will still be required to evolve. In this situation there is a need to develop a model to estimate maintenance cost in a fourth generation environment. We propose a systematic approach to developing a software maintenance cost estimation model for the fourth generation language environment on the basis of COCOMO II. This model is based on three parameters: SMCE with a Fourth Generation Language Environment, ACT (Annual Change Traffic), and the technical and non-technical factors which affect the maintenance cost. Favorable results, closely matching actual costs, can be achieved through the model's implementation.
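For context, the COCOMO relation this abstract builds on ties annual maintenance effort to development effort via Annual Change Traffic, scaled by multiplicative cost drivers. The sketch below shows that classic relation only; the paper's calibrated SMCE model and its specific technical and non-technical factors are not given in this summary, so the driver names and values are illustrative.

```python
# Sketch of the classic COCOMO maintenance-effort relation: annual
# maintenance effort is development effort scaled by Annual Change
# Traffic (ACT) and an effort adjustment factor (EAF) aggregated from
# technical and non-technical cost drivers. Driver values are invented.
from math import prod

def annual_maintenance_effort(dev_effort_pm: float, act: float,
                              cost_drivers: dict[str, float]) -> float:
    """dev_effort_pm: development effort in person-months.
    act: fraction of the code changed per year (e.g. 0.15 for 15%).
    cost_drivers: multiplicative technical/non-technical factors."""
    eaf = prod(cost_drivers.values())
    return act * dev_effort_pm * eaf

effort = annual_maintenance_effort(
    dev_effort_pm=120,
    act=0.15,
    cost_drivers={"tool_support_4gl": 0.85, "staff_experience": 0.9},
)
print(f"Estimated annual maintenance effort: {effort:.1f} person-months")
```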
CRESUS: A TOOL TO SUPPORT COLLABORATIVE REQUIREMENTS ELICITATION THROUGH ENHA... (cscpconf)
Communicating an organisation's requirements in a semantically consistent and understandable manner and then reflecting the potential impact of those requirements on the IT infrastructure presents a major challenge among stakeholders. Initial research findings indicate a desire among business executives for a tool that allows them to communicate organisational changes using natural language and a simulation of the IT infrastructure that supports those changes. Building on a detailed analysis and evaluation of these findings, the innovative CRESUS tool was designed and implemented. The purpose of this research was to investigate to what extent CRESUS both aids communication in the development of a shared understanding and supports collaborative requirements elicitation to bring about organisational, and associated IT infrastructural, change. This paper presents promising results that show how such a tool can facilitate collaborative requirements elicitation through increased communication around organisational change and the IT infrastructure.
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS (ijseajournal)
With the massive growth of organizations' files, the need for an archiving system becomes a must. A lot of time is consumed in collecting requirements from the organization to build an archiving system, and sometimes the resulting system does not meet the organization's needs. This paper proposes a domain-based requirement engineering system that efficiently and effectively develops different archiving systems based on a newly suggested technique that merges two of the most used agile methodologies: extreme programming (XP) and SCRUM. The technique is tested on a real case study. The results show that the time and effort consumed during analysis and design of the archiving systems decreased significantly. The proposed methodology also reduces the system errors that may occur at the early stages of development.
IRJET- Strength and Workability of High Volume Fly Ash Self-Compacting Concre... (IRJET Journal)
The document discusses implementing a social customer relationship management (CRM) system for an online grocery shopping platform using customer reviews. It proposes collecting customer reviews from social media and other sources, refining the data, analyzing it using natural language processing and machine learning techniques, and storing the results in a database. This would allow the platform to better understand customer sentiment and needs to improve products, services and the customer experience.
IRJET- Implementing Social CRM System for an Online Grocery Shopping Platform... (IRJET Journal)
This document presents a proposed system architecture for implementing a social customer relationship management (CRM) system for an online grocery shopping platform using customer reviews and sentiment analysis. The proposed architecture involves collecting customer reviews from social media, preprocessing and analyzing the data using natural language processing techniques like stemming, and storing the results in a database. Sentiment analysis is performed to categorize reviews by aspects and sentiment. The analyzed data is then presented to users through an interface to help the online grocery shopping platform better understand customer needs and improve products/services based on feedback.
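The preprocessing-plus-sentiment pipeline both summaries describe can be sketched compactly. The snippet below uses NLTK's Porter stemmer and VADER scorer as stand-ins for whatever NLP stack the proposed architecture actually uses; the grocery reviews are invented.

```python
# Minimal analogue of the review-analysis pipeline described above:
# preprocess and stem review text, score sentiment, and bucket results.
import nltk
from nltk.stem import PorterStemmer
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

stemmer = PorterStemmer()
scorer = SentimentIntensityAnalyzer()

reviews = [
    "Delivery was fast and the vegetables were fresh.",
    "The app crashed twice during checkout, very frustrating.",
]

for review in reviews:
    stems = [stemmer.stem(tok) for tok in review.lower().split()]
    compound = scorer.polarity_scores(review)["compound"]
    label = ("positive" if compound > 0.05
             else "negative" if compound < -0.05 else "neutral")
    print(label, compound, stems[:4])
```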
This document discusses groupware selection for small businesses in the United States. It defines groupware and small businesses according to the U.S. Small Business Administration. The paper will compare three major groupware technology solutions for small businesses and determine the most suitable option based on features identified in a Forrester research report. It will establish criteria for comparison and quantitatively assess the solutions to recommend an optimal choice.
Agent-SSSN: a strategic scanning system network based on multiagent intellige... (IJERA Editor)
The document describes an Agent-SSSN system that uses a multi-agent approach and ontology to develop a strategic scanning system for business intelligence. The system aims to integrate expert knowledge through cooperative information gathering from the web. It uses various agent roles like information retrieval agents, mediator agents, and notification agents. Ontologies are used to represent shared domain concepts and expert knowledge to enable knowledge sharing between agents. The system is modeled using the O-MaSE methodology, with goals, roles, and capabilities defined for each agent.
A Study of Software Size Estimation with use Case Points (ijtsrd)
Estimates for cost and schedule in software projects are based on a prediction of the size of the system, and software size estimation plays the most important role in software cost estimation. The Use Case Point method can provide software size estimation at an early stage of the development process, based on the high-level specification of use cases. This paper describes a simple approach to software size estimation based on use case models, the "Use Case Points" method, and imports this model into an estimating tool. To compute software size with Use Case Points, the needed factors are the number of use cases and their complexity, the number of actors and their complexity, technical complexity factors (TCF), and environmental complexity factors (ECF). The system computes unadjusted use case points (UUCP), adjusted use case points (UCP), and the total effort in staff-hours. Aye Aye Seint, "A Study of Software Size Estimation with use Case Points", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26531.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/26531/a-study-of-software-size-estimation-with-use-case-points/aye-aye-seint
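The Use Case Points arithmetic is standard enough to work through. The sketch below follows Karner's published weights and formulas (TCF = 0.6 + 0.01 * TFactor, ECF = 1.4 - 0.03 * EFactor) with a commonly used default of 20 staff-hours per UCP; the actor and use-case counts are made up for illustration.

```python
# Worked sketch of the Use Case Points computation the abstract describes,
# using Karner's standard actor/use-case weights.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tfactor, efactor, hours_per_ucp=20):
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())
    uucp = uaw + uucw                    # unadjusted use case points
    tcf = 0.6 + 0.01 * tfactor           # technical complexity factor
    ecf = 1.4 - 0.03 * efactor           # environmental complexity factor
    ucp = uucp * tcf * ecf               # adjusted use case points
    return ucp, ucp * hours_per_ucp     # total effort in staff-hours

ucp, effort = use_case_points(
    actors={"simple": 2, "average": 2, "complex": 1},
    use_cases={"simple": 4, "average": 6, "complex": 2},
    tfactor=30, efactor=17,
)
print(f"UCP = {ucp:.1f}, effort = {effort:.0f} staff-hours")
```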
Productivity of incident management with conversational bots - a review (IAESIJAI)
The use of conversational agents (bots) in information systems managed by companies increases productivity in activities focused on processes such as customer service, healthcare, and presentation. The present work is a systematic literature review that collects articles from 2019 to 2022 in the databases Scopus, Springer, Wiley, Indexes-Csic, Taylor & Francis, PubMed, and EBSCO Host. The PRISMA methodology was used to systematize 47 relevant articles. As a result of the analysis, 2 of 19 benefits emerged as very important: helping to obtain information and facilitating customer service. As for the types of conversational bots, a total of 9 types were found, of which conversational agents and chatbots with artificial intelligence (AI) are the most common. In the case of processes, 3 of 5 processes that conversational bots optimize were found, the most prominent being teaching processes, health processes, and customer service processes. An architecture model for conversational bots in incident management is also proposed.
AN OVERVIEW OF EXISTING FRAMEWORKS FOR INTEGRATING FRAGMENTED INFORMATION SYS... (ijistjournal)
The literature shows that several structured integration frameworks have emerged with the aim of facilitating application integration, but the weaknesses and strengths of these frameworks are not known. This paper aims to review these frameworks with a focus on identifying their weaknesses and strengths. To accomplish this, recommended comparison factors were identified and used to compare the frameworks. Findings show that most of these structured frameworks are custom-built around their own motives: they focus on integrating applications from different sectors within an organization for the purpose of eliminating communication inefficiencies. There is no framework which guides application integrators on the goals of integration, the outcomes and outputs of integration, and the skills required for the types of applications expected to be integrated. The study recommends further work on integration frameworks, especially on designing an unstructured framework which will support and guide application integrators with consideration of the consumer's surrounding environment.
Ludmila Orlova HOW USE OF AGILE METHODOLOGY IN SOFTWARE DEVELO.docx (smile790243)
Ludmila Orlova
HOW USE OF AGILE METHODOLOGY IN SOFTWARE DEVELOPMENT INFLUENCE AGILITY OF THE BUSINESS
Agile methodology is a widely used tool for software development. The presented article explores research data about the use of these tools, their influence on the quality of the end product and the performance of development, and the overall agility of businesses and companies.
KEYWORDS:
Agile, software development, agile business
CONTENT
1 INTRODUCTION
2 AGILE SOFTWARE DEVELOPMENT
3 SCALING AGILE
4 AGILE BUSINESS
5 CONCLUSION
REFERENCES
1 INTRODUCTION
The fast pace of scientific progress in solid-state electronics led to incredible progress in computer devices, which in turn demanded software to control and manage the power of computer calculations and usage.
Software engineering emerged in the 20th century and by its end had become a separate state-of-the-art science, activity, and profession for millions. There are about 18.2 million software developers worldwide, a number that is due to rise to 26.4 million by 2019, a 45% increase, says Evans Data Corp. in its latest Global Developer Population and Demographic Study (P. Thibodeau, 2013). Along with the growing number of software developers (software development firms, projects, and people involved), the need for effective management of the software development process increased. This demanded new approaches and methodologies from business researchers and managers, and in the last several decades a huge amount of research, both in the IT field and in business management, has been dedicated to this area.
The popularity of agile software development methods started about a decade ago, and at present these methods are employed by many big, medium-size, and small companies. The still-growing attention to agile methods from software development specialists confirms that these methods filled a gap in management techniques for software development, which emerged and developed extremely fast along with the speedy advancement of hardware in the IT area. A great deal of research has been done in areas such as changes in the performance of software development using agile methods and scaling agile for large companies and teams. One modern trend is the attempt to apply agile methodology to project management, marketing, sales, and other activities. The goal of this article is to explore the influence of applying agile methods in software development on the agility of the whole company and business. The presented work is based on secondary data taken from multiple sources and is performed as an exploratory study and a review of existing research in the area.
2 AGILE SOFTWARE DEVELOPMENT
The definition of the adjective agile in English is: able to move quickly and easily, or able to think and understand quickly (Oxford Dictionary, 2015). The most frequent contemporary use is presented by the following sentence: Relating to or denoting a method of project management, used especially for software development, that is characterized by the division of tasks into ...
This document discusses a product analyst advisor software that uses natural language processing techniques like sentiment analysis to analyze customer reviews and sentiments about products. It extracts reviews from various websites about a product being researched and processes the data to provide useful insights. The insights help users easily select the best available option. The system architecture involves scraping live data from websites, using deep learning algorithms to analyze reviews for sentiments, and displaying product insights. It uses BERT for sentiment analysis and frameworks like Django and ReactJS. Web scraping is used to extract review data for analysis and providing recommendations to users.
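The summary names BERT for sentiment analysis; a minimal stand-in for that scoring step is the Hugging Face transformers sentiment pipeline, shown below. The default model, the invented reviews, and the omitted plumbing (the document's Django/ReactJS stack and web scrapers) are all assumptions, not the system's actual code.

```python
# Sketch of the BERT-based review scoring step mentioned above, using the
# transformers sentiment pipeline as a stand-in for whatever fine-tuned
# model the described system actually deploys; the reviews are invented.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default BERT-family model

reviews = [
    "Battery life is excellent, easily lasts two days.",
    "The screen scratched within a week, disappointing build quality.",
]
for review, result in zip(reviews, sentiment(reviews)):
    print(f"{result['label']} ({result['score']:.2f}): {review}")
```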
This document discusses estimating the total cost of ownership (TCO) when acquiring a new software application. It defines TCO as the sum of all costs involved in owning an application over its lifetime. These costs include direct costs like procurement, installation, and customization, as well as indirect costs like maintenance, support, and licensing fees. The document outlines TCO calculation for different types of applications like Software as a Service (SaaS), proprietary software, and open source software. It provides a formula for calculating TCO and discusses how purchasers, managers, vendors, and procurement professionals can apply TCO estimates when evaluating and selecting software solutions.
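The document's exact formula isn't reproduced in this summary, but the arithmetic it describes is straightforward: one-off direct costs plus recurring indirect costs over the ownership period. A hedged sketch, with invented cost categories and figures:

```python
# Toy TCO calculation: sum direct (one-off) costs and recurring (indirect)
# costs over the ownership period. Categories are illustrative; the
# document's formula may group them differently.
def total_cost_of_ownership(direct_costs: dict[str, float],
                            annual_costs: dict[str, float],
                            years: int) -> float:
    return sum(direct_costs.values()) + years * sum(annual_costs.values())

tco = total_cost_of_ownership(
    direct_costs={"procurement": 50_000, "installation": 8_000, "customization": 12_000},
    annual_costs={"maintenance": 9_000, "support": 4_000, "licensing": 15_000},
    years=5,
)
print(f"5-year TCO: ${tco:,.0f}")
```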
Business Talk: Harnessing Generative AI with Data Analytics Maturity (IJCI JOURNAL)
Generative AI applications offer transformative potential for business operations, yet their adoption introduces substantial challenges. This paper utilizes the CBDAS data maturity model to pinpoint pivotal success factors for seamless generative AI integration in businesses. Through a comprehensive analysis of these factors, we underscore the essentials of generative AI deployment: cohesive architecture, robust data governance, and a data-centric corporate ethos. The study also highlights the hurdles and facilitators influencing its implementation. Key findings suggest that fostering a data-friendly culture, combined with structured governance, optimizes generative AI adoption. The paper culminates in presenting the practical implications of these insights, urging further exploration into the real-world efficacy of the proposed recommendations.
IRJET- Factors in Selection of Construction Project Management Software i... (IRJET Journal)
The document discusses factors to consider when selecting construction project management software in India. It conducted interviews with 15 experts in the construction industry with experience ranging from 5-30 years. The interviews aimed to understand the software selection process. Based on the literature review and interviews, the document proposes a model for software selection with 8 steps: 1) identify software options, 2) review organization policies, 3) analyze the project's needs, 4) analyze the client's needs, 5) inquire the purpose of planning, 6) analyze software performance and price, 7) check available skills, and 8) select and use software. The model categorizes factors as either project specific or general to guide effective software selection.
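Step 6 of the proposed model ("analyze software performance and price") is the kind of comparison often operationalized as a weighted scoring matrix. The sketch below shows one such matrix under invented criteria, weights, and candidate scores; the document does not prescribe this exact scheme.

```python
# Illustrative weighted-scoring step for comparing candidate software:
# score each candidate on weighted criteria and rank. All values invented.
CRITERIA_WEIGHTS = {"performance": 0.4, "price": 0.3, "available_skills": 0.3}

candidates = {
    "Software A": {"performance": 8, "price": 6, "available_skills": 9},
    "Software B": {"performance": 9, "price": 4, "available_skills": 6},
}

def weighted_score(scores: dict[str, float]) -> float:
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

ranked = sorted(candidates, key=lambda name: weighted_score(candidates[name]),
                reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(candidates[name]):.2f}")
```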
This document discusses and compares several agent-assisted methodologies for developing multi-agent systems:
- It reviews Gaia, HLIM, PASSI, and Tropos methodologies, outlining their key models and phases. Gaia focuses on analysis and design, HLIM models internal and external agent behavior, and PASSI and Tropos incorporate UML modeling.
- It then proposes a new MAB methodology intended to address shortcomings of existing approaches. MAB includes requirements, analysis, design, and implementation phases and models such as use case maps and agent roles.
- Finally, it concludes that agent technologies represent a promising approach for developing complex software systems, but that matching methodologies to problem domains and developing princip
A Service Oriented Analytics Framework For Multi-Level Marketing Business (Brandi Gonzales)
This document proposes a service-oriented analytics framework for multi-level marketing businesses. It discusses developing a statistical service engine solution using R to automate analytical processes and improve enterprise knowledge generation and reusability. The solution would involve:
1) A statistical job portal for users to submit predefined or ad-hoc analysis requests via an XML message format (see the sketch after this list).
2) An enterprise service bus to route job requests to GNU-R engines running statistical scripts on distributed servers. Data could be retrieved from databases or file repositories.
3) The GNU-R engines would execute the scripts, retrieve and analyze data, save results to files, and return outcomes to users for decision making. Asynchronous messaging and portability were prior
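As a rough illustration of the XML job message mentioned in step 1, the snippet below builds a hypothetical request with Python's standard library. Every element name (statJob, script, dataSource, parameters) is invented for illustration; the framework's actual message schema is not given in this summary.

```python
# Hypothetical shape of an XML analysis-job request, built with the
# standard library. All element names are invented for illustration.
import xml.etree.ElementTree as ET

job = ET.Element("statJob", attrib={"type": "ad-hoc"})
ET.SubElement(job, "script").text = "monthly_commission_summary.R"
ET.SubElement(job, "dataSource").text = "jdbc:postgresql://dbhost/mlm_sales"
params = ET.SubElement(job, "parameters")
ET.SubElement(params, "param", attrib={"name": "month"}).text = "2024-01"

print(ET.tostring(job, encoding="unicode"))
```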
Understanding Customer Voice of Project Portfolio Management Software (Peachy Essay)
Abstract: Project Portfolio Management (PPM) has gained success in many projects due to its large number of features, which cover effective scheduling, risk management, collaboration, and third-party software integrations, to mention a few. A broad range of PPM software is available; however, it is essential to select the PPM with minimum usage issues over time. While many companies use surveys and market research to gather user feedback, PPM software reviews carry the voice of users: the positive and negative sentiments about the PPM software. This paper collected 4,775 reviews of ten PPM software products from Capterra.com. Our approach has these phases: text preprocessing, sentiment analysis, summarization, and categorization. The software reviews are filtered and cleaned, then the negative sentiments of user reviews are summarized into a set of factors that identify issues of adopted PPM software. We report the most important issues of PPM software, which were related to missing technological features and lack of training. Results using a Latent Dirichlet Allocation (LDA) model showed that the top ten common issues are related to software complexity and lack of required features.
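The LDA step the abstract reports can be sketched with scikit-learn: vectorize the cleaned negative reviews and inspect the top terms per topic. The four toy reviews and two-topic setting below are illustrative, not the paper's 4,775-review corpus or its tuned model.

```python
# Minimal sketch of LDA topic extraction over negative review text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

negative_reviews = [
    "too complex to set up and the gantt chart is confusing",
    "missing time tracking feature and no decent reporting",
    "steep learning curve, we needed external training",
    "reports are limited and export options are missing",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(negative_reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```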
TOWARDS AUDITABILITY REQUIREMENTS SPECIFICATION USING AN AGENT-BASED APPROACH (ijseajournal)
Transparency is an important factor in democratic societies, comprising characteristics such as accessibility, usability, informativeness, understandability, and auditability. In this research we focus on auditability, since it plays an important role for citizens who need to understand and audit public information. Although auditability has been a subject of discussion when designing systems, there is a lack of systematization in its specification. We propose an approach to systematically add auditability requirements specification during the goal-oriented, agent-based Tropos methodology. We used the Transparency Softgoal Interdependency Graph, which captures the different facets of transparency while considering their operationalization. An empirical evaluation was conducted through the design and implementation of the LawDisTrA system, which distributes lawsuits among judges in an appellate court. Experiments included the distribution of over 300,000 lawsuits at the Brazilian Superior Labor Court. We theorize that the presented approach provides adequate techniques to address the cross-organizational nature of transparency.
Paper Explained: Deep learning framework for measuring the digital strategy o... (Devansh16)
Companies today are racing to leverage the latest digital technologies, such as artificial intelligence, blockchain, and cloud computing. However, many companies report that their strategies did not achieve the anticipated business results. This study is the first to apply state-of-the-art NLP models to unstructured data to understand the different clusters of digital strategy patterns that companies are adopting. We achieve this by analyzing earnings calls from Fortune Global 500 companies between 2015 and 2019. We use a Transformer-based architecture for text classification, which shows a better understanding of conversation context, and then investigate digital strategy patterns by applying clustering analysis. Our findings suggest that Fortune 500 companies use four distinct strategies: product led, customer experience led, service led, and efficiency led. This work provides an empirical baseline for companies and researchers to enhance our understanding of the field.
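The clustering stage can be approximated as follows: embed strategy-related text and group it with KMeans. TF-IDF vectors stand in for the paper's Transformer-based representations, and the four invented snippets with two clusters are only a toy of the analysis that surfaced the four strategy patterns.

```python
# Sketch of the clustering stage: vectorize strategy snippets and group
# them with KMeans. TF-IDF is a stand-in for Transformer embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

strategy_snippets = [
    "we are investing heavily in our flagship product line",
    "customer experience is at the center of our digital roadmap",
    "new subscription services drive our recurring revenue",
    "automation initiatives reduced operating costs this quarter",
]

X = TfidfVectorizer().fit_transform(strategy_snippets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for snippet, label in zip(strategy_snippets, labels):
    print(label, snippet)
```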
Discuss how a successful organization should have the followin.docx (salmonpybus)
Discuss how a successful organization should have the following layers of security in place for the protection of its operations: information security management, data security, and network security.
Multiple Layers of Security
Marlowe Rooks posted Mar 13, 2020 9:54 AM
Looking at Vacca's book, chapter 1: “Information security management as a field is ever increasing in demand and responsibility because most organizations spend increasingly larger percentages of their IT budgets in attempting to manage risk and mitigate intrusions, not to mention the trend in many enterprises of moving all IT operations to an Internet-connected infrastructure, known as enterprise cloud computing” (John R. Vacca, 2014). It is the organization's responsibility to protect its business and its clients' information at all times. With that said, I'm going to break down below why companies need multiple layers of security and what types they should implement.
The first layer is Information Security Management, which includes Physical Security and Personnel Security. Physical Security protects physical items, objects, or areas from unauthorized access and misuse. Personnel Security protects the individual or group of individuals who are authorized to access the organization and its operations. Some of the reasons to implement Information Security are as follows:
· Decrease in downtime of IT systems
· Decrease in security related incidents
· Increase in meeting an organization's compliance requirements and standards
· Increase in customer satisfaction, demonstrating that security issues are tackled in the most appropriate manner
· Increase in quality of service
· Process approach adoption, which helps account for all legal and regulatory requirements
· More easily identifiable and managed risks
· Also covers information security (IS) (in addition to IT information security)
· Provides a competitive edge to an organization with the help of tackling risks and managing resources/processes
The second layer is Data Security, which refers to the process of protecting data from unauthorized access and data corruption throughout its lifecycle. Data security includes data encryption, tokenization, and key management practices that protect data across all applications and platforms (a toy encryption example follows this list). Some of the reasons to implement Data Security are as follows:
· Cloud access security – Protection platform that allows you to move to the cloud securely while protecting data in cloud applications.
· Data encryption – Data-centric and tokenization security solutions that protect data across enterprise, cloud, mobile and big data environments.
· Web Browser Security - Protects sensitive data captured at the browser, from the point the customer enters cardholder or personal data, and keeps it protected through the ecosystem to the trusted host destination.
· Mobile App Security - Protecting sensitive data in native mobile apps while safeguarding the data end-to-end.
· eMai.
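As a toy illustration of the data-encryption practice listed above, the sketch below uses the cryptography package's Fernet recipe for symmetric, authenticated encryption of a sensitive record; real deployments layer key management and tokenization on top, as the post notes.

```python
# Toy illustration of the "data encryption" layer, using the cryptography
# package's Fernet recipe (symmetric, authenticated encryption).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, held by a key-management service
cipher = Fernet(key)

record = b"cardholder: 4111-1111-1111-1111"
token = cipher.encrypt(record)   # safe to store or transmit
print(token[:24], b"...")
print(cipher.decrypt(token))     # only holders of the key can recover it
```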
Discuss how portrayals of violence in different media may affect hum.docx (salmonpybus)
Discuss how portrayals of violence in different media may affect human behavior
Describe a key piece of research by Albert Bandura and colleagues into children's imitation of violent acts
Outline why findings of associations between events and behaviour do not provide conclusive evidence of cause-and-effect relationships
Outline how and why experiments can identify causes of behavior
Summarise the findings of psychological research into the topic of media violence and behavior
Outline the policies designed to protect children from negative effects of screen violence.
400 Words
APA
well cited
More Related Content
Similar to httpsdoi.org10.11772329488418819139International Jour.docx
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSijseajournal
With the massive growth of the organizations files, the needs for archiving system become a must. A lot of time is consumed in collecting requirements from the organization to build an archiving system. Sometimes the system does not meet the organization needs. This paper proposes a domain-based requirement engineering system that efficiently and effectively develops different archiving systems based on new
suggested technique that merges the two best used agile methodologies: extreme programming (XP) and SCRUM. The technique is tested on a real case study. The results shows that the time and effort consumed during analyzing and designing the archiving systems decreased significantly. The proposed methodology also reduces the system errors that may happen at the early stages of the development of the system.
IRJET- Strength and Workability of High Volume Fly Ash Self-Compacting Concre...IRJET Journal
The document discusses implementing a social customer relationship management (CRM) system for an online grocery shopping platform using customer reviews. It proposes collecting customer reviews from social media and other sources, refining the data, analyzing it using natural language processing and machine learning techniques, and storing the results in a database. This would allow the platform to better understand customer sentiment and needs to improve products, services and the customer experience.
IRJET- Implementing Social CRM System for an Online Grocery Shopping Platform...IRJET Journal
This document presents a proposed system architecture for implementing a social customer relationship management (CRM) system for an online grocery shopping platform using customer reviews and sentiment analysis. The proposed architecture involves collecting customer reviews from social media, preprocessing and analyzing the data using natural language processing techniques like stemming, and storing the results in a database. Sentiment analysis is performed to categorize reviews by aspects and sentiment. The analyzed data is then presented to users through an interface to help the online grocery shopping platform better understand customer needs and improve products/services based on feedback.
This document discusses groupware selection for small businesses in the United States. It defines groupware and small businesses according to the U.S. Small Business Administration. The paper will compare three major groupware technology solutions for small businesses and determine the most suitable option based on features identified in a Forrester research report. It will establish criteria for comparison and quantitatively assess the solutions to recommend an optimal choice.
Agent-SSSN: a strategic scanning system network based on multiagent intellige...IJERA Editor
The document describes an Agent-SSSN system that uses a multi-agent approach and ontology to develop a strategic scanning system for business intelligence. The system aims to integrate expert knowledge through cooperative information gathering from the web. It uses various agent roles like information retrieval agents, mediator agents, and notification agents. Ontologies are used to represent shared domain concepts and expert knowledge to enable knowledge sharing between agents. The system is modeled using the O-MaSE methodology, with goals, roles, and capabilities defined for each agent.
A Study of Software Size Estimation with use Case Pointsijtsrd
Estimates for cost and schedule in software projects are based on a prediction of the size of the system. Software size estimation is the most important role in software cost estimation. Use Case Point method can provide software size estimation at the early stage of the development process. Software size estimation is based on the high level speciation of Use Case. This paper describes a simple approach to software size estimation base on use case models the "Use Case Points Method. This model is imported into an estimating tool. To get software size with Use Case Point, the needed factors are the number of use cases and their complexity, the number of actors and their complexity, technical complexity factors TCF , and environmental complexity factors ECF . The system computes unadjusted use case points UUCP , adjusted use case points UPC , and the total effort in staff hours. Aye Aye Seint "A Study of Software Size Estimation with use Case Points" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5 , August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26531.pdfPaper URL: https://www.ijtsrd.com/computer-science/other/26531/a-study-of-software-size-estimation-with-use-case-points/aye-aye-seint
Productivity of incident management with conversational bots-a reviewIAESIJAI
The use of conversational agents (bots) in information systems managed by company’s increases productivity in the development of activities focused on processes such as customer service, healthcare, and presentation. The present work is a systematic literature review that collects articles from 2019 to 2022 in the databases Scopus, Springer, Willey, Indexes-Csic, Taylor & Francis, Pubmed, and Ebsco Host. PRISMA methodology was used to systematize 47 relevant articles. As a result of the analysis, 2/19 very important benefits were obtained, which are: helping to obtain information and facilitating customer service; as for the types of conversational bots, a total of 9 types were found, of which conversational agents and chatbots with artificial intelligence (AI) are the most common; in the case of processes, 3/5 processes that optimize conversational bots were found, where the most prominent are: teaching process, health processes, and customer service processes. An architecture model for conversational bots in incident management is also proposed.
AN OVERVIEW OF EXISTING FRAMEWORKS FOR INTEGRATING FRAGMENTED INFORMATION SYS...ijistjournal
Literatures show that there are several structured integration frameworks which emerged with the aim of facilitating application integration. But weakness and strength of these frameworks are not known. This paper aimed at reviewing these frameworks with the focus on identifying their weakness and strength. To accomplish this, recommended comparison factors were identified and used to compare these frameworks. Findings shows that most of these structure frameworks are custom based on their motives. They focus on integrating applications from different sectors within an organization for the purpose of eliminating communication inefficiencies. There is no framework which guides application’s integrators on goals of integrations, outcomes of integration, outputs of integration and skills which will be required for types of applications expected to be integrated. The study recommended further study on integration framework especial on designing unstructured framework which will support and guide application’s integrators with consideration on consumer’s surrounding environment.
AN OVERVIEW OF EXISTING FRAMEWORKS FOR INTEGRATING FRAGMENTED INFORMATION SYS...ijistjournal
Literatures show that there are several structured integration frameworks which emerged with the aim of facilitating pplication integration. But weakness and strength of these frameworks are not known. This
paper aimed at reviewing these frameworks with the focus on identifying their weakness and strength. Toaccomplish this, recommended comparison factors were identified and used to compare these frameworks.Findings shows that most of these structure frameworks are custom based on their motives. They focus onintegrating applications from different sectors within an organization for the purpose of eliminating communication inefficiencies. There is no framework which guides pplication’s integrators on goals of integrations, outcomes of integration, outputs of integration and skills which will be required for
types of applications expected to be integrated. The study recommended further study on integration
framework especial on designing unstructured framework which will support and guide application’s
integrators with consideration on consumer’s surrounding environment.
Ludmila Orlova HOW USE OF AGILE METHODOLOGY IN SOFTWARE DEVELO.docxsmile790243
Ludmila Orlova
HOW USE OF AGILE METHODOLOGY IN SOFTWARE DEVELOPMENT INFLUENCE AGILITY OF THE BUSINESS
Agile methodology is widely distributed tool for software development. Presented article explore research data about use of these tools, its influence to quality of the end product and performance of development and overall agility of business and companies.
KEYWORDS:
Agile, software development, agile business
CONTENT
1 INTRODUCTION
2 AGILE SOFTWARE DEVELOPMENT
3 SCALING AGILE
4 AGILE BUSINESS
5 CONCLUSION
REFERENCES
1 INTRODUCTION
Fast pace of science progress in solid state electronics led to incredible progress of computer devices that on its turn demanded software to control and manage the power of computer calculations and usage.
Software engineering emerged in the beginning of 20th century and by the end of it became separate state of art science, activity and the profession for millions. There are about 18.2 million software developers worldwide, a number that is due to rise to 26.4 million by 2019, a 45% increase, says Evans Data Corp. in its latest Global Developer Population and Demographic Study (P. Thibodeau, 2013). Along with growing number of software developers (software development firms, projects and people involved), increased the need for effective management of software development process. This demanded new approach and methodology from business researchers and managers. In the last several decades there was huge number of research, both in IT field and business management dedicated to this area.
Popularity of agile software development methods started about decade ago and at present these methods are employed by many big, medium size and small companies. Still growing attention to agile methods from software development specialists confirm these methods filled the lack of management techniques for software development that emerged and developed extremely fast along with speedy advancement of hardware in IT area. Great number of research done in areas such as changes in performance of software development using agile methods or scaling agile for large companies and teams. Also one of modern trends is an attempt to apply agile methodology for project management, marketing, sales and other activities. Goal of this article is to explore influence of application agile methods in software development to agility of whole company and business. Presented work based on secondary data taken from a multiple sources, the work performed as an exploratory study and a review of existing research in the area.
2 AGILE SOFTWARE DEVELOPMENT
Definition of an adjective agile in English is: able to move quickly and easily or able to think and understand quickly (Oxford Dictionary, 2015). The most often contemporary use presented by the following sentence: Relating to or denoting a method of project management, used especially for software development, that is characterized by the division of tasks into ...
This document discusses a product analyst advisor software that uses natural language processing techniques like sentiment analysis to analyze customer reviews and sentiments about products. It extracts reviews from various websites about a product being researched and processes the data to provide useful insights. The insights help users easily select the best available option. The system architecture involves scraping live data from websites, using deep learning algorithms to analyze reviews for sentiments, and displaying product insights. It uses BERT for sentiment analysis and frameworks like Django and ReactJS. Web scraping is used to extract review data for analysis and providing recommendations to users.
This document discusses estimating the total cost of ownership (TCO) when acquiring a new software application. It defines TCO as the sum of all costs involved in owning an application over its lifetime. These costs include direct costs like procurement, installation, and customization, as well as indirect costs like maintenance, support, and licensing fees. The document outlines TCO calculation for different types of applications like Software as a Service (SaaS), proprietary software, and open source software. It provides a formula for calculating TCO and discusses how purchasers, managers, vendors, and procurement professionals can apply TCO estimates when evaluating and selecting software solutions.
Business Talk: Harnessing Generative AI with Data Analytics MaturityIJCI JOURNAL
Generative AI applications offer transformative potential for business operations, yet their adoption introduces substantial challenges. This paper utilizes the CBDAS data maturity model to pinpoint pivotal success factors for seamless generative AI integration in businesses. Through a comprehensive analysis of these factors, we underscore the essentials of generative AI deployment: cohesive architecture, robust data governance, and a data-centric corporate ethos. The study also highlights the hurdles and facilitators influencing its implementation. Key findings suggest that fostering a data-friendly culture, combined with structured governance, optimizes generative AI adoption. The paper culminates in presenting the practical implications of these insights, urging further exploration into the real-world efficacy of the proposed recommendations.
IRJET- Factors in Selection of Construction Project Management Software i...IRJET Journal
The document discusses factors to consider when selecting construction project management software in India. It conducted interviews with 15 experts in the construction industry with experience ranging from 5-30 years. The interviews aimed to understand the software selection process. Based on the literature review and interviews, the document proposes a model for software selection with 8 steps: 1) identify software options, 2) review organization policies, 3) analyze the project's needs, 4) analyze the client's needs, 5) inquire the purpose of planning, 6) analyze software performance and price, 7) check available skills, and 8) select and use software. The model categorizes factors as either project specific or general to guide effective software selection.
This document discusses and compares several agent-assisted methodologies for developing multi-agent systems:
- It reviews Gaia, HLIM, PASSI, and Tropos methodologies, outlining their key models and phases. Gaia focuses on analysis and design, HLIM models internal and external agent behavior, and PASSI and Tropos incorporate UML modeling.
- It then proposes a new MAB methodology intended to address shortcomings of existing approaches. MAB includes requirements, analysis, design, and implementation phases and models such as use case maps and agent roles.
- Finally, it concludes that agent technologies represent a promising approach for developing complex software systems, but that matching methodologies to problem domains and developing princip…
A Service Oriented Analytics Framework For Multi-Level Marketing BusinessBrandi Gonzales
This document proposes a service-oriented analytics framework for multi-level marketing businesses. It discusses developing a statistical service engine solution using R to automate analytical processes and improve enterprise knowledge generation and reusability. The solution would involve:
1) A statistical job portal for users to submit predefined or ad-hoc analysis requests via an XML message format.
2) An enterprise service bus to route job requests to GNU-R engines running statistical scripts on distributed servers. Data could be retrieved from databases or file repositories.
3) The GNU-R engines would execute the scripts, retrieve and analyze data, save results to files, and return outcomes to users for decision making. Asynchronous messaging and portability were prioritized.
Understanding Customer Voice of Project Portfolio Management SoftwarePeachy Essay
Abstract: Project Portfolio Management (PPM) software has gained success in many projects due to its large number of features, covering effective scheduling, risk management, collaboration, and third-party software integrations, to mention a few. A broad range of PPM software is available; however, it is essential to select the PPM with minimum usage issues over time. While many companies use surveys and market research to get user feedback, PPM software reviews carry the voice of users: the positive and negative sentiments about the PPM software. This paper collected 4,775 reviews of ten PPM software products from Capterra.com. Our approach has four phases: text preprocessing, sentiment analysis, summarization, and categorization. The software reviews are filtered and cleaned; then the negative sentiments in user reviews are summarized into a set of factors that identify issues with the adopted PPM software. We report the most important issues of PPM software, which were related to missing technological features and lack of training. Results using a Latent Dirichlet Allocation (LDA) model showed that the top ten common issues relate to software complexity and lack of required features.
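As a rough illustration of the LDA categorization phase described above (a sketch with invented reviews, not the paper's pipeline or data), scikit-learn's LatentDirichletAllocation can surface issue topics from negative review text:

```python
# A minimal sketch of LDA topic extraction over negative reviews, using
# scikit-learn; the reviews are invented placeholders, not the paper's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

negative_reviews = [
    "The interface is confusing and the learning curve is steep.",
    "Missing integrations with our time-tracking software.",
    "Scheduling features are too complex without proper training.",
    "No Gantt chart export and reporting features are missing.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(negative_reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words per topic; the topics stand in for issue categories
# such as "complexity" or "missing features".
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```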
TOWARDS AUDITABILITY REQUIREMENTS SPECIFICATION USING AN AGENT-BASED APPROACHijseajournal
Transparency is an important factor in democratic societies, composed of characteristics such as accessibility, usability, informativeness, understandability, and auditability. In this research we focus on auditability, since it plays an important role for citizens who need to understand and audit public information. Although auditability has been a subject of discussion when designing systems, there is a lack of systematization in its specification. We propose an approach to systematically add auditability requirements specification to the goal-oriented, agent-based Tropos methodology. We used the Transparency Softgoal Interdependency Graph, which captures the different facets of transparency while considering their operationalization. An empirical evaluation was conducted through the design and implementation of the LawDisTrA system, which distributes lawsuits among judges in an appellate court. Experiments included the distribution of over 300,000 lawsuits at the Brazilian Superior Labor Court. We theorize that the presented approach for auditability provides adequate techniques to address the cross-organizational nature of transparency.
Paper Explained: Deep learning framework for measuring the digital strategy o...Devansh16
Companies today are racing to leverage the latest digital technologies, such as artificial intelligence, blockchain, and cloud computing. However, many companies report that their strategies did not achieve the anticipated business results. This study is the first to apply state-of-the-art NLP models to unstructured data to understand the different clusters of digital strategy patterns that companies are adopting. We achieve this by analyzing earnings calls from Fortune Global 500 companies between 2015 and 2019. We use a Transformer-based architecture for text classification, which shows a better understanding of conversation context. We then investigate digital strategy patterns by applying clustering analysis. Our findings suggest that Fortune 500 companies use four distinct strategies: product led, customer experience led, service led, and efficiency led. This work provides an empirical baseline for companies and researchers to enhance our understanding of the field.
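A simplified stand-in for the clustering step (TF-IDF features substituted for the paper's Transformer representations, with invented earnings-call snippets) might look like this:

```python
# A simplified sketch, not the paper's pipeline: cluster earnings-call
# snippets into strategy groups using TF-IDF features and k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

snippets = [
    "We launched an AI-powered product line this quarter.",
    "Customer experience is being reimagined through our mobile app.",
    "Cloud migration cut our infrastructure costs significantly.",
    "Our new subscription service expands recurring revenue.",
]

features = TfidfVectorizer(stop_words="english").fit_transform(snippets)
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(features)

for label, text in zip(labels, snippets):
    print(label, text)
```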
Similar to https://doi.org/10.1177/2329488418819139 (International Journal of Business Communication) (20)
Discuss how a successful organization should have the followin.docxsalmonpybus
Discuss how a successful organization should have the following layers of security in place for the protection of its operations: information security management, data security, and network security.
Multiple Layers of Security
Marlowe Rooks posted Mar 13, 2020 9:54 AM
Looking at Vacca's book, chapter 1: "Information security management as a field is ever increasing in demand and responsibility because most organizations spend increasingly larger percentages of their IT budgets in attempting to manage risk and mitigate intrusions, not to mention the trend in many enterprises of moving all IT operations to an Internet-connected infrastructure, known as enterprise cloud computing (John R. Vacca, 2014)". It is the organization's responsibility to protect its business and its client information at all times. With that said, I'm going to break down below why companies need multiple layers of security and what types they should implement.
The first layer is information security management, which spans physical security and personnel security. Physical security protects physical items, objects, or areas from unauthorized access and misuse. Personnel security protects the individual or group of individuals who are authorized to access the organization and its operations. Some of the reasons to implement information security management are as follows:
· Decrease in downtime of IT systems
· Decrease in security related incidents
· Increase in meeting an organization's compliance requirements and standards
· Increase in customer satisfaction, demonstrating that security issues are tackled in the most appropriate manner
· Increase in quality of service
· Process approach adoption, which helps account for all legal and regulatory requirements
· More easily identifiable and managed risks
· Also covers information security (IS) (in addition to IT information security)
· Provides a competitive edge to an organization with the help of tackling risks and managing resources/processes
The second layer is data security, which refers to the process of protecting data from unauthorized access and data corruption throughout its lifecycle. Data security includes data encryption, tokenization, and key management practices that protect data across all applications and platforms. Some of the reasons to implement data security are as follows:
· Cloud access security – Protection platform that allows you to move to the cloud securely while protecting data in cloud applications.
· Data encryption – Data-centric and tokenization security solutions that protect data across enterprise, cloud, mobile and big data environments.
· Web Browser Security - Protects sensitive data captured at the browser, from the point the customer enters cardholder or personal data, and keeps it protected through the ecosystem to the trusted host destination.
· Mobile App Security - Protecting sensitive data in native mobile apps while safeguarding the data end-to-end.
· Email security…
Discuss how portrayals of violence in different media may affect hum.docxsalmonpybus
Discuss how portrayals of violence in different media may affect human behavior
Describe a key piece of research by Albert Bandura and colleagues into children's imitation of violent acts
Outline why findings of associations between events and behaviour do not provide conclusive evidence of cause-and-effect relationships
Outline how and why experiments can identify causes of behavior
Summarise the findings of psychological research into the topic of media violence and behavior
Outline the policies designed to protect children from negative effects of screen violence.
400 Words
APA
well cited
.
Discuss how culture affects health physical and psychological healt.docxsalmonpybus
Culture influences both physical and psychological health through lifestyle behaviors and stress levels, shapes how individuals perceive and understand health issues, and impacts health decision-making processes based on cultural norms and beliefs. Personal experiences with different cultural approaches to health can provide insights. Research from scholarly sources can help illustrate these cultural influences on health.
Discuss how business use Access Control to protect their informa.docxsalmonpybus
Discuss how businesses use access control to protect their information, and describe how a business will implement the control process.
Length: 2-3 pages.
All papers are written in APA formatting and include title and reference pages (not counted). You must use at least two references and citations.
Please reference the rubric for grading.
All papers are checked for plagiarism using SafeAssign; you can review your score.
I have attached a template you can use to write your paper.
.
Discuss how or if post-Civil War America was truly a period of r.docxsalmonpybus
Discuss how or if post-Civil War America was truly a period of reform and justice for marginalized populations or if the population and economic landscapes provided for new forms of social and professional segregation. Use examples to support your answer.
Your response must be at least 200 words in length.
.
Discuss how partnerships are created through team development..docxsalmonpybus
Discuss how partnerships are created through team development.
Use the COVID-19 crisis to focus on the role of leadership in developing teams to mitigate, and contain the virus, and treat patients.
How can a team of nurses have an impact on promoting safety while providing care to afflicted patients within the hospital setting, within the community, within the country, state and federal levels?
How should nurses deal with the media - TV, newspaper; social media - Facebook, tweets, Instagram, snapchat?
How can nurses influence policy such as legislation related to stimulus relief, unemployment compensation, pay protection.
How can a nurse protect himself or herself and the employer from lawsuits? What would you do if you were sued?
APA format, 2 references
.
discuss how health and illness beliefs can influence the assessment .docxsalmonpybus
Health and illness beliefs can influence how a client responds to an assessment interview, as their belief structure may impact what they disclose. A client's culture can also influence their physical exam findings, as certain cultures may view health issues differently. When assessing a client, it is important to be aware of their beliefs and cultural background to gain accurate information and provide culturally-sensitive care.
Discuss how geopolitical and phenomenological place influence the .docxsalmonpybus
Discuss how geopolitical and phenomenological place influence the context of a population or community assessment and intervention. Describe how the nursing process is utilized to assist in identifying health issues (local or global in nature) and in creating an appropriate intervention, including screenings and referrals, for the community or population.
.
Discuss how each of these factors (inflation, changing population de.docxsalmonpybus
Discuss how each of these factors (inflation, changing population demographics, intensity, and technology of services) influence health care costs.
And I need two responses of my classmates about how I might offer ways that individuals can mitigate a negative effect of these factors.
reference book: Stanhope, M. & Lancaster, J. (2018). Foundations for Population Health in Community/Public Health Nursing (5 th ed.). Elsevier. (e-Book)
.
Discuss Five (5) database membersobjects of NoSQL. Why is NoSQL is.docxsalmonpybus
Discuss five (5) database members/objects of NoSQL. Why is NoSQL better than traditional T-SQL as an ideal database type for Big Data Analytics?
Textbook:
EMC Education Services (Ed.). (2015). Data Science and Big Data Analytics: Discovering, Analyzing, Visualizing, and Presenting Data. Indianapolis, IN: John Wiley & Sons, Inc.
.
Discuss how business use Access Control to protect their information.docxsalmonpybus
Discuss how businesses use access control to protect their information, and describe how a business will control the process.
Length: 2-3 pages.
All papers are written in APA formatting and include title and reference pages (not counted). You must use at least two references and citations.
.
Discuss how and why companies use cryptography.Length, 2 – 3 pag.docxsalmonpybus
Discuss how and why companies use cryptography.
Length: 2-3 pages.
All papers are written in APA formatting and include title and reference pages (not counted). You must use at least two references and citations.
All papers are checked for plagiarism using SafeAssign; you can review your score.
.
Discuss how and why companies use cryptography.Length, 2 pages..docxsalmonpybus
Companies use cryptography to protect sensitive data and communications from unauthorized access. Cryptography allows companies to securely transmit financial information, personal details of customers and employees, trade secrets, and other confidential digital records. References and citations from outside sources must be included when discussing how cryptography techniques like encryption help companies maintain security and privacy in their digital operations.
Discuss how an object must be an expert on certain sets of informati.docxsalmonpybus
Discuss how an object must be an expert on certain sets of information.
Visit a business' online Web presence. Construct a list of complex data types that would be needed to store all of the online catalog information.
Explain the similarities and differences between ODBMS and RDBMS.
Detail the ways redundant key values add complexity to processes that manipulate key fields.
Web designers use cookies and session variables to maintain state. Explain how each accomplishes its task and what pitfalls there are to using each.
.
Discuss how Angela Davis, Patricia Collins, andor The Combahee Rive.docxsalmonpybus
Discuss how Angela Davis, Patricia Collins, and/or The Combahee River Collective would respond to Malcolm X's injunction to silence over problems within the African-American community: "we must first learn to forget our differences, let us differ in the closet; when we come out in front, let us not have anything to argue about until we get finished arguing with the man" (Malcolm X 522). What are the differences between how the men and the women believe issues of racism and sexism should be approached, explored, and resolved?
Use at least 3 of the sources provided (out of the 8 articles)
Remember to use MLA formatting in the file you upload to the dropbox. This includes double spacing, in-text citations, page numbers, Times New Roman 12pt font, and a works cited page. (5-6 double-spaced pages)
I provided:
- 8 Articles to be used in the essay (uploaded files).
- 4 Files about the formatting of the essay.
.
Discuss how Biblical worldview provides guidance to the Christian he.docxsalmonpybus
Discuss how Biblical worldview provides guidance to the Christian health administrator in developing willingness and hope as an organizational leader. 250-300 words
Discuss how the revolutionary Christian health administrator uses influence to “seek out champions”. 250-300 words
.
Discuss how an IPSIDS can protect user information on a Windows sys.docxsalmonpybus
Discuss how an IPS/IDS can protect user information on a Windows system or any computing device that is connected to a network. What other security controls can help protect user information in tandem with an IPS/IDS?
at least 250-400 words with minimum 3 references.
APA format
with proper citations
Need it by 07/17 3:00 PM EST.
.
discuss how a particular human-computer interface might impact a per.docxsalmonpybus
discuss how a particular human-computer interface might impact a person’s satisfaction with and ability to use technology. Then, describe another example of a technology product and the human-computer interface you use to interact with that product, such as a wearable device or a self-service checkout machine. In your post, discuss the positives and negatives of the experience, with a focus on how HCI elements allow you to interact with the technology. Finally, describe how interacting with that technology compares to the way you were accustomed to doing that task before.
.
Discuss Fluid Volume Excess. Describe the imbalance, identify typ.docxsalmonpybus
Discuss Fluid Volume Excess. Describe the imbalance, identify types of patients who are greatest risk for these imbalances, discuss specific implementations, develop related nursing prevention strategies, and factors that make it difficult to implement prevention strategies or possible nursing responses.
.
Discuss emerging trends in the use of Internet currency such as Bitc.docxsalmonpybus
Discuss emerging trends in the use of Internet currency such as Bitcoin and how this has or may lead to fraudulent activities. As part of your discussion state how you think this currency or other currencies may impact GAAP especially as related to revenue recognition rules.
3 paragraphs
3 references
.
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
This presentation was provided by Racquel Jemison, Ph.D., Christina MacLaughlin, Ph.D., and Paulomi Majumder. Ph.D., all of the American Chemical Society, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
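A minimal sketch of both approaches described above, with an illustrative model and field name (not from the slide):

```python
# A minimal sketch of an Odoo 17 model field made mandatory in Python;
# the model and field names are illustrative assumptions.
from odoo import fields, models

class LibraryBook(models.Model):
    _name = "library.book"
    _description = "Library Book"

    # required=True enforces the field in every view that uses it.
    name = fields.Char(string="Title", required=True)

# Alternatively, in an XML view the field can be made required only in
# the context of that particular view:
#   <field name="name" required="1"/>
```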
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instruments used, object of levelling, methods of levelling in brief, and contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water, Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste; Collection, Transportation and Disposal of Solid Waste. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. Noise Pollution: Harmful effects of noise pollution, control of noise pollution. Global Warming & Climate Change, Ozone Depletion, Greenhouse Effect.
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
… to explain how automated text summarization applications work from an overarching, semitechnical, modestly theoretical perspective and, using ROUGE-1 (Recall-Oriented Understudy for Gisting Evaluation–1) evaluation metrics, to assess how effective the summarization software is when summarizing complex business reports. The results of this study show that the extraction-based summarization system produced moderately satisfactory results in terms of extracting relevant instances of the text from the business reports. Much work still needs to be accomplished in the area of precision and recall in extraction-based systems before the software can match a human's ability to capture the gist of a body of text.
Keywords: ROUGE-1, automatic text summarization, artificial intelligence, company annual reports
The rapid advances made in machine learning over the past few decades have paved the way for a prolific rise in a new generation of sophisticated artificial intelligence (AI) systems that can perform tasks autonomously. AI is arguably the most important technology innovation of our era (Brynjolfsson, Rock, & Syverson, 2017); its transformative impact has been felt in almost every societal domain. Intelligence communities are leveraging AI across their portfolios to strengthen national security, reduce biological warfare, and mitigate cyber threats (Allen & Chan, 2017); legal firms are employing AI to enhance legal informatics, predict litigation, and measure workflows in real time (Sobowale, 2016); health care entities are utilizing AI to perform clinical diagnostics on medical images at levels equal to those of experienced clinicians (HealthIT, 2017); the airline industry is engaging AI to reduce "human-steered" flight time to only 7 minutes of the total flight time (Narula, 2018); and, finally, social media platforms are deploying AI to generate a more personalized and interactive user experience.
AI's pervasive impact has extended into the business environment as well. By providing tools that automate redundant tasks, identify patterns within data, and uncover valuable insights, AI has helped corporations automate routine processes and improve overall process performance. These improvements have taken the form of enhanced compliance, security, and risk management; increased gains in productivity and market share; and improved employee retention (Jha, 2018). A recent global survey of 1,600 business decision makers found that 76% of the respondents believed that AI is fundamental to future business success, while 64% believed that their organization's future growth is dependent on AI adoption. The survey also found that companies expect AI to contribute an average revenue increase of 39% by 2020 (Infosys, 2018).
Its value proposition seemingly endless, AI has entered the domain of business communication in a number of ways, with perhaps the most pronounced being automatic text summarization of corporate disclosures (Cardinaels, Hollander, & White, 2017). Large financial institutions, such as Citicorp and Bank of America; regulators, such as the Securities and Exchange Commission; and investors are key beneficiaries of this type of summarization (Barth, 2015). The first two entities see similar efficiency benefits from summarization software because disclosures have, over the years, become fairly protracted and include a substantial amount of redundancy (Dyer, Lang, & Stice-Lawrence, 2017). The third group, investors, including hedge fund investors, employs AI engines to analyze macroeconomic data, assess market fundamentals, and analyze corporate financial disclosures, each with the intention of making more accurate market predictions and executing more successful stock trades (Metz, 2016).
Yet despite AI's far-reaching influence in the financial reporting and other business domains, there is a surprising dearth of accessible descriptions about the assumptions underlying the software's development, along with an absence of empirical evidence assessing the viability and usefulness of this communication tool. The lack of the former means that we need a kind of pretheory about summarization software; the lack of the latter means that we have yet to determine how effective automatic text summarization software is as a business communication tool.
With the above observations in mind, the purposes of this study are threefold:
1. To explain how automated text summarization applications work from an overarching, semitechnical, modestly theoretical perspective
2. To study how effective the summarization software is when summarizing complex business reports
3. To explore variances between outputs produced by human authors and artificial intelligence for the selected data genre
To measure the effectiveness of summarization software, we first created manual (human-authored) summaries of the Letter to Shareholders in 10 Fortune 500 company annual reports that were published in 2018. Next, we used an automated extraction-based text summarization application, Resoomer, to produce machine-generated summaries of the same documents. We then used ROUGE-1 (Recall-Oriented Understudy for Gisting Evaluation–1), a highly regarded and widely employed set of metrics for evaluating automatic summarization, to conduct our assessment of efficacy and determine variances between the outputs produced within the respective summary categories.
This study makes several contributions to the body of literature in business communication and to the business field at large. First, as the software continues to become more and more effective, the manner in which business summaries are written is going to change dramatically. It therefore seems wise for the field's researchers and practitioners to familiarize themselves with how this family of application software works as well as to determine where we are in terms of the software's efficacy.
Second, this is the first evaluative study of automatic text summarization conducted on this specific instrument of strategic business communication. We considered a broad range of data corpora to serve as potential datasets for this evaluation. We concluded that Letters to Shareholders worked well for this study because they provide an important business communication bridge between the voluntary and mandatory information disclosures of public companies (Williams, 2008). Additionally, the subject matter of these letters reaches across a variety of business enterprises and disciplines. Hence, we decided that these letters provide an objective way to evaluate the effectiveness of summarization software as a business communication tool, with the letters functioning as independent variables on which to test the summarization software's effectiveness. We are not evaluating the design, effectiveness, or even the strategic approach of the Letters themselves.
Finally, the study calls attention to evolutionary developments and practices in the business communication space. By condensing large business documents into short, informative summaries, automatic text summarization is expediting information communication in business environments, thereby affecting what the organization knows. Additionally, it likely affects organizational decision making as well as other downstream processes such as information searches and report generation (Paulus, Xiong, & Socher, 2017).
In the following sections of this article, we provide an extensive exposition of how the software works from an overarching, semitechnical perspective. We then describe the selection, extraction, and processing of the dataset and conclude with an analysis and discussion of our results and findings.
Overview of Automated Text Summarization and Evaluation
While appearing simple to do on the surface, the act of summarizing text is actually a highly complex task that involves summarization of source codes based on software reflection and lexical source model extraction (Murphy & Notkin, 1996). Proof of its complexity is found in the fact that developers have been working for decades to make this software viable and to advance its efficacy.
Automated text summarization systems endeavor to produce a concise summary of the source or reference text while retaining its fundamental essence and overall meaning. The system's goal is to generate a summary of the source or reference text that is equivalent to a summary generated by a human (Brownlee, 2017). A three-phase process generally characterizes these systems:
1. An analysis of the source text
2. The determination of its salient points
3. A synthesis of an appropriate output (Alonso, Castellon, Fuentes, Climent, & Horacio Rodriquez, 2003)
Previous studies (e.g., Smith, Patmos, & Pitts, 2018) have found that much work still needs to be accomplished in the area of precision and recall in extraction-based systems before the software can match a human's ability to capture the gist of a body of text.
Seminal work in automatic text summarization began in the 1950s, with the first sentence extraction algorithm being developed in 1958 (Steinberger & Jezek, 2009). The algorithm used term frequencies to measure the relevance of the sentence. Understandably, the methods developed during that era were fairly rudimentary (Hovy & Lin, 1998). Since then, a large number of techniques and approaches have been developed. Interestingly, the large volumes of information created on the web have triggered much of this development (Nenkova & McKeown, 2011; Shams, Hashem, Hossain, Akter, & Gope, 2010). Bhargava, Sharma, and Sharma (2016) posit that text summarization tools have now become a necessity to navigate the information on the web because they help eliminate dispensable or superfluous content. Torres-Moreno (2014) asserts that automatic text summarization reduces reading time, expedites research by making the selection process of documents easier, employs algorithms that are less biased than human summarizers, improves the effectiveness of indexing, and enables commercial abstraction services to increase the number of texts they are able to process. All in all, high praise for the software.
Extraction-Based Text Summarization
Automatic text summarization systems utilize different summarization techniques to condense source text. The vast majority of today's summarization algorithms employ what is referred to as an extraction-based approach (Saggion & Poibeau, 2012). The flexibility and greater general applicability of the extraction-based approach make it the preferred approach for most business summaries (Liu & Liu, 2009).
Extraction-based techniques involve the analysis of text features at the sentence level, discourse level, or corpus level to locate salient text units that are extracted, with minimal or no modification, to formulate a summary of the text (Liu & Liu, 2009). Stated more simply, in extraction-based text summarization, relevant phrases and sentences are selected from the source document and rearranged into a new summary sequence (Paulus et al., 2017). The summary, then, is essentially a subset of the sentences in the original source or reference text (Allahyari et al., 2017).
Salient text units are identified by evaluating their linguistic and statistical relevance or by matching phrasal patterns (Hahn & Mani, 2000). Statistical relevance is based on the frequency of certain elements in the text, such as words or terms, while linguistic relevance is determined from a simplified argumentative structure of the text (Neto, Freitas, & Kaestner, 2002). These parameters serve as inputs to a combination function with modifiable weights to derive a total score for each text unit. Text units with a concentration of high-score words are often likely contenders for extraction (Liu & Liu, 2009). Extraction-based summarization, then, is essentially concerned with evaluating the salience or the indicative power of each sentence in a given document (Shams et al., 2010). Figure 1 maps out the process flow for extraction-based systems.
[Figure 1. Process flow for extraction-based systems: the source text is analyzed with statistical metrics (term frequency counts, presence of specific terms, sentence location) and lexical metrics (pattern matching operations); weights drive selection; and synthesis proceeds via extraction.]
Evaluation of Text Summarization Using ROUGE-1 Metrics
Intrinsic evaluations of text summarization outputs conventionally involved manual human assessments of the quality and utility of a given summary. Rubrics based on coherence, conciseness, grammaticality, readability, and content provided the guidance for these human assessments (Mani, 2001). Given the potential for bias, and the time-consuming nature of the process, this practice gradually evolved into automatic comparisons of the summaries with human-authored gold standards, thus minimizing the need for human involvement (Nenkova, 2006).
Today, various summarization evaluation systems and methods employing sophisticated algorithms may be used to compare human-authored summaries with machine-generated summaries. One such method, ROUGE, the most widely used metric for automatic evaluation (Allahyari, 2017), was found to produce evaluation rankings that correlate reasonably with human rankings (Lin, 2004). It leverages numerous measures to automatically determine the quality of a computer-generated summary. The measures include, but are not limited to, a count of variables such as word sequences and word pairs between the computer-generated summary and the reference summary created by humans (Lytras, Aljohani, Damiani, & Chui, 2018).
In this study, we used ROUGE-1 evaluation metrics that measure the overlap of words (unigrams) between the machine-generated and reference summaries and provide three measures of quality.
Recall. Also known as sensitivity, this is the measure of the fraction of relevant instances that have been retrieved over the total amount of relevant instances. Stated more simply, it is the computation of the number of overlapping words between the machine-generated summary and the reference summary (i.e., number of overlapping words / total number of words in reference summary).
Precision. Also called positive predictive value, this is the measure of the fraction of relevant instances among the retrieved instances. In other words, it is the computation of how much of the machine-generated summary is actually relevant or essential (i.e., number of overlapping words / total words in machine-generated summary).
F1-Score. This score is a weighted average of the precision and recall. A score of 1 suggests perfect precision and recall, while a score of 0 indicates the opposite. This measure is regularly employed in the field of information retrieval to provide a quantifiable assessment of performance (Beitzel, 2006).
Each of the above standards is arguably more precise than subjective human calculations of coherence and conciseness.
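These three definitions translate directly into a unigram-overlap computation. A minimal sketch (an illustrative implementation of the formulas above, not the ROUGE-1 tool the authors ran):

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    """ROUGE-1 recall, precision, and F1 between two texts.

    Each candidate unigram counts toward the overlap at most as many
    times as it appears in the reference (clipped counting).
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(n, ref[w]) for w, n in cand.items())
    recall = overlap / max(sum(ref.values()), 1)       # overlap / ref words
    precision = overlap / max(sum(cand.values()), 1)   # overlap / cand words
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"recall": recall, "precision": precision, "f1": f1}

# Example: a machine summary scored against a human-authored reference.
machine = "ExxonMobil is investing in proprietary technology"
human = "ExxonMobil invests in proprietary technology and growth projects"
print(rouge_1(machine, human))
```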
Method
Data Corpus
Our data corpus was composed of Letters to the Shareholders of a subset of corporations listed on the Fortune 100 list for 2017. Letters to the Shareholders are voluntary inclusions in the annual report, usually appearing as an introduction. Considered an important piece of information (Vozzo, 2016), these documents provide useful insight into the quality of leadership at the corporation and management's commitment to creating meaningful long-term value for shareholders (Heyman, 2010).
Wielding much influence in investment transactions, Letters to Shareholders are integral to an investor's due diligence process. They are read with much interest by professional investors, analysts, and other stakeholders (Heyman, 2010). Additionally, these letters often supplement the overall effort to frame the annual report's information through narrative and graphical strategies (Laskin, 2018; Penrose, 2008). The ability to effectively and accurately summarize the most salient content from these letters may, therefore, offer significant value to its readership.
To ensure that we obtained a meaningful understanding of the effectiveness of automated text summarization applications, we elected to focus our investigation on a small, purposive sample of 10 Fortune 100 corporations. To this end, we selected the top 10 corporations listed in the Fortune 100 list for 2017. Two of the top 10 corporations, Apple and United Health, however, did not include a Letter to the Shareholders in their respective annual reports. We sought to replace them with letters from the corporations listed in 11th and 12th place, respectively. However, the annual report for the corporation listed in 11th place, AmerisourceBergen, was unavailable at the time of the study. Ultimately, Letters to the Shareholders from Amazon, listed in 12th place, and General Electric, listed in 13th place, were included in the dataset in lieu of Letters to the Shareholders from Apple and United Health.
Corpus Extraction and Preparation
We located the Letters to the Shareholders on the respective corporate websites and reformatted the PDF files into text files. We conducted a manual inspection of each Letter to the Shareholders and removed all redundant graphics and images. In addition to harmonizing the data corpus, this exercise ensured that the datatype was exclusively text based.
Procedure
There are generally two ways to assess the quality of automatic text summarization output. The first method, referred to as extrinsic evaluation, assesses the usefulness of the output summary in a task-based setting. Here, the summary is used to support the completion of a specific task. Its usefulness is determined by measuring established metrics for task completion efficiency (Hirschberg, McKeown, Passonneau, Elson, & Nenkova, 2005). The second method, referred to as intrinsic evaluation, is conducted "by soliciting human judgments on the goodness and utility of a given summary, or by a comparison of the summary with a human-authored gold standard" (Nenkova & McKeown, 2011, p. 199).
For this study, we employed the intrinsic evaluation method. We first compared the machine-generated text summary with a human-authored summary as prescribed in the literature to assess the "goodness" of the machine-generated summary. To this end, human-generated summaries and machine-generated summaries were produced for each Letter to the Shareholders at predetermined levels of reduction. Each machine-generated summary was then assessed by the summarization evaluation system, ROUGE-1, using the human-authored summary as the source or reference text. This method is consistent with standard practice in automated text summarization evaluation as noted by Nenkova and McKeown (2011).
Subsequently, in an effort to explore potential variances between the outputs produced by human authors and artificial intelligence for the selected data genre, we conducted a second-phase evaluation in which we employed ROUGE-1 metrics to evaluate the "goodness" of
•• the human-authored summary for each company using the respective Letter to the Shareholders for that company as the reference summary, and
•• the machine-generated summary for each company using the respective Letter to the Shareholders for that company as the reference summary.
In doing so, we aimed to assess the extent of the variance, if any, in the recall, precision, and F-measures between the two summary classes (i.e., human-authored and machine-generated). We hypothesized that comparable scores between the two summary classes in each of those corresponding measures would likely indicate a similarity in the quality and utility of the summaries, while widely disparate scores would suggest the alternative. Ultimately, either outcome would provide a broader commentary on the effectiveness of the summarization software when summarizing complex business reports.
In summary, then, we employed ROUGE-1 metrics to evaluate
•• the machine-generated summary against the human-authored summary,
•• the human-generated summary against the original Letter to the Shareholders, and
•• the machine-generated summary against the original Letter to the Shareholders.
Following is a more detailed description of each of these processes.
Formulation of Datasets. To reiterate, two distinct categories of summaries were produced for each Letter to the Shareholders (i.e., human-authored summaries and machine-generated summaries). To facilitate a more robust investigation, two summaries were produced within each category, each differentiated by the total word count. The first summary was capped at 10% of the word count of the original Letter to the Shareholders; the second summary was capped at 20%. Thus, the dataset for each company comprised the following data:
•• A human-authored summary capped at 10% of the word count of the original Letter to the Shareholders
•• A human-authored summary capped at 20% of the word count of the original Letter to the Shareholders
•• A machine-generated summary capped at 10% of the word count of the original Letter to the Shareholders
•• A machine-generated summary capped at 20% of the word count of the original Letter to the Shareholders
These summaries served as the input data for the study. A more detailed description of the process to create the data follows.
Human-authored summaries. A writer trained and experienced in writing business summaries generated summaries at both summarization levels (i.e., 10% and 20%) of all the documents in the data corpus. The word count was validated using MS Word's word count feature. To mitigate bias, this writer was not involved in processing the Letter to the Shareholders through the automated text summarization application.
Machine-generated summaries. Simultaneously, the text files were processed individually through the online automated text summarization application. Resoomer was selected because of its current popularity as a text summarization tool and its demonstrated superiority over other online text summarization applications in terms of functionality, ease of use, and accuracy (Hobler, 2017; Nyzam, Gatto, & Bossard, 2017). An advanced feature of this application is the ability to set the summarization to a desired level of word count reduction. Accordingly, the summarization level was set first to 10% and then to 20% of the word count of the original Letter to the Shareholders. The resulting machine-generated summaries were saved as MS Word documents.
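In code terms, the dataset formulation step reduces to producing a 10% and a 20% summary per letter and validating the caps. A minimal sketch reusing the hypothetical extractive_summary helper from the earlier sketch (Resoomer itself is a web application and is not called here; the file path is a placeholder):

```python
from pathlib import Path

# Placeholder path standing in for a reformatted Letter to the Shareholders.
letters = {"ExxonMobil": Path("exxonmobil_lts.txt").read_text()}

datasets = {}
for company, text in letters.items():
    datasets[company] = {
        "summary_10": extractive_summary(text, ratio=0.10),
        "summary_20": extractive_summary(text, ratio=0.20),
    }
    for label, summary in datasets[company].items():
        # Validate the cap, much as the authors did with MS Word's counter.
        print(company, label, len(summary.split()), "words")
```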
Evaluation of Summaries. The human-authored and machine-generated summary for each corporation was then processed in the ROUGE-1 notepad interface, and the evaluation run was executed. The resulting scores were captured in an Excel spreadsheet and evaluated. Figure 2 provides a visual representation of the process flow.
[Figure 2. Method flowchart: Step 1, corpus preparation (removal of images and infographics from the published Letters to the Shareholders (LTS), yielding a harmonized text-format corpus); Step 2, formulation of datasets (human-authored summaries produced by the researcher and machine-generated summaries produced by the text summarization application Resoomer, each at 10% and 20% of the original LTS word count); Step 3, evaluation of the datasets using ROUGE-1 metrics (machine-generated summary against human-authored summary, human-authored summary against original LTS, and machine-generated summary against original LTS); Step 4, evaluation of variances in human- and machine-generated output using boxplots.]
Results
Example of Outputs
Following are examples of the output summaries for one Letter to the Shareholders from the data corpus.
Corporation: ExxonMobil
Original Word Count: 510
Human-Authored Summaries. Reduction: 10% of original word count (51 words)
Winning involves capturing value, maintaining a technological edge, and operating safely and responsibly. ExxonMobil's financial future looks promising. It invests in growth projects and integrates in ways competitors cannot. Innovation occurs through technical exploration and the development of environmentally friendly products with higher financial returns. ExxonMobil is an industry leader.
Reduction: 20% of original word count (102 words)
Winning involves capturing value, maintaining a technological edge, and operating safely and responsibly. ExxonMobil is an industry leader. Its financial future looks promising. We invest in high-value growth projects and integrate in ways competitors cannot. We are adding new low-cost supplies of LNG. We are ramping up unconventional production. We use proprietary technology to produce higher value products. Innovation occurs through technical exploration and the development of environmentally friendly products with higher financial returns. Our technology investments build a foundation for the future—creating long-term value for society. We lead in the discovery of scalable technologies. ExxonMobil is an industry leader.
Machine-Generated Summaries. Reduction: 10% of original word count (46 words)
Winning in today's energy business takes a cost to the whole commodity cycle. In our Downstream, we're using our proprietary technology to produce higher value products. Innovative products pioneered in our Chemical business are enabling a growing global middle class to enjoy a higher quality of life.
Reduction: 20% of original word count (98 words)
Winning in today's energy business takes a cost to the whole commodity cycle. A company is able to capture value across the supply chain. In our Downstream, we're using our proprietary technology to produce higher value products. And in our Chemical business, we are investing in capacity and manufacturing to meet the needs of growing economies around the world. ExxonMobil is investing for high-value growth.
Innovative products pioneered in our Chemical business are enabling a growing global middle class to enjoy a higher quality of life. Our innovation is delivering value to our customers, our communities, and you, our shareholders.
ROUGE-1 Scores for Precision, Recall, and F-Measures
In Tables 1 to 6, we report ROUGE-1 scores when specific summaries are evaluated against a reference summary. As mentioned earlier, the reference summary is deemed to be the ideal or standard document against which the ROUGE-1 algorithm evaluates other summaries for precision and recall.
Evaluation of Machine-Generated Summaries Against Human-Authored Summaries. To maintain consistency with the standard protocol defined in the literature for conducting evaluations of automated text summarization outputs, we designated the human-authored summaries as the reference summaries. We then evaluated the machine-generated summary for each company against the reference summary for that company.
Table 1 shows ROUGE-1 scores (average recall, average precision, and average F1-score) for input documents summarized to 10% of the word count of the original Letter to the Shareholders. For illustrative purposes, the average recall score of 0.21 for Walmart in Table 1 implies that 21% of the words (unigrams) in the machine-generated summary are also present in the human-authored summary for this company. The corresponding precision score of 0.20 implies that only 20% of the overlapping words in the machine-generated summary were actually relevant. The F-measure of 0.21, the weighted average of the recall and precision, essentially quantifies the performance efficiency of the automatic text summarization tool.

Table 1. Evaluation of Machine-Generated Summaries Using Human-Authored Summaries as Reference (10% Summarization Level).
Corporation          Average recall   Average precision   Average F1-score
Walmart                   0.21             0.20               0.21
Exxon Mobil               0.13             0.11               0.12
Berkshire Hathaway        0.25             0.27               0.26
McKesson                  0.22             0.23               0.23
CVS Health                0.27             0.22               0.24
Amazon.com                0.21             0.22               0.22
AT&T                      0.23             0.26               0.25
General Motors            0.26             0.27               0.26
Ford                      0.15             0.20               0.17
GE                        0.27             0.19               0.22
Mean                      0.22             0.22               0.22
Note. Rounded to two decimal places.

Table 2 shows ROUGE-1 scores for input documents summarized to 20% of the word count of the original Letter to the Shareholders.

Table 2. Evaluation of Machine-Generated Summaries Using Human-Authored Summaries as Reference (20% Summarization Level).
Corporation          Average recall   Average precision   Average F1-score
Walmart                   0.26             0.26               0.26
Exxon Mobil               0.20             0.18               0.19
Berkshire Hathaway        0.26             0.30               0.28
McKesson                  0.25             0.27               0.26
CVS Health                0.27             0.29               0.28
Amazon.com                0.29             0.26               0.27
AT&T                      0.27             0.30               0.28
General Motors            0.28             0.29               0.28
Ford                      0.22             0.31               0.25
GE                        0.31             0.21               0.25
Mean                      0.26             0.27               0.26
Note. Rounded to two decimal places.

Evaluation of Human-Authored Summaries Against the Original Letters to the Shareholders. In this instance, we designated the original Letters to the Shareholders as the standard/ideal/reference summaries. We evaluated the human-authored summary for each company against the reference summary (the Letter to the Shareholders) for that company. Our goal in doing so was to assess the integrity of the human-authored summaries. Table 3 shows ROUGE-1 scores for human-authored summaries compiled at 10% of the word count of the original Letter to the Shareholders. In this case, the average recall score of 0.05 for Walmart in Table 3 implies that there is a 5% overlap in words (unigrams) between the human-authored summary and the original Letter to the Shareholders for this company. The corresponding precision score of 0.49 implies that almost 50% of the overlapping words in the human-authored summary were actually relevant. The F-measure of 0.09 quantifies the performance efficiency of the automatic text summarization tool.

Table 3. Evaluation of Human-Authored Summaries Using Original Letter to the Shareholders as Reference (10% Summarization Level).
Corporation          Average recall   Average precision   Average F1-score
Walmart                   0.05             0.49               0.09
Exxon Mobil               0.03             0.31               0.06
Berkshire Hathaway        0.05             0.48               0.10
McKesson                  0.06             0.49               0.10
CVS Health                0.05             0.48               0.09
Amazon.com                0.05             0.47               0.09
AT&T                      0.05             0.48               0.10
General Motors            0.05             0.46               0.09
Ford                      0.07             0.48               0.12
General Electric          0.05             0.48               0.10
Mean                      0.05             0.46               0.09
Note. Rounded to two decimal places.

Table 4 shows ROUGE-1 scores for human-authored summaries condensed to 20% of the word count of the original Letter to the Shareholders.

Table 4. Evaluation of Human-Authored Summaries Using Original Letter to the Shareholders as Reference (20% Summarization Level).
Corporation          Average recall   Average precision   Average F1-score
Walmart                   0.10             0.49               0.17
Exxon Mobil               0.07             0.38               0.12
Berkshire Hathaway        0.10             0.47               0.17
McKesson                  0.11             0.48               0.17
CVS Health                0.11             0.48               0.18
Amazon.com                0.10             0.47               0.16
AT&T                      0.11             0.48               0.17
General Motors            0.10             0.46               0.16
Ford                      0.14             0.49               0.21
General Electric          0.10             0.47               0.17
Mean                      0.10             0.47               0.17
Note. Rounded to two decimal places.

Comparison of Machine-Generated Summaries With Original Letters to the Shareholders. Here, we once again designated the original Letters to the Shareholders as the reference summaries. We evaluated the machine-generated summary for each company against the reference summary (the Letter to the Shareholders) for that company. Our goal in doing so was to assess the integrity of the machine-generated summaries. Tables 5 and 6 show ROUGE-1 scores for machine-generated summaries extracted to 10% and 20% of the word count of the original Letter to the Shareholders, respectively.

Table 5. Evaluation of Machine-Generated Summaries Using Original Letter to the Shareholders as Reference (10% Summarization Level).
Corporation          Average recall   Average precision   Average F1-score
Walmart                   0.05             0.50               0.10
Exxon Mobil               0.06             0.50               0.10
Berkshire Hathaway        0.05             0.50               0.09
McKesson                  0.05             0.50               0.10
CVS Health                0.06             0.50               0.11
Amazon.com                0.05             0.50               0.09
AT&T                      0.05             0.50               0.09
General Motors            0.05             0.50               0.09
Ford                      0.05             0.50               0.10
General Electric          0.05             0.50               0.10
Mean                      0.05             0.50               0.10
Note. Rounded to two decimal places.

Table 6. Evaluation of Machine-Generated Summaries Using Original Letter to the Shareholders as Reference (20% Summarization Level).
Corporation          Average recall   Average precision   Average F1-score
Walmart                   0.10             0.50               0.17
Exxon Mobil               0.11             0.50               0.18
Berkshire Hathaway        0.09             0.50               0.15
McKesson                  0.10             0.46               0.16
CVS Health                0.11             0.50               0.19
Amazon.com                0.11             0.50               0.18
AT&T                      0.10             0.50               0.17
General Motors            0.11             0.50               0.18
Ford                      0.10             0.50               0.16
General Electric          0.10             0.50               0.17
Mean                      0.10             0.50               0.17
Note. Rounded to two decimal places.

Comparison of Human-Authored and Machine-Generated Summaries. We used comparative boxplots of ROUGE-1 F1-scores (see Tables 3-6) to determine if there were any observable differences between the human-authored summaries and machine-generated summaries. Instead of analyzing precision and recall individually, we focused our analysis on the F1 scores because they represent the weighted average of the two measures. Our results are shown in Figures 3 and 4.
[Figure 3. Boxplot of F1 scores for summaries capped at 10% of word count of Letters to the Shareholders.]
[Figure 4. Boxplot of F1 scores for summaries capped at 20% of word count of Letters to the Shareholders.]
Field Expert Evaluation of Machine-Generated Summaries. Finally, a reviewer of the article wisely suggested that we seek input from financial experts with regard to the effectiveness of the machine-generated summaries. We solicited open-ended feedback from a convenience sample of eight financial experts, each of whom held positions within nationally or internationally recognized financial firms. Seven of the eight invited participants provided feedback on the reports.

We drew on discourse analysis principles to evaluate the resulting feedback. Specifically, we employed the discourse-based interpretive content analysis method. This method proposes a holistic approach not restricted by coding rules, with the flexibility to take context more fully into account (Ahuvia, 2001). Although the responses were not uniform, the themes emerging from the analysis were fairly homogeneous.

As a whole, the group commented that they would use the summaries to make a rapid determination as to whether to spend additional time and energy reviewing the Letter to Shareholders and the Annual Report. The reviewers noted that the summaries provided hints of insights into initiatives the companies are pursuing, challenges faced by the company, and overall perspectives with regard to the organization's culture and values. As such, the summaries served a useful sorting function as to which, if any, reports the financial experts might examine in more depth. Each reviewer was adamant, however, that these documents provided financial information that is at best dated.

From a structural perspective, the financial experts evaluated the summaries for overall coherence. They viewed 80% of the sample set as cogent and coherent; the other 20% was viewed as disjointed and difficult to interpret.
Discussion

The intention of this study was first to examine summarization software from an overarching, semitechnical, almost pretheoretical perspective. After that, we sought to evaluate the effectiveness of the summarization software and to look for important variances in the data. These latter two areas enabled us to begin to answer two key research questions:

Research Question 1: How effective is the summarization software when summarizing complex business reports?

Research Question 2: Are there any important variances between outputs produced by human authors and artificial intelligence for the selected data genre?

Summarization Software Effectiveness

Because ROUGE-1 is a recall-based measure based on content overlap, it endeavors to determine if the general concepts covered in an automatic summary and a reference summary align (Allahyari et al., 2017).
Figure 3. Boxplot of F1 scores for summaries capped at 10% of word count of Letters to the Shareholders.
Figure 4. Boxplot of F1 scores for summaries capped at 20% of word count of Letters to the Shareholders.
For summaries comprising 10% of total word count, ROUGE-1 metrics determined that approximately 21.9% of co-occurring words within a given window in the human-authored reference summaries were also present in the machine-generated summary (see Table 1). For summaries comprising 20% of total word count, ROUGE-1 metrics determined that approximately 26.1% of unigrams in the human-authored reference summaries were also present in the machine-generated summary (see Table 2). ROUGE-1 metrics also determined that machine-generated summaries have approximately a one-fifth overlap with the human-authored reference summaries comprising 10% of total word count and a one-fourth overlap with the human-authored reference summaries comprising 20% of total word count. Combined with the overall F-measures, these results suggest that the automated text summarization tool is moderately sensitive in terms of extracting relevant instances of the text. Thus, while significant progress has been made in the field of natural language processing and computational linguistics in the past six decades, producing sophisticated advances in text summarization (Das & Martins, 2007; Liu & Liu, 2009), much still needs to be accomplished in the area of precision and recall when summarizing complex business reports.
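To make the unigram-overlap arithmetic concrete, the following is a minimal sketch of a ROUGE-1 style computation in Python; the function and the toy sentences are our own illustration, not the evaluation tooling used in this study.

    from collections import Counter

    def rouge_1(candidate: str, reference: str) -> dict:
        # ROUGE-1 from unigram overlap: recall, precision, and F1.
        cand = Counter(candidate.lower().split())
        ref = Counter(reference.lower().split())
        # Clipped overlap: each word counts at most as often as it
        # appears in each of the two texts.
        overlap = sum(min(cand[w], ref[w]) for w in ref)
        recall = overlap / max(sum(ref.values()), 1)
        precision = overlap / max(sum(cand.values()), 1)
        f1 = 2 * recall * precision / (recall + precision) if overlap else 0.0
        return {"recall": recall, "precision": precision, "f1": f1}

    # Toy check: a short "machine summary" scored against a "reference".
    print(rouge_1(
        "revenue grew and margins improved across business segments",
        "the letter reports that revenue grew while margins improved",
    ))

Recall divides the overlap by the reference length and precision divides it by the candidate length, mirroring the three columns reported in Tables 1 through 6.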
Variances in Human-Authored and Machine-Generated Outputs

The boxplots in Figures 3 and 4 above highlighted perceptible differences between the two summary categories. In Figure 3, the boxplots show that the distribution for the human-authored summaries was slightly left-skewed, in contrast to the distribution for the machine-generated summaries, which was right-skewed. In Figure 4, however, the boxplots show a more symmetric distribution for the human-authored summaries and a left-skewed distribution for the machine-generated summaries. Next, while the medians were relatively equal in both instances, the machine-generated summaries exhibited tighter spreads than the human-authored summaries, indicative of greater variability in the latter. The observations from the boxplots, to some extent, corroborate Steinberger and Jezek's (2009) contention that a big gap exists between the summaries produced by automatic text summarization systems and summarizations generated by humans.
It is interesting to note that both summaries of Berkshire Hathaway's Letter to the Shareholders (i.e., at the 10% and 20% word count levels) earned the highest F-scores of the 10 corporations when evaluated against their corresponding human-authored summaries (see Tables 1 and 2). The lowest F-scores, on the other hand, were earned by the summaries of ExxonMobil's Letter to the Shareholders. These results prompted a qualitative inspection of the Letters to the Shareholders of Berkshire Hathaway and ExxonMobil to determine if there were any distinguishing features, apart from the fact that they operated in different industry sectors.

Following are excerpts from the Letters to the Shareholders delivered by the Chairman and CEO of each of these corporations.
Warren Buffett, Berkshire Hathaway
Why the purchasing frenzy? In part, it's because the CEO job self-selects for "can-do" types. If Wall Street analysts or board members urge that brand of CEO to consider possible acquisitions, it's a bit like telling your ripening teenager to be sure to have a normal sex life. (2018, p. 4)

The bet illuminated another important investment lesson: Though markets are generally rational, they occasionally do crazy things. Seizing the opportunities then offered does not require great intelligence, a degree in economics or a familiarity with Wall Street jargon such as alpha and beta. What investors then need instead is the ability to both disregard mob fears, or enthusiasm, and to focus on a few simple fundamentals. A willingness to look unimaginative for a sustained period—or even to look foolish—is also essential. (2018, p. 12)
Darren Woods, ExxonMobil

ExxonMobil is in a prime position to generate strong returns and remain the industry leader, leveraging our strengths and outperforming our competition in growing shareholder value.

We're investing in advantaged projects to grow our world-class portfolio. Through exploration and strategic acquisitions, we've captured our highest-quality inventory since the Exxon and Mobil merger, including high-impact projects in Guyana and Brazil. Integration enables us to capture efficiencies, apply technologies, and create value that our competitors can't. (2018, p. 3)
A qualitative analysis of these excerpts reveals a distinctive stylistic posturing in the narrative of each letter. The Chairman and CEO of ExxonMobil employs a rigid, formal writing style, which follows the conventional mechanical formula that traditionally characterizes official letters from the C-suite. His letter is peppered with appropriate business conventions and familiar industry jargon such as "leveraging our strengths," "strategic acquisitions," "high-impact projects," and "integration enables us to capture efficiencies."

Warren Buffett, on the other hand, renowned for the folksy, personal manner in which he writes the company's annual letter, employs a less rigid, less formal style. On the surface, Buffett's style seems devoid of any artifice. He infuses his letter with unique words and creative phrases that are not traditionally used to communicate information formally in the business domain. Hence his use of analogies such as "It's a bit like telling your ripening teenager to be sure to have a normal sex life" and statements such as "They occasionally do crazy things" or "A willingness to look unimaginative for a sustained period—or even to look foolish—is also essential."
The scores for the more traditionally postured Letter to the Shareholders appear to suggest that the summarization tool had greater success with recall and precision when the text strayed away from linguistic patterns that are common and specific to the business world. The likely conjecture from this is that extraction-based automatic summarization systems function less optimally when domain-specific ontologies are employed.

Finally, evaluations of the machine-generated summaries by a pool of financial experts posit a favorable outlook for automated text summarization tools. The respondents overwhelmingly agreed that the machine-generated summaries provided a sliver of insight into the company's operational performance and strategic initiatives. These insights were sufficient to trigger a go/no-go decision in terms of further exploration of the original document.
Conclusion and Future Studies

The results of this study show that the extraction-based summarization system produced moderately satisfactory results in terms of extracting relevant instances of the text from the business reports. Much work still needs to be accomplished in the area of precision and recall in extraction-based systems before the software can match a human's ability to capture the gist of a body of text.
But beyond practical applications, automatic text summarization highlights a broader discourse. Automatic text summarization raises important issues connected to AI and cognitive science. Therefore, further study into how advanced text summarization capability affects cognitive capacity and intelligence may augment our ability as communication professionals to both disseminate and consume information more efficiently. Additional text corpora covering different data genres should be empirically evaluated to obtain more robust findings.

From a business communication perspective, we can agree that this form of communication technology is not going away. The effectiveness of the text summarization software may only be between 22% and 26%, but it is not going to get lower. Instead, the field should remain alert to future developments of this software and look for ways by which to incorporate it into future studies as well as class teachings.
Finally, and perhaps most important, our findings hint at a forthcoming synergy between what AI does and what business leaders proclaim to desire. At its heart, AI depends on consistency, pattern recognition, and logical development, even when dealing with summarization software. Christensen (2015) and many other business experts, on the other hand, argue vociferously in favor of creativity and new ideas for business models. When presented with the creativity of a Warren Buffett—or, more directly, when presented with a letter written differently from other patterns—the AI summarization software proved to be very effective. In fact, when compared against a human gold standard, AI proved demonstrably better at extracting Berkshire Hathaway's creative syntax than it did ExxonMobil's business jargon-laden language. This synergy bodes well for AI's role in business communication and business in general.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References

Ahuvia, A. (2001). Traditional, interpretive, and reception based content analyses: Improving the ability of content analysis to address issues of pragmatic and theoretical concern. Social Indicators Research, 54, 139-172.
Allahyari, M., Pouriyeh, S., Assefi, M., Safaei, S., Trippe, E. D., Gutierrez, J. B., & Kochut, K. (2017). Text summarization techniques: A brief survey. arXiv preprint arXiv:1707.02268. Retrieved from https://arxiv.org/pdf/1707.02268.pdf
Allen, G., & Chan, T. (2017). Artificial intelligence and national security. Cambridge, MA: Belfer Center for Science and International Affairs, Harvard Kennedy School.
Alonso, L., Castellon, I., Fuentes, M., Climent, S., & Horacio Rodriquez, L. P. (2003). Approaches to text summarization: Questions and answers. Inteligencia Artificial, 20, 34-52.
Barth, M. E. (2015). Financial accounting research, practice, and financial accountability. ABACUS: A Journal of Accounting, Finance and Business Studies, 51, 499-510.
Beitzel, S. M. (2006). On understanding and classifying web queries (Unpublished doctoral dissertation). Illinois Institute of Technology, Chicago.
Bhargava, R., Sharma, Y., & Sharma, G. (2016). ATSSI: Abstractive text summarization using sentiment infusion. Procedia Computer Science, 89, 404-411.
Brownlee, J. (2017, November 29). A gentle introduction to text summarization. Deep Learning for Natural Language Processing. Retrieved from https://machinelearningmastery.com/gentle-introduction-text-summarization/
Brynjolfsson, E., Rock, D., & Syverson, C. (2017). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics (Working Paper No. w24001). Cambridge, MA: National Bureau of Economic Research.
Cardinaels, E., Hollander, S., & White, B. (2017, July). Automatic summarization of corporate disclosures. Retrieved from https://www.nhh.no/globalassets/departments/accounting-auditing-and-law/seminar-papers/chw-manuscript-july-14-2017.pdf
Christensen, C. (2015). The innovator's dilemma: When new technologies cause great firms to fail. Cambridge, MA: Harvard Business Review Press.
Das, D., & Martins, A. F. (2007). A survey on automatic text summarization. Literature survey for the Language and Statistics II course at CMU, 4, 192-195. Pittsburgh, PA: Language Technologies Institute.
Dyer, T., Lang, M., & Stice-Lawrence, L. (2017). The evolution of 10-K textual disclosure: Evidence from latent Dirichlet allocation. Journal of Accounting and Economics, 64, 221-245.
Hahn, U., & Mani, I. (2000). The challenges of automatic summarization. Computer, 33(11), 29-36.
HealthIT. (2017). Artificial intelligence for health and health care. Retrieved from https://www.healthit.gov/sites/default/files/jsr-17-task-002_aiforhealthandhealthcare12122017.pdf
Heyman, E. (2010). What you can learn from shareholder letters. Chicago, IL: American Association of Individual Investors. Retrieved from http://www.aaii.com/journal/article/what-you-can-learn-from-shareholder-letters.touch
Hirschberg, J. B., McKeown, K., Passonneau, R., Elson, D. K., & Nenkova, A. (2005). Do summaries help? A task-based evaluation of multi-document summarization. Retrieved from https://academiccommons.columbia.edu/doi/10.7916/D87370BC
Hobler, D. (2017). A functional text summarizer that adapts to the times. Retrieved from http://techsophist.net/a-functional-text-summarizer-that-adapts-to-the-times/
Hovy, E., & Lin, C. Y. (1998, October). Automated text summarization and the SUMMARIST system. In Proceedings of a workshop held at Baltimore, Maryland, October 13-15, 1998 (pp. 197-214). Stroudsburg, PA: Association for Computational Linguistics.
Infosys. (2018). Amplifying human potential: Towards purposeful artificial intelligence. Retrieved from https://www.infosys.com/aimaturity/
Jha, S. (2018). The impact of AI on business leadership and the modern workforce. Retrieved from https://www.techemergence.com/the-impact-of-ai-on-business-leadership-and-the-modern-workforce/
Laskin, A. V. (2018). The narrative strategies of winners and losers: Analyzing annual reports of publicly traded corporations. International Journal of Business Communication, 55, 338-356.
Lin, C. Y. (2004, June). Looking for a few good metrics: Automatic summarization evaluation: How many samples are enough? In Proceedings of NTCIR Workshop 4, Tokyo, Japan (pp. 1-10). Retrieved from https://pdfs.semanticscholar.org/0996/e937a14f6faf34a3ce39fa537189e12b1ef7.pdf
Liu, F., & Liu, Y. (2009, August). From extractive to abstractive meeting summaries: Can it be done by sentence compression? In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers (pp. 261-264). Singapore: Association for Computational Linguistics.
Innovations, developments, and applications of semantic web and information systems. Hershey, PA: IGI Global.
Mani, I. (2001). Automatic summarization (Vol. 3). Amsterdam, Netherlands: John Benjamins.
Metz, C. (2016, January 25). The rise of the artificially intelligent hedge fund. Wired. Retrieved from https://www.wired.com/2016/01/the-rise-of-the-artificially-intelligent-hedge-fund/
Murphy, G. C., & Notkin, D. (1996). Lightweight lexical source model extraction. ACM Transactions on Software Engineering and Methodology, 5, 262-292.
Narula, G. (2018). Everyday examples of artificial intelligence and machine learning. Retrieved from https://www.techemergence.com/everyday-examples-of-ai/
Nenkova, A. (2006, September). Summarization evaluation for text and speech: Issues and approaches. Paper presented at the Ninth International Conference on Spoken Language Processing, Pittsburgh, PA. Retrieved from http://www.cis.upenn.edu/~nenkova/papers/sumEval.pdf
Nenkova, A., & McKeown, K. (2011). Automatic summarization. Foundations and Trends in Information Retrieval, 5, 103-233.
Neto, J. L., Freitas, A. A., & Kaestner, C. A. (2002, November). Automatic text summarization using a machine learning approach. In Brazilian Symposium on Artificial Intelligence (pp. 205-215). Berlin, Germany: Springer.
Nyzam, V., Gatto, N., & Brossard, A. (2017). Automatically summarize online: Demonstration of a multi-document summary web service. In Proceedings of the 24th Conference on the Automatic Processing of Natural Languages (TALN) (p. 30).
Paulus, R., Xiong, C., & Socher, R. (2017). A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Retrieved from https://arxiv.org/abs/1705.04304
Penrose, J. M. (2008). Annual report graphic use: A review of the literature. Journal of Business Communication, 45, 158-180.
Saggion, H., & Poibeau, T. (2012). Automatic text summarization: Past, present and future. In T. Poibeau, H. Saggion, J. Piskorski, & R. Yangarber (Eds.), Multi-source, multilingual information extraction and summarization (pp. 3-21). Berlin, Germany: Springer-Verlag.
Shams, R., Hashem, M. M. A., Hossain, A., Akter, S. R., & Gope, M. (2010, May). Corpus-based web document summarization using statistical and linguistic approach. In 2010 International Conference on Computer and Communication Engineering (ICCCE) (pp. 1-6). Piscataway, NJ: IEEE.
Smith, S. A., Patmos, A., & Pitts, M. J. (2018). Communication and teleworking: A study of communication channel satisfaction, personality, and job satisfaction for teleworking employees. International Journal of Business Communication, 55, 44-68.
Sobowale, J. (2016, April). How artificial intelligence is transforming the legal profession. ABA Journal. Retrieved from www.abajournal.com/magazine/article/how_artificial_intelligence_is_transforming_the_legal_profession/
Steinberger, J., & Jezek, K. (2009). Evaluation measures for text summarization. Computing and Informatics, 28, 1001-1026.
Torres-Moreno, J. M. (Ed.). (2014). Automatic text summarization. Hoboken, NJ: Wiley.
Vozzo, P. (2016, March). How to write your annual letter to shareholders. Baltimore, MD: Westwicke Partners. Retrieved from https://westwickepartners.com/2016/03/how-to-write-your-annual-letter-to-shareholders/
Williams, C. (2008). Toward a taxonomy of corporate reporting strategies. Journal of Business Communication, 45, 232-264.
at the University of Alabama. He is a longtime supporter of the Association for Business Communication and a past recipient of the Kitty O. Locker Award.
Artificial intelligence in healthcare: a review on predicting clinical needs
Djihane Houfani, Sihem Slatnia, Okba Kazar, Hamza Saouli and Abdelhak Merizig
LINFI Laboratory, University of Biskra, Biskra, Algeria
ABSTRACT
Artificial intelligence is revolutionizing the world. In recent decades, it has been applied in almost all fields, especially in medical prediction. Researchers in artificial intelligence have exploited predictive approaches in the medical sector because of their vital importance in the process of decision making. Medical prediction aims to estimate the probability of developing a disease, to predict survivability, and to predict the spread of a disease in an area. Prediction is at the core of modern evidence-based medicine, and healthcare is one of the largest and most rapidly growing segments of AI. The application of technologies such as genomics, biotechnology, wearable sensors, and AI makes it possible to:
(1) increase the availability of healthcare data, advance analytics techniques, and lay the foundation of precision medicine;
(2) progress in detecting pathologies and avoid subjecting patients to intrusive examinations;
(3) adapt the diagnosis and therapeutic strategy to the patient's needs, environment, and way of life.
In this review, an overview of methods applied to the management of diseases is presented. The most used methods are artificial intelligence methods such as machine learning and deep learning techniques, which have improved diagnosis and prognosis efficiency.
ARTICLE HISTORY
Received 6 March 2020; accepted 26 December 2020

KEYWORDS
Predictive medicine; artificial intelligence; prediction; healthcare; diagnosis; prognosis; breast cancer; cardiovascular diseases
1. Introduction
The healthcare domain is facing many challenges. In particular, handling large amounts of data (Big Data) will be a critical issue because of its sensitivity. These data are also growing continuously and becoming more complex, which increases diagnosis time and costs and affects most healthcare providers and patients [1]. Predictive medicine is a field of medicine that consists of determining the probability of disease. Its main role is to decrease the impact upon the patient, such as by preventing mortality or limiting morbidity. Despite the several proposed solutions, medical prediction remains a challenging task and demands a lot of effort. This is attributed to its vital importance in decision making. The main goals of predictive medicine are: (i) the practice of collecting and cataloguing characteristics of patients (big data analytics) [2]; (ii) analyzing those data to predict the patient's individual risk for an outcome of interest; (iii) predicting which treatment will be most effective in which individual, and then intervening before the outcome occurs.

Actually, medical informatics is at the junction of the disciplines of medicine, information technology, and artificial intelligence tools. Both of these concepts play a crucial role in advancing the science of quality measurement. Artificial intelligence technologies provide multiple services. They are used to improve accuracy, efficiency, and public health, and to maintain the privacy and security of patient health information. The rest of the paper is organized as follows. Section 2 introduces the predictive medicine domain. Section 3 describes some proposed works in the medical prediction domain. Section 4 elaborates a comparative study of the described works.
powerful new tools by exploiting artificial intelligence technology and biology techniques.
3. Literature review
Janghel et al. [4] developed a system for the diagnosis, prognosis, and prediction of breast cancer (BC) using ANN models to assist doctors. Four neural network models were used to implement this system: the Back Propagation Algorithm (MLP), Radial Basis Function networks (RBF), Learning Vector Quantization (LVQ), and the Competitive Learning network (CL). LVQ gave the best accuracy on the testing data set. However, the experiments in this work were limited to a single database with limited attributes for breast cancer.
Chaurasia and Pal [5] proposed a diagnosis system for detecting BC based on three data mining techniques: RepTree, RBF network, and simple logistic. These algorithms were used to predict the survivability rate on a breast cancer data set. The three classification techniques were compared to find the most accurate one for predicting the cancer survivability rate. The data used in this study were provided by the University Medical Centre, Institute of Oncology, Ljubljana, Yugoslavia. The authors used the WEKA software to implement the machine learning algorithms.
The objective of Zheng et al. [6] was to diagnose breast cancer by extracting tumor features. The authors developed a hybrid of the K-means and SVM algorithms to extract useful information and diagnose the tumor. The K-means algorithm was utilized to recognize the hidden patterns of the benign and malignant tumors separately. Then, an SVM was used to obtain a new classifier.
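A minimal sketch of that kind of K-means/SVM hybrid appears below; it is our own simplification (clustering each class separately, then training an SVM on distances to the cluster centers), not the authors' exact pipeline. The Wisconsin breast cancer data shipped with scikit-learn stands in for WBCD.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Learn the hidden patterns of each class separately with K-means.
    centers = np.vstack([
        KMeans(n_clusters=3, n_init=10, random_state=0)
            .fit(X_tr[y_tr == c]).cluster_centers_
        for c in (0, 1)
    ])

    # Re-express every sample as its distances to all class centers,
    # then train an SVM on this membership-style representation.
    def to_features(A):
        return np.linalg.norm(A[:, None, :] - centers[None, :, :], axis=2)

    clf = SVC(kernel="rbf", gamma="scale").fit(to_features(X_tr), y_tr)
    print("test accuracy:", clf.score(to_features(X_te), y_te))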
Karabatak and Ince [7] proposed an automatic diagnosis system for detecting breast cancer based on association rules (AR) and a neural network. This method consists of two stages. In the first stage, association rules were used to reduce the dimension of the input feature vector. In the second stage, a neural network took these reduced inputs and classified the breast cancer data. This method worked well; however, it performs poorly if the features are not chosen well.
Seera and Lim [8] proposed a hybrid intelligent system based on the Fuzzy Min-Max neural network, the Classification and Regression Tree, and the Random Forest (RF) model for undertaking medical data classification problems. This system had two important practical implications in the domain of medical decision support: accuracy and the ability to provide explanation and justification for its predictions. The results were evaluated using three benchmark medical data sets.
Nilashi et al. [9] developed a knowledge-based system for the classification of breast cancer disease using Expectation Maximization (EM), Classification and Regression Trees (CART), and Principal Component Analysis (PCA). The proposed system can be used as a clinical decision support system to assist medical practitioners in healthcare practice.
Nguyen et al. [10] proposed a computer-aided diagnostic system to distinguish benign breast tumors from malignant ones. Their method consists of two stages, in which a backward-elimination approach to feature selection and the RF learning algorithm are hybridized. The average classification accuracy obtained in the test phase was between 99.70% and 99.82% on the Wisconsin Breast Cancer Diagnosis Dataset (WBCDD) and the Wisconsin Breast Cancer Prognostic Dataset (WBCPD). This result indicated that the proposed method can be applied to other breast cancer problems with different data sets, especially ones with more training data. However, RF becomes slow and ineffective for real-time predictions when a large number of trees are generated.
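The general recipe can be sketched with scikit-learn, using recursive feature elimination as a stand-in for the authors' backward elimination step; the dataset and parameter choices here are illustrative assumptions, not the study's configuration.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    X, y = load_breast_cancer(return_X_y=True)

    # Backward elimination: repeatedly drop the least important
    # features, then classify with a random forest on the survivors.
    model = make_pipeline(
        RFE(RandomForestClassifier(n_estimators=100, random_state=0),
            n_features_to_select=10, step=2),
        RandomForestClassifier(n_estimators=100, random_state=0),
    )
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())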
Abdel-Zaher and Eldeib [11] developed a computer-aided diagnosis (CAD) scheme for the detection of breast cancer using a deep belief network (DBN) unsupervised path followed by a back-propagation supervised path. The proposed system was tested on the Wisconsin Breast Cancer Dataset (WBCD) and gave an accuracy of 99.68%. However, this approach was computationally expensive.
Thein and Tun [12] proposed a breast cancer classification approach based on the Wisconsin Diagnostic and Prognostic Breast Cancer data and the classification of different types of breast cancer datasets. The proposed system implemented the island-based training method to obtain better accuracy and less training time by using and comparing two different migration topologies. However, in this method the same parameters may not guarantee the globally optimal solution.
Bhardwaj and Tiwari [13] proposed a GONN algorithm for solving classification problems. This algorithm was used to classify breast cancer tumors as benign or malignant. To demonstrate their results, the authors took the WBCD database from the UCI Machine Learning Repository and compared the classification accuracy, sensitivity, specificity, confusion matrix, ROC curves, and AUC under the ROC curves of GONN with a classical model and a classical back-propagation model. However, in this algorithm only the crossover and mutation operators were improved, and it was applied only on the WBCD database.
Dheeba et al. [14] proposed a new classification approach for the detection of breast abnormalities in digital mammograms using a Particle Swarm Optimized Wavelet Neural Network (PSOWNN). The proposed work was based on extracting Laws texture energy measures from the mammograms, classifying the suspicious regions by applying a pattern classifier, and applying the approach to a real clinical database. However, the PSOWNN method suffers from difficulty in finding its optimal design parameters.
Ramos-Pollán et al. [15] proposed and evaluated a method to design mammography-based machine learning classifiers (MLC) for breast cancer diagnosis. This method allowed breast lesions to be characterized according to BI-RADS classes (grouped as benign and malignant). This approach gave good accuracy, but it was evaluated on only one database.
Litjens et al. [16] explored deep learning to improve the objectivity and efficiency of histopathologic slide analysis. The authors applied convolutional neural networks to digitized histopathology in two different experiments: prostate cancer detection in hematoxylin and eosin (H&E)-stained biopsy specimens, and identification of metastases in sentinel lymph nodes obtained from breast cancer patients. This method gave accurate results, but it showed some detection errors in the prostate cancer experiment, and the data were extracted from a single center. The approach performed well in terms of accuracy but was computationally expensive.
Wang et al. [17] proposed a deep learning based approach for detecting metastatic breast cancer in whole-slide images of sentinel lymph nodes. This approach was tested on the Camelyon16 dataset. The proposed approach improved the reproducibility, accuracy, and clinical value of pathological diagnoses; however, it was computationally expensive.
González-Briones et al. [18] designed a multi-agent system to manage information from expression arrays. In this system, different data mining techniques and databases were used to analyze expression profiles; its aim was to identify genes that show differences between samples from younger and older patients, in order to discover why older women respond better to the treatment. The system identified genes that can be therapeutic targets. However, for the best results it is necessary to check whether the gene in question is over- or under-expressed.
Cruz-Roa et al. [19] proposed a deep learning based tool that employs a convolutional neural network (CNN) to automatically detect the presence of invasive tumors on digitized images. This approach was tested on data from different sources. However, with this method some breast cancer regions were incorrectly classified.
Makwana and Patel [20] proposed a predictive model for heart disease detection using machine learning and data mining techniques. The proposed approach combined Naive Bayes (NB) and a Genetic Algorithm (GA) to classify heart diseases. Data were collected from the Cleveland Heart Disease Data set (CHDD) available in the UCI Repository. Nonetheless, this model could not predict specific heart diseases.
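A compact sketch of pairing Naive Bayes with an attribute-search step is shown below. Univariate selection is used only as a stand-in for the genetic algorithm, and the synthetic table merely mimics the shape of the Cleveland data (13 attributes); both are our own assumptions.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in for the 303-patient, 13-attribute CHDD table.
    X, y = make_classification(n_samples=303, n_features=13,
                               n_informative=6, random_state=0)

    # Select a subset of attributes, then classify with Naive Bayes.
    model = make_pipeline(SelectKBest(f_classif, k=6), GaussianNB())
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())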
Vignon-Clementel et al. [21] proposed a 3D simulation approach for blood flow and arterial pressure. This method has been applied to calculate hemodynamic quantities in various physiologically relevant cardiovascular models, including patient-specific examples, to study the non-periodic flow phenomena often seen in normal subjects and in patients with acquired or congenital cardiovascular disease. However, it was difficult to measure pressures and flow rates in vivo simultaneously, and this was feasible in only a very limited number of research cases. Furthermore, the vessel wall displacements were overestimated because of the resistance boundary condition.
Subanya et al. [22] used a meta-heuristic algorithm (artificial bee colony) to determine the subset of optimal features with the best classification accuracy in the diagnosis of cardiovascular disease. Data were taken from the UCI repository (a database of cardiovascular diseases).
Shaikh et al. [23] used ANNs to predict the medical prescription for heart disease. This work included detailed information about the patient's symptoms and the pretreatment that was done. Doctors can also use this web-based tool for the diagnosis of heart disease using the radial basis function. The outputs of this system were compared with the prescriptions of doctors, and the results were satisfactory.
Singh et al. [24] applied a Structural Equation Model (SEM) to identify the strength of relationships among variables considered related to the cause of cardiovascular diseases (CVDs), and a Fuzzy Cognitive Map (FCM) to evaluate the obtained results in a predictive system that helps detect people who are at risk of developing CVDs. In this study, data were extracted from the Canadian Community Health Survey (CCHS) data source. However, the authors did not use enough attributes to obtain a very accurate model.
Narain et al. [25] proposed a predictive system for CVDs using a quantum neural network (QNN) for machine learning. Data were extracted from 689 patients showing symptoms of CVD and from the dataset of 5,209 CVD patients of the Framingham study. This system was experimentally evaluated and compared with the Framingham risk score (FRS). The proposed system predicted CVD risk with high accuracy and was able to update itself over time.
Venkatalakshmi et al. [26] designed and developed a diagnosis and prediction system for heart diseases. In this system, prediction was based on two algorithms, DT and NB, executed with the Weka tool; the dataset consisted of attributes and values collected from the UCI Machine Learning Repository, a repository of databases, domain theories, and data generators. In order to improve efficiency and accuracy, a genetic algorithm was used as an optimization process. In this system, a large amount of data was used; this must be reduced to consider only the subset of attributes sufficient for heart disease prediction.
Boden et al. [27] proposed a mathematical method to predict the probability of surgery prior to the first visit, based on a sample of 8,006 patients with low back pain. Independent risk factors for undergoing spinal surgery were identified using univariate and multivariate statistical analysis, and the Spine Surgery Likelihood (SSL) model was created using a random sample of 80% of the total patients in the cohort and validated on the remaining 20%. However, this method was unable to track patients who underwent surgery in a different facility and who, therefore, may have been misclassified in the non-surgical group.
Søreide et al. [28] proposed an approach that used an Artificial Neural Network (ANN), specifically a multilayer perceptron (MLP), to predict the mortality of patients with perforated peptic ulcer. The input to this approach was a sample of patients analyzed with the Statistical Package for the Social Sciences (IBM SPSS v. 21 for Mac). Its principle was to propose three MLP models and select the model with the optimal performance. However, in this kind of approach the intervention of a human expert is essential for the collection of data, and the garbage-in, garbage-out problem can exist.
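The "train several MLPs and keep the best" protocol can be sketched as follows; this is an illustrative Python analogue (the study itself worked in SPSS), with a synthetic table standing in for the patient sample.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for the perforated-peptic-ulcer cohort.
    X, y = make_classification(n_samples=500, n_features=12, random_state=0)

    # Fit three candidate MLP architectures; keep the best by CV score.
    scores = {}
    for hidden in [(16,), (32, 16), (64, 32, 16)]:
        model = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000,
                          random_state=0),
        )
        scores[hidden] = cross_val_score(model, X, y, cv=5).mean()

    best = max(scores, key=scores.get)
    print("best architecture:", best, "CV accuracy:", round(scores[best], 3))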
Nyssa et al. [29] proposed a predictive model of rabies in Tennessee based on spatial analysis. The proposed method consisted of:
(1) data acquisition from the Tennessee Health Department;
(2) data processing using ArcGIS software to build the predictive model;
(3) spatial analysis using the Fragstats and Circuitscape software.
The result of this system was a set of models (maps), such as distribution models, a density model, and so on. However, it did not allow real-time disease surveillance and was not efficient for areas with large populations.
Devi et al. [30] described a distributed e-health system for the automatic diagnosis of a patient's situation based on his or her data, without the participation of a doctor. This service was provided over the Internet. When a patient's situation changes, the system automatically alerts the doctor. The system was implemented using a Multi-Agent System (MAS) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). The different agents in the system were in different places and used asynchronous communication to communicate with each other.
Das et al. [31] presented an approach consisting of a hybridization of GA, harmony search algorithms (HAS), and support vector machines (SVM) for the selection of informative genes. However, heuristic methods depend on the problem and are generally based on a local optimum, so they can fail to reach the globally optimal solution.
Sahebi et al. [32] proposed a feature selection method based on a genetic algorithm. To evaluate the subsets of selected features, the k-nearest neighbors (KNN) classifier was used, validated on a data set from the UCI database.
Razzaghi et al. [33] used imbalanced classification techniques: NB, Radial Basis Function Neural Network (RBFNN), 5-nearest neighbors, Decision Trees (DT), SVMs, and Logistic Regression (LR) to identify the complications of bariatric surgery for each patient. The combination of classification methods made it possible to achieve higher performance measures (Figure 1).
3.1. Breast cancer prediction and diagnosis
In this section, we discuss research that used different AI methods to manage breast cancer disease. Table 1 summarizes the reviewed work dealing with breast cancer disease.

Figure 1. Flow diagram that summarizes the reviewed research.
Table 1. Summary of the reviewed research using different techniques to manage breast cancer disease.

Works | Objective | Method | Data | Result | Limitations
Janghel et al. [4] | Diagnosis (malignant and benign cell classification) | ANN (application of 4 methods) | WBCD (collected data) | Best classification method (LVQ) | Use of one dataset with limited attributes
Chaurasia et al. [5] | Diagnosis/prognosis (survivability prediction) | Data mining (RepTree, RBF network, simple logistic) | University Medical Centre, Institute of Oncology, Ljubljana, Yugoslavia | Best method (simple logistic) | Use of one dataset with limited attributes
Zheng et al. [6] | Diagnosis | K-means and SVM classifier | WBCD (table: attributes-values) | Feature selection for tumor classification | Not implemented on a large-scale sparse data set
Karabatak et al. [7] | Diagnosis | AR and neural network | WBCD (table: attributes-values) | Tumor classification | Applied on one dataset
Seera et al. [8] | Medical data classification | Fuzzy Min-Max NN, Classification and Regression Tree, RF model | WBCD, Pima Indians Diabetes, and Liver Disorders from the UCI Repository of Machine Learning | Undertaking medical data classification problems | Good
Nilashi et al. [9] | Diagnosis | EM for data clustering; fuzzy logic for data classification; PCA to solve the multi-collinearity problem; CART for automatic fuzzy rule generation | WBCD (table: attributes-values); mammographic mass dataset | Tumor classification | EM fails on high-dimensional data sets due to numerical precision problems
Nguyen et al. [10] | Diagnosis and prognosis | Feature selection; RF classifier | WBCDD and WBCPD | Tumor classification | RF becomes slow and ineffective for real-time predictions when a large number of trees are generated
Abdel-Zaher et al. [11] | Diagnosis | DBN (unsupervised) for pre-training; supervised back propagation for classification | WBCDD | Tumor classification | Computationally expensive
Thein et al. [12] | Diagnosis | Differential evolution algorithm (for training); parallelism | WBCDD | A neural network for tumor classification | Same parameters may not guarantee the globally optimal solution
Bhardwaj et al. [13] | Diagnosis | GONN | WBCDD | Tumor classification | Only crossover and mutation operators are improved; applied on one dataset
Dheeba [14] | Diagnosis | PSOWNN | Mammogram screening center (real image data) | BC detection | Dependency on initial point and parameters; difficulty in finding optimal design parameters
Ramos-Pollán et al. [15] | Diagnosis | Machine learning classifiers | BCDR | ML classifiers | Evaluated on one database
Litjens et al. [16] | Diagnosis | Deep learning (CNN) | Collected patient specimens | Histopathologic slide analysis | Computationally expensive
Wang et al. [17] | Diagnosis | Deep learning | Camelyon16 dataset | Cancer metastases identification | Computationally expensive
González-Briones et al. [18] | Prognosis | MAS; deep learning | Samples provided by Salamanca Cancer Institute | Gene selection | Computationally expensive
Cruz-Roa et al. [19] | Diagnosis | CNN | Digital images from different institutions | Invasive breast cancer classification | Some classification errors
The first column refers to the investigated work; the second column is the objective of the work; the third column is the method used to handle the disease; the fourth column refers to the dataset used in the paper; the fifth column contains the results; and the last column refers to the limitations of the proposed work.
3.1.1. Discussion
Breast cancer is the most common cause of women's deaths worldwide [33]. It is a result of mutations, anarchic division, and abnormal changes of cells.

AI applies algorithms to large volumes of healthcare data to assist clinical practice. These algorithms have shown their ability to improve accuracy by learning and self-correcting.

After observing the reviewed research on managing breast cancer disease, we can notice that machine learning techniques are widely used in diagnosis, tumor classification, and breast cancer prediction to assist physicians in the decision-making process and in early detection. The most used dataset is WBCD from the UCI Repository. These works show good performance in terms of accuracy. However, some technical problems should be considered:

(1) Computational and memory expense.
(2) Data availability: training AI systems requires large amounts of structured and comprehensive data. However, the available data are fragmented, incomplete, and unstructured; these problems increase the risk of error.
(3) Overfitting: this occurs when the model fits the training data too closely and has difficulty generalizing to new or unseen data (validation data); see the sketch after this list.
(4) Reproducibility: a study is reproducible when others can replicate the results using the same algorithms, data, and methodology.
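Point (3) is easy to demonstrate on the very dataset most of these works use; the sketch below (our illustration, using scikit-learn's copy of the Wisconsin data) compares training accuracy against held-out accuracy for an unconstrained decision tree.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                random_state=0)

    # An unconstrained tree memorizes the training set; the gap between
    # the two scores is the overfitting symptom described in point (3).
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    print("train accuracy:", tree.score(X_tr, y_tr))
    print("validation accuracy:", tree.score(X_val, y_val))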
3.2. Other diseases
Research mainly concentrates on diseases that are leading causes of death. We can classify them into the following types: cardiovascular diseases, cancers, viral diseases, and nervous system diseases; for these, early diagnosis and prognosis are fundamental to prevent the deterioration of patients' health status.

Table 2 summarizes the reviewed work dealing with different diseases. The first column refers to the investigated work; the second column is the tackled disease; the third column is the method used to handle the disease; the fourth column refers to the objective of the paper; the fifth column contains the dataset used; and the last column refers to the achieved performance of the proposed work.
Table 2. Summary of the reviewed research using different techniques to manage multiple diseases.

Works | Disease | Method | Objectives | Input | Performance
Makwana et al. [20] | CVD | ML and data mining | Heart disease detection | Cleveland Heart Disease Data set | Good, but can be improved
Vignon et al. [21] | Cardiovascular system | Mathematical equations; analog electrical circuit | 3D simulation approach for blood flow and arterial pressure | Measured data | Validation proven on in vitro and in vivo data
Subanya et al. [22] | CVD | Meta-heuristic algorithm (bee colony) | CVD classification | UCI repository | Good
Hannan et al. [23] | CVD | ANN | Medical prescription of heart disease prediction | Patient information | Good
Singh et al. [24] | Cardiovascular disease | SEM and FCM | Building a cardiovascular disease predictive model | CCHS dataset | Can be improved
Narain et al. [25] | CVD | QNN | Risk of CVDs prediction | Patients with CVDs | Good, but can be improved
Venkatalakshmi et al. [26] | CVD | DT and NB | Heart disease prediction | Attributes and values from the UCI database | Good, but can be improved
Boden et al. [27] | Orthopedic surgery | Mathematical method | Prediction of the probability of surgery prior to the first visit | Patient-reported data | Low level of evidence (4)
Søreide et al. [28] | Gastric disease | ANN modeling | Mortality prediction for patients with perforated peptic ulcer | |
Nyassa et al. [29] | Viral disease | Spatial analysis | Rabies prediction in Tennessee | Tennessee's Health Department | Good accuracy
Devi et al. [30] | Neck and arm pain disease | MAS and ANFIS | Automatic diagnosis of patients | Patient-reported data | Good
Das et al. [31] | Informative gene selection | GA, HAS and SVM | Selection of informative genes | Gene expression dataset | Good
Sahebi et al. [32] | Feature selection method | GA | Feature selection and classification optimization | UCI Arrhythmia database | Good
Razzaghi et al. [33] | Bariatric surgery | Imbalanced classification techniques | Identify bariatric surgery's complications | The Premier Healthcare Database | Good
3.2.1. Discussion

The use of artificial intelligence techniques in medical prediction to manage different diseases shows a performance improvement in terms of accuracy, speed, and interoperability. Machine learning techniques are suitable for the management of multiple diseases (Figure 2). Furthermore, their use makes disease management more reliable by reducing diagnostic and therapeutic errors and by extracting useful information from large amounts of data to predict health outcomes. Multiple kinds of data are used in this research, such as medical images, patient-reported data, datasets from the UCI Repository, and several public datasets.
3.3. Application of AI in healthcare: general challenges

This paper shows that artificial intelligence brings important developments to the healthcare field; however, substantial research challenges remain:

(1) Data quality and availability: acquiring large amounts of high-quality clinical datasets is a very difficult process, because they are in multiple formats, fragmented across different systems, and generally have limited access [34].
(2) Security and privacy: several researchers have been interested in this concept and have proposed work to manage data security [35], because it is one of the biggest challenges facing the developers of AI systems. The requirement for large amounts of data from many patients may affect those patients' data privacy.
(3) Bias: AI systems learn to make decisions based on training data, which can include biases.
(4) Computational cost: most reviewed works are computationally expensive, which is not beneficial for either the clinician or the patient.
(5) Interpretability: the most important task in the healthcare domain is evaluating and validating the proposed approach so that it is accepted by the community.
(6) Injuries and errors: an AI system may sometimes be wrong, failing in disease prediction, in a drug recommendation, or in predicting the response of a patient to a specific treatment. These failures can cause patient injury or other healthcare problems.
4. Conclusion
Medical prediction is a very important challenge for clinicians because it has a direct influence on their daily practice. In the last decade, the death rate has increased significantly, which requires methods and tools for the accurate and early detection of diseases. While going through the literature, we noticed that researchers are interested in medical prediction, especially in the diagnosis and prognosis of breast cancer, using artificial intelligence methods and approaches such as ANNs, deep learning, data mining, and so on. The authors in the literature proposed systems and compared them to other existing works. We can note that their approaches are efficient in terms of accuracy; however, most of them are time-consuming in the training phase. We can also notice that very few of these research works have actually been integrated into clinical practice.
In this paper, we discussed the biggest challenges facing the application of AI in the healthcare field. To handle these challenges, several solutions can be proposed:

(1) High-quality data generation and availability: to build an efficient AI system, it is important to work with good datasets; that is why it is important to create high-quality databases accessible to researchers and AI system developers in a manner consistent with protecting patient privacy. Blockchain technology can be used to secure personal and medical data [36].
(2) Quality supervision: good training and validation of AI systems will help address the risk of errors and patient injury.
(3) Good exploitation of AI methods: hybridizing deep learning methods with optimization algorithms [37], and parallelization, could be powerful for time and cost reduction. Big data analytics also offers several opportunities in this field [38].
Figure 2. Techniques used in the medical literature.

The techniques used in the reviewed works include mathematical methods, evolutionary computing, case-based reasoning, fuzzy logic, ANNs, data mining, machine learning, deep learning, and intelligent agents. However, medical prediction is not yet widespread, due to several constraints. Hence, comprehensive research needs to be done in this sphere, with an eye towards developing hybrid techniques that could be employed in predictive medicine. The selection of the appropriate technique is important for developing and implementing disease diagnosis systems.
As a perspective of this work, we aim to design our medical predictive approach based on deep reinforcement learning and genetic algorithms to improve breast cancer diagnostic performance. Furthermore, to overcome big data problems, the number of features in the dataset must be reduced, which helps ensure the quality of data (QoD). The advantage of developing deep learning techniques for the management of breast cancer disease will be reached by applying them as support tools that help physicians in diagnosis, prognosis, and treatment. By using this type of system, reading variability among physicians will be eliminated; besides, quicker and more accurate diagnoses will result.

Despite the several challenges facing the application of AI in the healthcare field, it is very promising for decision-making aid, physician and patient medical support, and prediction, and we believe there are still significant perspectives on this topic.
Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes on contributors

Djihane Houfani received the Licence and Master degrees in Computer Science from the University of Biskra, Algeria, in 2015 and 2017, respectively. She is now a PhD student in artificial intelligence at the University of Biskra, and her current research interests include medical prediction, deep learning, multi-agent systems, and optimization.
Sihem Slatnia was born in the city of Biskra, Algeria. She completed her studies at the University of Biskra, Algeria, in the Computer Science Department, obtaining the engineering diploma in 2004 for the work "Diagnostic based model by Black and White analyzing in Background Petri Nets". She then obtained the Master diploma in 2007 (option: artificial intelligence and advanced information systems) for the work "Evolutionary Cellular Automata Based-Approach for Edge Detection", and the PhD degree from the same university in 2011 for the work "Evolutionary Algorithms for Image Segmentation based on Cellular Automata". Presently she is an associate professor in the Computer Science Department of Biskra University. She is interested in artificial intelligence, emergent complex systems, and optimization.
Okba Kazar is a professor in the Computer Science Department of the University of Biskra, where he helped to create the LINFI laboratory. He is a member of international conference program committees and of the editorial boards of various journals. His research interests are artificial intelligence, multi-agent systems, web applications, and information systems.
Hamza Saouli received the Master and Doctorate degrees in Computer Science from the University of Mohamed Khider Biskra (UMKB), Algeria, in 2010 and 2015, respectively. He has been a university lecturer since 2015, and his research interests include artificial intelligence, web services, and cloud computing.
Abdelhak Merizig obtained his Master degree in 2013 from Mohamed Khider University, Biskra, Algeria, where he works in the artificial intelligence field. He obtained his PhD degree from the same university in 2018. He is now a university lecturer in the Computer Science Department of Biskra University and a member of the LINFI Laboratory at the same university. His research interests include multi-agent systems, service composition, cloud computing, and the Internet of Things.
ORCID
Okba Kazar http://orcid.org/0000-0003-0522-4954
References

[1] Usman Ahmad M, Zhang A, Goswami M, et al. A predictive model for decreasing clinical no-show rates in a primary care setting. Int J Healthcare Manag. 2019;11:1-8.
[2] Kamble SS, Gunasekaran A, Goswami M, et al. A systematic perspective on the applications of big data analytics in healthcare management. Int J Healthcare Manag. 2018;12:226-240.
[3] Hood L, Flores M. A personal view on systems medicine and the emergence of proactive P4 medicine: predictive, preventive, personalized and participatory. New Biotechnol. 2012;6(23):613-624.
[4] Janghel RR, Shukla A, Tiwari R, et al. Breast cancer diagnosis using artificial neural network models. The 3rd International Conference on Information Sciences and Interaction Sciences; 2010.
[5] Chaurasia V, Pal S. Data mining techniques: to predict and resolve breast cancer survivability. Int J Comput Sci Mob Com. 2014;3(1):10-22.
[6] Zheng B, Yoon SW, Lam SS. Breast cancer diagnosis based on feature extraction using a hybrid of K-means and support vector machine algorithms. Expert Syst Appl. 2013;41:1476-1482.
[7] Karabatak M, Ince MC. An expert system for detection of breast cancer based on association rules and neural network. Expert Syst Appl. 2009;36:3465-3469.
[8] Seera M, Lim CP. A hybrid intelligent system for medical data classification. Expert Syst Appl. 2013;41:2239-2249.
[9] Nilashi M, Ibrahim O, Ahmadi H, et al. A knowledge-based system for breast cancer classification using fuzzy logic method. Telemat Inform. 2017;34:133-144.
[10] Nguyen C, Wang Y, Nguyen HN. Random forest classifier combined with feature selection for breast cancer diagnosis and prognostic. J Biomed Sci Eng. 2013;06:551-560.
[11] Abdel-Zaher AM, Eldeib AM. Breast cancer classification using deep belief networks. Expert Syst Appl. 2015;46:139-144.
[12] Thein HTT, Tun KMM. An approach for breast cancer diagnosis classification using neural network. Adv Com Int J. 2015;6:1-11.
[13] Bhardwaj A, Tiwari A. Breast cancer diagnosis using genetically optimized neural network model. Expert Syst Appl. 2015;42:4611-4620.
[14] Dheeba J, Singh NA, Selvi ST. Computer-aided detection of breast cancer on mammograms: a swarm intelligence optimized wavelet neural network approach. J Biomed Inform. 2014;49:45-52.
[15] Ramos-Pollán R, Guevara-López MA, Suárez-Ortega C, et al. Discovering mammography-based machine learning classifiers for breast cancer diagnosis. J Med Sys. 2012;36:2259-2269.
[16] Litjens G, Sánchez CI, Timofeeva N, et al. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci Rep. 2016;6:1-11.
[17] Wang D, Khosla A, Gargeya R, et al. Deep learning for identifying metastatic breast cancer. Int Symp Biomed Imaging. 2016:1-6.
[18] González-Briones A, Ramos J, De Paz JF, et al. Multi-agent system for obtaining relevant genes in expression analysis between young and older women with triple negative breast cancer. J Integr Bioinform. 2015;12:1-14.
[19] Cruz-Roa A, Gilmore H, Basavanhally A, et al. Accurate and reproducible invasive breast cancer detection in whole slide images: a deep learning approach for quantifying tumor extent. Sci Rep. 2017;7:1-14.
[20] Makwana A, Patel J. Decision support system for heart disease prediction using data mining techniques. Int J Comput Appl. 2015;117(22):1-5.
[21] Vignon-Clementel IE, Figueroa CA, Jansen KE, et al. Outflow boundary conditions for 3D simulations of non-periodic blood flow and pressure fields in deformable arteries. Comput Methods Biomech Biomed Engin. 2010;13:625-640.
[22] Subanya B, Rajalaxmi RR. Feature selection using artificial bee colony for cardiovascular disease classification. 2014 International Conference on Electronics and Communication Systems (ICECS); IEEE. 2014:1-6.