Communicating an organisation's requirements in a semantically consistent and understandable manner and then reflecting the potential impact of those requirements on the IT infrastructure presents a major challenge among stakeholders. Initial research findings indicate a desire among business executives for a tool that allows them to communicate organisational changes using natural language and a simulation of the IT infrastructure that supports those changes. Building on a detailed analysis and evaluation of these findings, the innovative CRESUS tool was designed and implemented. The purpose of this research was to investigate to what extent CRESUS both aids communication in the development of a shared understanding and supports collaborative requirements elicitation to bring about organisational, and associated IT infrastructural, change. This paper presents promising results that show how such a tool can facilitate collaborative requirements elicitation through increased communication around organisational change and the IT infrastructure.
CRESUS-T: A COLLABORATIVE REQUIREMENTS ELICITATION SUPPORT TOOL (ijseajournal)
Communicating an organisation's requirements in a semantically consistent and understandable manner
and then reflecting the potential impact of those requirements on the IT infrastructure presents a major
challenge among stakeholders. Initial research findings indicate a desire among business executives for a
tool that allows them to communicate organisational changes using natural language and a model of the IT
infrastructure that supports those changes. Building on a detailed analysis and evaluation of these findings,
the innovative CRESUS-T support tool was designed and implemented. The purpose of this research was to
investigate to what extent CRESUS-T both aids communication in the development of a shared
understanding and supports collaborative requirements elicitation to bring about organisational, and
associated IT infrastructural, change. In order to determine the extent to which shared understanding was fostered,
the support tool was evaluated in a case study of a business process for the roll out of the IT software
image at a third level educational institution. Statistical analysis showed that the CRESUS-T support tool
fostered shared understanding in the case study through increased communication. Shared understanding
is also manifested in the creation of two knowledge representation artefacts, namely a requirements model
and the IT infrastructure model. The CRESUS-T support tool will be useful to requirements engineers and
business analysts who have to gather requirements asynchronously.
The mobile phone has received global attention primarily as a personal consumer technology. However, we believe that mobile information technology in general will play a significant role in organisational efforts to innovate current practices and will have a significant economic impact. Enterprise mobility signals new ways of managing how people work together using mobile information technology and will form an integral part of efforts to improve the efficiency and effectiveness of information work. This belief is, however, not reflected in the current selection of books and collections exploring the issue of enterprise mobility. The aim of this paper is to highlight some of the key challenges in the application of mobile information technology to improve organisational efficiency. This is accomplished by comparing and contrasting findings from a selection of 11 empirical studies of enterprise mobility with information technology conducted between 2001 and 2007. The paper argues that the debate so far has largely failed to embed glowing accounts of technological potential in a sound discussion of organisational realities. In particular, there has been a lack of balanced accounts of the implicit and explicit trade-offs involved in mobilising the interaction between members of the workforce.
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS (ijseajournal)
With the massive growth of organisations' files, an archiving system becomes a necessity. A great deal of time is consumed in collecting requirements from the organisation to build an archiving system, and sometimes the resulting system still does not meet the organisation's needs. This paper proposes a domain-based requirements engineering system that efficiently and effectively develops different archiving systems, based on a newly suggested technique that merges two of the most widely used agile methodologies: Extreme Programming (XP) and Scrum. The technique was tested on a real case study. The results show that the time and effort consumed during analysis and design of the archiving systems decreased significantly. The proposed methodology also reduces the system errors that may occur in the early stages of development.
An Investigation of Critical Failure Factors in Information Technology Projects (IOSR Journals)
The rate of failed projects in information technology remains high in comparison with other infrastructure or high-technology projects. The objective of this paper is to determine and present a broad range of potential failure factors arising during the implementation phase and the causes of IS/IT project failure. Challenges must be overcome in order to achieve project goals successfully and avoid failure. In this research study, 12 articles were reviewed as significant contributions towards developing a list of critical failure factors of IT projects.
EFFECTS OF HUMAN FACTOR ON THE SUCCESS OF INFORMATION TECHNOLOGY OUTSOURCING (ijitcs)
Information technology outsourcing is one of the factors affecting the flexibility and dynamics of enterprises in a competitive environment, and the study of the factors affecting its success has always been of interest to business owners and researchers. Professional experience and research results indicate that the success of IT (information technology) outsourcing projects relates to effective knowledge transfer and to human factors. The human factors are influenced by the cultural and environmental context inside and outside the organization; hence, it is necessary to study the effect of these variables in different cultural environments. This study investigates the effect of human factors, including customer motivation and vendor willingness, on the success of IT outsourcing projects. For this purpose, research hypotheses were developed and analyzed using the structural equation method. The results of a field study of 94 companies and organizations show that the findings of this study differ from earlier findings in other countries. Based on the findings, client motivation does not affect knowledge transfer, but vendor willingness does affect the customer's motivation for knowledge transfer. This result can help business owners take appropriate approaches to achieving success in IT outsourcing projects.
There are essential security considerations in the systems used by semiconductor companies like TI. Along with other semiconductor companies, TI has recognized that IT security is highly crucial during web application developers' system development life cycle (SDLC). The challenges faced by TI web developers were consolidated via questionnaires, starting with how risk management and secure coding can be reinforced in the SDLC, and how IT security, PM and SDLC initiatives can be achieved by developing a prototype, which was then evaluated against the aforementioned goals. This study aimed to apply NIST strategies by integrating risk-management checkpoints into the SDLC; to enforce secure coding using a static code analysis tool by developing a prototype application mapped to IT security, project management and SDLC goals; and to evaluate the impact of the proposed solution. This paper discusses how SecureTI was able to satisfy IT security requirements in the SDLC and PM phases.
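As a minimal sketch of the kind of secure-coding checkpoint the study describes (the rule set, rule names and pass/fail threshold here are assumptions for illustration, not SecureTI's actual configuration, and a real gate would invoke a full static analysis tool rather than regular expressions):

```python
import re

# Hypothetical secure-coding rules; placeholders, not a real tool's rule set.
RULES = {
    "hardcoded password": re.compile(r"password\s*=\s*['\"]"),
    "SQL string concatenation": re.compile(r"execute\(.*\+"),
}

def scan(source: str) -> list[str]:
    """Return the names of the rules that the source code violates."""
    return [name for name, pattern in RULES.items() if pattern.search(source)]

def gate(source: str) -> dict:
    """Risk-management checkpoint: block promotion if any rule fires."""
    findings = scan(source)
    return {"passed": not findings, "findings": findings}
```

Wiring a gate like this into each SDLC phase is one way to realise the "risk-management checkpoint" idea: code that fails the scan is not promoted to the next phase.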
Paper sharing: Platform design framework conceptualisation and application (YOU SHENG CHEN)
TECHNOLOGY ANALYSIS & STRATEGIC MANAGEMENT
Authors: Nina Tura, Antero Kutvonen and Paavo Ritala (2017)
In this case, I also use FB Ads Manager to demonstrate a platform framework, so that it can be understood more easily.
Agent Assisted Methodologies have become an important subject of research in advanced Software Engineering. Several methodologies have been proposed, as a theoretical approach, to facilitate and support the development of complex distributed systems. An important question when facing the construction of agent applications is deciding which methodology to follow. To answer this question, a framework with several criteria is applied in this paper to the comparative analysis of existing multiagent system methodologies. The results of the comparison of two of them conclude that those methodologies have not reached a sufficient maturity level to be used by the software industry. The framework has also proved its utility for the evaluation of any kind of Agent Assisted Software Engineering Methodology.
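A criteria-based comparison like the one the paper applies can be sketched as a simple scorecard (the criteria names, methodology labels and scores below are invented placeholders, not the paper's actual framework or data):

```python
# Hypothetical evaluation criteria; each methodology is rated 0-5 per criterion.
CRITERIA = ["notation", "process coverage", "tool support", "maturity"]

def compare(methodologies: dict) -> list:
    """Rank methodologies by their mean score across all criteria, best first."""
    return sorted(
        methodologies.items(),
        key=lambda kv: sum(kv[1][c] for c in CRITERIA) / len(CRITERIA),
        reverse=True,
    )

# Placeholder scores for two anonymous methodologies.
scores = {
    "Methodology A": {"notation": 4, "process coverage": 3, "tool support": 2, "maturity": 2},
    "Methodology B": {"notation": 3, "process coverage": 4, "tool support": 1, "maturity": 2},
}
```

The low "maturity" scores in this sketch mirror the paper's conclusion that the evaluated methodologies are not yet mature enough for industrial use; a real framework would of course define its criteria and weighting far more carefully.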
RECOMMENDATION GENERATION JUSTIFIED FOR INFORMATION ACCESS ASSISTANCE SERVICE... (ijcsit)
Recommendation systems typically provide only the recommendations themselves to users; they do not consider giving a justification for each recommendation. However, a justification allows the user to decide whether or not to accept the recommendation, and it also improves user satisfaction and the relevance of the recommended item. The IAAS recommendation system, which uses advisories to make recommendations, does not provide justifications for them. That is why, in this article, our task consists of helping IAAS users justify their recommendations. To this end, we reviewed related work on architectures and approaches for justifying recommendations in order to identify an architecture and approach suitable for the context of IAAS. From the analysis in this article, we note that none of these approaches uses notices (an IAAS mechanism) to justify recommendations; therefore, existing architectures cannot be used in the context of IAAS. We have consequently developed a new IAAS architecture that deals separately with item filtering and with extraction of the justification that accompanies the item during recommendation generation (Figure 7). We have also improved the notices by adding users' reviews of the items. A user's notice includes the Documentary Unit (DU), the user Group (G), the Justification (J) and the weight (a), noted A=(DU,G,J,a).
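As a hypothetical illustration of the notice tuple A=(DU,G,J,a) described above (the field types and the highest-weight selection rule are assumptions for illustration, not details taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class Notice:
    """A user's notice A = (DU, G, J, a)."""
    documentary_unit: str   # DU: the item the notice refers to
    group: str              # G: the user group that issued the notice
    justification: str      # J: justification text shown to the user
    weight: float           # a: weight of the notice

def justification_for(notices: list, documentary_unit: str):
    """Return the justification of the highest-weighted notice for an item.

    Selecting by maximum weight is an assumed rule for this sketch; the
    paper's actual justification-extraction step may differ.
    """
    matching = [n for n in notices if n.documentary_unit == documentary_unit]
    if not matching:
        return None
    return max(matching, key=lambda n: n.weight).justification
```

Keeping the notice as an explicit structure like this is what lets the architecture handle item filtering and justification extraction separately: the filter works on DU and a, while J travels with the item to the user.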
A DEVELOPMENT FRAMEWORK FOR A CONVERSATIONAL AGENT TO EXPLORE MACHINE LEARNIN... (mlaij)
This study aims to introduce a discussion platform and curriculum designed to help people understand how machines learn. The research shows how to train an agent through dialogue and how to understand the way information is represented using visualization. The paper begins by providing a comprehensive definition of AI literacy based on existing research, and integrates a wide range of subject documents into a set of key AI literacy skills for developing user-centered AI. These functional and structural considerations are organized into a conceptual framework grounded in the literature. The contributions of this paper can be used to initiate discussion and guide future research on AI learning within the computer science community.
EMPIRICAL STUDY OF THE EVOLUTION OF AGILE-DEVELOPED SOFTWARE SYSTEM IN JORDAN... (ijbiss)
Agile research has focused on software development methods and practices; how to take those methods into effective use has received less attention, especially in large organizations, where adopting agile methods is no small task. This paper discusses the adoption of, and level of experience with, agile practices in three software development companies in Jordan. The practices most relied upon at large scale were measuring progress by working code, developer involvement in task estimation, the use of coding standards, the avoidance of continuous overtime, teams developing their own ways of working, limited documentation, and having the team in a single facility. The adoption of test-related agile practices, namely test-first unit testing and test automation, was low. Some agile practices can appear without conscious adoption, simply because developers find them useful. It therefore seems that an initiative aiming at agility may nonetheless neglect important agile practices.
Survey Based Review of Elicitation Problems (IJERA Editor)
Any software development process is a combination of multiple development activities, and each activity has a vital role in the software development cycle. Requirements Engineering is a core branch of Software Engineering; it has many phases, of which the first is Requirements Elicitation, in which requirements are gathered for system development.
This paper provides a literature review of the requirements engineering processes performed in traditional and modern development processes and analyses the problems in the requirements elicitation phase. The problem analysis is based on a survey conducted at a university: a questionnaire posing questions about problems in requirements elicitation was given to final-year computer science students working on their final-year project as a requirement for their degree. A theoretical analysis of the questionnaire responses further clarifies the problems. This problem analysis will help to identify the main problems faced by prospective software developers.
REQUIREMENTS ENGINEERING OF A WEB PORTAL USING ORGANIZATIONAL SEMIOTI... (ijcsit)
The requirements of software are key elements that contribute to the quality of the final system and to user satisfaction. In this work, Requirements Engineering (RE) of web sites is presented from an organizational semiotics perspective: web sites are shown as being part of an organization, with particular practices, rules and views, taking into account stakeholders' various differences and opinions. The main contribution of this paper is to relate an experience, from elicitation to validation, showing how organizational semiotics artifacts were exploited in a collaborative and participatory way in the RE of a web portal. A case study is described in order to demonstrate the feasibility of using such artifacts for RE when we think of the system as part of a social organization.
Our study shows that users can be a very important part of information system development in its various stages. Based on a paper, we present our hypothesis that knowledge provided by users, through information, past experience and learning, can lead to better project management. Establishing a relationship between the user and the information system can improve the design-stage development of the project, and as a result better project performance can often be obtained. Project management knowledge, system analysis skills, programming knowledge and database administration knowledge are some of the areas in which users can contribute. In the development stage, users should participate in the review process to confirm that the integrated design (system design) is carried out effectively by the developers. Users can help to find or expose inappropriate designs as early as possible to reduce unnecessary costs. This paper also focuses on establishing understanding between users and developers.
Distributed Software Development Process, Initiatives and Key Factors: A Syst... (zillesubhan)
The Geographically Distributed Software Development (GSD) process differs from the Collocated Software Development (CSD) process in various technical aspects, and it has been empirically shown that renowned process improvement initiatives applicable to CSD are not very effective for GSD. The objective of this research is to review the existing literature (both academic and industrial) to identify initiatives and key factors which play a key role in the improvement and maturity of a GSD process. To achieve this goal we planned a Systematic Literature Review (SLR) following a standard protocol. Three highly respected sources were selected to search for relevant literature, which resulted in a large number of TOIs (Titles of Interest). An inter-author custom protocol was outlined and followed to shortlist the most relevant articles for review, and data was extracted from this final set of articles. We performed both qualitative and quantitative analysis of the extracted data to obtain the results. The concluded results identify several initiatives and key factors involved in GSD and answer each research question posed by the SLR.
AN OVERVIEW OF EXISTING FRAMEWORKS FOR INTEGRATING FRAGMENTED INFORMATION SYS... (ijistjournal)
The literature shows that several structured integration frameworks have emerged with the aim of facilitating application integration, but the weaknesses and strengths of these frameworks are not known. This paper aims to review these frameworks with a focus on identifying their weaknesses and strengths. To accomplish this, recommended comparison factors were identified and used to compare the frameworks. The findings show that most of these structured frameworks are custom-built for their particular motives: they focus on integrating applications from different sectors within an organization for the purpose of eliminating communication inefficiencies. There is no framework which guides application integrators on the goals of integration, the outcomes of integration, the outputs of integration, or the skills that will be required for the types of applications expected to be integrated. The study recommends further work on integration frameworks, especially on designing an unstructured framework which will support and guide application integrators with consideration of the consumer's surrounding environment.
EFFECTS OF HUMAN FACTOR ON THE SUCCESS OF INFORMATION TECHNOLOGY OUTSOURCINGijitcs
Information technology outsourcing is one of the factors affecting the improvement of flexibility and
dynamics of enterprises in the competitive environment. Also, the study of the factors affecting its success
has been always considered by business owners and the area of research. Professional experiences and
research results consider that the success of IT (Information technology) outsourcing projects relates to the
effective knowledge transfer and human factors. The human factors are influenced by the cultural and
environmental context of the inside and outside of the organization. Hence, it is necessary to study the
effectiveness of these variables in different cultural environments. This study investigates the effect of
human factors including the customer motivation and vendor willingness on the success of IT outsourcing
projects. For this purpose, the research hypotheses were developed and analyzed by the structural equation
method. The result of a field study among 94 companies and organizations show the difference of the
findings of this study with earlier findings in other countries. Based on the findings, the client motivation
doesn’t affect the knowledge transfer but the vendor willingness affects the customer motivation to
knowledge transfer. This result can help the business owners to take appropriate approaches for achieving
success in IT outsourcing projects.
There are essential security considerations in the systems used by semiconductor companies like TI. Along
with other semiconductor companies, TI has recognized that IT security is highly crucial during web
application developers' system development life cycle (SDLC). The challenges faced by TI web developers
were consolidated via questionnaires starting with how risk management and secure coding can be
reinforced in SDLC; and how to achieve IT Security, PM and SDLC initiatives by developing a prototype
which was evaluated considering the aforementioned goals. This study aimed to practice NIST strategies
by integrating risk management checkpoints in the SDLC; enforce secure coding using static code analysis
tool by developing a prototype application mapped with IT Security goals, project management and SDLC
initiatives and evaluation of the impact of the proposed solution. This paper discussed how SecureTI was
able to satisfy IT Security requirements in the SDLC and PM phases.
Paper shareing_Platform design framework conceptualisation and applicationYOU SHENG CHEN
TECHNOLOGY ANALYSIS & STRATEGIC MANAGEMENT
Auth:Nina Tura, Antero Kutvonen and Paavo Ritala(2017)
In this case, I also using FB ads-manager to demo a platform framework that could more understand.
Agent Assisted Methodologies have become an
important subject of research in advance Software Engineering.
Several methodologies have been proposed as, a theoretical
approach, to facilitate and support the development of complex
distributed systems. An important question when facing the
construction of Agent Applications is deciding which
methodology to follow. Trying to answer this question, a
framework with several criteria is applied in this paper for the
comparative analysis of existing multiagent system
methodologies. The results of the comparative over two of them,
conclude that those methodologies have not reached a sufficient
maturity level to be used by the software industry. The
framework has also proved its utility for the evaluation of any
kind of Agent Assisted Software Engineering Methodology.
RECOMMENDATION GENERATION JUSTIFIED FOR INFORMATION ACCESS ASSISTANCE SERVICE...ijcsit
Recommendation systems only provide more specific recommendations to users. They do not consider
giving a justification for the recommendation. However, the justification for the recommendation allows the
user to make the decision whether or not to accept the recommendation. It also improves user satisfaction
and the relevance of the recommended item. However, the IAAS recommendation system that uses
advisories to make recommendations does not provide a justification for the recommendations. That is why
in this article, our task consists for helping IAAS users to justify their recommendations. For this, we
conducted a related work on architectures and approaches for justifying recommendations in order to
identify an architecture and approach suitable for the context of IAAS. From the analysis in this article, we
note that neither of these approaches uses the notices (IAAS mechanism) to justify their recommendations.
Therefore, existing architectures cannot be used in the context of IAAS. That is why,we have developed a
new IAAS architecture that deals separately with item filtration and justification extraction that
accompanied the item during recommendation generation (Figure 7). And we haveimproved the reviews by
adding users’ reviews on the items. The user’s notices include the Documentary Unit (DU), the user Group
(G), the Justification (J) and the weight (a); noted A=(DU,G,J,a).
A DEVELOPMENT FRAMEWORK FOR A CONVERSATIONAL AGENT TO EXPLORE MACHINE LEARNIN...mlaij
This study aims to introduce a discussion platform and curriculum designed to help people understand how
machines learn. Research shows how to train an agent through dialogue and understand how information
is represented using visualization. This paper starts by providing a comprehensive definition of AI literacy
based on existing research and integrates a wide range of different subject documents into a set of key AI
literacy skills to develop a user-centered AI. This functionality and structural considerations are organized
into a conceptual framework based on the literature. Contributions to this paper can be used to initiate
discussion and guide future research on AI learning within the computer science community.
EMPIRICAL STUDY OF THE EVOLUTION OF AGILE-DEVELOPED SOFTWARE SYSTEM IN JORDAN...ijbiss
The focus of agile in software development methods and practices. How to take effective methods in use
have received less attention. Especially in a large organization is not a little to take in agile methods to use.
This paper discusses the adoption and level of experience of the use of agile practices in three companies
in the software development company wide contacts in Jordan. The more practices that relied on largescale
flexibility to measure the progress made by the code work, that the developers efforts to the task of
estimating, to the use of coding standards, and the lack of overtime continuous, has a team to develop their
own operations, to use the limited documentation, and to have the team in one place facility. The adoption
of agile practices of the test, any test of the first unit tests and automated, and low. Some can only appear
agile practices without the adoption of a conscious, because developers find them useful. So it seems that
an emergency operation aimed at agility may also neglect the important agile practices.
Survey Based Reviewof Elicitation ProblemsIJERA Editor
Any software development process is the combination of multiple development activities, and each activity has a vital role in the software development cycle. Requirements Engineering is the main and most fundamental branch of Software Engineering; it has many phases, but the first is Requirements Elicitation, in which requirements are gathered for system development.
This paper provides a literature review of the requirements engineering processes performed in traditional and modern development processes and analyses the problems in the requirements elicitation phase. This problem analysis is based on a survey conducted at a university. A questionnaire posing questions about the problems in requirements elicitation was given to final-year computer science graduate students who are working on their final-year project as a requirement for their degree. The theoretical analysis of the questionnaire further clarifies the problems. This problem analysis will help to identify the main problems faced by prospective software developers.
REQUIREMENTS ENGINEERING OF A WEB PORTAL USING ORGANIZATIONAL SEMIOTI...ijcsit
The requirements of software are key elements that contribute to the quality of, and user satisfaction with, the final system. In this work, Requirements Engineering (RE) of web sites is presented from an organizational semiotics perspective. Web sites are shown as being part of an organization, with particular practices, rules and views, taking into account stakeholders' many differences and opinions. The main contribution of this paper is to relate an experience, from elicitation to validation, showing how organizational semiotics artifacts were exploited in a collaborative and participatory way in the RE of a web portal. A case study is described in order to demonstrate the feasibility of using such artifacts for RE when we think about the system as being part of a social organization.
Our study shows that users can be a very important part of information system development in its various stages. Based on this paper, we present our hypothesis that knowledge provided by users through information, past experiences, and learning can lead to better project management. Establishing a relationship between the user and the knowledge system can improve the design-stage development of the project, and as a result better project performance can often be obtained. Project management knowledge, system analysis skills, programming knowledge, and database administration knowledge are some of the areas in which users can contribute. In the development stage, users ought to participate in the review process to confirm that the integrated knowledge (system design) is carried out effectively by the developers. Users can help to find or expose inappropriate designs as early as possible to reduce unnecessary costs. This paper additionally focuses on establishing understanding between users and developers.
Distributed Software Development Process, Initiatives and Key Factors: A Syst...zillesubhan
Geographically Distributed Software Development (GSD) differs from Collocated Software Development (CSD) in various technical aspects. It has been empirically shown that renowned process improvement initiatives applicable to CSD are not very effective for GSD. The objective of this research is to review the existing literature (both academic and industrial) to identify initiatives and key factors which play a key role in the improvement and maturity of a GSD process. To achieve this goal, we planned a Systematic Literature Review (SLR) following a standard protocol. Three highly respected sources were selected to search for the relevant literature, which resulted in a large number of TOIs (Titles of Interest). An inter-author custom protocol was outlined and followed to shortlist the most relevant articles for review, and the data was extracted from this set of finally selected articles. We performed both qualitative and quantitative analysis of the extracted data to obtain the results. The concluded results identify several initiatives and key factors involved in GSD and answer each research question posed by the SLR.
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSijseajournal
With the massive growth of organizations' files, the need for archiving systems has become a must. A lot of time is consumed in collecting requirements from the organization to build an archiving system, and sometimes the system does not meet the organization's needs. This paper proposes a domain-based requirements engineering system that efficiently and effectively develops different archiving systems, based on a newly suggested technique that merges the two most widely used agile methodologies: extreme programming (XP) and SCRUM. The technique is tested on a real case study. The results show that the time and effort consumed during analysis and design of the archiving systems decreased significantly. The proposed methodology also reduces the system errors that may happen at the early stages of development.
AN OVERVIEW OF EXISTING FRAMEWORKS FOR INTEGRATING FRAGMENTED INFORMATION SYS...ijistjournal
The literature shows that several structured integration frameworks have emerged with the aim of facilitating application integration, but the weaknesses and strengths of these frameworks are not known. This paper aimed at reviewing these frameworks with a focus on identifying their weaknesses and strengths. To accomplish this, recommended comparison factors were identified and used to compare the frameworks. Findings show that most of these structured frameworks are custom-built around their own motives. They focus on integrating applications from different sectors within an organization for the purpose of eliminating communication inefficiencies. There is no framework which guides application integrators on the goals, outcomes and outputs of integration, or on the skills required for the types of applications expected to be integrated. The study recommended further work on integration frameworks, especially on designing an unstructured framework which will support and guide application integrators with consideration of the consumer's surrounding environment.
ANALYSIS OF ENTERPRISE SHARED RESOURCE INVOCATION SCHEME BASED ON HADOOP AND R ijaia
The response rate and performance indicators of enterprise resource calls have become an important part
of measuring the difference in enterprise user experience. An efficient corporate shared resource calling
system can significantly improve the office efficiency of corporate users and significantly improve the
fluency of corporate users' resource calling. Hadoop has powerful data integration and analysis
capabilities in resource extraction, while R has excellent statistical capabilities and resource personalized
decomposition and display capabilities in data calling. This article will propose an integration plan for
enterprise shared resource invocation based on Hadoop and R to further improve the efficiency of
enterprise users' shared resource utilization, improve the efficiency of system operation, and bring
enterprise users a higher level of user experience. First, we use Hadoop to extract the corporate shared
resources required by corporate users from the nearby resource storage computer room and
terminal equipment to increase the call rate, and use R's statistical functions to convert the user's search results into linear correlations, which are displayed in order of correlation strength to improve response speed and experience. This article proposes feasible solutions to the
shortcomings in the current enterprise shared resource invocation. We can use public data sets to perform
personalized regression analysis on user needs, and optimize and integrate most relevant information.
Selecting Experts Using Data Quality Conceptsijdms
Personal networks are not always diverse or large enough to reach those with the right information. This
problem increases when assembling a group of experts from around the world, something which is a
challenge in Future-oriented Technology Analysis (FTA). In this work, we address the formation of a panel
of experts, specifically how to select a group of experts from a huge group of people. We propose an
approach which uses data quality dimensions to improve expert selection quality and provide quality
metrics to the forecaster. We performed a case study and successfully showed that it is possible to use data
quality methods to support the expert search process.
ARABIC TEXT MINING AND ROUGH SET THEORY FOR DECISION SUPPORT SYSTEM.Hassanein Alwan
The present work implements a classification system for Arabic complaints. The basis of this system is the combination of Decision Support Systems, Text Mining, and Rough Set theory with a specialized semantic correlation approach, referred to as the ASN model, for creating a semantic structure between all of a document's words. A particular semantic weight represents the true significance of each term in the text. The semantic selection of a candidate feature is based on two threshold values: the first is the maximum weight in a document, and the second is the feature having the maximum semantic scale with the first. The classification task in the testing stage depends on the dependency-degree concept of rough set theory, treating the features of each class that result from the training stage as condition rules, which are therefore considered in the lower approximation set. The outputs of this classification system are sufficient, its performance is quite persuasive, and the system is assessed through precision measurement.
An interactive approach to requirements prioritization using quality factorsijfcstjournal
As the prevalence of software increases, so does the complexity and the number of requirements associated with a software project. This presents a dilemma for the developers, who must clearly identify and prioritize the most important requirements in order to deliver the project within a given amount of resources and time. A number of prioritization methods have been proposed which provide consistent results, but they are very difficult and complex to implement in practical scenarios, and they lack a proper structure for analyzing the requirements. In this study, the users can provide their requirements in two forms: text-based story form and use-case form. Moreover, the existing prioritization techniques have very little or no interaction with the users. So, in this paper an attempt has been made to make the prioritization process user-interactive by adding a second level of prioritization: after the developer has properly analyzed and ranked the requirements on the basis of quality attributes in the first level, the opinions of distinct users about the requirements priority sequence are taken. The developer then calculates the disagreement value associated with each user sequence in order to find the final priority sequence.
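The abstract does not specify how the disagreement value is computed; a minimal sketch, assuming a footrule-style distance (the sum of absolute rank differences between the developer's sequence and each user's sequence; all names are hypothetical):

```python
def disagreement(dev_seq, user_seq):
    """Sum of absolute differences between the rank each requirement
    gets in the developer's sequence and in a user's sequence."""
    dev_rank = {req: i for i, req in enumerate(dev_seq)}
    return sum(abs(dev_rank[req] - i) for i, req in enumerate(user_seq))

def final_sequence(dev_seq, user_seqs):
    """Pick the user sequence that disagrees least with the developer ranking."""
    return min(user_seqs, key=lambda s: disagreement(dev_seq, s))

dev = ["login", "search", "export"]
users = [["search", "login", "export"], ["login", "search", "export"]]
print(disagreement(dev, users[0]))  # 2: two requirements shifted by one rank each
print(final_sequence(dev, users))   # ['login', 'search', 'export']
```

A rank-correlation measure such as Kendall's tau would be an equally plausible choice here; the footrule form is used only because it is the simplest to state.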
Table of Contents
Introduction
Need for technology-based solutions
Infrastructure Automation Tools
Implementation
The Central Theory: Organizational Management and Memory
Organizational Management
Organizational Memory
Need of Data Archival and Storage
Data Storage
Types of Storage
Data Archival
Data Archival Process
Archiving principles
Data Management Systems
Enterprise Resource Planning Systems (ERP systems) for data integration
Microservices
Properties of Monolithic
Conclusion
References
Introduction
Technology is considered vital in today's globalized world. Especially in business, information technology has both quantifiable and unquantifiable benefits. It is essential for communicating with customers and stakeholders regularly, quickly, and clearly. It also helps in implementing business operations efficiently and effectively. A business with robust technological capacity creates new opportunities for a company to stay ahead of the competition and grow (Rangus & Slavec, 2017). It also makes possible dynamic teams that can interact from anywhere in the world. Furthermore, technology aids in understanding the business needs and in managing and securing confidential and critical data.
Need for technology-based solutions
Organizations need data recovery and active, continuous data processing across the data's life cycle of significance and utility, for research, scientific, and educational purposes (Bukari Zakaria & Mamman, 2014). The acknowledgment, of late, that information is an organization's key asset, decisively affecting its profitability, has contributed to several comprehensive corporate memory approaches. Corporate memory and organizational learning ability are key sources of competitive advantage (C. Priya, 2011). Hence the main obstacle is the effectiveness of information management while ensuring the consistency of training facilities.
Organizations need robust technology-based solutions. Thus, software developers have, over time, developed and deployed various architectures that enable software products to become resource-effective and usable. Some architectures implement their frameworks in a single layer, others in multiple layers or tiers (Suresh, 2012). It is understood that the efficiency of ERP implementations is influenced by reaching or exceeding a certain degree of capability in the volume of data to process (Johansson, 2012). In the last couple of decades, new architectures have been created that offer optimal solutions. Thus, the microservices architecture is gaining ground and becoming part of the technological, financial, and advertising decision-making process. Microservices replace monolithic, tightly coupled, system-focused applications with independently operating services (Vrîncianu, Anica-Popa, & Anica-Popa, 2009).
Infrastructure Automation Tools
One issue as microservices are applied is that any s ...
Service-oriented computing has created new requirements for information systems development processes
and methods. The adoption of service-oriented development requires service identification methods
matching the challenge in enterprises. A wide variety of service identification methods (SIM) have been
proposed, but less attention has been paid to the actual requirements of the methods. This paper provides
an ethnographical look at challenges in service identification based on data from 14 service identification
sessions, providing insight into the practice of service identification. The findings identified two types of
service identification sessions and the results can be used for selecting the appropriate SIM based on the
type of the session.
A GROUNDED THEORY OF THE REQUIREMENTS ENGINEERING PROCESSijseajournal
This paper explores the requirements engineering (RE) process by conducting interviews with RE professionals and applying grounded theory to determine whether a theory of RE emerges. Analysis of the interviews revealed prominent data patterns that were used to model the RE process. The model depicts the RE process as one of establishing a match through discovery and streamlining, and which utilizes corrective measures in order to manage threats to establishing a match. The process involves many entities but is mainly conducted by RE professionals whose experience plays a major role in extracting complete requirements and detecting occasions of mismatch between customer needs and the software requirements, which represent their main concern during the process.
A HUMAN-CENTRIC APPROACH TO GROUP-BASED CONTEXT-AWARENESSIJNSA Journal
The emerging need for qualitative approaches in context-aware information processing calls for proper modelling of context information and efficient handling of its inherent uncertainty resulting from human interpretation and usage. Many of the current approaches to context-awareness either lack a solid theoretical basis for modelling or ignore important requirements such as modularity, high-order uncertainty management and group-based context-awareness. Therefore, their real-world application and extendibility remain limited. In this paper, we present f-Context as a service-based context-awareness framework, based on language-action perspective (LAP) theory for modelling. Then we identify some of the complex, informational parts of context which contain high-order uncertainties due to differences between members of the group in defining them. An agent-based perceptual computer architecture is proposed for implementing f-Context that uses computing with words (CWW) for handling uncertainty. The feasibility of f-Context is analyzed using a realistic scenario involving a group of mobile users. We believe that the proposed approach can open the door to future research on context-awareness by offering a theoretical foundation based on human communication, and a service-based layered architecture which exploits CWW for context-aware, group-based and platform-independent access to information systems.
CREATING AND SHARING KNOWLEDGE THROUGH A CORPORATE SOCIAL NETWORKING SITE: TH...Julio Figueroa
Paper published in PACIS2012
There have been various claims that enterprise social networking sites (ESN) might improve business effectiveness and performance. Nevertheless, many of the initiatives supported by ESNs have failed. This paper argues that divergent perceptions about ESNs across the different levels of the organization may explain failures in ESNs' design and implementation. Using an extended version of the Technological Frames of Reference framework (Orlikowski & Gash, 1994), this paper reports on a study that analyzed employees' perceptions of an ESN within a software engineering firm. It was found that significant divergent perceptions in the organization led to a social order that discouraged employees from creating and sharing knowledge through the ESN. This paper highlights the importance of aligning top management perceptions about the ESN with its actual scope. It also highlights the relevance of aligning perceptions about the ESN across the different levels of the organization. This paper proposes extending the original Technological Frames of Reference framework in order to better understand people's perceptions about technologies that support knowledge management systems. It also proposes an explanatory model for understanding how people's perceptions about a corporate social networking site impact its usage.
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR cscpconf
The progressive development of Synthetic Aperture Radar (SAR) systems has diversified the exploitation of the images generated by these systems in different applications of geoscience. The detection and monitoring of surface deformations caused by various phenomena has benefited from this evolution and has been realized with interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelation of the interferometric pairs used strongly limits the precision of the analysis results of these techniques. In this context, we propose in this work a methodological approach for surface deformation detection and analysis by differential interferograms, to show the limits of this technique according to noise quality and level. The detectability model is generated from the deformation signatures by simulating a linear fault merged into image pairs from the ERS1/ERS2 sensors acquired over a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFICATIONcscpconf
A novel trajectory-guided, concatenative approach for synthesizing high-quality video from real image samples is proposed. The automated lip-reading method seeks the real image sample sequence in the library that is closest to the trajectory predicted by the HMM for the given video data. The object trajectory is obtained by projecting the face patterns into a KDA feature space. The approach to speaker face identification synthesizes the identity surface of a subject's face from a small sample of patterns that sparsely cover the view sphere. A KDA algorithm is used to discriminate the lip-reading images; the fundamental lip feature vector is then reduced to a low dimension using the 2D-DCT. The dimensionality of the mouth-area set is further reduced by PCA to obtain the eigen-lips approach proposed by [33]. The subjective performance results of the cost function under the automatic lip-reading model did not illustrate the superior performance of the method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...cscpconf
Universities offer a software engineering capstone course to simulate a real-world working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of this paper is to report on our experience in moving from the Waterfall process to an Agile process in conducting the software engineering capstone project. We present the capstone course designs for both the Waterfall-driven and Agile-driven methodologies, highlighting the structure, deliverables and assessment plans. To evaluate the improvement, we conducted a survey across two different sections taught by two different instructors to evaluate students' experience in moving from the traditional Waterfall model to an Agile-like process. Twenty-eight students filled in the survey, which consisted of eight multiple-choice questions and an open-ended question to collect feedback from students. The survey results show that students were able to attain hands-on experience simulating a real-world working environment. The results also show that the Agile approach helped students to produce an overall better design and avoid mistakes they had made in the initial design completed in the first phase of the capstone project. In addition, they were able to assess their team capabilities and training needs, and thus learn the required technologies earlier, which is reflected in the final product quality.
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIEScscpconf
Using social media in education provides learners with an informal way for communication. Informal communication tends to remove barriers and hence promotes student engagement. This paper presents our experience in using three different social media technologies in teaching software project management course. We conducted different surveys at the end of every semester to evaluate students’ satisfaction and engagement. Results show that using social media enhances students’ engagement and satisfaction. However, familiarity with the tool is an important factor for student satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGICcscpconf
Using a computer to answer questions has been a human dream since the beginning of the digital era. Question-answering systems are referred to as intelligent systems that can provide responses to the questions asked by a user, based on facts or rules stored in a knowledge base, and can generate answers to questions asked in natural language. One of the first main ideas of fuzzy logic was to work on the problem of computer understanding of natural language. This survey paper therefore provides an overview of what question answering is, its system architecture, and its possible relationship with fuzzy logic, as well as the previous related research with respect to the approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, alone or combined with fuzzy logic, and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS cscpconf
Human beings generate different speech waveforms when speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words, which facilitates the preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses a dynamic programming technique for global alignment and shortest-distance measurement. The DPW algorithm can be used to enhance the pronunciation dictionaries of well-known languages like English or to build pronunciation dictionaries for lesser-known sparse languages. The precision measurement experiments show 88.9% accuracy.
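The paper's exact alignment costs are not reproduced in this abstract; a minimal sketch of the underlying idea, assuming unit substitution and insertion/deletion costs over phone symbols (a Levenshtein-style dynamic programming global alignment; the phone lists are illustrative only):

```python
def dynamic_phone_warping(phones_a, phones_b, sub_cost=1, indel_cost=1):
    """Global alignment distance between two phone sequences,
    computed with dynamic programming (edit-distance style)."""
    m, n = len(phones_a), len(phones_b)
    # dp[i][j] = distance between the first i phones of a and first j of b
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * indel_cost
    for j in range(1, n + 1):
        dp[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 0 if phones_a[i - 1] == phones_b[j - 1] else sub_cost
            dp[i][j] = min(dp[i - 1][j - 1] + match,   # match/substitute
                           dp[i - 1][j] + indel_cost,  # delete
                           dp[i][j - 1] + indel_cost)  # insert
    return dp[m][n]

# Two pronunciations of "tomato" as ARPAbet-like phone lists (illustrative)
print(dynamic_phone_warping(["T", "AH", "M", "EY", "T", "OW"],
                            ["T", "AH", "M", "AA", "T", "OW"]))  # 1
```

With unit costs this reduces to edit distance over phone symbols; weighting `sub_cost` by phonetic similarity would bring it closer to what a pronunciation dictionary builder actually needs.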
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS cscpconf
In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is computed based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessment.
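The abstract does not give the matching-ratio formula; a minimal sketch under the assumption that it is the fraction of instructor keywords (after synonym expansion) found in the student's answer, ignoring the document-similarity component (the function names and synonym table are hypothetical):

```python
import re

def matching_ratio(instructor_keywords, student_answer, synonyms=None):
    """Fraction of instructor keywords matched in the student answer.
    A keyword counts as matched if it, or one of its expanded synonyms,
    appears among the student's words."""
    synonyms = synonyms or {}
    student_words = set(re.findall(r"[a-z]+", student_answer.lower()))
    matched = sum(
        1 for kw in instructor_keywords
        if ({kw.lower()} | {s.lower() for s in synonyms.get(kw, [])}) & student_words
    )
    return matched / len(instructor_keywords)

keywords = ["inheritance", "polymorphism", "encapsulation"]
answer = "Polymorphism and data hiding (encapsulation) are key OOP ideas."
syn = {"inheritance": ["subclassing"]}
print(matching_ratio(keywords, answer, syn))  # 2 of 3 keywords matched
```

The grading module would then scale this ratio by the question's maximum marks; the paper's semantic-similarity step would replace the exact-word test with an embedding or WordNet comparison.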
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICcscpconf
African Buffalo Optimization (ABO) is one of the most recent swarm intelligence based metaheuristics. The ABO algorithm is inspired by the buffalo's behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and a probability model to generate binary solutions. In the second version (called LBABO) they use logical operators to operate on the binary solutions. Computational results on two knapsack problem (KP and MKP) instances show the effectiveness of the proposed algorithms and their ability to achieve good and promising solutions.
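The SBABO update equations are not reproduced in this abstract; a minimal sketch of the general sigmoid-binarization idea such binary metaheuristics use (not the paper's exact scheme), where each continuous position component is squashed to a probability and thresholded against a random draw:

```python
import math
import random

def sigmoid(x):
    """Squash a continuous value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(continuous_position, rng=random.random):
    """Map a continuous position vector to a binary solution:
    bit_i = 1 with probability sigmoid(x_i)."""
    return [1 if rng() < sigmoid(x) else 0 for x in continuous_position]

# Deterministic illustration with a fixed threshold draw of 0.5
print(binarize([4.0, -4.0], rng=lambda: 0.5))  # [1, 0]
```

In a full SBABO loop this mapping sits between the continuous buffalo position update and the knapsack fitness evaluation; LBABO instead manipulates the bit strings directly with logical operators.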
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINcscpconf
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure to ensure a persistence presence on a compromised host. Amongst the various DDNS techniques, Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach’s feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmicallygenerated domain names. Findings from this study show that domain names made up of English characters “a-z” achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...cscpconf
The amount of piracy in the streaming digital content in general and the music industry in specific is posing a real challenge to digital content owners. This paper presents a DRM solution to monetizing, tracking and controlling online streaming content cross platforms for IP enabled devices. The paper benefits from the current advances in Blockchain and cryptocurrencies. Specifically, the paper presents a Global Music Asset Assurance (GoMAA) digital currency and presents the iMediaStreams Blockchain to enable the secure dissemination and tracking of the streamed content. The proposed solution provides the data owner the ability to control the flow of information even after it has been released by creating a secure, selfinstalled, cross platform reader located on the digital content file header. The proposed system provides the content owners’ options to manage their digital information (audio, video, speech, etc.), including the tracking of the most consumed segments, once it is release. The system benefits from token distribution between the content owner (Music Bands), the content distributer (Online Radio Stations) and the content consumer(Fans) on the system blockchain.
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
This paper discusses the importance of verb suffix mapping in Discourse translation system. In
discourse translation, the crucial step is Anaphora resolution and generation. In Anaphora
resolution, cohesion links like pronouns are identified between portions of text. These binders
make the text cohesive by referring to nouns appearing in the previous sentences or nouns
appearing in sentences after them. In Machine Translation systems, to convert the source
language sentences into meaningful target language sentences the verb suffixes should be
changed as per the cohesion links identified. This step of translation process is emphasized in
the present paper. Specifically, the discussion is on how the verbs change according to the
subjects and anaphors. To explain the concept, English is used as the source language (SL) and
an Indian language Telugu is used as Target language (TL)
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higherdimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method is verified.
AUTOMATED PENETRATION TESTING: AN OVERVIEWcscpconf
The using of information technology resources is rapidly increasing in organizations,
businesses, and even governments, that led to arise various attacks, and vulnerabilities in the
field. All resources make it a must to do frequently a penetration test (PT) for the environment
and see what can the attacker gain and what is the current environment's vulnerabilities. This
paper reviews some of the automated penetration testing techniques and presents its
enhancement over the traditional manual approaches. To the best of our knowledge, it is the
first research that takes into consideration the concept of penetration testing and the standards
in the area.This research tackles the comparison between the manual and automated
penetration testing, the main tools used in penetration testing. Additionally, compares between
some methodologies used to build an automated penetration testing platform.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Since the mid of 1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing
attention of neuroscientists and computer scientists, since it opens a new window to explore
functional network of human brain with relatively high resolution. BOLD technique provides
almost accurate state of brain. Past researches prove that neuro diseases damage the brain
network interaction, protein- protein interaction and gene-gene interaction. A number of
neurological research paper also analyse the relationship among damaged part. By
computational method especially machine learning technique we can show such classifications.
In this paper we used OASIS fMRI dataset affected with Alzheimer’s disease and normal
patient’s dataset. After proper processing the fMRI data we use the processed data to form
classifier models using SVM (Support Vector Machine), KNN (K- nearest neighbour) & Naïve
Bayes. We also compare the accuracy of our proposed method with existing methods. In future,
we will other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...cscpconf
In order to treat and analyze real datasets, fuzzy association rules have been proposed. Several
algorithms have been introduced to extract these rules. However, these algorithms suffer from
the problems of utility, redundancy and large number of extracted fuzzy association rules. The
expert will then be confronted with this huge amount of fuzzy association rules. The task of
validation becomes fastidious. In order to solve these problems, we propose a new validation
method. Our method is based on three steps. (i) We extract a generic base of non redundant
fuzzy association rules by applying EFAR-PN algorithm based on fuzzy formal concept analysis.
(ii) we categorize extracted rules into groups and (iii) we evaluate the relevance of these rules
using structural equation model.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAcscpconf
In many applications of data mining, class imbalance is noticed when examples in one class are
overrepresented. Traditional classifiers result in poor accuracy of the minority class due to the
class imbalance. Further, the presence of within class imbalance where classes are composed of
multiple sub-concepts with different number of examples also affect the performance of
classifier. In this paper, we propose an oversampling technique that handles between class and
within class imbalance simultaneously and also takes into consideration the generalization
ability in data space. The proposed method is based on two steps- performing Model Based
Clustering with respect to classes to identify the sub-concepts; and then computing the
separating hyperplane based on equal posterior probability between the classes. The proposed
method is tested on 10 publicly available data sets and the result shows that the proposed
method is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for imageprocessing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy were achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...cscpconf
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of the city
services including energy, transportation, health, and much more. They generate massive
volumes of structured and unstructured data on a daily basis. Also, social networks, such as
Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart
cities. Social network users are acting as social sensors. These datasets so large and complex
are difficult to manage with conventional data management tools and methods. To become
valuable, this massive amount of data, known as 'big data,' needs to be processed and
comprehended to hold the promise of supporting a broad range of urban and smart cities
functions, including among others transportation, water, and energy consumption, pollution
surveillance, and smart city governance. In this work, we investigate how social media analytics
help to analyze smart city data collected from various social media sources, such as Twitter and
Facebook, to detect various events taking place in a smart city and identify the importance of
events and concerns of citizens regarding some events. A case scenario analyses the opinions of
users concerning the traffic in three largest cities in the UAE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGEcscpconf
The anonymity of social networks makes it attractive for hate speech to mask their criminal
activities online posing a challenge to the world and in particular Ethiopia. With this everincreasing
volume of social media data, hate speech identification becomes a challenge in
aggravating conflict between citizens of nations. The high rate of production, has become
difficult to collect, store and analyze such big data using traditional detection methods. This
paper proposed the application of apache spark in hate speech detection to reduce the
challenges. Authors developed an apache spark based model to classify Amharic Facebook
posts and comments into hate and not hate. Authors employed Random forest and Naïve Bayes
for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold crossvalidation,
the model based on word2vec embedding performed best with 79.83%accuracy. The
proposed method achieve a promising result with unique feature of spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both training and testing data. It is observed that 96.13% words are
correctly being tagged on training set whereas 74.38% words are tagged correctly on testing
data set using GRNN. The result is compared with the traditional Viterbi algorithm based on
Hidden Markov Model. Viterbi algorithm yields 97.2% and 40% classification accuracies on
training and testing data sets respectively. GRNN based POS Tagger is more consistent than the
traditional Viterbi decoding technique.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Biological screening of herbal drugs: Introduction and Need for
Phyto-Pharmacological Screening, New Strategies for evaluating
Natural Products, In vitro evaluation techniques for Antioxidants, Antimicrobial and Anticancer drugs. In vivo evaluation techniques
for Anti-inflammatory, Antiulcer, Anticancer, Wound healing, Antidiabetic, Hepatoprotective, Cardio protective, Diuretics and
Antifertility, Toxicity studies as per OECD guidelines
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
48 Computer Science & Information Technology (CS & IT)
is the creation of simulation models from web services. This approach provides a high fidelity between the simulation and the real world. Moreover, it provides the ability to plug real web services into simulated entities, thus creating simulations that utilise as much 'real world' data as possible.
An initial evaluation of a scenario-based prototype was conducted with ten business executives in higher education. The purpose of the evaluation was to examine the desire for a tool that allows business executives to communicate about organisational and IT infrastructural changes [10]. The analysed data produced results that indicate a desire for such a tool, with open comments such as "The idea seems to be very good, especially the natural language interface, which I particularly like", and "Good to specify objectives through the use of goals". Quantitative results reinforce the comments. They indicate that the majority of this small sample group (6 executives) liked the approach of specifying their goals and rules in natural language. In addition, the majority (7 executives) liked the approach where the system automatically identifies a business process that may satisfy the executive's goal and applies rules to services in the process. While this was only a limited survey, the results indicate promise. The executives intuitively understood the approach being proposed: to augment the requirements gathering process with semantics that drive the formulation of organisational changes using controlled natural language. In addition, they liked the simulation of the IT infrastructure that supports those changes. Moreover, they strongly indicated that they believed such an approach was desirable.
Building on the evaluation, CRESUS was designed and implemented as a collaborative requirements elicitation tool. The tool allows stakeholders to communicate requirements guided by an ontological domain model, then validate the requirements, and finally create a simulation of the IT infrastructure that supports those requirements.
This research investigates to what extent CRESUS both aids communication in the development
of a shared understanding and supports collaborative requirements elicitation to bring about
organisational, and associated IT infrastructural, change.
The remainder of this paper describes related work in section 2, followed by an outline of the research in section 3 and the architecture and implementation in section 4. Section 5 describes the evaluation, followed by limitations and future work.
2. RELATED WORK
The elicitation or communication of requirements is recognised as a critical activity of software development [11]. Its goal is to reach a shared understanding between all parties involved in the communication process. An increased amount of communication effort [12] is often required to bridge the gap between communicating the desired requirements in a semantically consistent and understandable manner and reflecting the potential impact of those requirements on the IT infrastructure.
Effective communication has been difficult to achieve and is a recurring problem in the elicitation of requirements [13]. Macaulay [14] identifies communication as a key factor in the design of successful systems. Christal et al. [15] categorise the problems of requirements elicitation as relating to scope, understanding and volatility, where communication features strongly in gaining an understanding. Coughlan et al. [16] conducted an analysis of effective communication in requirements elicitation. They discuss a user-centred (problem-finding) viewpoint that advocates an emergent and collaborative nature to requirements, which emerge as part of on-going interactions and negotiations between participants. This problem-finding approach involves spending more time communicating and developing relationships, which has been linked to greater success in the determination of requirements [17]. Coughlan et al. [16] conclude with four recommendations for effective communication between an organisation and its stakeholders when attempting to integrate technology: include users in the design; select an adequate mix of IT and business users who then interact on a cooperative basis; incorporate communication activities that relate to knowledge acquisition, knowledge negotiation and user acceptance; and use elicitation techniques such as prototyping, questionnaires, brainstorming and scenarios to mediate communication about the requirements of a system.
This research proposes to use simulation as an elicitation technique for creating the IT
infrastructure. In this context, simulation has the potential to evolve into a prototype.
Tools that support requirements elicitation can be categorised as non-web-based and web-based. Non-web-based tools include VORDTool [18], FAES [5], AMORE [6] and JPreview [7]; web-based tools include TeamWave [19] and WRET [20]. Web-based tools support distributed collaborative requirements elicitation.
Current research on web-based tools indicates that they can support a collaborative requirements process and, in some cases, the creation of requirements documentation. However, none of them incorporates simulation as an elicitation technique to study the impact requirements will have on organisational and associated IT infrastructural change. This research proposes to address the issue by incorporating this technique into a web-based collaborative requirements elicitation tool.
Rogers and Kincaid's [21] convergence model describes communication as a dynamic process of idea and knowledge generation, which occurs over time through interaction with others and which leads to mutual understanding and collective action. The work of Lind and Zmud [3] shows that frequent communication helps create a mutual understanding of the role of the IT infrastructure in supporting business functions. Johnson and Lederer [2] extend this work to communication between Chief Executive and Chief Information Officers. Their measures, however, are based on the subjects' perceptions, and they recommend that studies of communication frequency would benefit from objective assessments. In addition, they note that future research would benefit from incorporating other dimensions of communication beyond frequency, such as its richness. The recommendation to use objective measures for frequency of communication, together with other dimensions that lead to a shared understanding, forms the basis for evaluating this research.
Requirements are often written in natural language (e.g., English), which is inherently ambiguous but allows customers who are unfamiliar with modelling notations to understand and validate requirement specifications [22]. A controlled natural language is a precisely defined subset of full English that can be used to communicate the organisation's requirements in such a way that they can be automatically reasoned over by machines, thus removing the ambiguity of natural language. Attempto Controlled English (ACE) [23] demonstrates how a controlled natural language can describe knowledge for the semantic web, making the language understandable by both people and machines. A predictive editor [24] guides stakeholders, word by word, in constructing a sentence that complies with ACE. The sentence can then be converted to an ontological format using the ACE Parsing Engine (APE) [25]. The approach described with ACE shows the potential for a natural language interface to utilise semantic inference to refine and hone requirements; this forms part of the controlled natural language interface in CRESUS.
Simulation models play an important and inexpensive role in displaying significant features and characteristics of a dynamic system that one wishes to study, predict, modify, or control [8]. The true potential of simulation is in portraying the envisaged impact of certain decisions and changes on an operational system or process. Chandrasekaran et al. [9] examine the synergy between web service technology and simulation. One of the approaches they propose is the creation of simulation models or components from web services in order to provide a high fidelity between the simulation and the real world. It provides the ability to plug real web services into simulated entities, thus creating simulations that utilise as much 'real world' data as possible. The proposed approach of creating a simulation model from web services forms the basis of the simulation platform that represents the evolution of the IT infrastructure in CRESUS.
In summary, the communication of requirements is recognised as a critical activity for reaching a shared understanding between the stakeholders involved in software development. Effective communication has been difficult to achieve and is a recurring problem in the elicitation of requirements. An increased amount of communication effort is often required to bridge the gap between communicating the desired requirements in a semantically consistent and understandable manner and reflecting the potential impact of those requirements on the IT infrastructure.
The literature review demonstrates the potential for controlled natural language interfaces to
utilise semantic inference to collaboratively communicate organisational requirements. A
simulation represented by web services provides the stakeholders with the means to effectively
communicate the potential impact of the requirements on the IT infrastructure. Thus, the use of a
simulation model comprised of semantically enabled web services could potentially represent the
evolution of the organisation’s IT infrastructure. The literature also demonstrates the criteria for
evaluating this research using objective measures for frequency of communication and other
dimensions that lead to a shared understanding.
3. OUTLINE OF RESEARCH
3.1. Research background
This research explores to what extent CRESUS both aids communication in the development of a
shared understanding and supports collaborative requirements elicitation to bring about
organisational, and associated IT infrastructural, change. In addition, the experiment controls for
title, role, level of service, and academic qualifications of employees at the National College of
Ireland.
The experiment employed a pre-test/post-test control group design, with participants matched between the experimental and control groups, to evaluate the frequency of communication, the quality of communication, participants' identification of organisational change issues, participants' perceptions of attaining a shared representation of the IT infrastructure supporting organisational change, and control variables relating to title, role, level of service and academic qualifications.
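One simple way to operationalise a pre-test/post-test control group comparison is to compute the mean gain in each group and take the difference. The scores below are hypothetical communication-frequency counts, and this sketch ignores the matching on title, role, service and qualifications that the actual study performed:

```python
# Hedged sketch of a pre-test/post-test control-group comparison.
# Scores are hypothetical communication-frequency counts per participant.
from statistics import mean

def mean_gain(pre, post):
    """Average post-test minus pre-test score across matched participants."""
    return mean(b - a for a, b in zip(pre, post))

experimental = {"pre": [3, 4, 2, 5], "post": [7, 8, 6, 9]}
control      = {"pre": [3, 4, 2, 5], "post": [4, 4, 3, 5]}

# Difference-in-gains: how much more the experimental group improved.
effect = mean_gain(**experimental) - mean_gain(**control)
```

A positive `effect` would suggest the tool increased communication frequency relative to the control condition; a real analysis would also test the difference for statistical significance.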
In this context, CRESUS is generally defined as a collaborative communication tool that allows stakeholders to communicate requirements guided by an ontological domain model, validate those requirements, and create a simulation of the IT infrastructure that supports them. Communication encompasses three dimensions, namely the frequency of communication, the quality of communication and the identification of organisational change issues. Shared understanding is generally defined as the stakeholders' perception that the requirements represent the organisational and associated IT infrastructural change.
Organisational and associated IT infrastructural change is generally defined as a series of IT systems, denoted by semantically enabled web services, that may access ontological data representing the organisational changes expressed in the stakeholders' requirements.
3.2. Method
CRESUS was developed through the following steps: a literature review, an evaluation of a scenario-based prototype, the implementation of CRESUS, and the evaluation of CRESUS.
The literature review focused on organisational communication using a controlled natural language; shared understanding of organisational and associated IT infrastructural changes; collaborative requirements elicitation to bring about organisational and associated IT infrastructural change; and simulation comprising semantically enabled web services that represent the evolution of the organisational IT infrastructure.
The evaluation of a scenario-based prototype is described in section 1. The implementation of
CRESUS is described in section 4. Finally, the evaluation is described in section 5.
4. ARCHITECTURE AND IMPLEMENTATION
The architecture for the CRESUS tool was built around a controlled natural language interface,
ontological domain model and a language translator that creates a simulation platform consisting
of web services representing the evolution of the IT infrastructure as shown in Figure 1.
The architecture revolves around collaborative requirements elicitation between the business executive, the IT architect and other stakeholders using a controlled natural language. The language is guided by a lexicon consisting of nouns, proper nouns and verbs drawn from the ontological domain model. The knowledge engineer creates the ontological domain model based on concepts, relationships and actual data from the problem domain.
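Deriving the lexicon from the domain model can be pictured as follows. The dictionary schema and the concept, relation and instance names below are hypothetical stand-ins for CRESUS's actual ontological model format:

```python
# Illustrative domain model: concepts, relationships and instances,
# from which the lexicon (nouns, proper nouns, verbs) guiding the
# controlled natural language interface is derived. The schema and
# example entries are hypothetical, not CRESUS's actual format.
DOMAIN_MODEL = {
    "concepts": ["Student", "Course", "Lecturer"],
    "relations": [("Student", "enrols-in", "Course"),
                  ("Lecturer", "teaches", "Course")],
    "instances": {"Course": ["Databases"], "Lecturer": ["Dr Smith"]},
}

def build_lexicon(model):
    """Derive the word lists that the predictive editor may offer."""
    return {
        # concepts become common nouns
        "nouns": sorted(c.lower() for c in model["concepts"]),
        # instance names become proper nouns
        "proper_nouns": sorted(i for vals in model["instances"].values()
                               for i in vals),
        # relation labels become verbs
        "verbs": sorted({verb for _, verb, _ in model["relations"]}),
    }
```

Keeping the lexicon a pure function of the model means that whenever the knowledge engineer updates the ontology, the words offered to stakeholders stay in step automatically.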
The business executive, IT architect and stakeholders may create or modify requirements through the controlled natural language interface [24], as shown in Figure 2. These requirements are parsed to ensure that the grammar is based on a subset of natural language, namely Attempto Controlled English (ACE) [23], and are stored in the ontological domain model. This results in machine-accessible semantics that are automatically processable. The controlled natural language interface is the only third-party component in this architecture.
Figure 1. Architecture of CRESUS
Figure 2. CRESUS – Controlled natural language interface (Preditor)
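To illustrate how a lexicon drawn from the ontological domain model can constrain input, the following sketch accepts only the simple "noun verb noun" sentence form used in the experiment. This is an assumption for illustration only, not the actual Preditor/APE implementation; the class name and lexicon entries are hypothetical.

```java
import java.util.List;
import java.util.Set;

/** Hypothetical sketch (not the Preditor/APE implementation) of how a
 *  lexicon taken from the ontological domain model can constrain a
 *  "noun verb noun" requirement sentence. */
public class LexiconGuardSketch {

    // Illustrative lexicon entries; in CRESUS these come from the
    // nouns, proper nouns and verbs of the ontological domain model.
    static final Set<String> NOUNS = Set.of("Module", "Software", "Netbeans");
    static final Set<String> VERBS = Set.of("requires");

    /** Accepts a sentence only if it is noun-verb-noun over the lexicon. */
    public static boolean accepts(String sentence) {
        List<String> words = List.of(sentence.trim().split("\\s+"));
        return words.size() == 3
            && NOUNS.contains(words.get(0))
            && VERBS.contains(words.get(1))
            && NOUNS.contains(words.get(2));
    }
}
```

In the real tool, sentences that pass the lexicon-guided editor are additionally parsed by APE to ensure they are valid ACE.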
The language translator component parses the ontological domain model and applies a template
rule to create the web services and simulation data.
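As a sketch of this translation step, the following Java fragment applies a simplified template rule to an illustrative ontology serialisation. The element names, the stylesheet and the generated service names are assumptions for illustration; the actual CRESUS templates operate over the RDF/OWL domain model.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

/** Hypothetical sketch of the language-translator step: an XSLT template
 *  rule maps each concept of a (made-up) ontology serialisation to the
 *  name of a generated web service. */
public class LanguageTranslatorSketch {

    // Illustrative ontology fragment; the real model is RDF/OWL.
    static final String ONTOLOGY =
        "<ontology>" +
        "  <concept name='Module'/>" +
        "  <concept name='Software'/>" +
        "</ontology>";

    // Template rule: emit one <service> element per ontology concept.
    static final String TEMPLATE_RULE =
        "<xsl:stylesheet version='1.0' " +
        "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        "  <xsl:output method='xml' omit-xml-declaration='yes'/>" +
        "  <xsl:template match='/ontology'>" +
        "    <services><xsl:apply-templates select='concept'/></services>" +
        "  </xsl:template>" +
        "  <xsl:template match='concept'>" +
        "    <service name='{@name}Service'/>" +
        "  </xsl:template>" +
        "</xsl:stylesheet>";

    public static String translate() {
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(TEMPLATE_RULE)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(ONTOLOGY)),
                        new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```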
Deployment of the web services results in the creation of the simulation platform that represents
the evolution of the IT infrastructure. Staff can retrieve, create, delete and modify simulation data
through the web service interfaces.
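The four operations on simulation data can be sketched as follows. This in-memory class is a hypothetical stand-in for the generated web service interfaces, which in CRESUS are backed by the eXist XML database; the class and method names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical in-memory sketch of the retrieve/create/delete/modify
 *  operations exposed by the simulated web services over simulation data. */
public class SimulationDataStoreSketch {
    private final Map<String, String> data = new HashMap<>();

    public void create(String id, String value)  { data.put(id, value); }
    public String retrieve(String id)            { return data.get(id); }
    public void modify(String id, String value)  { data.replace(id, value); }
    public void delete(String id)                { data.remove(id); }
}
```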
The innovation of CRESUS lies in the collaboration between the business executive, IT
architect and stakeholders in creating a shared understanding of the evolving IT infrastructure
that supports the requirements, through the creation of semantically enabled web services and
data structures during the simulation. The advantage of this approach is the high fidelity
between the simulation and the real world. It provides the ability to plug real web services into
simulated entities, thus creating simulations that utilise as much empirical data as possible.
CRESUS is a web-based application that is implemented in Java using the Echo web framework
[26]. The web-based application is hosted on port 8081. Preditor [24], a third-party product for
the controlled natural language interface, is written in Java based on the Echo web framework.
The grammar that guides the controlled natural language interface is created using the Document
Object Model (DOM) [27]. The Attempto Controlled English Parsing Engine (APE) server [25] is
a third-party product that is responsible for parsing the requirements from the business executive,
IT architect and other stakeholders to ensure that they are a subset of ACE. In addition, APE
converts the natural language to the Resource Description Framework (RDF) [28]. APE is set up
on port 8000. The ontological domain model is created using Protégé [29]. The server-side
implementation uses Jena [30], a Java framework for building semantic web applications
with RDF and the OWL Web Ontology Language [31]. The ontological domain model is
incorporated into Jena, which uses Pellet [32] to reason over the ontological domain model. The
server-side implementation is also responsible for creating the simulation platform using the Java
XML API and XSLT. XSLT is used for the language translation from the ontological domain model
to the web services. The web services representing the simulation platform are implemented in
Java and deployed on port 8083. Data for the web-based application and the simulation platform
is stored in an XML database, eXist [33], on port 8080. Interaction with the database involves the
technologies XQuery, XUpdate and XPath.
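As a minimal sketch of the kind of XPath lookup the tool could issue against the XML database, the following fragment queries an invented simulation-data document; the document structure, element names and data are assumptions for illustration, not the actual eXist schema.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

/** Illustrative XPath query over a made-up simulation-data fragment,
 *  of the kind CRESUS issues against the eXist XML database (which
 *  additionally supports XQuery and XUpdate). */
public class SimulationQuerySketch {

    // Invented simulation data for illustration only.
    static final String SIMULATION_DATA =
        "<modules>" +
        "  <module name='Software Development'>" +
        "    <software>Netbeans</software>" +
        "  </module>" +
        "</modules>";

    /** Returns the software recorded for the named module. */
    public static String softwareFor(String module) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(SIMULATION_DATA)));
            XPath xpath = XPathFactory.newInstance().newXPath();
            return xpath.evaluate(
                "/modules/module[@name='" + module + "']/software", doc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```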
5. EVALUATION
5.1. Background
A member of the IT department at the National College of Ireland was interviewed about the
process for creating the software environment in the computer laboratories. This process involves
gathering software and hardware requirements for each module from lecturers, creating and
testing an image of the software environment, and finally rolling the image out to all the PCs in
the computer laboratories. Several issues arose relating to obtaining the module requirements in
a timely fashion and to confusion over changes to the software requirements of modules. This
confusion related to requests for different software versions. There was an indication that the
current process might be improved by modifying the IT infrastructure for identifying the software
and hardware requirements of each module. The results of the interview formed the basis of the
scenario that was given to the control and experimental groups.
5.2. Population
Twelve employees of the National College of Ireland took part in the experiment. Eight of the
employees were academics and four were from the IT department. The academics consisted of
four course directors (lecturer grade II), one lecturer grade I, one fulltime lecturer grade II from
the school of business, one postdoctoral research fellow and one support tutor. All academics
lectured in the school of computing. The IT personnel consisted of one senior IT administrator,
one IT support specialist and two IT support personnel.
5.3. Operationalisation of variables
In this context, communication is operationalised by the frequency of communication, the
quality of communication, and the identification of organisational change issues. Frequency of
communication indicates the degree to which messages and responses take place between the
business executive, IT architect, and staff. Quality of communication indicates the degree to
which messages that identify organisational changes are successfully approved by the appropriate
authority. Organisational change issues indicate the degree to which messages identify
organisational change issues that are discovered but not progressed towards resolution by the
participants.
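These three measures could be computed from a message log along the following lines; the log-entry fields and class names are assumptions for illustration, not the actual CRESUS or email log schema.

```java
import java.util.List;

/** Illustrative sketch of computing the three communication measures
 *  from a message log; field names are hypothetical. */
public class CommunicationMetricsSketch {

    static class LogEntry {
        final boolean identifiesChange;  // message identifies an organisational change
        final boolean approved;          // change approved by the appropriate authority
        final boolean identifiesIssue;   // message identifies an organisational change issue
        final boolean issueProgressed;   // participants made progress resolving the issue
        LogEntry(boolean c, boolean a, boolean i, boolean p) {
            identifiesChange = c; approved = a;
            identifiesIssue = i; issueProgressed = p;
        }
    }

    /** Frequency: every message or response in the log counts. */
    public static long frequency(List<LogEntry> log) { return log.size(); }

    /** Quality: change-identifying messages approved by the authority. */
    public static long quality(List<LogEntry> log) {
        return log.stream().filter(e -> e.identifiesChange && e.approved).count();
    }

    /** Issues: change issues discovered but with no progress to resolve them. */
    public static long issues(List<LogEntry> log) {
        return log.stream().filter(e -> e.identifiesIssue && !e.issueProgressed).count();
    }
}
```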
Shared understanding indicates the degree to which the business executive, staff and IT architect
perceive that the IT infrastructure faithfully supports the organisational change.
5.4. Methodology
Data was captured through online questionnaires, a workshop, logs, observations and a debriefing
session. The level of service and qualification’s online questionnaire is a pre-test that contains
nine items in three major content sections. The content sections are demographic details, service
details and qualification details. The majority of these questions asked for factual information
such as the person’s title, role and work experience in the organisation. Other questions contain
selection buttons such as gender, age and the number of years’ service with NCI. The control
information that relates to the title, role, level of service and academic qualifications used in the
analysis for matching participants is derived from this questionnaire’s data. Thereafter one
employee of each match pair is randomly assigned to the experimental group and the other to the
control group. The workshop involves a brainstorming session with each group that involves
creating a conceptual model of the problem domain which is converted into an ontological
domain model. The ontological domain model is used to constrain the words in the controlled
natural language interface and the naming conventions for the automatically generated web
services.

The matched-participants experiment involves exposing all groups to email as the
communication tool and exposing the experimental group to an additional treatment of
communication through CRESUS. The experiment is biased towards the use of asynchronous
communication, where the control group use only one tool, i.e. email, and the experimental group
use two tools, i.e. email and CRESUS. During the experiment, a log of all messages is stored in a
communal email account for each group and, in addition, CRESUS logs each message. The
information relating to the frequency and quality of communication and to organisational change
issues that is used in the analysis is derived from the logs' data. Observations are noted on an A4
writing pad.

The asynchronous communication for decision making online questionnaire is a post-test that
contains thirteen items in four major content sections: demographics, frequency of
communication, quality of communication and perception. The majority of these questions asked
for factual information, such as the person's name, and asked participants to list any positive
aspects of the experiment, any negative aspects of the experiment, and any suggested
improvements to the experiment. Another question, on participants' perception, is based on a 1-5
semantic differential scale from "strongly disagree" to "strongly agree". The information relating
to the participants' perception of attaining a shared representation of the IT infrastructure
supporting organisational change that is used in the analysis is derived from this questionnaire's
data. The online questionnaire was chosen as a data collection procedure for the study because of
the rapid turnaround in data collection and the proficiency of participants with technology.

The experimental group are administered the simulation-based communication tool online
survey. The survey contains thirty-six items. It was created specifically for this research by this
author and contains items from several instruments [34-36]. The major content sections of this
survey are demographic details, perceived usefulness, usability heuristics, user interface
satisfaction, screen layout, learning and project-specific questions. The perceived usefulness
section identifies details on the frequency and quality of communication, shared understanding of
how IT can support organisational changes, and whether the web service simulation of the IT
architecture supports the organisational change faithfully. The project-specific questions identify
details on how understandable the scenario is and whether components of the simulation-based
communication tool are understandable.

The final step involves a debriefing on the results and validation by the participants. The results
are analysed based on a paired samples t-test using SPSS.
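For reference, the paired samples t statistic that SPSS reports is the mean of the pairwise differences divided by its standard error; the following minimal sketch computes it directly (the class name and example data are for illustration only).

```java
/** Sketch of the paired-samples t statistic:
 *  t = mean(d) / (sd(d) / sqrt(n)), where d holds the pairwise
 *  differences between matched experimental and control scores. */
public class PairedTTestSketch {

    public static double t(double[] experimental, double[] control) {
        int n = experimental.length;
        double[] d = new double[n];
        double meanD = 0;
        for (int i = 0; i < n; i++) {
            d[i] = experimental[i] - control[i];
            meanD += d[i];
        }
        meanD /= n;
        double ss = 0;                              // sum of squared deviations
        for (double di : d) ss += (di - meanD) * (di - meanD);
        double sd = Math.sqrt(ss / (n - 1));        // sample standard deviation
        return meanD / (sd / Math.sqrt(n));
    }
}
```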
5.5. Results and data analysis
The eight academics and four IT personnel that took part in the experiment completed the pre-test
questionnaire, with results described in Table 1. The table describes the participants' role, title,
number of years of service with the National College of Ireland and the level of educational
qualification attained based on the National Qualification Framework. Academics 2, 3, 5 and 8,
with a title of lecturer grade II and whose role indicates that they are programme directors,
are assigned the role of business executive. Academics 7, 11, 10 and 12, whose titles are
lecturer grade I, lecturer grade II from the school of business, postdoctoral research fellow and
support tutor, and whose role indicates that they lecture in the school of computing, are
assigned the role of staff. The IT personnel 1, 4, 6 and 9, with the titles senior IT administrator,
two IT support personnel and IT support specialist, are assigned the role of IT architect. The
academics and IT personnel are pair matched based on their number of years of service with the
National College of Ireland and their level of educational qualification attained based on the
National Qualification Framework. The matched pairs are 2 and 3; 7 and 11; 4 and 6; 5 and
8; 10 and 12; and finally 1 and 9. Participants were randomly allocated to the experimental and
control groups as follows: participants 2, 7 and 1 to experimental group one; participants 3, 11
and 9 to control group two; participants 8, 10 and 4 to experimental group three; and participants
5, 12 and 6 to control group four.
Table 1. Matching participants based on level of service and academic qualification

Role                             Participant  Title                          Group
Programme director               2            Lecturer II                    Experimental group 1
Programme director               3            Lecturer II                    Control group 2
Programme director               8            Lecturer II                    Experimental group 3
Programme director               5            Lecturer II                    Control group 4
Lectures in school of computing  7            Lecturer I                     Experimental group 1
Lectures in school of computing  11           Post-Doctoral Research Fellow  Control group 2
Lectures in school of computing  10           Computing Support Tutor        Experimental group 3
Lectures in school of computing  12           Lecturer II                    Control group 4
IT personnel                     1            Senior IT Administrator        Experimental group 1
IT personnel                     9            IT Support Specialist          Control group 2
IT personnel                     4            IT Support                     Experimental group 3
IT personnel                     6            IT Support                     Control group 4

Participants were matched on their number of years of service with NCI (ranging from 1-2 years
to more than 10 years) and the level of education attained based on the National Qualification
Framework (honours degree, masters or PhD).
The results from the brainstorming session define the problem domain model and the approval
process for any requirements that will be identified. The problem domain model is converted into
an ontological domain model, which is used to constrain the grammar in the controlled natural
language interface and the naming conventions in the automatically generated web services. A
sample of the conceptual model and approval process that one of the groups identified related to
the concepts of Module and Software. An example of a rule was "Module requires Software",
with instance data that represents an organisational change such as "Software Development
requires Netbeans". In one group the approval process was for the executive to approve any
requirements that the staff identify. If approval is granted, then the rule goes to the IT architect
for further approval before becoming part of the ontological domain model.
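The two-stage approval process that this group defined can be sketched as a simple state machine, where a rule reaches the domain model only after both approvals. The class and method names here are hypothetical, not part of CRESUS.

```java
import java.util.HashSet;
import java.util.Set;

/** Hypothetical sketch of one group's approval process: a staff-identified
 *  rule such as "Module requires Software" enters the ontological domain
 *  model only after the executive and then the IT architect approve it. */
public class ApprovalProcessSketch {
    private final Set<String> pendingExecutive = new HashSet<>();
    private final Set<String> pendingArchitect = new HashSet<>();
    private final Set<String> domainModel = new HashSet<>();

    public void staffProposes(String rule) { pendingExecutive.add(rule); }

    public void executiveApproves(String rule) {
        if (pendingExecutive.remove(rule)) pendingArchitect.add(rule);
    }

    public void architectApproves(String rule) {
        if (pendingArchitect.remove(rule)) domainModel.add(rule);
    }

    public boolean inDomainModel(String rule) { return domainModel.contains(rule); }
}
```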
The results from the CRESUS logs and the email tool are displayed in Figures 3 to 5. A
comparative analysis of the data in Figure 3 indicates that the frequency of communication is
significantly higher in the experimental groups, which use both CRESUS and email to
communicate, than in the control groups, which use only email.
Figure 4 indicates that there are no significant differences in the quality of communication
between the experimental and control groups. However, a deeper analysis of the results clearly
shows that staff in the experimental groups (participants 7 and 10) identified the majority of
organisational changes that were successfully approved by the appropriate authority compared to
the control groups. This result may be worth further exploration through additional experiments.
Another observation was that in general the IT architect did not contribute to defining the
organisational changes. This is as expected, as the IT architect is responsible for identifying the
IT infrastructure that supports the organisational change rather than forcing the
organisational change in any particular direction. There was one exception to this observation in
the fourth control group, where the IT architect identified one organisational change. The result in
this instance is biased, as the IT architect is actually part of the real process for rolling out the IT
environment to the PCs and had knowledge that the other IT architects did not have.
Figure 3. Frequency of communication
Figure 4. Communication Quality
The comparison in Figure 5 indicates that the identification of organisational change issues is
significantly higher among participants in the control groups, which use email only, than in the
experimental groups. Though initially unexpected, upon reflection the control groups'
communication was not constrained to identifying organisational changes only; it was wider in
scope, which led to the identification of more issues.
Other results from the post-test online questionnaire indicate that there is no difference between
the experimental and control groups in the participants' perception of attaining a shared
representation of the IT infrastructure supporting organisational change. On reflection, one of the
goals of all four groups was to identify an IT system that could support the organisational
changes. As the operational metric was based on perception, it stands to reason that there will be
no difference between the groups in identifying the IT infrastructure that supports the
organisational changes. The qualitative feedback from the experimental groups indicates that
CRESUS facilitates collaboration and communication around the IT architecture, with comments
such as "Very useful tool for defining requirements and collaboration" and "Provided a new
means for communication around IT infrastructure".
Figure 5. Organisational change issues
The design of CRESUS ensures that participants focus their communication on identifying
requirements that relate to the organisational changes and on approving those changes, followed
by the automatic creation of the IT architecture to support them. This was observed with
experimental groups one and three. An observation of control group two by the experimenter,
validated by the group, indicates that they spent a substantial amount of effort defining the IT
infrastructure. This group had two suggested solutions but had not made a decision on which
solution to adopt. One of the participants in that group made the following comment: "Maybe the
contributed should be told not to focus on specific technologies too much". This comment
highlights an advantage of using CRESUS in that it allows the participants to focus on
communicating about the business and not the technology.
6. LIMITATIONS
This was the first experiment with the tool and so it was important to get early feedback and
direction on further development and experimentation. As such the experiment took place at the
National College of Ireland. One of the goals of the experiment was to demonstrate that the
architecture has the potential to scale up to the full expressivity of the controlled natural
language, and so for this experiment sentences were constrained to the simple "noun verb noun" form.
Maturation was seen as a threat to the matched-participants research design used in this
experiment. Maturation is where participants mature or change during the experiment. The
experimenter attempted to select participants that would mature at the same rate. In groups one
and two, participants 2 and 3 were both studying for a PhD; during the experiment, participant 2
completed the viva. Also in groups one and two, participants 7 and 11 both hold an honours
degree and are both completing a PhD; during the course of the experiment, participant 11
completed a viva. Survey items that capture potential maturation should be incorporated into a
pre-test.
Metrics used for the participants' perception of attaining a shared representation of the IT systems
that support the organisational change were not suitable for a matched-participants research
design, since each group communicates collaboratively to make decisions about organisational
changes and to create the IT system that supports those changes. Thus it stands to reason that
there will be no significant differences in perception. Objective metrics, rather than participants'
perceptions, should be used in further experimentation.
In group four, participant 6 is involved with the roll-out of the IT environment, and this extra
knowledge would have biased the results, in particular when identifying an IT system and
identifying organisational change solutions, issues and constraints. Survey items that capture the
participants' role in relation to the scenario should be incorporated into a pre-test and controlled
for in future experiments.
A weakness in the experimental design was that the controlled experiment was conducted in a
one-hour setting, which would not be representative of actual communication behaviours among
the employees. Further experimentation should be conducted over a longer period of time and
incorporated into the participants' actual jobs so that it is representative of their actual
communication patterns. These limitations will be addressed in further experimentation.
7. CONCLUSION
The goal of requirements elicitation is to reach a shared understanding between all parties
involved in the communication process. This often requires increased communication effort to
overcome the gap between communicating the desired requirements in a semantically consistent
and understandable manner and reflecting the potential impact of those requirements on the IT
infrastructure.
An initial study conducted among ten business executives in higher education indicates a desire
by the majority of this small group for a tool that allows them to communicate organisational
changes using natural language where these changes are automatically translated into the IT
infrastructure that supports a business process.
Building on this research, CRESUS was implemented as a collaborative communication tool that
allows stakeholders to communicate requirements from an ontological domain model, validate the
requirements, and create a simulation of the IT infrastructure that supports those requirements.
The tool was evaluated at the National College of Ireland.
Results show that the tool significantly increases the frequency of communication which is a
predictor of reaching a shared understanding between all stakeholders during requirements
elicitation. In addition, evidence from the experiment suggests that the tool increases the quality
of communication initiated by staff. Qualitative feedback from participants that use CRESUS
indicates that the tool facilitates collaboration and communication around the IT architecture with
comments such as “good for storing knowledge and facilitating collaboration” and “Provided a
new means for communication around the IT infrastructure”.
In conclusion CRESUS shows promise as a tool for collaborative requirements elicitation through
increased communication and understanding around organisational and IT infrastructural change.
8. FUTURE WORK
Web services can be orchestrated into an executable business process using the Business Process
Execution Language (BPEL). As such, future work will involve investigating CRESUS's role in
collaborative requirements elicitation for the creation of the IT infrastructure that supports a
business process.
Further evaluation is required to investigate the significance of the quality of communication
among staff, and to develop metrics for attaining a shared understanding of the IT infrastructure
that supports the organisational change, with the provision that the metrics are objective and not
based on participants' perceptions. The experiment will be conducted in different organisations
over a longer period of time such that it replicates their actual communication patterns.
Questionnaire items that capture maturation and participants' roles relating to the organisational
change scenario will be included in the pre-test.
REFERENCES
[1] Preston, D. S., Karahanna, E., & Rowe, F. (2006). Development of shared understanding between the
chief information officer and top management team in U.S. and French organizations: A
cross-cultural comparison. IEEE Transactions on Engineering Management, 53(2), 191-206. doi:
10.1109/TEM.2006.872244.
[2] Johnson, A.M., & Lederer, A.L., (2005). The effect of communication frequency and channel richness
on the convergence between chief executive and chief information officers. Journal of Management
Information Systems / Fall 2005, Vol. 22, No. 2, pp. 227-252.
[3] Lind, M. R., & Zmud, R. W. (1991). The Influence of a Convergence in Understanding between
Technology Providers and Users on Information Technology Innovativeness. Organization Science,
2(2), pp. 195-217.
[4] Sommerville, I. (2001). Software Engineering, 6th Edition. Addison-Wesley, Harlow, England.
[5] Gilvaz, A. P., & Leite, J. C. S. P. (1995). FAES: A case tool for information acquisition. In:
Proceedings of the Seventh International Workshop on Computer-Aided Software Engineering
(CASE '95), IEEE Computer Society Press, July 1995, pp. 260-269.
[6] Christel, Michael G., et al. (1993). AMORE: The Advanced Multimedia Organizer for Requirements
Elicitation. Technical Report CMU/SEI-93-TR-12.
[7] Sawyer, P., Sommerville, I., & Viller, S. (1996). PREview: Tackling the real concerns of
requirements engineering. CSEG Technical Report, Computing Department, Lancaster University.
[8] Kellner, Madachy, and Raffo, (1999), “Software Process Modeling and Simulation: Why, What,
How,” Journal of Systems and Software, Vol. 46, No. 2/3.
[9] Chandrasekaran, S., Silver, G., Miller, J., Cardoso, J. & Sheth, A. (2002) XML-based modeling and
simulation: Web service technologies and their synergy with simulation. IN: Proceedings of the 34th
Winter Simulation Conference: exploring new frontiers (WSC 2002), 8-11, December, San Diego,
California, USA. Winter Simulation Conference. pp 606-615.
[10] Stynes, P., Conlan, O., O’Sullivan, D. (2008) Towards a Simulation-based Communication Tool to
Support Semantic Business Process Management. IN: Proceedings of the Fourth International
Workshop on Semantic Business Process Management in Proceedings of Workshops held at the Fifth
European Semantic Web Conference, (ESWC08), 2nd June, Tenerife, Canary Islands, Spain.
[11] Gottesdeiner, E., Requirements by Collaboration, Addison-Wesley, 2002.
[12] Bostrom, R. P. (1989). Successful application of communication techniques to improve the systems
development process. Inform Manage 1989; 16: 279-295.
[13] Saiedian H, Dale R. Requirements engineering: making the connection between the software
developer and the customer. Inform Software Tech 2000; 42 (6): 419-428.
[14] Macaulay LA. Requirements Engineering. Springer-Verlag, London, 1996.
[15] Christel, M. G., & Kang, K. C. (1992). Issues in requirements elicitation. Technical Report
CMU/SEI-92-TR-12, ESC-TR-92-012.
[16] Coughlan, J., & Macredie, R. D. (2002). Effective communication in requirements elicitation: A
comparison of methodologies. Requirements Engineering, 7(2), 47–60.
[17] Marakas, G.M., Elam, J.J., (1998). Semantic structuring in analyst and representation of facts in
requirements analysis. Inform Syst Research; 9 (1): 37-63.
[18] VordTool, available at URL
http://www.filewatcher.com/b/ftp/ftp.gmu.edu/drine/cs_se2/CourseMaterials/SupplementaryProjectM
aterials/Supplements_TextBookChapter7_Technology-0.html.
[19] TeamWave Software Ltd. Available at URL http://www.teamwave.com/, 2001.
[20] Hassan, S., & Salim, S. S. (2004). A Tool to Support Collaborative Requirements Elicitation using
Viewpoint Approach'. In 1st International Conference on Informatics, Izmir, Turkey, September
2004, (pp. 1–4).
[21] Rogers, E.M., and Kincaid, D.L. Communication Networks. New York: Free Press, 1981.
[22] Institute of Electrical and Electronic Engineers, (1998), IEEE Recommended Practice for Software
Requirements Specifications. IEEE Standard 830-1998, Institute of Electrical and Electronic
Engineers, New York.
[23] Norbert E. Fuchs, Kaarel Kaljurand, and Tobias Kuhn. (2008). Attempto Controlled English for
Knowledge Representation. In Cristina Baroglio, Piero A. Bonatti, Jan Maluszynski, Massimo
Marchiori, Axel Polleres, and Sebastian Schaffert, editors, Reasoning Web, Fourth International
Summer School 2008, number 5224 in Lecture Notes in Computer Science, pages 104–124. Springer.
[24] Kuhn, T., (2008). Combining Semantic Wikis and Controlled Natural Language. Proceedings of the
Poster and Demonstration Session at the 7th International Semantic Web Conference (ISWC2008),
CEUR Workshop Proceedings, 2008.
[25] Kaarel Kaljurand and Norbert E. Fuchs. (2006) Bidirectional mapping between OWL DL and
Attempto Controlled English. In Fourth Workshop on Principles and Practice of Semantic Web
Reasoning, Budva, Montenegro.
[26] Echo2 Web framework. Available from http://echo.nextapp.com/site/ [last accessed on 11th
February, 2010]
[27] Document Object Model (DOM). Available from http://www.w3.org/DOM/ [last accessed on 11th
February, 2010]
[28] Resource Description Framework (RDF). Available from http://www.w3.org/RDF/ [last accessed on
11th February, 2010]
[29] Protégé. Available from http://protege.stanford.edu/ [last accessed on 11th February, 2010]
[30] Jena – A semantic web framework for Java. Available from http://jena.sourceforge.net/ [last accessed
on 11th February, 2010]
[31] OWL Web Ontology Language. Available from http://www.w3.org/TR/owl-features/ [last accessed
on 11th February, 2010]
[32] Pellet: OWL 2 Reasoner for Java. Available from http://clarkparsia.com/pellet [last accessed on 11th
February, 2010]
[33] XML Database, eXist. Available from http://exist.sourceforge.net/ [last accessed on 11th February,
2010]