International Journal of Software Engineering & Applications (IJSEA), Vol.8, No.1, January 2017
DOI: 10.5121/ijsea.2017.8106

SOFTWARE DESIGN ANALYSIS WITH DYNAMIC SYSTEM RUN-TIME ARCHITECTURE DECOMPOSITION

Lei Wu¹ and Sankalp Vinayak²
¹ Software Engineering, University of Houston-Clear Lake, Houston, Texas, U.S.A.
² Computer Science, Clear Creek Independent School District, Houston, Texas, U.S.A.
ABSTRACT
Software re-engineering involves studying the target system's design and architecture. However,
enterprise legacy software systems tend to be large and complex, making the analysis of system design
and architecture a difficult task. To address this problem, the study proposes an approach that dynamically
decomposes software architecture using run-time system information, reducing the complexity
associated with analyzing large-scale architecture artifacts. The study demonstrates that dynamic
architecture decomposition is an efficient way to limit the complexity and risk associated with re-
engineering activities for a large legacy system. This new approach divides the system into a collection of
meaningful modular parts with low coupling, high cohesion, and minimal interfaces. This division
facilitates the design analysis and incremental software re-engineering process. This paper details two
major techniques for decomposing legacy system architecture. The approach is also supported by automated
reverse engineering tools that were developed during the course of the study. The preliminary results
indicate that this novel approach is very promising.
KEYWORDS
Design analysis, reverse engineering, software architecture, dynamic analysis, system decomposition
1. INTRODUCTION
Software re-engineering involves the study of the target system’s architecture and design.
However, enterprise legacy software systems tend to be large and complex [1][2][3][32]. System
decomposition is important to limit the complexity and risk associated with the re-engineering
activities of a large legacy system [3][4][5][6].
Architecture decomposition divides the system into a collection of meaningful modular parts with
low coupling, high cohesion, and minimal interfaces, thus facilitating an incremental approach
to the progressive software re-engineering process [4][7][8][33][30][37].

To fulfill this goal, the study developed two major techniques to decompose legacy system
architecture. The approach is supported by reverse engineering tools that were developed during the
course of the study. The preliminary results indicate that this novel approach is very promising.
2. SYSTEM RUNTIME DYNAMIC ANALYSIS & VISUALIZATION
System usability is embodied in detailed atomic system functionalities, which represent the
concrete utility of the system [9][24][21][45]. The visualization and dynamic analysis technique
presented here enhances the linkage between legacy source code construction and system functionality
based on dynamic system run-time information, thereby facilitating legacy system decomposition based
on system execution information (see Figure 1).
[Figure 1: Dynamic Analysis & Visualization Process — legacy source code is instrumented by injecting probe code (a behavior-tracing facility); an execution range and test cases are designed to highlight specific system functionality and behavior; during execution, a daemon thread monitors, captures, and records the dynamic information provided by the probe instruments; the recorded data drives dynamic analysis and visualization of source code interaction and construction structure, and finally abstraction and analysis of system components and system decomposition based on the recovered system functionality construction structure.]
A modified version of the legacy source is generated by injecting probe code into the original source code.
During the execution of the legacy system with this detection code, a designed range of test cases allows
specific system functionality to be retrieved. In addition, visual effects link system behavior with the
legacy source code modules. The visualization and animation present a meaningful method to
investigate the interactions among architecture components, and the method further generates a
visual diagram reflecting the modular structure of the observed system functionality.

The system decomposition tasks were performed based on the recovered source code segments
that participated in the system functionality of interest. Visualization was implemented at
several levels, based on different granularities of abstraction.
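The paper does not list the injected probe code itself. The fragment below is a minimal sketch of what such a behavior-tracing probe might look like for a legacy C system; the PROBE_ENTER/PROBE_EXIT macros, the trace file name, and the instrumented routine are all hypothetical.

/* probe.h -- hypothetical behavior-tracing facility injected into legacy C code.
   Each probe appends one event record (timestamp, direction, module, routine)
   to a trace file that a monitoring daemon can consume concurrently. */
#include <stdio.h>
#include <time.h>

static FILE *trace_fp = NULL;

static void probe_log(const char *module, const char *routine, const char *dir)
{
    if (trace_fp == NULL)
        trace_fp = fopen("trace.log", "a");   /* opened lazily on first probe */
    if (trace_fp != NULL) {
        fprintf(trace_fp, "%ld %s %s %s\n",
                (long)clock(), dir, module, routine);
        fflush(trace_fp);                     /* keep the daemon's view current */
    }
}

/* Injected at the top of every routine and before every return. */
#define PROBE_ENTER(mod, fn) probe_log((mod), (fn), "CALL")
#define PROBE_EXIT(mod, fn)  probe_log((mod), (fn), "RET")

/* Example: an instrumented routine in Account.c */
double ac_fraction_of(double amount, double factor)
{
    PROBE_ENTER("Account", "ac_fraction_of");
    double result = amount * factor;          /* original routine body */
    PROBE_EXIT("Account", "ac_fraction_of");
    return result;
}

Recording module and routine names along with call/return direction is sufficient to reconstruct the invocation sequences and call depths used by the views described below.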
2.1 Architecture Decomposition Partition
System utility is delivered through detailed atomic system functionalities, which embody the
services of the system [10][11][12]. Static analysis alone does not
present sufficient information to study the interactions of source modules [26][27][34]. Recording
dynamic information of a program can provide sufficient knowledge about message
exchanges and modular interactions during the program execution period [40][41][19]. However,
this technique faces two major issues: (i) the overwhelming volume of tracing data [29][36] and
(ii) incomplete coverage of the code [25][13]. In order to achieve full coverage of the target code
during each experimental session, this approach focuses on a specific set of observable system
functionalities and behaviors. Therefore, the dynamic coverage contains only the relevant code
artifacts. In fact, this focus turns out to aid in the resolution of the first issue by reducing the
volume of tracing data. Based on this approach, this study has developed a reverse engineering
tool called the Dynamic-Analyzer to automate the dynamic capturing and visualization of source
code modular interactions and to generate the architectural view of target systems.
One desirable feature of the Dynamic-Analyzer is that the observed system and the analysis tool run
in parallel. Analysts are thus able to observe visualization patterns of the system behavior and
module interaction at the same time, so specific system behavior can be directly related to
the visual effects of module interactions in real time, reducing the cognitive load
required to remember and match these subjects.
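As an illustration of this parallel arrangement, the following sketch shows a daemon thread that tails the probe trace while the observed system is still running. It is a POSIX-specific sketch, not the Dynamic-Analyzer implementation, and it assumes the trace.log file written by the probe sketch above.

/* Sketch: a monitoring daemon thread that reads probe events concurrently
   with the observed system's execution. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *monitor(void *arg)
{
    (void)arg;
    FILE *fp = fopen("trace.log", "r");
    if (fp == NULL)
        return NULL;
    char line[256];
    for (;;) {
        if (fgets(line, sizeof line, fp) != NULL)
            fputs(line, stdout);   /* hand the event to the visualizer */
        else {
            clearerr(fp);          /* at end of file: wait for the probes */
            usleep(100 * 1000);    /* to append more events, then retry   */
        }
    }
    return NULL;                   /* unreachable in this sketch */
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, monitor, NULL);
    /* ... launch or attach to the instrumented legacy system here ... */
    pthread_join(tid, NULL);
    return 0;
}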
[Figure 2: Hierarchical System Function Abstraction — a tree rooted at the whole system (the legacy principal node), which decomposes into sub-systems (subordinate nodes), then into sub-system constitutional services, and finally into atomic system functionalities (associate nodes) at the leaves.]
The Dynamic-Analyzer toolset is able to produce various views that exhibit information at distinct
granularity levels, and it facilitates smooth navigation among those levels. Two types of
information can be visualized: pure interactions and statistical data. The
executable system can hence be viewed in a hierarchical manner: the whole system,
subsystems, subsystem constitutional services, the detailed atomic system functionalities of each
service, and so on (see Figure 2).
[Figure 3: Source Code Hierarchical Abstraction — two layers: the program module layer, whose program modules are program piece units (e.g., source code files or modules), and the source code entity layer, whose entities are programming units such as data structures, functions, procedures, and variables; a reference denotes a function call, usage, reference, containment, etc.]
The physical program source code can also be viewed hierarchically: a program module
layer, which comprises source code files/modules, and a source code entity layer, which comprises data
structure components, database components, and functional components (variables, functions,
procedures, routines, data, etc.) (see Figure 3). This study attempts to answer the following
question: which program artifacts realize the observed atomic system functionality provided by
the system, and how? Mapping atomic system functionalities to the code fragments that realize them, and
recovering the interaction relationships among source code artifacts, divides the whole
source code into constructional parts, revealing the structure of the legacy software [16][13]. Based
on this approach, the study developed a dynamic and visualization analysis technique to
analyze active system functionality behavior and to build up the linkage between the corresponding
code artifacts, their interaction structures, and the observed system functionalities. One of the most
important features of this technique is its ability to map between different abstraction layers. The
dynamic run-time information can be automatically captured and analyzed through Dynamic-
Analyzer, the reverse engineering software toolkit. Dynamic-Analyzer can generate a variety of
system architectural analysis views:
Routine Interaction View: This view looks inside a source code module and
demonstrates which routines (program functions or procedures) interact with each other to
contribute to the performance of a certain kind of atomic system functionality.
Figure 4: Module Interaction & Collaborative Pattern Detection Generated by Dynamic-Analyzer
Module Interaction View: This view demonstrates which source code modules work together to
carry out a certain kind of system functionality (see Figure 4). This information is used to
facilitate the system decomposition process. The left-most vertical part shows the names of the
modules; the horizontal direction represents the time sequence. A dark (red) box indicates an
invocation instance issued by the sender module; a gray (green) box shows the return of an
invocation instance by the receiver module; a dark (red) directed line shows an
outgoing message from the sender module towards the receiver module; a gray (green) directed line
represents the return of the interaction message from the receiver module back
to the sender module. The Dynamic-Analyzer automatically detects every repeated series of
collaboration instances and distills them into candidate collaboration patterns.
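The paper does not specify the detection algorithm. The sketch below shows one simple way such repetition could be found, a tandem-repeat scan over the decoded event stream, under the assumption that each invocation event has already been reduced to an integer ID; it is an illustration, not necessarily the Dynamic-Analyzer's method.

/* Sketch: distill repeated series of invocation events into candidate
   collaboration patterns by scanning for immediately repeated subsequences. */
#include <stdio.h>
#include <string.h>

#define MAX_PAT 8

/* Returns 1 if the events at [pos, pos+len) repeat immediately afterwards. */
static int tandem_repeat(const int *ev, int n, int pos, int len)
{
    if (pos + 2 * len > n)
        return 0;
    return memcmp(&ev[pos], &ev[pos + len], len * sizeof ev[0]) == 0;
}

static void report_patterns(const int *ev, int n)
{
    for (int pos = 0; pos < n; pos++)
        for (int len = 1; len <= MAX_PAT; len++)
            if (tandem_repeat(ev, n, pos, len)) {
                printf("candidate pattern at %d, length %d:", pos, len);
                for (int k = 0; k < len; k++)
                    printf(" %d", ev[pos + k]);
                printf("\n");
            }
}

int main(void)
{
    /* Event IDs stand for sender->receiver invocations decoded from the trace. */
    int events[] = { 1, 2, 3, 2, 3, 2, 3, 4 };
    report_patterns(events, (int)(sizeof events / sizeof events[0]));
    return 0;
}

Overlapping reports are expected at this stage; a later pass would merge them and rank the surviving candidates.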
Construction Structure View: To further reveal the construction structure of a particular system
functionality, more effective means of exploring the dynamic module interaction space are needed.
The approach is to visualize the dynamic information in a form that illustrates the relationships
between the different source code modules involved in the activities that generate the specific system
functionality.
Figure 5: Architectural View of System Functionality Generated by Dynamic-Analyzer
As demonstrated in Figure 4 and Figure 5, the visual representation of the comprehensive module
interaction relationships reveals a system constructional structure that implements the observed
system functionality.
Figure 6: Color Representation Scheme of Weight Variance Gradient
The invocation level: corresponds to the call depth from the sender module to the receiver module.
The link between modules: represents an invocation instance from a higher-level module to a
lower-level module.
The location of the rectangle: shows that a certain amount of module invocation activity occurs at a
specific location (invocation level and module).
The size of the rectangle: stands for the percentage of invocations the module has at the
particular level, compared with the total number of invocations that the module has across all
levels. Suppose the Full_Size rectangle has a height and width of 1 cm x 1 cm; the rectangle is
then drawn at the corresponding fraction of Full_Size.
The color of both the link and the rectangle uses a "weight" to represent the percentage of invocations the
element occupies compared with the total number of invocations of all modules. The
formulas are expressed as follows:

Weight_Link(Module_a, Module_b) = 10 * Invocations(Module_a -> Module_b) / Invocations(All Modules)

Weight_Rectangle(Module_a, Level_b) = 10 * Invocations(Module_a, Level_b) / Invocations(All Modules)
Figure 7: Run-time Partial System Architecture Visualization Generated by Dynamic-Analyzer
The color of the rectangle: reveals the module's relative external impact at a certain level, which
specifies the activity intensity degree ("weight") at each invocation level for one particular
module in comparison to the total invocations of all modules. A color scale schema is applied
to reflect the weight (see Figure 6).
The color of the link: illustrates the coupling degree of the two linked modules, reflecting
the weight of that link. For both the link color and the rectangle color, a 10-step color scale schema
symbolizes the weight variance gradient from High to Low. Figure 6 illustrates
the color representation scheme of the weight variance gradient.
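Read concretely, the two weight formulas and the 10-step scale amount to the computation sketched below; the invocation counts are illustrative stand-ins for values mined from the trace, and the bucketing helper is an assumption about how the scale is applied.

/* Sketch: compute link and rectangle "weights" from invocation counts and
   map each weight (range 0..10) onto one of the 10 color-scale steps. */
#include <stdio.h>

static double weight_link(long a_to_b, long all)
{
    return 10.0 * (double)a_to_b / (double)all;
}

static double weight_rectangle(long mod_at_level, long all)
{
    return 10.0 * (double)mod_at_level / (double)all;
}

/* Bucket a weight into one of 10 steps (0 = low intensity, 9 = high). */
static int color_step(double weight)
{
    int step = (int)weight;
    if (step < 0) step = 0;
    if (step > 9) step = 9;
    return step;
}

int main(void)
{
    long all = 5000;           /* total invocations of all modules        */
    long acct_to_trans = 730;  /* e.g. Account -> Transaction invocations */
    long acct_at_lvl2 = 410;   /* Account's invocations at level 2        */

    double wl = weight_link(acct_to_trans, all);
    double wr = weight_rectangle(acct_at_lvl2, all);
    printf("link weight %.2f -> color step %d\n", wl, color_step(wl));
    printf("rectangle weight %.2f -> color step %d\n", wr, color_step(wr));
    return 0;
}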
With the help of the visual expression provided by the Dynamic-Analyzer, comprehensive system
functionality construction structures revealing partial system architecture can be generated.
Figure 7 illustrates the modular architectural construction structure that implements one
sub-system's functionality in a case study. It exhibits the inter-relationships among all the
modules that contribute to the implementation of one specific system functionality. The integrated
diagram can be further decomposed into a set of visual representations of collaboration patterns
(see Figure 4), with a major role assigned to each module.
Figure 8: Module Contribution Comparison
Module Contribution Comparison View: This view provides a visual
comparison of any two source code modules, automatically generated by Dynamic-Analyzer (see
Figure 8). It is used to analyze the detailed effort of each module at each invocation level. The
vertical coordinate represents the number of invocations, whereas the horizontal direction
shows the invocation depths. Cardinal splines are used to diminish the sharp angles of the curve;
because smoothing the radical changes from a high value to zero can make the curve dip below the
horizontal coordinate, any drawing under the horizontal line is ignored.
Module Participation View: This view is provided by Dynamic-Analyzer to analyze the overall
participation percentage of each source code module in the observed system functionality (see
Figure 9). The vertical coordinate indicates the name of each source code module, whereas the
horizontal axis shows the scale of invocations. The histogram demonstrates the overall effort of
each module that contributes to the implementation of the observed system functionality, showing
how the contributions of the source modules differ during the execution of the specific system
functionality.
System Functionality Construction View: Normally, a mapping between fine-grained code
artifacts and system behavior causes the maintainer to become lost in the middle of the huge code
interaction space, limiting the usability of the technique and of reverse engineering tools. By
mapping particular system functionalities to different abstraction layers of program artifacts, a
more efficient high-level view of the system structure is revealed. Consequently, the
relationships among different system functionalities and the corresponding program artifacts at
various abstraction levels become discoverable. The result is later used to facilitate the legacy
system decomposition task.
Figure 9: Statistical Module Participation View Provided by Dynamic-Analyzer
Statistic Information Analysis View: For a certain kind of atomic system functionality, different
sets of code fragments at different abstraction layers are discovered by executing test cases (see
Figure 9). Dynamic-Analyzer can be used to retrieve the following statistical information: which
parts of each abstraction layer persistently participate; which parts are conditionally
involved; which parts undertake the heaviest computation; and which parts mostly contribute
to the task dispatching and management job, etc.
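A minimal sketch of how the first two of these statistics could be computed from multiple test runs follows; the run data and module names are illustrative, and the all-runs-versus-some-runs rule is an assumed reading of "persistently" versus "conditionally".

/* Sketch: classify modules as persistently or conditionally participating
   in a functionality by comparing the module sets observed across runs. */
#include <stdio.h>

#define RUNS 3
#define MODS 3

int main(void)
{
    const char *names[MODS] = { "account.c", "transaction.c", "inChart.c" };
    /* seen[r][m] = 1 if module m appeared in the trace of test run r */
    int seen[RUNS][MODS] = {
        { 1, 1, 0 },
        { 1, 1, 1 },
        { 1, 0, 1 },
    };
    for (int m = 0; m < MODS; m++) {
        int count = 0;
        for (int r = 0; r < RUNS; r++)
            count += seen[r][m];
        if (count == RUNS)
            printf("%-14s persistently participates\n", names[m]);
        else if (count > 0)
            printf("%-14s conditionally involved (%d of %d runs)\n",
                   names[m], count, RUNS);
        else
            printf("%-14s not involved\n", names[m]);
    }
    return 0;
}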
2.2 Architecture Decomposition Strategy
Recovering the architecture of a legacy system requires more than a reverse engineering toolset
that generates descriptive diagrams. Software architecture is commonly defined in terms of
components and connectors [14][15], a structure that legacy systems built on procedural
methodology lack. It is therefore necessary to abstract the high-level architecture by
identifying component-like parts and expressing it in a decomposed form. By dynamically
analyzing atomic system functionalities with software visualization, the decomposition strategy
separates source code artifacts based on their involvement in a certain type of system
functionality. Code modules that participate in more than one system functionality are split off
to form a new partition serving those functionalities. With the help of the visual dynamic
analysis technique and the automated reverse engineering toolset, the whole set of program
artifacts can be decomposed into cohesive elements that reveal the system organization, as
sketched below.
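The separation rule can be stated in a few lines of Python; this is a schematic restatement of
the strategy just described, not the toolset's code, and the partition key "shared" is an
illustrative name.

from collections import defaultdict

def decompose(participation):
    # participation: {functionality: set(modules)} from dynamic analysis
    owners = defaultdict(set)
    for functionality, modules in participation.items():
        for m in modules:
            owners[m].add(functionality)
    partitions = defaultdict(set)
    for module, functionalities in owners.items():
        if len(functionalities) == 1:
            partitions[next(iter(functionalities))].add(module)
        else:
            # modules participating in several functionalities are
            # separated into a new partition serving those functionalities
            partitions["shared"].add(module)
    return dict(partitions)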
3. STATIC MODULE DEPENDENCY ANALYSIS & VISUALIZATION
For most legacy procedural languages, the whole system can be divided into program pieces
(such as source code files) [1][22]. Each such individual program unit is treated as a single
source code module. Typically, the original developers followed some organizing principle for
their program artifacts. Dependencies among user-defined data types (UDTs) reflect the relations
between modules [18][22][9] and can further be viewed as an indicator of the associations between
the host module and the rest of the system [17][23][28]. By analyzing UDT dependencies, useful
design information about the construction of the legacy system architecture can be elicited. To
validate this approach, a case study was undertaken using a code segment from a legacy finance
software system. The source code module Account.c/h defines the following user-defined data
type:
typedef struct _ac_t {
    node_t node;          /* has to be the first member (inherit) */      ---Error.c/h
    char *symbol;         /* symbol or id */
    char *altsymbol;      /* alternative symbol (for import) */
    prec_t precision;     /* precisions for in/output */                  ---Data.c/h
    double fraction;      /* fraction of an account (factor) */
    GPtrArray *transarr;  /* GArray<trans_t> transaction list */          ---TransArray.c/h
    int flags;            /* frozen == readonly */
    period_t lifetime;    /* time period: 1st to last transaction */      ---Transaction.c/h
    GArray *rtarr;        /* GArray<rt_t>: "prepared" trans data */       ---TransArray.c/h
} ac_t;
Figure 10. Source Code Module Dependency Analysis: (i) source code of the UDT ac_t; (ii) the derived module dependency diagram
Figure 11. System Module Call Graph with Static Analysis
A UDT dependency (one UDT using other UDTs) reflects the reliance of its host module on the
modules that define the used types. To illustrate this, the study uses the example in Figure 10.
The module dependency diagram (Figure 10 (ii)) is generated from the source code of a
user-defined data type (Figure 10 (i)). The source code module Account defines one UDT, called
ac_t, which uses four other UDTs defined in four other modules, namely Error, Data, Transaction,
and TransArray. These relationships reflect the dependency between the Account module and the
other four modules. Module pairs involved in such a relationship are assigned a higher coupling
value than pairs without it, and this assignment of coupling degrees can be applied iteratively
over all module pairs. The module dependency relationship can therefore be viewed as an indicator
for dividing the whole system source code into organically integrated parts; a minimal sketch of
the coupling assignment follows.
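The sketch below restates the ac_t example from Figure 10 as data; the dictionaries are a
transcription of the figure, not generated output, and the counting scheme is one plausible way
to realize the coupling assignment described above.

from collections import Counter

def udt_coupling(udt_defs, udt_uses):
    # udt_defs: {udt: module that defines it}
    # udt_uses: {udt: set of UDTs referenced by its members}
    coupling = Counter()
    for udt, used in udt_uses.items():
        host = udt_defs[udt]
        for u in used:
            dep = udt_defs.get(u)
            if dep and dep != host:
                # each cross-module reference raises the pair's coupling
                coupling[tuple(sorted((host, dep)))] += 1
    return coupling

# restating Figure 10: ac_t in Account uses UDTs from four other modules
defs = {"ac_t": "Account", "node_t": "Error", "prec_t": "Data",
        "period_t": "Transaction", "trans_t": "TransArray", "rt_t": "TransArray"}
uses = {"ac_t": {"node_t", "prec_t", "period_t", "trans_t", "rt_t"}}
print(udt_coupling(defs, uses))
# Counter({('Account', 'TransArray'): 2, ('Account', 'Error'): 1, ...})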
Fine-grained Detail Code Entity Exploring: Traditional reverse engineering tools can extract
useful information from deep inside legacy source code, thereby easing the system decomposition
task. This source code information includes the abstract syntax tree (AST) produced by parsing;
the data-flow diagram; and the routine, variable, and data structure reference graphs, among
others. In the current study, the reverse engineering tool Source-Navigator [12][13] was used to
parse the legacy source code and generate various forms of intermediate information.
The borders of the constituent parts coincide with the borders of the module dependency
relationships; accordingly, the whole system can be decomposed into parts along those
relationships. On this basis, the study devised a second system decomposition approach built on
source code module dependency analysis. Figures 11 and 12 illustrate an example of decomposing
the legacy finance system with this approach. The first diagram, a call graph (Figure 11), shows
the relationships among all source modules. From it, it is difficult to gain a meaningful insight
into how the system architecture is structured and how the source modules are organized as
components to constitute system functions. This type of call graph quickly becomes too
overwhelming to comprehend: it is hard to distinguish which modules exhibit higher coupling and
which do not, which in turn makes it difficult to decompose the whole system based on coupling
and cohesion analysis. The second diagram (Figure 12) is constructed from module dependency
analysis. It further divides the whole system according to the module dependency relationships,
represented in the diagram by lines. The result is more understandable and more useful for
further decomposition of the system architecture, even though it is built from the same static
information in the source code. One simple way to derive such candidate groupings is sketched
below.
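The sketch uses a union-find pass over dependency edges, under the assumption that weakly coupled
edges have already been filtered out by a coupling threshold; the subsystem labels in Figure 12
were assigned by inspection, not by this code.

def group_modules(edges):
    # edges: (module_a, module_b) dependency pairs that survived a
    # coupling threshold; without filtering, everything collapses
    # into a single group
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for a, b in edges:
        union(a, b)
    from collections import defaultdict
    groups = defaultdict(set)
    for module in parent:
        groups[find(module)].add(module)
    return list(groups.values())

# e.g. group_modules([("account", "transaction"), ("mainwin", "menubar")])
# -> [{"account", "transaction"}, {"mainwin", "menubar"}]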
11. International Journal of Software Engineering & Applications (IJSEA), Vol.8, No.1, January 2017
69
(Figure 12 shows each source module annotated with its size, grouped into four subsystems: the
Configuration Subsystem, the Kernel Subsystem, the Utility Subsystem, and the Interface
Subsystem.)
Figure 12: Architecture Decomposition with Module Dependency Analysis
4. DISCUSSION AND RELATED WORK
One of the key research issues in system design analysis and architecture decomposition is
identifying modular program features from source code. This identification can be accomplished
by viewing existing systems in a component-oriented manner [40][17], which forms an active
research direction in legacy system reverse engineering [43]. In the literature, many modeling
techniques have been proposed to discover the modular architectural structure hidden inside
legacy source code [42][44][38]. Some of these techniques can be partially automated [39],
thereby greatly reducing the time needed to construct a decomposition model from legacy code.
Similarity Clustering: The similarity clustering approach compares pairs of entities by their
direct relationships in order to determine whether they belong to the same atomic component
[16][10]. This technique groups base entities (subprograms, user-defined types, and global
variables) according to the proportion of common features (the entities they access, their names,
the file where they are defined, etc.).
The intuition is that if these features reflect the correct direct and indirect relationships
between the entities, then entities with the most similar relationships should belong to the same
atomic component [16][11][6]. The key issue in applying this technique is determining the
similarity of program entities with respect to a given aspect. Subprograms are clustered into
modules based on similarity metrics. Since the two most similar groups are combined in each
iteration, the order of combination can be represented as a binary tree, in which the leaves are
the initial groups and the inner nodes are combinations of groups. The farther a combination is
from the root of the tree, the higher its degree of similarity. This procedure is called
hierarchical clustering; see the following algorithm:
Place each entity in a group by itself;
Repeat
    Identify the most similar groups Si and Sj;
    Combine Si and Sj;
    Add a sub-tree with children Si and Sj to the clustering tree;
Until the existing groups are satisfactory or only one group is left;
In each iteration, the most similar groups are combined according to a similarity metric. Many
aspects can be considered when comparing the similarity of two program entities, such as use of
the same user-defined data type or access to the same files outside of the program [18][31][35].
A minimal sketch of this procedure follows.
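The sketch uses Jaccard similarity over feature sets (one reading of the "proportion of common
features"); merging feature sets by union and the stopping threshold are simplifying assumptions,
not part of the cited approach.

def jaccard(a, b):
    # proportion of common features between two feature sets
    return len(a & b) / len(a | b) if a | b else 0.0

def hierarchical_clustering(features, threshold=0.0):
    # features: {entity: set of features} -- accessed entities, names,
    # defining file, etc.; returns clustering trees as nested tuples
    groups = {name: frozenset(f) for name, f in features.items()}
    trees = {name: name for name in groups}
    while len(groups) > 1:
        pair, best = None, -1.0
        names = list(groups)
        for i, x in enumerate(names):
            for y in names[i + 1:]:
                s = jaccard(groups[x], groups[y])
                if s > best:
                    pair, best = (x, y), s
        if best < threshold:
            break                      # existing groups are satisfactory
        a, b = pair
        key = f"({a}+{b})"
        groups[key] = groups.pop(a) | groups.pop(b)   # union of features
        trees[key] = (trees.pop(a), trees.pop(b))     # grow the binary tree
    return list(trees.values())

# e.g. hierarchical_clustering({"f": {"ac_t", "io"}, "g": {"ac_t"},
#                               "h": {"gui"}}, threshold=0.1)
# -> ["h", ("f", "g")] : f and g share ac_t, h stays on its own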
Dominance Analysis: Cimitile et al. proposed applying dominance analysis to call graphs to
identify candidates for system architecture modules [6][8]. A node N is said to dominate another
node M in a directed graph G if every path from the root of G to M contains N. If N dominates M
and every other dominator of M also dominates N, then N is called the immediate (or direct)
dominator of M. The dominance relationship can be represented as a dominance tree in which a
node's parent is its immediate dominator.
In their approach, cycles (i.e., strongly connected components) are collapsed before applying
dominance analysis, which is then used to attach additional entities to the detected components.
The algorithm involves the following basic steps, sketched in code after the list:
1. All members of an atomic component are collapsed to a single node (this step is
denoted Collapse);
2. Dominance analysis is applied to the collapsed graph;
3. In the dominance tree, each component C absorbs its (transitively) dominated
subprograms that are not dominated by any other component dominated by C.
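The following is a compact sketch of dominator computation on a call graph whose cycles have
already been collapsed (the Collapse step); it is the textbook iterative data-flow formulation,
not Cimitile et al.'s original implementation.

def dominators(graph, root):
    # graph: {node: set of callees}; returns {node: set of dominators}
    nodes = set(graph) | {s for succ in graph.values() for s in succ}
    preds = {n: set() for n in nodes}
    for n, succ in graph.items():
        for s in succ:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}
    dom[root] = {root}
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for n in nodes - {root}:
            new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                         if preds[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def immediate_dominator(dom, n):
    # n's parent in the dominance tree: the strict dominator of n that
    # is itself dominated by every other strict dominator of n
    strict = dom[n] - {n}
    for d in strict:
        if all(e in dom[d] for e in strict):
            return d

# e.g. dominators({"root": {"a", "b"}, "a": {"c"}, "b": {"c"}}, "root")
# gives dom["c"] == {"root", "c"}: only the root dominates c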
Concept Analysis: Concept analysis has been applied as an automated technique for analyzing the
modular structure of legacy software [39][15]. It is a mathematical technique for identifying
groupings of items that share common features, and its main application here is deriving the
architectural structure of legacy software. The analysis starts from a context: a binary table
(relation) indicating the features of a given set of items. From that table, it builds up
so-called concepts, which are maximal sets of items sharing certain features. The relations among
all concepts derivable from a binary relation can be captured in a concise lattice
representation. Component identification is done by deriving a concept lattice from the code
based on data usage in source code procedures; the structure of this lattice reveals the
modularization implicit in the code, as the following sketch illustrates.
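A naive sketch of concept derivation from a binary context; real implementations use far more
efficient lattice-construction algorithms, and the procedure-to-data relation in the example is
hypothetical.

def concepts(context):
    # context: {item: set of features}, e.g. procedure -> global data
    # it uses; returns all (extent, intent) concept pairs
    all_features = frozenset().union(*map(frozenset, context.values()))
    intents = {all_features}            # top intent (often empty extent)
    for feats in context.values():
        fs = frozenset(feats)
        # keep the set of intents closed under intersection
        intents |= {fs & other for other in intents} | {fs}
    result = []
    for intent in sorted(intents, key=len, reverse=True):
        extent = frozenset(i for i, f in context.items() if intent <= f)
        result.append((extent, intent))
    return result

# e.g. for {"openAcct": {"ac_t"}, "addTrans": {"ac_t", "trans_t"},
#           "drawChart": {"chart_t"}}
# one concept is ({"openAcct", "addTrans"}, {"ac_t"}): both procedures
# share ac_t, suggesting they belong to the same candidate module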
Concept analysis has a sound mathematical foundation and offers insight into the relationships
among system components, making it an interesting technique for atomic component detection. On
the other hand, applying concept analysis to larger systems is time-consuming, which remains a
major barrier to using the technique in object modeling tasks.
5. CONCLUSION AND FUTURE WORK
Software re-engineering involves studying a target system's architecture [4][7][21]. However,
enterprise legacy software systems tend to be large and complex, so the analysis of the system
architecture becomes a difficult task [10][15][40]. To address this problem, the current study
proposes an approach that decomposes the software architecture to reduce the complexity
associated with analyzing large-scale architecture artifacts.
The study has demonstrated that architecture decomposition is an efficient way to limit the
complexity and risk associated with the re-engineering activities of a large legacy system. It
divides the system into a collection of meaningful modular parts with low coupling, high
cohesion, and a minimal interface, thus supporting an incremental, progressive software
re-engineering process [7][5][9][12]. The architecture decomposition focuses on how to split the
legacy system into parts, enabling a divide-and-conquer approach in the subsequent
re-architecting and incremental re-engineering stages [15][30][33]. This paper presented two
techniques developed to carry out legacy system architecture decomposition: visualization and
dynamic analysis of the system architecture, and module dependency analysis, which utilize
run-time dynamic information and static information respectively. The two techniques are
constructed around different emphases in system analysis. The approach is supported by the
automated reverse engineering tool Dynamic-Analyzer developed during the study. The preliminary
experiments demonstrate that this approach yields promising results.
REFERENCES
[1] F. Balmas and Harald Wertz, “Identifying information needs for program understanding: An Iterative
Approach,” Proceedings of 7th International Conference on Reverse Engineering for Information
Systems, Lyon (France) July 2001.
[2] L. Bass, P. Clements, R. Kazman, Software Architecture in Practice (2nd edition), Addison-Wesley
2003.
[3] J. Bergey, D. Smith, N. Weiderman, and S. Wood. “Options analysis for reengineering (OAR): Issues
and conceptual approach,” Technical Note, Carnegie Mellon Software Engineering Institute
CMU/SEI, TN, 2000.
[4] A. Bianchi, D. Caivano, G. Visaggio, “Method and process for iterative reengineering data in
legacy system,” Proc. IEEE Working Conference on Reverse Engineering, November 2000.
[5] Liam O’Brien, Christoph Stoermer, Chris Verhoef, “Software architecture reconstruction:
practice needs and current approaches,” Carnegie Mellon Software Engineering Institute, Technical
Report CMU/SEI-TR-024, 2002.
[6] G. Canfora, A. Cimitile, A. De Lucia, G.A. Di Lucca, “Decomposing legacy systems into objects:
An eclectic approach,” Information and Software Technology, vol. 43, no. 6, 2001, pp. 401-412.
[7] A. Eden, R. Kazman, “Architecture, design, and implementation,” Proceedings of the 25th
International Conference on Software Engineering (ICSE 25), (Portland, OR), pp. 149-159, May
2003.
[8] J. Eloff, “Software restructuring: implementing a code abstraction transformation,” ACM
International Conference Proceedings of SAICSIT 2002.
[9] D. Jackson and M. Rinard. “Software analysis: A roadmap,” in The Future of Software Engineering,
Anthony Finkelstein (Ed.), ACM Press 2000.
[10] R. Koschke, “Atomic architectural component recovery for program understanding and evolution,” In
Proceedings of the International Conference on Software Maintenance, Montréal, Canada, October
2002.
[11] T. Kuipers and L. Moonen. “Types and concept analysis for legacy systems.” In Proceedings of the
International Workshop on Programming Comprehension (IWPC 2000). IEEE Computer Society,
June 2000.
[12] A. De Lucia, G.A. Di Lucca, G. Canfora, A. Cimitile, “Decomposing legacy programs: A first step
towards migrating to client-server platforms,” The Journal of Systems and Software, vol. 54, 2000,
pp. 99-110.
[13] M. Ludger, A. Giesl, J. Martin. “Dynamic component program visualisation.” In Proceedings of the
9th Working Conference for Reverse Engineering, Richmond, Virginia, October 2002.
[14] I. Michiels, D. Deridder, H. Tromp and A. Zaidman, “Identifying problems in legacy software,” Elisa
ICSM workshop, 2003
[15] D. Notkin, “Longitudinal program analysis,” ACM SIGSOFT Software Engineering Notes,
Proceedings of the 2002 ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and
Engineering, Volume 28, Issue 1, November 2002.
[16] K. Rainer, “Atomic architectural component recovery for program understanding and evolution,”
Ph.D Thesis, Institut für Informatik, Universität Stuttgart 2000
[17] C. Stoermer, L. O'Brien, C. Verhoef, “Moving towards quality attribute driven software
architecture reconstruction,” Working Conference on Reverse Engineering, Victoria, BC, Canada,
November 13-16, 2003.
[18] A. van Deursen and L. Moonen. “Exploring legacy systems using types”. In Proceedings 7th Working
Conference on Reverse Engineering, (WCRE'2000), pages 32-41. IEEE Computer Society, 2000.
[19] M. Fowler, Refactoring - Improving the Design of Existing Code. Addison-Wesley Professional,
2000.
[20] E. J. Chikofsky and J. H. Cross II, “Reverse engineering and design recovery: A taxonomy,” IEEE
Software, vol. 7, no. 1, pp. 13-17, 1990.
[21] T. M. Pigoski and A. April, Software engineering body of knowledge. IEEE Computer Society, 2004,
pp. 6.1-6.16.
[22] M. Cohn, Succeeding with agile: Software development using scrum, K. Gettman, Ed. Addison-
Wesley, 2009.
[23] E. Murphy-Hill and A. P. Black, “Refactoring tools: Fitness for purpose,” IEEE Software, vol. 25,
Sep. 2008, pp. 38-44.
[24] G. C. Murphy, M. Kersten, and L. Findlater, “How are java software developers using the eclipse
ide?” IEEE Software, vol. 23, Jul. 2006, pp. 76-83.
[25] P. Weissgerber and S. Diehl, “Are refactorings less error-prone than other changes?” in Proceedings
of the International Workshop on Mining Software Repositories, 2006, pp. 112-118.
[26] T. Eisenbarth, R. Koschke, and D. Simon, “Locating features in source code,” IEEE Transactions on
Software Engineering, vol. 29, no. 3, 2003, pp.210-224.
[27] D. L. Parnas, “Software aging,” in International Conference on Software Engineering, 1994, pp. 279-
287.
[28] A. Telea and L. Voinea, “Case study: Visual analytics in software product assessments,” in
Proceedings of the International Workshop on Visualizing Software for Understanding and Analysis,
2009, pp. 65-72.
[29] F. Bourquin and R. K. Keller, “High-impact refactoring based on architecture violations,” in
Proceedings of the European Conference on Software Maintenance and Reengineering, 2007, pp.
149-158.
[30] S. Demeyer, S. Ducasse, and O. Nierstrasz, Object oriented reengineering patterns. Morgan
Kaufmann Publishers Inc., 2002.
[31] F. Simon, F. Steinbruckner, and C. Lewerentz, “Metrics based refactoring,” in Proceedings of the
European Conference on Software Maintenance and Reengineering, 2001, pp. 30-38.
[32] S. Roock and M. Lippert, Refactoring in large software projects: Performing complex restructurings
successfully. John Wiley & Sons, Inc., 2006.
[33] M. Lanza and S. Ducasse, “Polymetric views—a lightweight visual approach to reverse engineering,”
IEEE Transactions on Software Engineering, vol. 29, no. 9, 2003, pp. 782-795.
[34] R. Wettel and M. Lanza, “Visually localizing design problems with disharmony maps,” in
Proceedings of the Symposium on Software Visualization, 2008, pp. 155-164.
[35] D. Holten, “Hierarchical edge bundles: Visualization of adjacency relations in hierarchical data,”
IEEE Transactions on Visualization and Computer Graphics, vol. 12, 2006, pp. 741-748.
[36] J. Bohnet and J. Dollner, “Monitoring code quality and development activity by software maps,” in
Proceedings of the International Workshop on Managing Technical Debt, 2011.
[37] C. Lewerentz, F. Simon, and F. Steinbruckner, “Crococosmos,” in Graph Drawing, ser. Lecture Notes
in Computer Science. Springer, 2002, vol.2265, pp. 72-76.
[38] D. Ratiu, S. Ducasse, T. Gîrba, and R. Marinescu, “Using history information to improve
design flaws detection,” in Proceedings of the Euromicro Working Conference on Software
Maintenance and Reengineering, 2004, pp. 223-232.
[39] R. W. Schwanke and S. J. Hanson, “Using neural networks to modularize software,” Machine
Learning, vol. 15, pp. 137-168, May 1994.
[40] R. N. M. Watson, J. Woodruff, P. G. Neumann, et al., “CHERI: A hybrid capability-system
architecture for scalable software compartmentalization,” 2015 IEEE Symposium on Security and
Privacy, IEEE, 2015, pp. 20-37.
[41] E. Murphy-Hill and A. P. Black, “An interactive ambient visualization for code smells,” in
Proceedings of the International Symposium on Software Visualization, 2010, pp. 5-14.
[42] R. Chitchyan, A. Rashid, P. Sawyer, A. Garcia, M. P. Alarcon, J. Bakker, and A. Jackson,
“Survey of aspect-oriented analysis and design approaches,” 2015.
[43] N. Tsantalis and A. Chatzigeorgiou, “Identification of refactoring opportunities introducing
polymorphism,” Journal of Systems and Software, vol. 83, pp. 391-404, Mar. 2010.
[44] S. Hayashi, M. Saeki, and M. Kurihara, “Supporting refactoring activities using histories of program
modification,” IEICE Transactions on Information and Systems, vol. E89-D, Apr. 2006, pp. 1403-
1412.
[45] L. Bass, P. Clements, and R. Kazman, Software Architecture in Practice. Addison-Wesley, 2007.