This paper presents a simplified form of UML state diagrams for modeling agent mobility. Mobile agent technology has gained considerable importance. The notations used to model agent mobility focus on capturing agent creation, mobility paths, and the current agent location. In this paper, we demonstrate how simplified UML 2.0 state and activity diagrams can be used for modeling mobile agent applications. The paper concludes by assessing the appropriateness of the presented approach for modeling agent mobility with UML state diagrams as well as with sequence diagrams of the mobile agent system.
UML2SAN: Toward A New Software Performance Engineering Approach (ijseajournal)
Software Performance Engineering (SPE) has recently come to be regarded as an important concern in the software development process. It consists in evaluating the performance of a system during the design phase. We have recently proposed a methodology to generate a Stochastic Automata Network (SAN) model from a UML model, and thereby obtain performance predictions from UML specifications. In this paper, we extend our idea to cover more complex UML models, taking advantage of the modularity of SAN in modeling large systems. A formal description of the generation process is presented. The new extension gives rise to a full-fledged SPE approach that we call UML2SAN.
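The payoff of a SAN-style performance model is typically a stationary distribution over system states, from which throughput and utilization figures follow. As a toy illustration only (this is not the UML2SAN generation procedure, and the states and rates below are invented for the example), the following sketch solves a small continuous-time Markov chain for its steady state:

```python
# Toy illustration (not the UML2SAN tool itself): steady-state analysis of a
# small continuous-time Markov chain, the kind of model a SAN compactly encodes.
# States: 0 = idle, 1 = busy, 2 = failed; all rates are invented assumptions.

def steady_state(Q, iters=20000, dt=1e-3):
    """Approximate the stationary distribution of generator matrix Q by
    power iteration on the discretized transition matrix P = I + dt*Q."""
    n = len(Q)
    pi = [1.0 / n] * n
    for _ in range(iters):
        nxt = [sum(pi[i] * (dt * Q[i][j] + (1.0 if i == j else 0.0))
                   for i in range(n)) for j in range(n)]
        s = sum(nxt)
        pi = [x / s for x in nxt]           # renormalize against drift
    return pi

# Generator matrix: rows sum to zero; off-diagonal entries are transition rates.
Q = [[-2.0,  2.0, 0.0],   # idle -> busy at rate 2
     [ 3.0, -3.5, 0.5],   # busy -> idle at 3, busy -> failed at 0.5
     [ 1.0,  0.0, -1.0]]  # failed -> idle (repair) at rate 1

pi = steady_state(Q)
print([round(p, 3) for p in pi])  # long-run fraction of time in each state
```

The real strength of SANs, which this toy omits, is that the global generator is never built explicitly: it is expressed as a Kronecker combination of small per-automaton matrices, keeping large models tractable.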
Distributed Graphical User Interfaces to Class Diagram: Reverse Engineering A... (ijseajournal)
The graphical user interfaces of software programs are used by researchers in the software engineering field to measure functionality, usability, durability, accessibility, and performance. This paper describes a reverse engineering approach that transforms captured images of distributed GUIs into a class diagram. The processed distributed GUIs come from different, separate client computers. From the distributed GUIs, the interfaces are captured as images; attributes and functions are extracted and processed through a pattern recognition mechanism, then stored in several temporary tables, one for each client's graphical user interface. These tables are analyzed and merged into a single normalized table, eliminating any attribute redundancies. Finally, the normalized integrated table is used to create a class diagram.
The Application of Function Models In Software Design: A Survey Within the So... (CSCJournals)
Numerous function modelling approaches exist for software design. However, there is little empirical evidence on how these approaches are used in the early stages of software design. This article presents the results of an online survey on the application of function models in the academic and industrial software development community. The results show that more than 90% of the 75 respondents agreed with the statement that software projects that use function modelling techniques have a higher chance of success than other projects. UML is the most widely accepted and used modelling approach among the respondents, but only a handful of UML diagrams appear to be prominently addressed during the early software design stages. Asked for reasons for selecting or rejecting UML models, the majority of respondents mentioned using function models to understand software requirements and communicate these with clients and technical teams, whereas lack of familiarity, the time-consuming nature of some models, and data redundancy are widely mentioned reasons for not or seldom using certain models. The study also shows a strong relationship between model usage and respondents' professions. We conclude that improvements are required to ensure the benefits of the various available models, and the links between the models, can be fully exploited to support individual designers, to improve communication and collaboration, and to increase project success. A short discussion on the chosen solution direction (a simplified function modelling approach) closes the paper.
Model-Driven Generation of MVC2 Web Applications: From Models to Code (IJEACS)
Computer systems engineering is based, increasingly, on models. These models make it possible to describe the systems under development and their environment at different abstraction levels. These abstractions allow us to design applications independently of target platforms. For a long time, models merely served as an aid to human users, who manually developed the final code of computer applications. The Model-Driven Engineering (MDE) approach consists of programming at the level of models, represented as instances of a meta-model, and using them to generate the final application code. MDA (Model-Driven Architecture) is a typical model-driven engineering approach to application design. MDA is based on the UML standard to define models and on the meta-modeling environment MOF [1] for model-level programming and code generation. The code generation operation is the subject of this paper. Thus, in this work, we explain the code generation of MVC2 Web applications using a model-to-model (M2M) transformation, written in the ATL transformation language, followed by a model-to-text (M2T) transformation. To implement the latter, we use the Acceleo generator, a template-based code generation language. In the M2T transformation, we use the Struts2 PSM model already produced by the M2M transformation as the input model of the Acceleo generator. The transformation is validated through a case study. The main goal of this paper is to achieve end-to-end code generation.
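The core M2T idea is simple: walk a platform-specific model and emit source text from templates. As a minimal sketch only (in Python rather than Acceleo, with a hypothetical toy model that is not the paper's actual Struts2 PSM metamodel):

```python
# Minimal sketch of model-to-text (M2T) generation in the spirit of Acceleo.
# The model structure below is a hypothetical stand-in, not the paper's
# Struts2 PSM metamodel: one action class with typed fields.

PSM = {
    "name": "UserAction",
    "fields": [("login", "String"), ("age", "int")],
}

def generate_java(model):
    """Emit a Java class skeleton (private fields plus getters) by
    traversing the model and filling in template fragments."""
    lines = [f"public class {model['name']} {{"]
    for fname, ftype in model["fields"]:
        lines.append(f"    private {ftype} {fname};")
    for fname, ftype in model["fields"]:
        getter = "get" + fname[0].upper() + fname[1:]
        lines.append(f"    public {ftype} {getter}() {{ return {fname}; }}")
    lines.append("}")
    return "\n".join(lines)

code = generate_java(PSM)
print(code)
```

A real Acceleo template plays the same role declaratively: the traversal is expressed as template iteration over model elements rather than explicit loops.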
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
In the last few years, user modeling has become an important research area in Human-Computer Interaction. A large amount of research has been conducted in this field, covering a range of approaches to user modeling. In this paper, we provide an overview of the field of user modeling and describe the different user models, namely the GOMS family of models, cognitive architectures, grammar-based models, and application-specific models. We discuss a few examples of user models in each category. This paper also discusses the future challenges of this research area.
A Framework for Performance Analysis of Software Architectural Styles (ijfcstjournal)
A growing, executable system architecture plays a significant role in the successful production of large, distributed systems. Assessing the effect of different decisions in architecture design can decrease the time and cost of software production, especially when these decisions concern non-functional properties of the system. Performance is a non-functional property that relates to the timing behaviour of a system. In this paper we propose an approach for modelling and analysing performance at the architecture level. To do this, we follow a general process requiring two formal notations for specifying the architecture and performance models of a system. We show how Stochastic Process Algebra (SPA), in the form of the PEPA language, can be used for performance modelling and analysis of software architectures modelled using a Graph Transformation System (GTS). To make the architecture model amenable to performance analysis, an equivalent PEPA model is constructed through transformation. The transformed performance model of the architecture has been analysed with the PEPA toolkit for properties such as throughput, sensitivity, response time, and utilisation rate. The analysis results are explained with regard to a realistic case study.
Our research aims to propose a global approach for the specification, design, and verification of context-aware Human-Computer Interfaces (HCI). It is a Model-Based Design (MBD) approach. The methodology describes the ubiquitous environment with ontologies; OWL is the standard used for this purpose. The specification and modeling of human-computer interaction are based on Petri nets (PN). This raises the question of representing Petri nets in XML, for which we use the PNML modeling standard. In this paper, we propose an extension of this standard for the specification, generation, and verification of HCIs. This extension is a methodological approach to constructing PNML from Petri nets. The design principle uses the composition of elementary Petri net structures, as modular PNML. The objective is to obtain a valid interface through verification of the properties of the elementary Petri nets represented in PNML.
WiSANCloud: a set of UML-based specifications for the integration of Wireless... (Priscill Orue Esquivel)
Given the current trend of combining the advantages of Wireless Sensor and Actor Networks (WSANs) with Cloud Computing technology, this work proposes a set of specifications, based on the Unified Modeling Language (UML), to provide a general framework for designing the integration of these components. One of the keys to the integration is the architecture of the WSAN, due to its structural relationship with the Cloud in the definition of the combination. Regarding the standard applied in the integration, UML and its derivative, the Systems Modeling Language (SysML), are proposed by the Object Management Group (OMG) to deal with cloud applications; this marks the starting point for designing specifications for WSAN-Cloud integration. Given the current state of UML tools for analysis and design, several aspects must be taken into account in order to define the integration process.
An Implementation on Effective Robot Mission under Critical Environmental Co... (IJERA Editor)
Software engineering is a field of engineering concerned with designing and writing programs for computers and other electronic devices. A software engineer, or programmer, writes software (or changes existing software) and compiles it using methods that improve its quality. It is the application of engineering to the design, development, implementation, testing, and maintenance of software in a systematic way. Nowadays, robotics also plays an important role in automation, but several challenges arise when robots operate in critical environments. Motion planning and task planning are two fundamental problems in robotics that have been addressed from different perspectives. To address them, temporal-logic-based approaches that automatically generate controllers have been shown to be useful for mission-level planning of motion, surveillance, and navigation, among others. These approaches critically rely on the validity of the environment models used for synthesis. Yet simplifying assumptions are inevitable to reduce complexity and provide mission-level guarantees; no plan can guarantee results in a model of a world in which everything can go wrong. In this paper, we show how our approach, which reduces reliance on a single model by introducing a stack of models, can endow systems with incremental guarantees based on increasingly strengthened assumptions, supporting graceful degradation when the environment does not behave as expected, and progressive enhancement when it does.
The Dynamic Host Configuration Protocol (DHCP) provides a framework for passing configuration information to hosts on a TCP/IP network. Computers connected to IP networks must be configured before they can communicate with other hosts; the most essential piece of information is an IP address. DHCP eliminates this manual task for the network administrator. DHCP is based on the Bootstrap Protocol (BOOTP), adding the capability of automatic allocation of reusable network addresses and additional configuration options. DHCP captures the behavior of BOOTP relay agents, and DHCP participants can interoperate with BOOTP participants. The proposed system, Customized DHCP, aims to provide security for DHCP, which was absent in the original, and it uses UDP instead of TCP, reducing the number of fields compared to the old DHCP, which in turn decreases execution time while still providing the basic functionality of standard DHCP.
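The "automatic allocation of reusable network addresses" that DHCP adds over BOOTP amounts to lease management: an address is handed out for a fixed time and returns to the pool when the lease expires. A toy sketch of that mechanism (illustrative only; real DHCP servers follow RFC 2131 and the full DISCOVER/OFFER/REQUEST/ACK exchange):

```python
# Toy sketch of DHCP-style automatic allocation of reusable addresses.
# Illustrative only: real DHCP is specified in RFC 2131 and runs over UDP.
import time

class LeasePool:
    def __init__(self, addresses, lease_seconds=3600):
        self.free = list(addresses)      # addresses available to hand out
        self.leases = {}                 # client MAC -> (address, expiry time)
        self.lease_seconds = lease_seconds

    def request(self, mac, now=None):
        """Allocate (or renew) an address for a client, DHCPREQUEST-style."""
        now = time.time() if now is None else now
        self._reclaim(now)               # return expired leases to the pool
        if mac in self.leases:           # renewal: keep the same address
            addr, _ = self.leases[mac]
        elif self.free:
            addr = self.free.pop(0)
        else:
            return None                  # pool exhausted (DHCPNAK-like)
        self.leases[mac] = (addr, now + self.lease_seconds)
        return addr

    def _reclaim(self, now):
        for mac, (addr, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[mac]
                self.free.append(addr)   # address is reusable again

pool = LeasePool(["192.168.1.10", "192.168.1.11"], lease_seconds=10)
a = pool.request("aa:bb", now=0)    # first client gets .10
b = pool.request("cc:dd", now=0)    # second client gets .11
c = pool.request("ee:ff", now=5)    # pool exhausted -> None
d = pool.request("ee:ff", now=11)   # earlier leases expired, .10 reclaimed
```

This reclamation of expired leases is exactly what manual BOOTP-era administration could not do automatically.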
On-line Fault Diagnosis of Arbitrary Connected Networks (IDES Editor)
This paper proposes an on-line two-phase fault diagnosis algorithm for arbitrarily connected networks. The algorithm addresses a realistic fault model considering both crash and value faults in the nodes. Fault diagnosis is achieved by comparing the heartbeat messages generated by neighboring nodes and disseminating the decision made at each node. Theoretical analysis shows that the time and message complexity of the diagnosis scheme is O(n) for an n-node network. The message and time complexity are comparable to existing state-of-the-art approaches, making the scheme well suited for the design of fault-tolerant wireless communication networks.
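The two phases described above, local comparison of neighbor heartbeats followed by dissemination, can be sketched in miniature. This is a simplified stand-in for the idea, not the paper's algorithm; the topology, heartbeat payload, and fault assignments are invented for the example:

```python
# Toy sketch of comparison-based, two-phase fault diagnosis (not the paper's
# algorithm): each node checks its neighbors' heartbeats; a crashed node sends
# no heartbeat (None) and a value-faulty node sends a corrupted payload.

EXPECTED = 42  # heartbeat payload every fault-free node should produce

def local_diagnosis(node, neighbors, heartbeats):
    """Phase 1: a node classifies each of its neighbors from its heartbeat."""
    verdicts = {}
    for nb in neighbors[node]:
        hb = heartbeats.get(nb)
        if hb is None:
            verdicts[nb] = "crash"
        elif hb != EXPECTED:
            verdicts[nb] = "value-fault"
        else:
            verdicts[nb] = "fault-free"
    return verdicts

def disseminate(neighbors, heartbeats):
    """Phase 2: union the local verdicts so every node learns all faults."""
    global_view = {}
    for node in neighbors:
        if heartbeats.get(node) == EXPECTED:   # only fault-free nodes vote
            global_view.update(local_diagnosis(node, neighbors, heartbeats))
    return {n: v for n, v in global_view.items() if v != "fault-free"}

# Arbitrary connected topology: A-B, B-C, B-D, C-D
neighbors = {"A": ["B"], "B": ["A", "C", "D"],
             "C": ["B", "D"], "D": ["B", "C"]}
heartbeats = {"A": 42, "B": 42, "C": None, "D": 17}  # C crashed, D faulty
faults = disseminate(neighbors, heartbeats)
```

Note the connectivity requirement the sketch makes visible: a fault is only detected if the faulty node has at least one fault-free neighbor.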
High Capacity Robust Medical Image Data Hiding using CDCS with Integrity Chec... (IDES Editor)
When transferring electronic patient report (EPR) data along with the corresponding medical images over a network, confidentiality must be assured. This can be achieved by embedding the EPR data in the corresponding medical image itself. However, as the size of the EPR increases, the security and robustness of the embedded information become major concerns. The integrity of this embedded data must also be checked, in order to ensure that the retrieved EPR data is original and has not been manipulated by attacks. This paper proposes a high-capacity, robust, secure blind data hiding technique in the Discrete Cosine Transform (DCT) domain, together with integrity checking. A new coding technique called the Class Dependent Coding Scheme (CDCS) is used to increase the embedding capacity. High imperceptibility is achieved by adaptively selecting efficient DCT blocks. Even a slight modification of the stego image, in the embedded region as well as in the ROI (Region of Interest), can be detected at the receiver, confirming that an attack has occurred. The embedding scheme also protects the ROI, the diagnostically important part of the medical image, and generates the security key automatically. Experimental results show that the proposed scheme exhibits high imperceptibility as well as low perceptual variation in the stego images. Security and robustness have been tested against various image manipulation attacks.
Data oriented and Process oriented Strategies for Legacy Information Systems ... (IDES Editor)
Legacy information systems often rely on manual data updates for information obtained from external systems. Manual updates are cumbersome, error prone, and expensive. Legacy systems lack the interfaces to external systems that could be used for automatic updates of system data. Moreover, they also lack extensions to supplier or customer systems that are essential for creating supply chain relationships. This paper explores the data-oriented and process-oriented models of legacy systems, and discusses systems development and evolution models aimed mainly at the ongoing reengineering of legacy systems. The paper proposes simple strategies for creating interfaces to external systems for automatic data updates, and for adapting to process evolution that requires a legacy information system to extend its communications with external systems, which in turn helps create successful supply chain relationships. These strategies can reshape a legacy system so that it is reengineered into a new enterprise information system, whether the legacy system follows a data-oriented or a process-oriented model.
The Study of MOSFET Parallelism in High Frequency DC/DC Converter (IDES Editor)
The study of MOSFET parallelism and its impact on the body diode conduction loss of the switch is presented in this paper. The simulation is carried out for a synchronous rectifier buck converter (SRBC) in continuous conduction mode, with several configurations of MOSFETs connected in parallel. It is found that the body diode conduction loss is reduced by more than 35% in the four-parallel S1 with one S2 configuration, compared to the single-pair totem-pole switched SRBC circuit.
A Framework and Methods for Dynamic Scheduling of a Directed Acyclic Graph on... (IDES Editor)
The data flow model is gaining popularity as a programming paradigm for multi-core processors. Efficient scheduling of an application modeled by a Directed Acyclic Graph (DAG) is a key issue when performance matters. A DAG represents a computational solution in which nodes represent tasks to be executed and edges represent precedence constraints among the tasks. The task scheduling problem in general is NP-complete [2]. Several static scheduling heuristics have been proposed, but the major problems with static list scheduling are the inherent difficulty of exactly estimating task and edge costs in a DAG, and its inability to account for the runtime behavior of tasks. This underlines the need for dynamic scheduling of a DAG. This paper presents how dynamic scheduling of a DAG can be done in general, and proposes four simple methods for performing it. These methods have been simulated and evaluated using a representative set of DAG-structured computations from both synthetic and real problems. The performance of the proposed dynamic scheduler is found to be comparable with that of static scheduling methods. A performance comparison of the proposed dynamic scheduling methods is also carried out.
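The essence of dynamic DAG scheduling is that dispatch decisions are made as tasks complete rather than from precomputed cost estimates. A minimal sketch of that idea (this is an assumed baseline policy for illustration, not one of the paper's four methods; the tasks and costs are invented):

```python
# Toy sketch of dynamic DAG scheduling: a task becomes ready when all of its
# predecessors have finished, and a ready task is dispatched to whichever
# core frees up earliest. Not one of the paper's four proposed methods.

def dynamic_schedule(tasks, deps, cost, n_cores=2):
    """tasks: list of ids; deps: task -> set of predecessor ids;
    cost: task -> runtime. Returns task -> (start, finish)."""
    indeg = {t: len(deps[t]) for t in tasks}
    succ = {t: [] for t in tasks}
    for t in tasks:
        for p in deps[t]:
            succ[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]   # initially source tasks
    cores = [0.0] * n_cores                       # time each core is free
    finish, schedule = {}, {}
    while ready:
        t = ready.pop(0)
        core = min(range(n_cores), key=lambda c: cores[c])
        # a task starts once its core is free AND all predecessors are done
        start = max(cores[core],
                    max((finish[p] for p in deps[t]), default=0.0))
        end = start + cost[t]
        cores[core] = end
        finish[t] = end
        schedule[t] = (start, end)
        for s in succ[t]:                         # unlock successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return schedule

tasks = ["a", "b", "c", "d"]
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
cost = {"a": 2, "b": 3, "c": 1, "d": 2}
sched = dynamic_schedule(tasks, deps, cost)
```

In a genuinely dynamic scheduler the `cost` values would not be known up front: `finish[t]` would be observed at runtime, which is precisely the advantage over static list scheduling noted above.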
Prototyping a Wireless Sensor Node using FPGA for Mines Safety Application (IDES Editor)
The sensor nodes in a wireless sensor network are
normally microcontroller based and have limited
computational capability for various applications. This
paper describes the selection, specification and realization of
a wireless sensor node using a field programmable gate
array (FPGA) based architecture for early detection of
hazards (e.g. fire and gas leaks) in mine areas. FPGAs
are more efficient than microcontrollers for complex
computations, which is tested in our work by implementing
an adaptive algorithm for removing noise from the
received sensor data. Another advantage of using an FPGA
is its reconfigurability without changing the hardware itself.
The node is implemented using the Cyclone II FPGA device
on the Altera DE2 board. In this work the network comprises
four nodes, of which two are test nodes, one is a routing node
and one is a base station node. An energy-efficient MAC
protocol is tested for transmitting the data from the
test nodes to the base station node.
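The adaptive noise-removal step can be sketched with a least-mean-squares (LMS) filter, a common choice for this kind of adaptive algorithm; the paper does not specify which adaptive algorithm it implemented, so this is an assumed example.

```python
def lms_denoise(noisy, reference, n_taps=4, mu=0.01):
    """LMS adaptive noise cancellation: adapt n_taps weights so the
    filtered reference tracks the noise component of the noisy input;
    the running error is the denoised estimate."""
    w = [0.0] * n_taps
    out = []
    for i in range(len(noisy)):
        # Tap-delay line over the reference input (zero-padded at start).
        x = [reference[i - k] if i - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))
        e = noisy[i] - y                     # error = desired - filter output
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]
        out.append(e)
    return out
```

When the reference is correlated with the noise, the error output converges toward the underlying signal; FPGA implementations typically realise the same tap-delay and update loop in fixed-point pipelined hardware.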
An Area Efficient, High Performance, Low Dead Zone, Phase Frequency Detector ...IDES Editor
The phase frequency detector has been designed for a
high-frequency phase locked loop in 180 nm CMOS technology
with a 1.8 V supply voltage using the CADENCE Spectre tool. The
Virtuoso Analog Design Environment and Virtuoso Layout XL
tools of Cadence have been used to design and simulate the schematic
and layout of the phase frequency detector, respectively.
The architecture of the phase frequency detector (PFD) has been simulated
to obtain a low dead zone and low power consumption. The layout has
been designed with these tools and verified by DRC in Assura. The circuit has
been designed for low power dissipation and small area: the total
area required without pads is 0.06988 mm2 and the current
consumption is found to be 132.6 uA.
A Suite of Metrics for UML Behavioral Diagrams based on Complexity Perspectivessebastianku31
Ann Wambui King’ori, Geoffrey Muchiri Muketha and John Gichuki Ndia, Murang’a University of Technology, Kenya
Abstract URL: https://aircconline.com/abstract/ijsea/v15n2/15224ijsea01.html
Article URL: https://aircconline.com/ijsea/V15N2/15224ijsea01.pdf
Application Of UML In Real-Time Embedded Systemsijseajournal
The UML was designed as a graphical notation for use with object-oriented systems and applications.
Because of its popularity, it is now also emerging as a modeling language in the field of embedded
systems design. The UML notation is useful in capturing requirements, documenting structure,
decomposing into objects and defining relationships between objects, and it is
very useful in modelling real-time embedded systems. This paper presents the requirements and
analysis modelling of a real-time embedded system related to a control system application for platform
stabilization, using the COMET design method with UML notation. Such applications involve the design of
electromechanical systems that are controlled by multiple processors.
Development of Mobile Cloud Applications using UML IJECEIAES
With the proliferation of cloud computing technologies, smartphone users are able to use a variety of cloud computing-based mobile services such as games, education, entertainment, and social networking. Despite the popularity of such mobile cloud computing, the complicated multi-tier system configuration of the mobile application is one of the major impediments to developing mobile cloud applications. This paper presents development processes and procedures for developing mobile cloud applications by effectively applying the Unified Modeling Language (UML), a representative object-oriented modeling language. The paper is intended to enhance the development productivity of mobile cloud applications and to improve the effectiveness of communication between software developers. In addition, we used the Android mobile platform and Amazon Web Services for cloud computing in order to demonstrate the applicability of the proposed approach to systematically applying UML profiles and diagrams to cloud-based mobile applications.
An Analysis and New Methodology for Reverse Engineering of UML BehavioralINFOGAIN PUBLICATION
The emergence of the Unified Modeling Language (UML) as a standard for modeling systems has encouraged the use of automated software tools that facilitate the development process from analysis through coding. Reverse engineering has become a viable method to measure an existing system and reconstruct the necessary model from the original. The reverse engineering of behavioral models consists of extracting high-level models that help understand the behavior of existing software systems. In this paper we present ongoing work on extracting UML diagrams from object-oriented programming languages. We propose an approach for the reverse engineering of UML behavior from the analysis of execution traces produced dynamically by an object-oriented application, using formal and semi-formal techniques for modeling the dynamic behavior of a system. Our results show that this approach can produce UML behavioral diagrams in reasonable time and suggest that these diagrams are helpful in understanding the behavior of the underlying application.
TRACEABILITY OF UNIFIED MODELING LANGUAGE DIAGRAMS FROM USE CASE MAPSijseajournal
The Unified Modeling Language (UML) is a general purpose modeling language for specifying, constructing and documenting the artifacts of software systems. It is used in developing systems by combining different types of diagrams to express different views of the systems. These diagrams allow transition between requirements and implementation. The lack of traceability between the diagrams
makes any changes difficult and expensive. In this paper, we propose using the Use Case Maps (UCMs) notation, which allows a full description of the system in terms of high-level causal scenarios and helps in visualizing and understanding the system at an early stage. UCMs are used in the early stage to describe the system, and the proper UML diagrams are then generated from them. By defining a traceability relationship between UCMs and UML, we facilitate the maintenance and consistency of the UML diagrams.
General Methodology for developing UML models from UIijwscjournal
In the recent past, every discipline and every industry had its own methods of developing products, be it
software development, mechanics, construction, psychology and so on. These demarcations work fine
as long as the requirements are within one discipline. However, if a project extends over several
disciplines, interfaces have to be created and coordinated between the methods of these disciplines.
Performance is an important quality aspect of web services because of their distributed nature.
Predicting the performance of web services during early stages of software development is significant. In
industry, a prototype of these applications is developed during the analysis phase of the Software Development Life
Cycle (SDLC), and performance models are generated from UML models. Methodologies for
predicting performance from UML models are available. Hence, in this paper, a methodology for
developing a Use Case model and an Activity model from the User Interface is presented. The methodology is
illustrated with a case study on Amazon.com.
A LITERATURE SURVEY OF COGNITIVE COMPLEXITY METRICS FOR STATECHART DIAGRAMSijseajournal
Statechart diagrams have an inherent complexity which increases every time the diagrams are modified. This complexity poses problems in comprehending statechart diagrams. The study of cognitive complexity has over the years provided valuable information for the design of improved software systems. Researchers have proposed numerous metrics that have been used to measure and therefore control the complexity of software. However, there is inadequate literature on cognitive complexity metrics that can be applied to measure statechart diagrams. In this study, a literature survey is conducted to investigate whether there are any gaps in the literature. Initially, a description of UML and statechart diagrams is presented, followed by the complexities associated with statechart diagrams and finally an analysis of existing cognitive complexity metrics and metrics related to statechart diagrams. Findings indicate that metrics that employ cognitive weights to measure statechart diagrams are lacking.
Unlock Your Future as a Software Architect: Master UML and Design Software with Ease
Don't Just Code, Command! I'll Transform You from Developer to Architect with UML Expertise. Make Software Design Your Second Nature.
AI in UML: Discover the power of generative AI in automating and enhancing UML diagram creation.
Are you a software developer looking to escalate your career and transition into software architecture? Look no further. This course is designed to bridge that gap, transforming you from a skilled developer into a visionary software architect.
Coding is Just the Start: Soar to Architect Status with UML Mastery! Design, Communicate, and Lead Projects with Unmatched Clarity
Why This Course Is Essential:
As software development evolves, there's an increasing need for professionals who can see the big picture, create robust system designs, and lead teams effectively. Understanding Unified Modeling Language (UML) is crucial for anyone aspiring to become a software architect. UML serves as the common language that fosters clear communication, collaboration, and a shared understanding among team members and stakeholders.
Skyrocket Your Career from Coder to Architect: Master UML and Design Systems that Wow Stakeholders. Be the Architect Everyone Needs!
What You'll Learn:
Master UML: Grasp the essential UML diagrams and how they contribute to a project’s success.
Transitioning Skills: Practical steps to shift from a software developer to a software architect role.
Team Leadership: How to communicate effectively with stakeholders and lead a development team.
Design Principles: Master the art of designing robust and scalable software architectures.
Course Highlights:
Hands-on UML projects
Real-world case studies
A special 15-minute video on leveraging generative AI for UML diagramming
Interactive quizzes and assignments
Expert-led video lectures
Peer discussions and network opportunities
Who This Course Is For:
This course is ideal for software developers, junior architects, project managers, technical leads, software analysts, and anyone interested in progressing into software architecture roles.
Elevate Your Code to Architecture: Master UML and Become the Software Architect You're Meant to Be! Cut Through Complexity and Design Like a Pro.
Prerequisites:
Basic to intermediate programming skills
Familiarity with software development lifecycles
A willing mind and eagerness to learn
Course Outcomes:
Proficient understanding of UML
Understanding of how AI can streamline and innovate UML diagram generation
Ability to design complex software systems
Enhanced leadership and communication skills
Certificate of Completion
Enroll today to transition from coding tasks to leading architectural visions and designing software with ease!
Unlock Architect-Level Design Skills: I Fast-Track Developers into Master Architects with UML—Turn Complex Systems into Child's Play!
Documenting Software Architectural Component and Connector with UML 2editor1knowledgecuddle
Earlier versions of the UML were out of their depth for documenting software architecture constructs such as components, ports, connectors and systems. Users adopted conventions for representing architectural concepts using different groupings of UML modeling elements; they could also create profiles to tailor the UML. Changes incorporated in UML 2 have improved UML’s suitability for software architecture documentation, but UML is still out of its depth for documenting some types of architectural information. This paper describes how components and connectors can be documented using UML; in particular, documenting architectural connectors remains problematic. Keywords: component, connector
Similar to Extending UML State Diagrams to Model Agent Mobility (20)
Power System State Estimation - A ReviewIDES Editor
The aim of this article is to provide a comprehensive
survey of power system state estimation techniques. The
algorithms used for finding the system states under both static
and dynamic state estimation are briefly discussed. The
authors are of the opinion that pursuing research in the
area of state estimation with PMU and SCADA measurements
is state of the art and timely.
Artificial Intelligence Technique based Reactive Power Planning Incorporating...IDES Editor
Reactive Power Planning is a major concern in the
operation and control of power systems. This paper compares
the effectiveness of Evolutionary Programming (EP) and the
New Improved Differential Evolution (NIMDE) for solving the
Reactive Power Planning (RPP) problem incorporating
FACTS controllers such as the Static VAR Compensator (SVC),
Thyristor Controlled Series Capacitor (TCSC) and Unified
Power Flow Controller (UPFC), considering voltage stability.
With the help of the Fast Voltage Stability Index (FVSI), the critical
lines and buses are identified for installing the FACTS controllers.
The optimal settings of the control variables, namely the generator
voltages, transformer tap settings, and the allocation and parameter
settings of the SVC, TCSC and UPFC, are considered for reactive
power planning. The test and validation of the proposed
algorithm are conducted on the IEEE 30-bus system and a 72-bus
Indian system. Simulation results show that the UPFC gives
better results than the SVC and TCSC and that the FACTS controllers
reduce the system losses.
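For reference, the Fast Voltage Stability Index of a line from sending bus s to receiving bus r is commonly computed as FVSI = 4 Z² Q_r / (V_s² X); a line whose index approaches 1 is close to voltage collapse, which is how critical lines are ranked. A direct transcription (the per-unit sample values are arbitrary):

```python
def fvsi(z, x, q_r, v_s):
    """Fast Voltage Stability Index of a line:
    z   - line impedance magnitude (pu)
    x   - line reactance (pu)
    q_r - reactive power at the receiving end (pu)
    v_s - sending-end voltage magnitude (pu)
    Values approaching 1.0 indicate proximity to voltage collapse."""
    return (4.0 * z ** 2 * q_r) / (v_s ** 2 * x)

# Arbitrary illustrative line: Z = 0.1 pu, X = 0.08 pu,
# Q_r = 0.5 pu at a 1.0 pu sending voltage.
index = fvsi(0.1, 0.08, 0.5, 1.0)
```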
Design and Performance Analysis of Genetic based PID-PSS with SVC in a Multi-...IDES Editor
Damping of power system oscillations with the help
of the proposed optimal Proportional Integral Derivative Power
System Stabilizer (PID-PSS) and Static Var Compensator
(SVC)-based controllers is thoroughly investigated in this
paper. This study presents robust tuning of PID-PSS and
SVC-based controllers using Genetic Algorithms (GA) in
multi-machine power systems, considering a detailed model
of the generators (model 1.1). The effectiveness of FACTS-based
controllers in general, and SVC-based controllers in
particular, depends upon their proper location. Modal
controllability and observability are used to locate the SVC-based
controller. The performance of the proposed controllers is
compared with a conventional lead-lag power system stabilizer
(CPSS) and demonstrated on the 10-machine, 39-bus New England
test system. Simulation studies show that the proposed genetic-based
PID-PSS with SVC-based controller provides better
performance.
Optimal Placement of DG for Loss Reduction and Voltage Sag Mitigation in Radi...IDES Editor
The need to operate the power system economically
and with optimum voltage levels has led to an increase in
interest in Distributed Generation. In order to reduce the power
losses and to improve the voltage in the distribution system,
distributed generators (DGs) are connected to load buses. To
reduce the total power losses in the system, the most important
step is to identify the proper location and size of the DGs.
This paper presents a new methodology using a population-based
metaheuristic, the Artificial Bee Colony (ABC) algorithm, for
the placement of DGs in radial distribution systems to reduce
the real power losses and to improve the voltage profile and
voltage sag mitigation. Power loss reduction is an important
factor for utility companies because it is directly proportional
to company benefits in a competitive electricity market, while
reaching better power quality standards is also important
because of its vital effect on customer orientation. In this paper
an ABC algorithm is developed to reach these goals together.
In order to evaluate the sag mitigation capability of the proposed
algorithm, the voltage at voltage-sensitive buses is investigated.
An existing 20 kV network has been chosen as the test network
and the results are compared with the proposed method in the
radial distribution system.
Line Losses in the 14-Bus Power System Network using UPFCIDES Editor
Controlling power flow in modern power systems
can be made more flexible by the use of recent developments
in power electronic and computing control technology. The
Unified Power Flow Controller (UPFC) is a Flexible AC
transmission system (FACTS) device that can control all the
three system variables namely line reactance, magnitude and
phase angle difference of voltage across the line. The UPFC
provides a promising means to control power flow in modern
power systems. Essentially the performance depends on proper
control setting achievable through a power flow analysis
program. This paper presents a reliable method to meet the
requirements by developing a Newton-Raphson based load
flow calculation through which control settings of UPFC can
be determined for the pre-specified power flow between the
lines. The proposed method keeps Newton-Raphson Load Flow
(NRLF) algorithm intact and needs (little modification in the
Jacobian matrix). A MATLAB program has been developed to
calculate the control settings of UPFC and the power flow
between the lines after the load flow is converged. Case studies
have been performed on IEEE 5-bus system and 14-bus system
to show that the proposed method is effective. These studies
indicate that the method maintains the basic NRLF properties
such as fast computational speed, high degree of accuracy and
good convergence rate.
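The NRLF core that the method keeps intact can be illustrated on the smallest possible case: a two-bus system with a slack bus at 1.0 pu and one PQ load bus over a purely reactive line. This sketch uses a numerical Jacobian rather than the analytic one of a production load flow, and the loading values are illustrative.

```python
import math

def two_bus_nrlf(p_load, q_load, x_line, tol=1e-8, max_iter=20):
    """Newton-Raphson load flow for a 2-bus system (slack bus at
    1.0 pu, one PQ load bus over a lossless reactive line).
    Returns (theta, v) at the load bus."""
    def mismatch(theta, v):
        # Power injected at the load bus for V1 = 1.0 pu.
        p_inj = v * math.sin(theta) / x_line
        q_inj = (v * v - v * math.cos(theta)) / x_line
        return -p_load - p_inj, -q_load - q_inj
    theta, v = 0.0, 1.0                       # flat start
    for _ in range(max_iter):
        dp, dq = mismatch(theta, v)
        if abs(dp) < tol and abs(dq) < tol:
            break
        h = 1e-6                              # numerical Jacobian entries
        dp_t = (mismatch(theta + h, v)[0] - dp) / h
        dp_v = (mismatch(theta, v + h)[0] - dp) / h
        dq_t = (mismatch(theta + h, v)[1] - dq) / h
        dq_v = (mismatch(theta, v + h)[1] - dq) / h
        det = dp_t * dq_v - dp_v * dq_t
        # Solve J * [dtheta, dv] = [dp, dq] by Cramer's rule, then step.
        theta -= (dp * dq_v - dp_v * dq) / det
        v -= (dp_t * dq - dq_t * dp) / det
    return theta, v
```

A UPFC model would add its series/shunt injection terms to the mismatch equations, which is the "little modification in the Jacobian matrix" the paper refers to.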
Study of Structural Behaviour of Gravity Dam with Various Features of Gallery...IDES Editor
The size and shape of an opening in a dam cause
stress concentration and also stress variation in the rest of
the dam cross section. The gravity method of analysis
considers neither the size of the opening nor the elastic
properties of the dam material. Thus the objective of this study
is a Finite Element Method analysis that considers the size of the
opening, the elastic properties of the material, and the stress
distribution caused by the geometric discontinuity in the cross
section of the dam. Stress concentration inside the dam increases
with the opening, which can result in failure of the dam; hence it
is necessary to analyze large openings inside the dam. The analysis
is carried out by keeping the percentage area of the opening constant
while varying the size and shape of the opening. For this purpose
a section of the Koyna Dam is considered. The dam is defined as a
plane strain element in FEM, based on geometry and loading
conditions, so a 2D plane strain analysis is carried out. The results
obtained are then compared to find the most efficient way of
providing a large opening in a gravity dam.
Assessing Uncertainty of Pushover Analysis to Geometric ModelingIDES Editor
Pushover analysis, a popular tool for seismic
performance evaluation of existing and new structures, is a
nonlinear static procedure in which monotonically increasing
loads are applied to the structure until it is unable
to resist further load. The strengths of concrete and steel
adopted for the analysis may not match those of the real
structure once constructed, and pushover analysis results are
very sensitive to the material model, the geometric model, the
location of plastic hinges and, in general, the procedure followed
by the analyst. In this paper an attempt has been made to assess
uncertainty in pushover analysis results by considering user-defined
hinges, with the frame modeled both as a bare frame and as a frame
with the slab modeled as a rigid diaphragm. The uncertain parameters
considered include the strength of concrete, the strength of steel
and the cover to the reinforcement, which are randomly generated
and incorporated into the analysis. The results are then
compared with experimental observations.
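The random generation of the uncertain parameters amounts to straightforward Monte Carlo sampling; the distributions, means and coefficients of variation below are illustrative assumptions, not the paper's values.

```python
import random

def sample_material_parameters(n, seed=0):
    """Generate n random realizations of the uncertain pushover inputs:
    concrete strength fck (MPa), steel yield strength fy (MPa) and
    cover to reinforcement (mm). Normal distributions with assumed
    means and coefficients of variation (COV); a real study would use
    distributions fitted to test data."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        fck = rng.gauss(25.0, 25.0 * 0.15)    # assumed 15% COV
        fy = rng.gauss(415.0, 415.0 * 0.05)   # assumed 5% COV
        cover = rng.gauss(40.0, 5.0)          # mm
        samples.append((fck, fy, cover))
    return samples
```

Each sampled triple would feed one pushover run, and the spread of the resulting capacity curves quantifies the uncertainty being assessed.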
Secure Multi-Party Negotiation: An Analysis for Electronic Payments in Mobile...IDES Editor
This paper, building on auctions, presents a framework
for secure multi-party decision protocols. In addition to
implementations that are very lightweight, the main focus is
on synchronizing security features to avoid manipulation of
agreements and to reduce user traffic. The paper shows that
different auction protocols on top of the framework can be
run collaboratively using mobile devices. It presents the
negotiation between the auctioneer and the offering parties,
and this negotiation shows that multi-party security is far
better than in the existing system.
Selfish Node Isolation & Incentivation using Progressive ThresholdsIDES Editor
The problems associated with selfish nodes in
MANETs are addressed by a collaborative watchdog approach
which reduces the detection time for selfish nodes, thereby
improving the performance and accuracy of watchdogs [1].
Related works make use of credit-based systems, reputation-based
mechanisms, and pathrater and watchdog mechanisms
to detect such selfish nodes. In this paper we follow a
collaborative watchdog approach which reduces the detection
time for selfish nodes and also removes such selfish nodes
based on progressively assessed thresholds. The thresholds
give a node a chance to stop misbehaving before it is
permanently deleted from the network; the node passes through
several isolation stages before it is permanently removed.
A modified version of the AODV protocol is used here which
allows the simulation of selfish nodes in NS2 by adding or
modifying log files in the protocol.
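The progressive-threshold idea can be sketched as simple bookkeeping: each observed packet drop raises a node's misbehaviour count, and crossing successive thresholds moves the node from warned to temporarily isolated to permanently removed. The threshold values and the decay rule are illustrative, not the paper's.

```python
class ProgressiveWatchdog:
    """Collaborative-watchdog bookkeeping sketch with progressively
    assessed thresholds (illustrative values)."""
    WARN, ISOLATE, REMOVE = 5, 10, 20

    def __init__(self):
        self.drops = {}

    def report_drop(self, node):
        """A watchdog observed this node dropping a packet."""
        self.drops[node] = self.drops.get(node, 0) + 1
        return self.state(node)

    def state(self, node):
        n = self.drops.get(node, 0)
        if n >= self.REMOVE:
            return "removed"
        if n >= self.ISOLATE:
            return "isolated"
        if n >= self.WARN:
            return "warned"
        return "ok"

    def redeem(self, node):
        """A node seen forwarding again has its count decayed, giving it
        a chance to stop misbehaving before permanent removal."""
        if self.state(node) != "removed":
            self.drops[node] = max(0, self.drops.get(node, 0) - 1)
```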
Various OSI Layer Attacks and Countermeasure to Enhance the Performance of WS...IDES Editor
Wireless sensor networks are networks with non-wired
infrastructure and dynamic topology. In the OSI model each
layer is prone to various attacks, which halt the performance
of a network. In this paper several attacks on four layers of the
OSI model are discussed, and a security mechanism is described
to prevent a network-layer attack, the wormhole attack. In a
wormhole attack, two or more malicious nodes make a covert
channel which attracts traffic towards itself by advertising a
low-latency link and then starts dropping and replaying packets
in the multi-path route. This paper proposes a promiscuous-mode
method to detect and isolate the malicious node during a
wormhole attack, using the Ad-hoc On-demand Distance Vector
routing protocol (AODV) with an omnidirectional antenna. In the
implemented methodology, nodes that are not participating in
multi-path routing generate an alarm message upon detecting
delay, after which the malicious node is detected and isolated
from the network. We also notice that not only the same kinds of
attacks but also the same kinds of countermeasures can appear
in multiple layers. For example, misbehavior detection techniques
can be applied to almost all the layers we discussed.
Responsive Parameter based an AntiWorm Approach to Prevent Wormhole Attack in...IDES Editor
The recent advancements in wireless technology
and their wide-spread deployment have brought remarkable
enhancements in efficiency in the corporate, industrial
and military sectors. The increasing popularity and usage of
wireless technology is creating a need for more secure wireless
ad hoc networks. This paper presents a new protocol that
prevents wormhole attacks on an ad hoc network. A few existing
protocols detect wormhole attacks, but they require highly
specialized equipment not found on most wireless devices. This
paper develops a defense against wormhole attacks, an
anti-wormhole protocol based on responsive parameters, that
does not require a significant amount of specialized equipment,
tight clock synchronization, or GPS dependencies.
Cloud Security and Data Integrity with Client Accountability FrameworkIDES Editor
Cloud-based services provide efficient and seamless
ways for data sharing across the cloud. The fact that data
owners no longer possess their data makes it very difficult to
assure data confidentiality and to enable secure data sharing
in the cloud. Despite all its advantages, this remains a major
limitation that acts as a barrier to the wider deployment of
cloud-based services. One possible way of ensuring trust in
this respect is the introduction of an accountability feature in
the cloud computing scenario. The cloud framework requires
the promotion of distributed accountability for such a dynamic
environment [1]. Some works suggest an accountability
framework to ensure distributed accountability for data sharing
through the generation of a log of data accesses only, without
any embedded feedback mechanism for owner permission
towards data protection [2]. The proposed system is an
enhanced client accountability framework which provides an
additional client-side verification for each access, towards
enhanced security of data. The integrity of the data residing
with the cloud service provider is also maintained by secured
outsourcing. Besides, the authentication of JAR (Java Archive)
files is performed to ensure file protection and to maintain a
safer environment for data sharing. The analysis of the various
functionalities of the framework demonstrates both the
accountability and the security features in an efficient manner.
Genetic Algorithm based Layered Detection and Defense of HTTP BotnetIDES Editor
An HTTP botnet uses the HTTP protocol for the
creation of a chain of bots, thereby compromising other
systems. By using the HTTP protocol and port 80, attacks can
not only be hidden but can also pass through the firewall
without being detected. DPR-based detection leads to better
analysis of botnet attacks [3]; however, it provides only
probabilistic detection of the attacker and is also time-consuming
and error-prone. This paper proposes a genetic algorithm
based layered approach for detecting as well as preventing
botnet attacks. The paper reviews a p2p firewall
implementation which forms the basis of filtering.
Performance evaluation is done based on precision, F-value
and probability. The layered approach reduces the computation
and the overall time requirement [7]. The genetic algorithm
promises a low false positive rate.
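The evaluation metrics mentioned are standard detection-quality measures: precision is the fraction of flagged flows that really are botnet traffic, recall is the fraction of botnet traffic that gets flagged, and the F-value is their harmonic mean.

```python
def precision_recall_f(tp, fp, fn):
    """Detection quality from counts of true positives (tp),
    false positives (fp) and false negatives (fn)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_value = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_value
```

For example, a detector that flags 10 flows of which 8 are real attacks, while missing 2 real attacks, scores 0.8 on all three measures.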
Enhancing Data Storage Security in Cloud Computing Through SteganographyIDES Editor
in cloud computing data storage is a significant issue
because the entire data reside over a set of interconnected
resource pools that enables the data to be accessed through
virtual machines. It moves the application software’s and
databases to the large data centers where the management of
data is actually done. As the resource pools are situated over
various corners of the world, the management of data and
services may not be fully trustworthy. So, there are various
issues that need to be addressed with respect to the
management of data, service of data, privacy of data, security
of data etc. But the privacy and security of data is highly
challenging. To ensure privacy and security of data-at-rest in
cloud computing, we propose an effective and novel
approach to ensuring data security by hiding data within
images, following the concept of steganography. The main
objective of this paper is to prevent
data access from cloud data storage centers by unauthorized
users. The scheme stores data at the cloud data storage
centers and retrieves it when needed.
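As a minimal sketch of the general idea of hiding data within images (not necessarily the paper's specific scheme), least-significant-bit (LSB) embedding replaces the lowest bit of each cover byte, e.g. one pixel channel, with one secret bit:

```python
# Minimal LSB steganography sketch: each secret bit replaces the least
# significant bit of one cover byte (e.g. one pixel channel of an image).
# Illustrative only; the paper's actual embedding scheme may differ.

def embed(cover: bytes, secret: bytes) -> bytes:
    bits = [(b >> i) & 1 for b in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for secret")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite the lowest bit only
    return bytes(out)

def extract(stego: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

if __name__ == "__main__":
    cover = bytes(range(200))            # stand-in for raw pixel data
    stego = embed(cover, b"secret")
    assert extract(stego, 6) == b"secret"
```

Since only the lowest bit of each byte changes, the stego image stays visually indistinguishable from the cover.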
The main tasks of a Wireless Sensor Network
(WSN) are data collection from its nodes and communication
of this data to the base station (BS). The protocols used for
communication among the WSN nodes and between the WSN
and the BS must consider the nodes' resource constraints:
battery energy, computational capability, and memory. Many
WSN applications involve unattended operation of the network
over an extended period of time. In order to extend the lifetime
of a WSN, efficient routing protocols need to be adopted. The
proposed low power routing protocol based on tree-based
network structure reliably forwards the measured data towards
the BS using TDMA. An energy consumption analysis of the
WSN making use of this protocol is also carried out. It is
found that the network is energy efficient, with an average
duty cycle of 0.7% for the WSN nodes. The OMNeT++
simulation platform, together with the MiXiM framework, is
used.
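As a back-of-the-envelope illustration of why such a low duty cycle matters, the sketch below estimates the average current draw and battery lifetime of a TDMA node that wakes only in its own slot; the 20 mA active current, 5 µA sleep current, and 2400 mAh battery are assumed figures, not values from the paper.

```python
# Average current draw and lifetime of a duty-cycled TDMA sensor node.
# ASSUMED figures (not from the paper): 20 mA radio-on, 5 uA sleep, 2400 mAh.

def avg_current_ma(duty_cycle: float, active_ma: float, sleep_ma: float) -> float:
    """Time-weighted average current of a node awake only during its slot."""
    return duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma

def lifetime_days(battery_mah: float, avg_ma: float) -> float:
    """Ideal battery lifetime in days at the given average draw."""
    return battery_mah / avg_ma / 24.0

if __name__ == "__main__":
    avg = avg_current_ma(0.007, active_ma=20.0, sleep_ma=0.005)  # 0.7% duty cycle
    print(f"average draw : {avg:.3f} mA")
    print(f"lifetime     : {lifetime_days(2400.0, avg):.0f} days on 2400 mAh")
```

At a 0.7% duty cycle the average draw is dominated by the brief active slots, yet it stays well below 1 mA, which is what makes multi-year unattended operation plausible.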
Permutation of Pixels within the Shares of Visual Cryptography using KBRP for...IDES Editor
The authentication of internet-based
co-banking services should not be susceptible to high risk.
Passwords are highly vulnerable to virus attacks due to
the lack of strong embedded security methods. To make
their passwords more secure, people are generally
compelled to select jumbled character-based passwords,
which are not only less memorable but equally insecure.
The use of multiple distributed shares has been
studied as a solution to the authentication problem, with
algorithms based on pixel thresholding from image processing
and visual cryptography, in which a subset of the shares is
used to recover the original image for authentication via a
correlation function [1][2]. The main disadvantage of these
studies is the plain storage of the shares; moreover, one of
the shares is supplied to the customer, which opens the
possibility of misuse by a third party. This paper proposes a
technique for scrambling of pixels by key based random
permutation (KBRP) within the shares before
authentication is attempted. The total number of shares
created depends on the number of owners of
the account. This method minimizes customers'
uncertainty regarding the security, storage, and retrieval
of their half of the shares.
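The scrambling step can be sketched as a key-derived permutation of a share's pixels; in the sketch below a SHA-256-seeded PRNG shuffle stands in for the actual KBRP construction, which is defined in the cited work.

```python
import hashlib
import random

# Key-derived permutation of a share's pixels. NOTE: a SHA-256-seeded PRNG
# shuffle stands in here for the KBRP construction described in the paper.

def key_permutation(key: str, n: int) -> list:
    """Deterministic permutation of range(n) derived from the key."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def scramble(pixels: list, key: str) -> list:
    return [pixels[p] for p in key_permutation(key, len(pixels))]

def unscramble(scrambled: list, key: str) -> list:
    perm = key_permutation(key, len(scrambled))
    out = [None] * len(scrambled)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]
    return out

if __name__ == "__main__":
    share = list(range(16))              # one share's pixel values
    assert unscramble(scramble(share, "account-key"), "account-key") == share
```

Because the permutation is derived from the key, a stored or intercepted share reveals only scrambled pixels, and only the correct key restores the original share for authentication.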
This paper presents a trifocal Rotman Lens Design
approach. The effects of focal ratio and element spacing on
the performance of Rotman Lens are described. A three beam
prototype feeding 4 element antenna array working in L-band
has been simulated using RLD v1.7 software. Simulation
results show that the lens has a return loss of –12.4 dB at
1.8 GHz. The variation of the beam-to-array-port phase error
with changes in the focal ratio and element spacing has also
been investigated.
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral ImagesIDES Editor
Hyperspectral images can be efficiently compressed
through a linear predictive model, as for example the one
used in the SLSQ algorithm. In this paper we exploit this
predictive model on the AVIRIS images by identifying,
through an off-line approach, a common subset of bands that
are not spectrally correlated with any other bands. These
bands are not useful as prediction references for the SLSQ
3-D predictive model, so we need to encode them via other
prediction strategies that consider only spatial correlation.
We have obtained this subset by clustering the AVIRIS bands
via the clustering by compression approach. The main result
of this paper is the list of those bands of the AVIRIS
images that are unrelated to the others. The clustering trees
obtained for AVIRIS, and the relationships among bands that
they depict, are also an interesting starting point for future
research.
Microelectronic Circuit Analogous to Hydrogen Bonding Network in Active Site ...IDES Editor
A microelectronic circuit of block-elements
functionally analogous to two hydrogen bonding networks is
investigated. The hydrogen bonding networks are extracted
from the β-lactamase protein and are formed in its active site.
Each hydrogen bond of the network is described in the
equivalent electrical circuit by a three- or four-terminal
block-element.
Each block-element is coded in Matlab. Static and dynamic
analyses are performed. The resultant microelectronic circuit
analogous to the hydrogen bonding network operates as a
current mirror, a sine pulse source, a triangular pulse source,
and a signal modulator.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...IDES Editor
In this paper, a method is proposed to discriminate
real-world scenes into natural and man-made scenes of similar
depth. The global roughness of a scene image varies as a
function of image depth: an increase in image depth leads to
an increase in roughness in man-made scenes, whereas natural
scenes exhibit smooth behavior at greater image depths. This particular
arrangement of pixels in scene structure can be well explained
by local texture information in a pixel and its neighborhood.
Our proposed method analyses the local texture information of
a scene image using a texture unit matrix. For the final
classification, we use both supervised and unsupervised
learning, via the K-Nearest Neighbor (KNN) classifier and the
Self-Organizing Map (SOM) respectively. The technique is
suitable for online classification due to its very low
computational complexity.
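A texture unit in the classical texture spectrum sense codes each of the 8 neighbours of a pixel as 0/1/2 against the centre value, yielding a base-3 number in [0, 6560]; the sketch below assumes this standard formulation, which may differ in detail from the paper's texture unit matrix.

```python
# Texture unit of a 3x3 neighbourhood (texture spectrum formulation): each
# neighbour is coded 0 (less), 1 (equal) or 2 (greater) against the centre,
# and the eight codes form one base-3 number in [0, 6560].

def texture_unit(window):
    """window: 3x3 nested list of grey values; returns the texture unit number."""
    c = window[1][1]
    neighbours = [window[0][0], window[0][1], window[0][2],
                  window[1][2], window[2][2], window[2][1],
                  window[2][0], window[1][0]]          # clockwise from top-left
    code = 0
    for v in neighbours:
        e = 0 if v < c else (1 if v == c else 2)
        code = code * 3 + e
    return code

if __name__ == "__main__":
    flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
    print(texture_unit(flat))   # all-equal neighbourhood -> 3280
```

The histogram of these texture unit numbers over an image (the texture spectrum) is the kind of local-texture feature a KNN or SOM classifier can then separate into natural versus man-made classes.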
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to part 6 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of these features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect their personal devices and information.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference, 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We concluded with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed in releasing software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.