This document provides a review of simulation techniques for parallel and distributed computing. It discusses several key topics:
1) It defines parallel computing, distributed computing, and parallel and distributed computing systems. Various classification schemes for parallel and distributed systems are also described.
2) It examines several modeling techniques for parallel and distributed systems including system modeling, network modeling, performance modeling, and mathematical modeling. It provides details on parallel discrete event simulation.
3) It reviews several simulation software tools used for modeling parallel and distributed systems including SimOS, SimJava, and MicroGrid.
4) It concludes with a focused discussion on cloud computing as the latest development in parallel and distributed computing.
Development of a Suitable Load Balancing Strategy In Case Of a Cloud Computi...IJMER
Cloud computing is an attractive technology in the field of computer science. Gartner's report predicts that the cloud will bring changes to the IT industry. The cloud is changing our lives by providing users with new types of services; users obtain services from a cloud without attending to the details. NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. More and more people are paying attention to cloud computing. Cloud computing is efficient and scalable, but maintaining the stability of processing the many jobs in a cloud computing environment is a very complex problem, and load balancing has received much attention from researchers. Since the job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload control is crucial for improving system performance and maintaining stability. Depending on whether the system dynamics matter, load balancing schemes can be either static or dynamic. Static schemes do not use system information and are less complex, while dynamic schemes bring additional costs to the system but can adapt as the system status changes. A dynamic scheme is used here for its flexibility. The model has a main controller and balancers to gather and analyze the information, so the dynamic control has little influence on the other working nodes. The system status then provides a basis for choosing the right load balancing strategy. The load balancing model given in this research article is aimed at the public cloud, which has numerous nodes with distributed computing resources in many different geographic locations. This model therefore divides the public cloud into several cloud partitions; when the environment is very large and complex, these divisions simplify the load balancing. The cloud has a main controller that chooses the suitable partition for each arriving job, while the balancer for each cloud partition chooses the best load balancing strategy.
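The two-level scheme described above can be sketched in a few lines: a main controller routes each arriving job to a suitable partition based on partition status, and that partition's balancer picks a node. This is a minimal illustration, assuming a simple average-load status classification; the thresholds, partition names, and least-loaded node policy are illustrative assumptions, not the article's exact algorithm.

```python
# Illustrative sketch of the two-level load balancing model: a main
# controller picks a partition by status, then that partition's balancer
# picks a node. Thresholds and names below are assumed for the example.

IDLE, NORMAL, OVERLOAD = "idle", "normal", "overload"

def partition_status(loads, idle_t=0.3, overload_t=0.8):
    """Classify a partition by its average node load (0.0 .. 1.0)."""
    avg = sum(loads) / len(loads)
    if avg < idle_t:
        return IDLE
    if avg > overload_t:
        return OVERLOAD
    return NORMAL

def choose_partition(partitions):
    """Main controller: prefer idle partitions, then normal ones."""
    for wanted in (IDLE, NORMAL):
        for name, loads in partitions.items():
            if partition_status(loads) == wanted:
                return name
    return None  # every partition is overloaded; the job must wait

def choose_node(loads):
    """Partition balancer: send the job to the least-loaded node."""
    return min(range(len(loads)), key=lambda i: loads[i])

partitions = {"eu-west": [0.9, 0.85], "us-east": [0.2, 0.4, 0.5]}
target = choose_partition(partitions)   # "us-east" (eu-west is overloaded)
node = choose_node(partitions[target])  # index of the least-loaded node
```

A real deployment would refresh the load figures periodically from the balancers rather than reading them synchronously, which is what keeps the dynamic control from disturbing the working nodes.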
Disambiguating Advanced Computing for Humanities ResearchersBaden Hughes
The document discusses how advanced computing can enable new opportunities in data-intensive humanities research by providing computational capabilities beyond what is ordinarily available. It outlines the types of advanced computing including clusters, grids, and middleware. It also describes application execution models, interfaces to advanced computing resources, and how this can help humanities researchers answer old questions and discover new ones through computational analysis of large datasets.
A Comparative Study: Taxonomy of High Performance Computing (HPC) IJECEIAES
Computer technologies have rapidly developed in both the software and hardware fields. The complexity of software is increasing with market demand as manual systems become automated, while the cost of hardware is decreasing. High Performance Computing (HPC) is a much-demanded technology and an attractive area of computing due to the huge data processing in many computing applications. The paper focuses on different applications of HPC and on types of HPC such as Cluster Computing, Grid Computing, and Cloud Computing. It also studies different classifications and applications of these types of HPC, all of which are in-demand areas of computer science. The paper also presents a comparative study of grid, cloud, and cluster computing based on benefits, drawbacks, key areas of research, characteristics, issues, and challenges.
A CLOUD BASED ARCHITECTURE FOR WORKING ON BIG DATA WITH WORKFLOW MANAGEMENTIJwest
In real environments there are collections of noisy and vague data, called Big Data. To work on these data, middleware has been developed and is now very widely used. The challenge of working with Big Data lies in its processing and management. An integrated management system is required to provide a solution for integrating data from multiple sensors and maximizing target success, in a situation where the system has constant time constraints for processing and real-time decision-making processes. A reliable data fusion model must meet this requirement and steadily let the user monitor the data stream. With the widespread use of workflow interfaces, this requirement can be addressed, but working with Big Data remains challenging. We provide a multi-agent cloud-based architecture that takes a higher-level view to solve this problem. This architecture provides the ability to perform Big Data fusion through a workflow management interface. The proposed system is capable of self-repair in the presence of risks, and its risk is low.
A survey of models for computer networks managementIJCNCJournal
The virtualization concept, along with its underlying technologies, has been widely adopted in many fields of computer science. In this direction, network virtualization research has presented considerable results. In a parallel development, the convergence of two distinct worlds, communications and computing, has increased the use of computing server resources (virtual machines and hypervisors acting as active network elements) in network implementations. As a result, the level of detail and complexity in such architectures has increased, and new challenges need to be taken into account for effective network management. Information and data models facilitate infrastructure representation and management and have been used extensively in that direction. In this paper we survey available modelling approaches and discuss how these can be used in the virtual-machine (host) based computer network landscape; we present a qualitative analysis of the current state of the art and offer a set of recommendations on adopting any particular method.
This document discusses architectural and security management for grid computing. It begins by defining grid computing as an environment that enables sharing of distributed resources across organizations to achieve common goals. It then describes the key components of a grid, including computation resources, storage, communications, software/licenses, and special equipment. The document outlines a four-level grid architecture including a fabric level, core middleware level, user middleware level, and application level. It also discusses important aspects of grid computing such as resource balancing, reliability through distribution, parallel CPU capacity, and management of different projects. Finally, it emphasizes that security is a major concern for grid computing due to the open nature of sharing resources across organizational boundaries.
The advent of Big Data has seen the emergence of new processing and storage challenges, which are often solved by distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect that some resources will fail during use. Load balancing and task scheduling are an important step in determining the performance of parallel applications; hence the need to design load balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load balancing strategy at two levels: intra-scheduler load balancing, to avoid using the large-scale communication network, and inter-scheduler load balancing, for load regulation of the whole system. The strategy improves the average response time of CLOAK-Reduce application tasks with minimal communication. We focus on three performance indicators, namely response time, process latency, and running time of MapReduce tasks.
This document discusses load balancing strategies for grid computing. It proposes a dynamic tree-based model to represent grid architecture in a hierarchical way that supports heterogeneity and scalability. It then develops a hierarchical load balancing strategy and algorithms based on neighborhood properties to decrease communication overhead. Conventional scheduling algorithms like Min-Min, Max-Min, and Sufferage are discussed but determined to ignore dynamic network status, which is important for load balancing. Genetic algorithms are also mentioned as a potential solution.
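The Min-Min heuristic mentioned above can be sketched concretely: repeatedly pick the (task, machine) pair with the smallest completion time, assign it, and update that machine's ready time. The expected-time-to-compute (ETC) values below are illustrative, and this sketch ignores the dynamic network status that the document notes such heuristics fail to consider.

```python
# Minimal sketch of the Min-Min scheduling heuristic on an ETC matrix.
# etc[t][m] = estimated execution time of task t on machine m.

def min_min(etc):
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # when each machine becomes free
    unassigned = set(range(n_tasks))
    schedule = {}
    while unassigned:
        # smallest completion time over all unassigned (task, machine) pairs
        t, m, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unassigned for m in range(n_machines)),
            key=lambda x: x[2],
        )
        schedule[t] = m
        ready[m] = finish
        unassigned.remove(t)
    return schedule, max(ready)         # assignment and overall makespan

etc = [[3, 5],   # task 0 on machines 0 and 1
       [4, 1],   # task 1
       [2, 6]]   # task 2
schedule, makespan = min_min(etc)       # tasks 1->m1, 2->m0, 0->m0; makespan 5
```

Max-Min differs only in picking the task whose best completion time is *largest* first, which tends to balance long tasks earlier.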
This document summarizes the internship work conducted by Marta de la Cruz Martos at CITSEM within the GRyS group. The internship focused on developing algorithms to analyze energy consumption for smart grids as part of the I3RES project, which aims to integrate renewable energy sources into distributed networks using artificial intelligence. Specifically, the internship involved studying relevant technologies, participating in software component design, developing and implementing algorithms, and preparing reports. The document provides background on distributed systems and databases, describes the work conducted, and presents results and conclusions.
Improving Cloud Performance through Performance Based Load Balancing ApproachIRJET Journal
The document proposes a performance-based load balancing approach that improves cloud computing performance through load balancing and fault tolerance. It considers each node's success ratio and past load data when distributing tasks among nodes. A fault handler is used to detect and recover from faults reactively: when a fault occurs, the handler updates node records, restarts servers, or transfers pending tasks. Task outcomes are evaluated based on status and deadlines. Nodes with successful outcomes have their success ratios incremented, while unsuccessful nodes have their ratios decremented or fault handling triggered. The approach aims to map tasks to nodes with higher success ratios and lower current loads, to improve quality of service. CloudSim simulations show how the success ratios of sample nodes change with this approach over multiple task assignments.
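The success-ratio bookkeeping described above can be illustrated with a small sketch. The `Node` fields, the tie-breaking on current load, and the scoring order are assumptions for illustration, not the paper's exact formula.

```python
# Illustrative sketch of success-ratio based node selection: ratios rise
# when a task meets its deadline and fall when it does not, and new tasks
# go to the node with the best (success ratio, lower load) combination.

class Node:
    def __init__(self, name):
        self.name = name
        self.success = 0    # tasks that met status/deadline criteria
        self.total = 0      # all completed tasks
        self.load = 0       # tasks currently pending on this node

    def success_ratio(self):
        return self.success / self.total if self.total else 1.0

    def record(self, ok):
        """Record a task outcome and free its slot."""
        self.total += 1
        self.load -= 1
        if ok:
            self.success += 1

def pick_node(nodes):
    """Prefer the higher success ratio; break ties on lower current load."""
    return max(nodes, key=lambda n: (n.success_ratio(), -n.load))

a, b = Node("a"), Node("b")
a.load = b.load = 2
a.record(True); a.record(True)    # node a: both tasks met their deadlines
b.record(True); b.record(False)   # node b: one task missed its deadline
chosen = pick_node([a, b])        # node "a" wins (ratio 1.0 vs 0.5)
chosen.load += 1                  # dispatch the next task to the winner
```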
WiSANCloud: a set of UML-based specifications for the integration of Wireless...Priscill Orue Esquivel
Given the current trend to combine the advantages of Wireless Sensor and Actor Networks (WSANs) with Cloud Computing technology, this work proposes a set of specifications based on the Unified Modeling Language (UML) to provide a general framework for designing the integration of these components. One of the keys to the integration is the architecture of the WSAN, due to its structural relationship with the Cloud in the definition of the combination. Regarding the standard applied in the integration, UML and its subset, the Systems Modeling Language (SysML), are proposed by the Object Management Group (OMG) to deal with cloud applications; this indicates the starting point for designing the specifications for WSAN-Cloud integration. Based on the current state of UML tools for analysis and design, there are several aspects to take into account in order to define the integration process.
This document discusses grid computing and provides an overview of the topic. It begins with an introduction to grid computing, explaining that it utilizes distributed resources over a network to solve large computational problems. It then covers aspects of grid computing such as data, computation, types of grids, how grid computing works, and the grid architecture with different layers. The document also discusses applications of grid computing, advantages, limitations, and provides a case study on using a grid-like approach for weather prediction.
This document discusses security issues in grid computing and proposes an enhanced amalgam encryption approach. It begins with an overview of distributed, cloud, and grid computing. Grid computing involves coordinating shared resources across distributed, heterogeneous environments. Major security issues in grid computing include integration with existing security systems, interoperability across domains, and establishing trust relationships. The document then discusses cryptography approaches used to provide security, including symmetric and asymmetric encryption. It proposes a hybrid encryption solution combining AES and RC4 algorithms to address overhead limitations of previous approaches for large distributed networks like smart grids.
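To make the symmetric half of the proposed AES+RC4 hybrid concrete, here is the standard RC4 key-scheduling and keystream generation. This is purely to show the stream-cipher mechanics: in practice the AES half would come from a vetted cryptography library, and RC4 itself is now considered broken and should not be used for new systems.

```python
# Educational sketch of RC4 (the stream-cipher half of the AES+RC4 hybrid).
# NOT for production use: RC4 has known keystream biases.

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

msg = b"grid security"
ct = rc4(b"secret", msg)
assert rc4(b"secret", ct) == msg    # XOR stream cipher: same op decrypts
```

The hybrid idea in the document would pair a fast stream cipher like this with AES for the bulk data, trading some of AES's overhead for throughput on large distributed networks.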
This document provides a survey of file replication techniques used in grid systems. It begins with an introduction to grid systems and discusses their use of replication to improve response times and reduce bandwidth consumption. It then categorizes replication techniques as static or dynamic and describes challenges of replication including maintaining consistency and overhead. The document surveys various replication strategies for different grid topologies like peer-to-peer, tree and hybrid. It evaluates strategies based on factors like access latency, bandwidth consumption and fault tolerance. Specific replication techniques are discussed for peer-to-peer architectures aimed at availability, placement strategies and balancing workloads.
This document discusses security protocols for high performance grid computing architectures. It analyzes the different network layers in grid computing protocols and identifies various security disciplines. It also analyzes various security suites available in the TCP/IP protocol architecture. The paper aims to define security disciplines at different levels of cluster computing architecture and propose applicable security suites from the TCP/IP security protocol suite. Grid computing allows sharing and aggregation of distributed computing resources to enable more powerful applications. Security is an important consideration in grid computing due to sharing resources across administrative domains.
Cloud computing is used to define a new class of computing based on network technology. Cloud computing takes place over the Internet and comprises a collection of integrated and networked hardware, software, and Internet infrastructure, used to provide various services to users. Distributed computing comprises multiple software components that belong to multiple computers, yet the system works or runs as a single system. Cloud computing can be regarded as a form of computing that originated from distributed computing and virtualization.
This document provides definitions and examples of distributed systems. It defines a distributed system as a collection of independent computers that appears as a single system to users. Key characteristics include transparency, fault tolerance, scalability, and concurrency. Middleware hides distribution and heterogeneity between systems. Examples discussed include search engines like Google with hundreds of thousands of servers, online games like World of Warcraft run across many servers, and social networks like Twitter that process billions of tweets daily across large distributed infrastructures. The document also briefly describes grid computing networks like the European Grid Infrastructure.
Cloud computing is a technological paradigm that enables the consumer to enjoy the benefits of computing services and applications without necessarily worrying about investment and maintenance costs. This paper focuses on the applicability of a new fully homomorphic encryption (FHE) scheme to solving data security in cloud computing. Different types of existing homomorphic encryption schemes, both partially and fully homomorphic, are reviewed. The study aimed to construct a fully homomorphic encryption scheme that lessens the computational strain on computing assets compared to Gentry's scheme, which constructed homomorphic encryption based on ideal lattices using both additive and multiplicative homomorphisms. In this study, both addition and composition operations are used to implement a fully homomorphic encryption scheme that secures data within cloud computing. The work is founded on mathematical theory translated into an algorithm implemented in Java. The work was tested on a single computing hardware platform to ascertain its suitability. The newly developed FHE scheme posted better results that confirmed its suitability for data security in cloud computing.
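The partially homomorphic property reviewed above is easy to demonstrate with textbook RSA, whose multiplicative homomorphism is a standard fact: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The tiny key below is for illustration only; a fully homomorphic scheme must additionally support addition, which RSA alone does not.

```python
# Demonstration of a *partial* homomorphism: textbook RSA is multiplicatively
# homomorphic. Toy key (p=61, q=53) is illustrative and utterly insecure.

n, e, d = 3233, 17, 2753     # n = 61 * 53; e*d = 1 mod phi(n)

def enc(m):
    return pow(m, e, n)      # c = m^e mod n

def dec(c):
    return pow(c, d, n)      # m = c^d mod n

c1, c2 = enc(6), enc(7)
product = dec(c1 * c2 % n)   # decrypting the ciphertext product gives 6 * 7
```

Because `(m1^e * m2^e) mod n = (m1*m2)^e mod n`, the server can multiply encrypted values without ever seeing the plaintexts, which is exactly the property FHE generalizes to arbitrary computation.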
CYBER INFRASTRUCTURE AS A SERVICE TO EMPOWER MULTIDISCIPLINARY, DATA-DRIVEN S...ijcsit
In supporting its large-scale, multidisciplinary scientific research efforts across all the university campuses and by research personnel spread over literally every corner of the state, the state of Nevada needs to build and leverage its own cyberinfrastructure. Following the well-established as-a-service model, this state-wide cyberinfrastructure, which consists of data acquisition, data storage, advanced instruments, visualization, computing and information processing systems, and people, all seamlessly linked through a high-speed network, is designed and operated to deliver the benefits of Cyberinfrastructure-as-a-Service (CaaS). There are three major service groups in this CaaS, namely (i) supporting infrastructural services that comprise sensors, computing/storage/networking hardware, operating systems, management tools, virtualization, and the message passing interface (MPI); (ii) data transmission and storage services that provide connectivity to various big data sources, as well as cached and stored datasets in a distributed storage backend; and (iii) processing and visualization services that give users access to the rich processing and visualization tools and packages essential to various scientific research workflows. Built on commodity hardware and open-source software packages, the Southern Nevada Research Cloud (SNRC) and a data repository in a separate location constitute a low-cost solution for delivering all these services around CaaS. The service-oriented architecture and implementation of the SNRC are geared to encapsulate as much detail of big data processing and cloud computing as possible away from end users; scientists only need to learn and access an interactive web-based interface to conduct their collaborative, multidisciplinary, data-intensive research. The capability and easy-to-use features of the SNRC are demonstrated through a use case that derives a solar radiation model from a large data set by regression analysis.
The document discusses different models for distributed systems including physical, architectural and fundamental models. It describes the physical model which captures the hardware composition and different generations of distributed systems. The architectural model specifies the components and relationships in a system. Key architectural elements discussed include communicating entities like processes and objects, communication paradigms like remote invocation and indirect communication, roles and responsibilities of entities, and their physical placement. Common architectures like client-server, layered and tiered are also summarized.
This document provides a summary of key concepts in computer networks:
1. It defines a computer network and describes the basic components - PCs, interconnections like network cards and cables, switches, and routers.
2. It discusses common network applications like email, web browsers, instant messaging, and collaboration tools.
3. It describes the seven-layer OSI model and compares it to the TCP/IP model, explaining the functions of the physical, data link, network, and transport layers.
4. It discusses networking software, network performance metrics like bandwidth and latency, and link layer services like acknowledged and unacknowledged connection-oriented services.
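The bandwidth and latency metrics mentioned above combine into a simple first-order model of transfer time: total time is roughly propagation latency plus payload size divided by bandwidth. The figures below are illustrative, and the model deliberately ignores protocol overhead, queuing, and retransmission.

```python
# First-order link performance model: time = latency + size / bandwidth.
# Ignores protocol overhead, queuing delay, and retransmissions.

def transfer_time(size_bytes, bandwidth_bps, latency_s):
    """One-way time to move size_bytes over a single link."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

# 1 MB over a 100 Mbit/s link with 20 ms one-way latency:
# 0.020 s latency + 8e6 bits / 100e6 bit/s = 0.020 + 0.080 = 0.100 s
t = transfer_time(1_000_000, 100e6, 0.020)
```

A useful consequence: for small payloads latency dominates (so round trips matter most), while for bulk transfers bandwidth dominates, which is why the two metrics are tracked separately.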
This document proposes a smart semantic middleware for the Internet of Things. The middleware would allow for self-managed complex systems consisting of distributed and heterogeneous components. Each component would be represented by an autonomous software agent that monitors and controls the component. Semantic technologies and ontologies would be used to enable discovery and interoperability between heterogeneous components. The proposed middleware aims to support self-configuration, optimization, protection and healing of complex systems.
A Third Party Auditor Based Technique for Cloud Securityijsrd.com
Cloud security means providing security for users' data. There are many methods for doing this, each with its merits and demerits. To ensure the security of users' data in the cloud, we propose an effective, scalable, and flexible cryptography-based scheme. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against malicious data modification attacks. The proposed scheme not only achieves scalability due to its hierarchical structure but also inherits flexibility. We implement our scheme and show, through comprehensive experiments, that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing.
Synchronization of the GPS Coordinates Between Mobile Device and Oracle Datab...idescitation
The article describes the architecture and implementation of a module for synchronizing GPS data between a mobile device and a central database system. The process of data exchange is inspired by the SAMD algorithm. The article sequentially presents a solution for the individual system components. Special attention is paid to the exchanged data format, and the processing of the exchanged data is also described in detail. The resulting solution was deployed and tested in a real production environment.
Analysis and Implementation Wireless Sensor Network of Information Technology...IJERDJOURNAL
ABSTRACT:- This paper proposes a smart home system based on two approaches: the first is a mesh topology architecture, and the second is an efficient Wireless Sensor Network (WSN) protocol. The system has two working environments, indoor and outdoor. The indoor environment uses the WSN system, while the outdoor environment uses an Internet-cloud system; this scheme is known as the Internet of Things (IoT). The indoor and outdoor environments are connected to each other by means of a connecting bridge. The WSN system is formed from WSN components arranged in a mesh topology, with each component designed to implement the proposed efficient data protocols. For the outdoor environment, the Internet-cloud system provides the major infrastructure. Thus, the smart home system can be monitored and controlled from a smartphone, anytime and anywhere, as long as mobile data access is available. For the evaluation of the system, several tests were performed to obtain the system profile.
Cloud middleware and services-a systematic mapping reviewjournalBEEI
Cloud computing currently plays a crucial role in the delivery of vital information technology services. A unique aspect of cloud computing is the cloud middleware and other related entities that support applications and networks. A specific field of research may be considered, particularly as regards cloud middleware and services at all levels, and thus needs analysis and paper surveys to elucidate possible study limitations. The purpose of this paper is to perform a systematic mapping for studies that capture cloud computing middleware, stacks, tools and services. The methodology adopted for this study is a systematic mapping review. The results showed that more papers on the contribution facet were published with tool, model, method and process having 18.10%, 13.79%, 6.03% and 8.62% respectively. In addition, in terms of tool, evaluation and solution research had the largest number of articles with 14.17% and 26.77% respectively. A striking feature of the systemic map is the high number of articles in solution research with respect to all aspects of the features applied in the studies. This study showed clearly that there are gaps in cloud computing middleware and delivery services that would interest researchers and industry professionals desirous of research in this area.
This document discusses research into producing high-strength concrete with a compressive strength of 80 MPa using locally available materials in Sudan. It describes two statistical approaches to developing optimized concrete mix designs: the ACI 211.4 method and using JMP statistical software. Hundreds of concrete specimens were tested with local Sudanese aggregates, silica fume, fly ash, and superplasticizers at different ages and water-cement ratios. Thirty-three high-strength concrete mix designs were successfully developed and evaluated. The results provide guidance on optimizing mix properties and demonstrate that local materials can be used to produce high-strength concrete in Sudan.
The document experimentally investigates conditions for producing hypochlorous acid water with high efficiency. It examines the effects of various parameters on the production efficiency, defined as the ratio of actual available chlorine produced to the theoretical maximum. Tests were conducted using a flow reactor with parallel electrode plates and no separating membrane. Results show that production efficiency is strongly affected by flow rate and current density. Higher flow rates and lower current densities yielded more efficient production, with an optimum efficiency found around 20,000 mg/L NaCl concentration. Maintaining appropriate conditions can lead to high concentration, high efficiency hypochlorous acid production.
The document describes the development of a force acquisition instrument for measuring forces applied by orthodontic components to teeth. Specifically, it aims to create an instrument that can accurately measure forces between 5-550 grams with high resolution. The instrument design includes a full-bridge force sensor configuration connected to a 24-bit analog-to-digital converter and microcontroller to digitally process and display the force readings. Testing of the instrument will evaluate its linearity, bias, and ability to provide consistent measurements under varying temperatures. The goal is to develop a more precise tool for orthodontists to standardize forces used in tooth movement.
This document summarizes the internship work conducted by Marta de la Cruz Martos at CITSEM within the GRyS group. The internship focused on developing algorithms to analyze energy consumption for smart grids as part of the I3RES project, which aims to integrate renewable energy sources into distributed networks using artificial intelligence. Specifically, the internship involved studying relevant technologies, participating in software component design, developing and implementing algorithms, and preparing reports. The document provides background on distributed systems and databases, describes the work conducted, and presents results and conclusions.
Improving Cloud Performance through Performance Based Load Balancing ApproachIRJET Journal
The document proposes a performance-based load balancing approach to improve cloud computing performance through load balancing and fault tolerance. It considers success ratio and past load data when distributing tasks among nodes. A fault handler is used to detect and recover from faults reactively. When a fault occurs, the handler updates node records, restarts servers, or transfers pending tasks. Task outcomes are evaluated based on status and deadlines. Nodes with successful outcomes have their success ratios incremented, while unsuccessful nodes have ratios decremented or fault handling triggered. The approach aims to map tasks to nodes with higher success ratios and lower current loads to improve quality of service. Cloudsim simulations show how success ratios for sample nodes change with this approach over multiple task assignments.
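The node-selection loop described above can be sketched in a few lines; the node names, the 0.1 ratio adjustment, and the tie-breaking rule below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of mapping tasks to nodes by success ratio and load;
# all names and constants are invented for illustration.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    success_ratio: float = 1.0   # fraction of past tasks completed on time
    load: int = 0                # number of currently assigned tasks

def pick_node(nodes):
    """Prefer the node with the highest success ratio, then the lightest load."""
    return max(nodes, key=lambda n: (n.success_ratio, -n.load))

def report_outcome(node, success):
    """Nudge the ratio up on success, down on failure (simple bounded update)."""
    delta = 0.1 if success else -0.1
    node.success_ratio = min(1.0, max(0.0, node.success_ratio + delta))

nodes = [Node("vm-a", 0.9, 3), Node("vm-b", 0.9, 1), Node("vm-c", 0.5, 0)]
chosen = pick_node(nodes)        # vm-b: same ratio as vm-a but lighter load
chosen.load += 1
report_outcome(chosen, success=True)
```

A fault handler, as in the paper, would additionally decrement the ratio and re-queue pending tasks when a node misses a deadline.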
WiSANCloud: a set of UML-based specifications for the integration of Wireless...Priscill Orue Esquivel
Given the current trend to combine the advantages of Wireless Sensor and Actor Networks (WSANs) with Cloud Computing technology, this work proposes a set of specifications, based on the Unified Modeling Language (UML), to provide a general framework for designing the integration of these components. One key to the integration is the architecture of the WSAN, owing to its structural relationship with the Cloud in the definition of the combination. Regarding the standard applied in the integration, UML and its subset, the Systems Modeling Language (SysML), are proposed by the Object Management Group (OMG) to deal with cloud applications; this marks the starting point of the process of designing specifications for WSAN-Cloud integration. Based on the current state of UML tools for analysis and design, several aspects must be taken into account in order to define the integration process.
This document discusses grid computing and provides an overview of the topic. It begins with an introduction to grid computing, explaining that it utilizes distributed resources over a network to solve large computational problems. It then covers aspects of grid computing such as data, computation, types of grids, how grid computing works, and the grid architecture with different layers. The document also discusses applications of grid computing, advantages, limitations, and provides a case study on using a grid-like approach for weather prediction.
This document discusses security issues in grid computing and proposes an enhanced amalgam encryption approach. It begins with an overview of distributed, cloud, and grid computing. Grid computing involves coordinating shared resources across distributed, heterogeneous environments. Major security issues in grid computing include integration with existing security systems, interoperability across domains, and establishing trust relationships. The document then discusses cryptography approaches used to provide security, including symmetric and asymmetric encryption. It proposes a hybrid encryption solution combining AES and RC4 algorithms to address overhead limitations of previous approaches for large distributed networks like smart grids.
This document provides a survey of file replication techniques used in grid systems. It begins with an introduction to grid systems and discusses their use of replication to improve response times and reduce bandwidth consumption. It then categorizes replication techniques as static or dynamic and describes challenges of replication including maintaining consistency and overhead. The document surveys various replication strategies for different grid topologies like peer-to-peer, tree and hybrid. It evaluates strategies based on factors like access latency, bandwidth consumption and fault tolerance. Specific replication techniques are discussed for peer-to-peer architectures aimed at availability, placement strategies and balancing workloads.
This document discusses security protocols for high performance grid computing architectures. It analyzes the different network layers in grid computing protocols and identifies various security disciplines. It also analyzes various security suites available in the TCP/IP protocol architecture. The paper aims to define security disciplines at different levels of cluster computing architecture and propose applicable security suites from the TCP/IP security protocol suite. Grid computing allows sharing and aggregation of distributed computing resources to enable more powerful applications. Security is an important consideration in grid computing due to sharing resources across administrative domains.
Cloud computing is used to define a new class of computing that is based on network technology. Cloud computing takes place over the internet. It comprises a collection of integrated and networked hardware, software and internet infrastructures, which are used to provide various services to users. Distributed computing comprises multiple software components that belong to multiple computers, yet the system works and runs as a single system. Cloud computing can be regarded as a form of computing that originated from distributed computing and virtualization.
This document provides definitions and examples of distributed systems. It defines a distributed system as a collection of independent computers that appears as a single system to users. Key characteristics include transparency, fault tolerance, scalability, and concurrency. Middleware hides distribution and heterogeneity between systems. Examples discussed include search engines like Google with hundreds of thousands of servers, online games like World of Warcraft run across many servers, and social networks like Twitter that process billions of tweets daily across large distributed infrastructures. The document also briefly describes grid computing networks like the European Grid Infrastructure.
Cloud computing is a technological paradigm that enables the consumer to enjoy the benefits of computing
services and applications without necessarily worrying about the investment and maintenance costs. This paper focuses on
the applicability of a new fully homomorphic encryption scheme (FHE) in solving data security in cloud computing. Different types
of existing homomorphic encryption schemes, including both partial and fully homomorphic encryption schemes are reviewed. The
study was aimed at constructing a fully homomorphic encryption scheme that lessens the computational strain on the computing assets compared with Gentry's scheme, which constructed homomorphic encryption based on ideal lattices supporting both additive and multiplicative homomorphisms. In this study, both addition and composition operations are used to implement a fully homomorphic encryption scheme that secures data within cloud computing. The work is founded on mathematical theory translated into an algorithm implementable in JAVA, and it was tested on a single computing hardware platform to ascertain its suitability. The newly developed FHE scheme posted better results that confirmed its suitability for data security in cloud computing.
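As a toy illustration of the homomorphic property these schemes rely on — not the paper's JAVA construction — the classic Paillier cryptosystem lets anyone add two plaintexts by multiplying their ciphertexts. The sketch below uses deliberately tiny, insecure primes:

```python
# Toy Paillier cryptosystem: additively homomorphic, demo-sized, NOT secure.
import math, random

p, q = 11, 13
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # valid since g = n + 1

def encrypt(m, rng=random.Random(0)):
    while True:
        r = rng.randrange(1, n)
        if math.gcd(r, n) == 1:   # r must be a unit mod n
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n * mu) % n

c1, c2 = encrypt(20), encrypt(22)
# Multiplying ciphertexts adds the plaintexts: Dec(c1 * c2 mod n^2) = 20 + 22
total = decrypt((c1 * c2) % n2)
```

Here `decrypt((c1 * c2) % n2)` recovers 20 + 22 = 42 without either plaintext being exposed; a fully homomorphic scheme additionally supports multiplication of plaintexts, which is what makes constructions like Gentry's much harder.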
CYBER INFRASTRUCTURE AS A SERVICE TO EMPOWER MULTIDISCIPLINARY, DATA-DRIVEN S...ijcsit
In supporting its large-scale, multidisciplinary scientific research efforts across all the university campuses and by research personnel spread over literally every corner of the state, the state of Nevada needs to build and leverage its own cyberinfrastructure. Following the well-established as-a-service model, this state-wide cyberinfrastructure, which consists of data acquisition, data storage, advanced instruments, visualization, computing and information processing systems, and people, all seamlessly linked together through a high-speed network, is designed and operated to deliver the benefits of Cyberinfrastructure-as-a-Service (CaaS). There are three major service groups in this CaaS, namely (i) supporting infrastructural services that comprise sensors, computing/storage/networking hardware, operating systems, management tools, virtualization and the message passing interface (MPI); (ii) data transmission and storage services that provide connectivity to various big data sources, as well as cached and stored datasets in a distributed storage backend; and (iii) processing and visualization services that give users access to rich processing and visualization tools and packages essential to various scientific research workflows. Built on commodity hardware and open-source software packages, the Southern Nevada Research Cloud (SNRC) and a data repository in a separate location constitute a low-cost solution to deliver all these services around CaaS. The service-oriented architecture and implementation of the SNRC are geared to encapsulate as much detail of big data processing and cloud computing as possible away from end users; scientists need only learn and access an interactive web-based interface to conduct their collaborative, multidisciplinary, data-intensive research. The capability and easy-to-use features of the SNRC are demonstrated through a use case that attempts to derive a solar radiation model from a large data set by regression analysis.
The document discusses different models for distributed systems including physical, architectural and fundamental models. It describes the physical model which captures the hardware composition and different generations of distributed systems. The architectural model specifies the components and relationships in a system. Key architectural elements discussed include communicating entities like processes and objects, communication paradigms like remote invocation and indirect communication, roles and responsibilities of entities, and their physical placement. Common architectures like client-server, layered and tiered are also summarized.
This document provides a summary of key concepts in computer networks:
1. It defines a computer network and describes the basic components - PCs, interconnections like network cards and cables, switches, and routers.
2. It discusses common network applications like email, web browsers, instant messaging, and collaboration tools.
3. It describes the seven-layer OSI model and compares it to the TCP/IP model, explaining the functions of the physical, data link, network, and transport layers.
4. It discusses networking software, network performance metrics like bandwidth and latency, and link layer services such as unacknowledged connectionless, acknowledged connectionless, and acknowledged connection-oriented service.
This document proposes a smart semantic middleware for the Internet of Things. The middleware would allow for self-managed complex systems consisting of distributed and heterogeneous components. Each component would be represented by an autonomous software agent that monitors and controls the component. Semantic technologies and ontologies would be used to enable discovery and interoperability between heterogeneous components. The proposed middleware aims to support self-configuration, optimization, protection and healing of complex systems.
A Third Party Auditor Based Technique for Cloud Securityijsrd.com
Cloud security means providing security to users' data. There are many methods for doing this task, each with its own merits and demerits. To ensure the security of users' data in the cloud, we propose an effective, scalable and flexible cryptography-based scheme. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against malicious data modification attacks. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility. We implement our scheme and show through comprehensive experiments that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing.
Synchronization of the GPS Coordinates Between Mobile Device and Oracle Datab...idescitation
The article describes the architecture and implementation of a module for synchronizing GPS data between a mobile device and a central database system. The data exchange process is inspired by the SAMD algorithm. The article sequentially presents the solution for the individual system components, with special attention paid to the data exchange format. The processing of the exchanged data is also described in detail. The resulting solution was deployed and tested in a real production environment.
Analysis and Implementation Wireless Sensor Network of Information Technology...IJERDJOURNAL
ABSTRACT:- In this paper, we propose a smart home system based on two approaches: the first is a mesh topology architecture, and the second is an efficient Wireless Sensor Network (WSN) protocol. The system has two working environments, indoor and outdoor. The indoor environment uses the WSN system, while the outdoor environment uses an internet-cloud system; this scheme is known as the Internet of Things (IoT). The indoor and outdoor environments are connected to each other by means of a connecting bridge. The WSN system is formed from WSN components arranged in a mesh topology, and each component is designed to implement the proposed efficient data protocols. For the outdoor environment, the internet-cloud system is the major infrastructure. Thus, the smart home system can be monitored and controlled from a smartphone, anytime and anywhere, as long as mobile data access is available. For the evaluation of the system, several tests were performed to obtain the system profile.
Cloud middleware and services-a systematic mapping reviewjournalBEEI
Cloud computing currently plays a crucial role in the delivery of vital information technology services. A unique aspect of cloud computing is the cloud middleware and the other related entities that support applications and networks. Cloud middleware and services at all levels form a specific field of research that needs analysis and survey papers to elucidate possible study limitations. The purpose of this paper is to perform a systematic mapping of studies that capture cloud computing middleware, stacks, tools and services. The methodology adopted for this study is a systematic mapping review. The results showed that, on the contribution facet, most papers were published in the tool, model, method and process categories, with 18.10%, 13.79%, 6.03% and 8.62% respectively. In addition, on the research facet, evaluation and solution research had the largest numbers of articles, with 14.17% and 26.77% respectively. A striking feature of the systematic map is the high number of articles in solution research with respect to all aspects of the features applied in the studies. This study showed clearly that there are gaps in cloud computing middleware and delivery services that would interest researchers and industry professionals desirous of research in this area.
This document discusses research into producing high-strength concrete with a compressive strength of 80 MPa using locally available materials in Sudan. It describes two statistical approaches to developing optimized concrete mix designs: the ACI 211.4 method and using JMP statistical software. Hundreds of concrete specimens were tested with local Sudanese aggregates, silica fume, fly ash, and superplasticizers at different ages and water-cement ratios. Thirty-three high-strength concrete mix designs were successfully developed and evaluated. The results provide guidance on optimizing mix properties and demonstrate that local materials can be used to produce high-strength concrete in Sudan.
The document experimentally investigates conditions for producing hypochlorous acid water with high efficiency. It examines the effects of various parameters on the production efficiency, defined as the ratio of actual available chlorine produced to the theoretical maximum. Tests were conducted using a flow reactor with parallel electrode plates and no separating membrane. Results show that production efficiency is strongly affected by flow rate and current density. Higher flow rates and lower current densities yielded more efficient production, with an optimum efficiency found around 20,000 mg/L NaCl concentration. Maintaining appropriate conditions can lead to high concentration, high efficiency hypochlorous acid production.
The document describes the development of a force acquisition instrument for measuring forces applied by orthodontic components to teeth. Specifically, it aims to create an instrument that can accurately measure forces between 5-550 grams with high resolution. The instrument design includes a full-bridge force sensor configuration connected to a 24-bit analog-to-digital converter and microcontroller to digitally process and display the force readings. Testing of the instrument will evaluate its linearity, bias, and ability to provide consistent measurements under varying temperatures. The goal is to develop a more precise tool for orthodontists to standardize forces used in tooth movement.
This document summarizes a study that investigated using a combination of recycled coarse aggregates and e-waste as a replacement for conventional aggregates in concrete. The study tested different proportions of e-waste and recycled coarse aggregates to determine the optimal mix. Specimens were tested for compressive strength at 7, 14, and 28 days as well as flexural strength. The results showed that a mix with 5% e-waste and 95% recycled coarse aggregates as a replacement achieved compressive strengths meeting standards for use as a sub-base material in low-traffic pavements, providing a potential application for reusing these wastes. Testing of beams with e-waste and steel reinforcement also showed that while e-waste reduced load capacity significantly compared
1) The document describes a study that created a Bicycle Design Approach Model Based on Fashion Styles to help generate new product ideas that appeal to customer preferences.
2) Researchers used eye tracking to identify the areas of bicycle design (e.g. frame, tires) that attract the most visual attention. They also analyzed different women's fashion styles and identified subjective words (e.g. stylish, sporty) that appeal to each style.
3) The model involves using statistical analysis to determine how design elements influence subjective impressions for each fashion group. This allows creating bicycle designs tailored to match different fashion preferences. The effectiveness of using this approach model for design was then verified.
This document discusses the influence of chloride compounds on the swelling and strength properties of expansive subgrade soil. Three chloride compounds - ammonium chloride, magnesium chloride, and aluminum chloride - were added to expansive soil from Andhra Pradesh, India in varying percentages. Laboratory tests found that all three compounds reduced the soil's differential free swell and swell pressure, with aluminum chloride providing the greatest reductions. Unconfined compressive strength increased with chemical addition and curing time, with aluminum chloride giving the highest strength values at 1% addition and 14 days of curing. Aluminum chloride was the most effective chloride compound in improving the engineering properties of the expansive soil.
Correlation Coefficient Based Average Textual Similarity Model for Informatio...IOSR Journals
The document presents a proposed model for a textual similarity approach for information retrieval systems in wide area networks. It evaluates the performance of four similarity functions (Jaccard, Cosine, Dice, Overlap) using correlation coefficients. Three approaches are proposed: 1) Combining Cosine and Overlap similarity scores, which performed best. 2) Combining Cosine, Dice, and Overlap scores. 3) Combining all four similarity functions. The model is represented as a triangle where the vertices are the results from the three proposed approaches to measure textual similarity between retrieved documents.
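For reference, the four similarity functions compared in the paper have simple set-based forms over document token sets; the example documents and the equal-weight combination below are illustrative, since the paper's exact weighting is not reproduced here.

```python
# The four set-based similarity functions evaluated in the paper, applied to
# token sets; documents and weights are demo choices.
import math

def jaccard(a, b):
    return len(a & b) / len(a | b)

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

def overlap(a, b):
    return len(a & b) / min(len(a), len(b))

def cosine(a, b):
    return len(a & b) / math.sqrt(len(a) * len(b))

d1 = set("information retrieval in wide area networks".split())
d2 = set("information retrieval systems".split())
scores = {f.__name__: f(d1, d2) for f in (jaccard, dice, overlap, cosine)}
# The first proposed approach combines the cosine and overlap scores:
combined = (scores["cosine"] + scores["overlap"]) / 2
```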
In this paper, the terms chained ternary semigroup, cancellable element, cancellative ternary semigroup, A-regular element, π-regular element, π-invertible element, and noetherian ternary semigroup are introduced. It is proved that in a commutative chained ternary semigroup T: (i) if P is a prime ideal of T and x ∉ P, then x^n PT^1 = P for all odd natural numbers n; (ii) T is a semiprimary ternary semigroup; (iii) if a ∈ T is a semisimple element of T, then <a>_w ≠ ∅; (iv) if <a>_w = ∅ for all a ∈ T, then T has no semisimple elements; (v) if T has no regular elements, then for any a ∈ T, <a>_w = ∅ or <a>_w is a prime ideal; (vi) if T is a commutative chained cancellative ternary semigroup, then for every non-π-invertible element a, <a>_w is either empty or a prime ideal of T. Further it is proved that if T is a chained ternary semigroup with T\T^3 = {x} for some x ∈ T, then: (i) T\{x} is an ideal of T; (ii) T = xT^1T^1 = T^1xT^1 = T^1T^1x, and T^3 = xTT = TxT = TTx is the unique maximal ideal of T; (iii) if a ∈ T and a ∉ <x>_w, then a = x^n for some odd natural number n > 1; (iv) T\<x>_w = {x, x^3, x^5, ...} or T\<x>_w = {x, x^3, ..., x^r} for some odd natural number r; (v) if a ∈ T and a ∈ <x>_w, then a = x^r for some odd natural number r, or a = x^n s_n t_n with s_n ∈ <x>_w or t_n ∈ <x>_w for every odd natural number n; (vi) if T contains cancellable elements, then x is a cancellable element and <x>_w is either empty or a prime ideal of T. It is also proved that, in a commutative chained ternary semigroup T, T is an archimedean ternary semigroup without idempotent elements if and only if <a>_w = ∅ for every a ∈ T. Further it is proved that if T is a commutative chained ternary semigroup containing cancellable elements and <a>_w = ∅ for every a ∈ T, then T is a cancellative ternary semigroup. It is proved that if T is a noetherian ternary semigroup containing proper ideals, then T has a maximal ideal. It is then proved that if T is a commutative ternary semigroup such that T = <x> for some x ∈ T, then the following are equivalent: (1) T = {x, x^3, x^5, ...} is infinite; (2) T is a noetherian cancellative ternary semigroup with x ∉ xTT; (3) T is a noetherian cancellative ternary semigroup without idempotents; (4) <a>_w = ∅ for all a ∈ T; (5) <x>_w = ∅. If T is a commutative chained ternary semigroup with T ≠ T^3, then the following are equivalent: (1) T = {x, x^3, x^5, ...}, where x ∈ T\T^3; (2) T is a noetherian cancellative ternary semigroup without idempotents; (3) <a>_w = ∅ for all a ∈ T. Finally, it is proved that if T is a commutative chained noetherian cancellative ternary semigroup without regular elements, then <a>_w = ∅ for all a ∈ T.
1) The document discusses techniques for reducing the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals, including repeated clipping and filtering (RCF) and the nonlinear companding transform (NCT).
2) RCF projects clipping noise into the feasible extension area while removing out-of-band interference through filtering, but suffers from problems like in-band distortion, peak re-growth, and out-of-band radiation.
3) NCT aims to reduce PAPR by compressing peak signals and expanding small signals while maintaining constant average power through optimal parameter selection. The proposed NCT technique outperforms RCF in terms of lower clipping ratio,
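The clipping side of the comparison is easy to sketch: limit the time-domain envelope to a multiple of its RMS value and measure the PAPR before and after. The 64-subcarrier QPSK symbol and the 1.5 clipping ratio below are arbitrary demo choices (filtering, and hence peak re-growth, is omitted).

```python
# PAPR of one random OFDM symbol before and after a single clipping pass.
import cmath, math, random

rng = random.Random(1)
N = 64
# Random QPSK symbols on N subcarriers -> time-domain symbol via inverse DFT
syms = [complex(rng.choice([-1, 1]), rng.choice([-1, 1])) / math.sqrt(2)
        for _ in range(N)]
x = [sum(s * cmath.exp(2j * math.pi * k * t / N) for k, s in enumerate(syms))
     / math.sqrt(N) for t in range(N)]

def papr_db(sig):
    powers = [abs(v) ** 2 for v in sig]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def clip(sig, ratio):
    """Limit the envelope to ratio * RMS while preserving each sample's phase."""
    rms = math.sqrt(sum(abs(v) ** 2 for v in sig) / len(sig))
    a = ratio * rms
    return [v if abs(v) <= a else v * a / abs(v) for v in sig]

before = papr_db(x)
after = papr_db(clip(x, 1.5))   # bounding the peak lowers the PAPR
```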
1) The study investigated the influence of branding on higher educational institutions in India. Questionnaires were distributed to students at 26 engineering institutions to understand the impact of various branding dimensions.
2) The findings showed that brand rating was statistically significantly influenced by the branding dimensions of service, innovation, quality, price, image, and external exposure. External exposure had the strongest influence on brand rating.
3) A regression analysis confirmed that all the branding dimensions, including service, innovation, quality, price, image, and external exposure, had a statistically significant impact on an institution's overall brand rating as perceived by students.
This document describes the design, implementation, and testing of a 16-bit reduced instruction set computer (RISC) processor. The processor was implemented on a Xilinx XC3S400 field programmable gate array (FPGA) and has an instruction set of 8 instructions. The processor design includes a 5-stage pipeline with forwarding logic to handle data hazards and a modified datapath to handle control hazards from branches. Testing was done by writing sample assembly code to the instruction memory and observing the processor's operation via serial transmission of register values and other signals. The results demonstrated stalls from data dependencies and a one clock cycle branch delay as intended.
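The forwarding decision mentioned above can be modeled as a small priority check; the register names and pipeline-register fields below are invented for the demo and are not taken from the paper's design.

```python
# Toy model of the data-hazard forwarding decision in a 5-stage pipeline.
def forward_sources(ex_mem_dest, mem_wb_dest, src1, src2):
    """Return, per source register, which stage (if any) to forward from.
    EX/MEM has priority over MEM/WB because it holds the most recent result."""
    def pick(src):
        if src is not None and src == ex_mem_dest:
            return "EX/MEM"
        if src is not None and src == mem_wb_dest:
            return "MEM/WB"
        return "REGFILE"
    return pick(src1), pick(src2)

# add r1,r2,r3 ; and r6,r1,r7 ; sub r4,r1,r5 -> the oldest write of r1 is now
# in MEM/WB, so the sub instruction forwards r1 from there.
routes = forward_sources(ex_mem_dest="r6", mem_wb_dest="r1",
                         src1="r1", src2="r5")
```

A load followed immediately by a use would still need a one-cycle stall, since the loaded value is not available until after the MEM stage.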
An inexpensive embedded system was designed and developed to measure magnetic fields for low frequency nuclear magnetic resonance (NMR) applications. The system uses a Hall effect sensor to accurately measure magnetic fields. It is connected to a microcontroller with an analog-to-digital converter to convert the sensor output voltage to measured magnetic field values in Gauss. The system was able to successfully measure different magnetic fields and provides a low-cost option for NMR applications requiring magnetic field measurements.
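The voltage-to-field conversion such a system performs is a simple linear mapping; the ADC resolution, reference voltage, quiescent output and sensitivity below are assumed values for a generic ratiometric Hall sensor, not the figures from the instrument described above.

```python
# Illustrative conversion from a raw ADC code to Gauss; all constants assumed.
ADC_BITS = 12
V_REF = 3.3           # ADC reference voltage (V)
V_QUIESCENT = 1.65    # sensor output at zero field (V), half-supply ratiometric
SENSITIVITY = 0.0013  # volts per Gauss (e.g. a 1.3 mV/G linear Hall sensor)

def adc_to_gauss(raw):
    """Convert a raw ADC code to a signed magnetic field in Gauss."""
    volts = raw * V_REF / ((1 << ADC_BITS) - 1)
    return (volts - V_QUIESCENT) / SENSITIVITY

zero_field = adc_to_gauss(2047)   # mid-scale reading -> approximately 0 G
```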
This document summarizes research into the influence of ground granulated blast furnace slag (GGBS) and silica fume on the workability and mechanical properties of self-compacting concrete (SCC). Seven SCC mixes were tested with GGBS replacements of cement up to 30% and silica fume up to 9%. Testing showed that as GGBS and silica fume increased, workability decreased due to their physical properties. Compressive strength generally increased with the mineral admixtures, up to 20% GGBS and 6% silica fume. Stress-strain analysis found ductility increased with silica fume content. The study demonstrated the potential of GGBS and silica f
This document describes a proposed integrated power electronics interface (IPEI) for battery electric vehicles (BEVs). The IPEI is designed to optimize power flow management for different operating modes like charging/discharging and traction/braking. It integrates a DC/DC converter, onboard battery charger, and DC/AC inverter to improve system efficiency and reliability while reducing component size and costs compared to other topologies. The document discusses the circuit design of the IPEI and its four operating modes. It also provides the dynamic modeling of the electric power train components like the battery and electric motor, as well as the control strategy design for the IPEI, which is analyzed using MATLAB/Simulink simulations.
This document describes using MATLAB Simulink to program and communicate between two dsPIC30f microcontrollers over a Controller Area Network (CAN) protocol. It presents the development of a real-time digital system using Simulink, Real-Time Workshop and Embedded Coder to generate C code from Simulink models. The C code is then compiled and run on the dsPIC30f microcontrollers. Specifically, it communicates engine RPM values transmitted by one microcontroller over CAN to the other microcontroller, which decodes and displays the RPM on an LCD screen. The decoding algorithm is also described.
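The paper does not give its frame layout, so as a hypothetical sketch of the decoding step, the example below packs an RPM value into two bytes of a CAN data field using the 0.25-rpm-per-bit scaling familiar from the standard OBD-II engine-speed parameter:

```python
# Hypothetical two-byte big-endian RPM encoding for a CAN data field
# (0.25 rpm per raw count, as in OBD-II PID 0x0C); not the paper's layout.
def encode_rpm(rpm):
    raw = round(rpm / 0.25)
    return bytes([(raw >> 8) & 0xFF, raw & 0xFF])

def decode_rpm(data):
    return ((data[0] << 8) | data[1]) * 0.25

frame_data = encode_rpm(3000)     # what the transmitting node would send
rpm = decode_rpm(frame_data)      # what the receiving node shows on the LCD
```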
The document presents a model for an InAlAs/InGaAs/InAlAs double-gate high electron mobility transistor (DG-HEMT) mixer for microwave applications. The mixer design employs a heterogeneous configuration where the local oscillator and radio frequency signals are applied to the same gate while a DC offset is applied to the other gate for better charge control. Analytical expressions are developed for the output voltage spectrum of the DG-HEMT mixer. Simulation results show the mixer generates intermediate frequencies at the sum and difference of the input signals, along with some spurious frequencies. The DG-HEMT mixer is found to provide better conversion gain and isolation of undesired frequencies compared to a mixer based on a conventional double
Review on security issues of AODV routing protocol for MANETsIOSR Journals
This document discusses security issues with the Ad Hoc On-Demand Distance Vector (AODV) routing protocol used in mobile ad hoc networks. AODV is vulnerable to attacks where malicious nodes manipulate routing information like sequence numbers and hop counts. The document reviews these vulnerabilities and proposes securing AODV through symmetric encryption of routing control packets between nodes to prevent modification by unauthorized nodes. It suggests approaches for key exchange without a central authority and describes securing the route discovery and maintenance processes in AODV to authenticate routing updates and detect malicious nodes.
This document analyzes the performance of Wi-Fi networks under three conditions: no fading, flat fading, and dispersive fading. It simulates these conditions using an IEEE 802.11a WLAN physical layer model in Matlab. The simulation measures packet error rate and bit rate as the signal-to-noise ratio and maximum Doppler shift are varied. With no fading, there is no packet error and bit rate increases with SNR. Under flat and dispersive fading, packet error and bit rates are affected differently based on the maximum Doppler shift. The best performance occurs under flat fading with a lower maximum Doppler shift of 100Hz.
Blind Signature Scheme Based On Elliptical Curve Cryptography (ECC)IOSR Journals
This document summarizes a blind signature scheme based on elliptic curve cryptography. It begins with an introduction to cryptography and the history of cryptography. It then discusses symmetric key cryptography, asymmetric key cryptography including public and private key pairs. It describes digital signatures, how they are generated and verified. It introduces the concept of blind signatures, how a message can be signed without revealing its contents to the signer. It discusses the mathematics behind elliptic curves including elliptic curve equations and algorithms for performing group operations on elliptic curves. It provides details on representing divisor classes using Mumford representation and using Cantor's algorithm to compute the sum of two divisor classes.
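The group operations on elliptic curves mentioned above can be illustrated with a toy sketch of affine point addition and double-and-add scalar multiplication over a small prime field, which is what underlies key-pair generation in ECC-based signatures. The curve parameters below are illustrative only, not a secure parameter set or the scheme from the paper.

```python
# Toy elliptic-curve arithmetic over GF(p) for y^2 = x^3 + a*x + b (mod p).
# The tiny modulus and coefficients are for illustration, not real cryptography.
P_MOD = 97      # small prime field modulus (toy example)
A, B = 2, 3     # curve coefficients

def inv_mod(x, p=P_MOD):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(x, p - 2, p)

def ec_add(P, Q):
    """Add two affine points on the curve; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:
        s = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD  # tangent slope (doubling)
    else:
        s = (y2 - y1) * inv_mod(x2 - x1) % P_MOD          # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def ec_mul(k, P):
    """Scalar multiplication k*P by double-and-add (used to derive public keys)."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, P)
        P = ec_add(P, P)
        k >>= 1
    return result
```

For example, (0, 10) lies on this toy curve, and repeated `ec_add` or `ec_mul` keeps producing points on it; a signer's public key is `ec_mul(private_key, G)` for a base point G.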
This document presents a mathematical analysis of the unsteady free convection and mass transfer flow through a porous medium with variable viscosity and thermal conductivity. The governing equations for momentum, temperature, and concentration are non-dimensionalized and solved using an explicit finite difference method. Key aspects include:
1) The flow is induced by an exponentially accelerated vertical plate with variable temperature and concentration.
2) Viscosity and thermal conductivity are assumed to vary linearly with temperature.
3) The resulting non-dimensional equations are nonlinear partial differential equations solved numerically.
4) Skin friction and Nusselt numbers are defined and analyzed graphically.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document summarizes and compares three computing models: cluster computing, grid computing, and cloud computing. Cluster computing involves linking together multiple computers to work as a single system for high performance computing tasks. Grid computing divides and distributes large programs across interconnected computers. Cloud computing provides on-demand access to shared computing resources over the internet. The document discusses challenges, examples of projects and applications for each model to provide an overview of how they differ and are applied.
Cloud Computing: A Perspective on Next Basic Utility in IT World IRJET Journal
This document discusses cloud computing and its architecture. It begins with an introduction to cloud computing, defining it as a model that provides infrastructure, platforms, and software as services. The key characteristics and service models of cloud computing are described.
The document then discusses the architecture of cloud computing, including the layers of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It also describes the deployment models of private cloud, public cloud, community cloud, and hybrid cloud.
The document outlines several challenges of cloud computing, such as resource allocation and scheduling, cost optimization, processing time and speed, memory management, load balancing, security issues, and fault tolerance.
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ...Editor IJCATR
This article uses a multi-objective PSO algorithm for task scheduling and cost management in cloud computing. Migration costs caused by supply failure are treated as one objective; each task is a small particle, evaluated with an appropriate fitness function over candidate schedules (the particles' arrangement) so that the total expense is minimized. In addition, a weight is assigned to each expenditure to reflect the importance of that cost. The data used to simulate the proposed method are a series of academic and research data sets gathered from the Internet, and MATLAB is used for the simulation. Two problems are simulated: the first considers four tasks divided among four vehicles; the second, more complicated problem considers six tasks and four vehicles. The PSO output is reported for both problems over various iterations. Finally, the particle dispersion as well as the value of the cost function were computed for each particle.
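A minimal PSO sketch for assigning tasks to resources at minimum total expense looks like the following. The cost matrix, weights, and PSO parameters here are assumptions for illustration, not the paper's data.

```python
import random

# Illustrative PSO for task-to-resource assignment; minimizes total expense.
COST = [  # COST[task][resource]: expense of running task t on resource r
    [4, 2, 8, 5],
    [6, 7, 3, 4],
    [5, 3, 6, 2],
    [8, 5, 4, 7],
]
N_TASKS, N_RES = len(COST), len(COST[0])

def fitness(position):
    """Total expense of an assignment; lower is better."""
    return sum(COST[t][r] for t, r in enumerate(position))

def decode(x):
    """Map continuous particle coordinates to valid resource indices."""
    return [min(N_RES - 1, max(0, int(round(v)))) for v in x]

def pso(n_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4):
    random.seed(1)
    X = [[random.uniform(0, N_RES - 1) for _ in range(N_TASKS)]
         for _ in range(n_particles)]
    V = [[0.0] * N_TASKS for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=lambda x: fitness(decode(x)))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_TASKS):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if fitness(decode(X[i])) < fitness(decode(pbest[i])):
                pbest[i] = X[i][:]
                if fitness(decode(X[i])) < fitness(decode(gbest)):
                    gbest = X[i][:]
    return decode(gbest), fitness(decode(gbest))
```

Each particle encodes one candidate schedule; the swarm converges toward low-cost assignments without enumerating all of them.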
This document discusses various computing paradigms such as fog computing, cloud computing, edge computing, mobile cloud computing, and fog-based computing. It provides an overview of fog computing, describing its layered architecture and comparing it to similar paradigms like cloud and edge computing. Some key points discussed include:
- Fog computing enhances cloud computing by extending services and resources to the network edge, supporting low-latency applications.
- It has a 3-layer architecture with end devices, fog nodes, and cloud layers, placing resources closer to end users than the cloud.
- Characteristics of fog computing include low latency, mobility support, location awareness, and decentralized storage and analytics.
- Challenges
Locality Sim : Cloud Simulator with Data Localityneirew J
Cloud Computing (CC) is a model for enabling on-demand access to a shared pool of configurable computing resources. Testing and evaluating the performance of the cloud environment for allocation, provisioning, scheduling, and data allocation policies has received great attention. Using a cloud simulator saves time and money and provides a flexible environment for evaluating new research work. Unfortunately, current simulators (e.g., CloudSim, NetworkCloudSim, GreenCloud) treat data by size only, without any consideration of data allocation policy or locality. On the other hand, the NetworkCloudSim simulator is one of the most commonly used simulators because it includes modules supporting the functions needed in a simulated cloud environment and can be extended with new modules. In this paper, the NetworkCloudSim simulator has been extended and modified to support data locality; the modified simulator is called LocalitySim. The accuracy of the proposed LocalitySim simulator has been validated against a mathematical model. The proposed simulator has also been used to test the performance of a three-tier data center as a case study that considers the data locality feature.
CLOUD COMPUTING: A NEW VISION OF THE DISTRIBUTED SYSTEM cscpconf
Cloud computing is a newly emerging system that offers information technology services via the Internet. Clients use the services they need, when and where they want, and pay only for what they consume. Cloud computing thus offers many advantages, especially for business. A deep study and understanding of this emerging system and its inherent components helps greatly in identifying what should be done to improve its performance. In this work, we first present cloud computing and its components, and then describe an idea that attempts to optimize the management of cloud computing systems composed of many data centers.
Recommendations for implementing cloud computing management platforms using o...IAEME Publication
This document discusses and compares four open source cloud computing management platforms: Eucalyptus, OpenNebula, Abicloud, and Nimbus. It provides an overview of each platform, including their architectures, features, and licenses. The document establishes criteria to compare the platforms, such as the types of clouds they support, supported hypervisors, security measures, and more. It then evaluates each platform based on these criteria. Finally, it provides recommendations for which types of organizations or use cases each platform may be best suited for.
This document compares and contrasts cloud computing and grid computing. Grid computing refers to cooperation between multiple computers and servers to boost computational power, with a focus on high-capacity CPU tasks. Cloud computing delivers on-demand access to shared computing resources like networks, servers, storage and applications via the internet. Key differences include grid computing having a lower level of abstraction and scalability compared to cloud computing. Cloud computing also has stronger fault tolerance, is more widely accessible via the internet, and offers real-time services through its utility-based pricing model.
Evaluation Of The Data Security Methods In Cloud Computing Environmentsijfcstjournal
This document discusses methods for ensuring data security in cloud computing environments. It begins by introducing cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). The main goals of data security - confidentiality, integrity, and availability - are then described. Several methods for data security are proposed, including data fragmentation where sensitive data is divided and distributed across different domains. Encryption techniques are also discussed as ways to protect confidential data during storage and transmission. Overall, the document aims to evaluate approaches for addressing key issues around securing user data in cloud systems.
Model-Driven Architecture for Cloud Applications Development, A survey Editor IJCATR
Model Driven Architecture and cloud computing are among the most important paradigms in software service engineering today. As cloud computing continues to gain adoption, its dynamic usage introduces more issues and challenges for many systems. The Model Driven Architecture (MDA) approach to development and maintenance becomes an evident choice for ensuring software solutions that are robust, flexible, and agile.
This paper aims to survey and analyze the research issues and challenges that have emerged in cloud computing applications, with a focus on Model Driven Architecture (MDA) development. We discuss the open research issues and highlight future research problems.
A premeditated cdm algorithm in cloud computing environment for fpm 2IAEME Publication
This document discusses applying data mining techniques in a cloud computing environment. It begins with an introduction to cloud computing, describing its key characteristics including on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. The document then provides an overview of common data mining techniques like association, classification, clustering, prediction, and sequential patterns. It reviews recent studies applying these techniques in cloud and distributed systems. The document proposes a new Cloud Data Mining (CDM) algorithm that performs data mining in both cloud and non-cloud environments and compares the costs of each approach.
Analyzing the Difference of Cluster, Grid, Utility & Cloud ComputingIOSRjournaljce
Virtualization and cloud computing are creating a fundamental change in computer architecture, in software and tools development, and in the way we store, distribute, and consume information. The recent era of autonomic computing brings the importance of, and need for, basic concepts of sharing various hardware, software, and other resources and applications that can manage themselves with a high level of human guidance. Virtualization and autonomic computing are not new to the world, but they have developed rapidly with cloud computing. This paper gives an overview of various types of computing, discussing cluster, grid, utility, and cloud computing, and analyzing their architectures, the differences between them, their characteristics, how they work, and their advantages and disadvantages.
This document summarizes distributed computing. It discusses the history and origins of distributed computing in the 1960s with concurrent processes communicating through message passing. It describes how distributed computing works by splitting a program into parts that run simultaneously on multiple networked computers. Examples of distributed systems include telecommunication networks, network applications, real-time process control systems, and parallel scientific computing. The advantages of distributed computing include economics, speed, reliability, and scalability while the disadvantages include complexity and network problems.
An approach of software engineering through middlewareIAEME Publication
The document discusses middleware and its role in facilitating the construction of distributed systems. It outlines some of the key challenges in building distributed systems, such as network communication, coordination between distributed components, reliability, scalability, and heterogeneity. Middleware aims to address these challenges by providing high-level abstractions and services that conceal low-level complexities related to distribution from application developers. The document argues that middleware is important for simplifying distributed system construction and should be a key consideration in software engineering research on distributed systems.
A Survey of File Replication Techniques In Grid SystemsEditor IJCATR
The grid is a type of parallel and distributed system designed to provide reliable access to data and computational resources over wide area networks. These resources are distributed across different geographical locations. Efficient data sharing in global networks is complicated by erratic node failure, unreliable network connectivity, and limited bandwidth. Replication is a technique used in grid systems to improve applications' response time and to reduce bandwidth consumption. In this paper, we present a survey of basic and new replication techniques that have been proposed by other researchers, followed by a full comparative study of these replication strategies.
Data Distribution Handling on Cloud for Deployment of Big Dataneirew J
This document summarizes a research paper that proposes an algorithm to reduce data distribution and processing time in cloud computing for big data deployment. The paper discusses different data distribution techniques for virtual machines (VMs) in cloud computing, such as centralized, semi-centralized, hierarchical, and peer-to-peer approaches. It also reviews related work on MapReduce frameworks and load balancing algorithms. The authors implemented their proposed peer-to-peer distribution technique and Round Robin and Throttled load balancing algorithms in CloudSim. Experimental results showed the Throttled algorithm achieved significantly lower average response times than Round Robin.
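The contrast between the two load-balancing policies compared above can be sketched as follows. These are illustrative classes in the form commonly described for CloudSim-style simulators, not the CloudSim API itself.

```python
class RoundRobinBalancer:
    """Round Robin: cycles through VMs regardless of their current load."""
    def __init__(self, n_vms):
        self.n_vms = n_vms
        self.next_vm = 0

    def allocate(self):
        vm = self.next_vm
        self.next_vm = (self.next_vm + 1) % self.n_vms
        return vm


class ThrottledBalancer:
    """Throttled: sends a request only to a VM marked available, else queues it."""
    def __init__(self, n_vms):
        self.available = [True] * n_vms

    def allocate(self):
        for vm, free in enumerate(self.available):
            if free:
                self.available[vm] = False
                return vm
        return None  # all VMs busy: caller must queue the request

    def release(self, vm):
        self.available[vm] = True
```

Because the Throttled policy never stacks a new request on a busy VM, it tends to keep per-request response times lower than blind Round Robin cycling, which matches the experimental result reported above.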
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
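The exclusive-versus-overlapping distinction between the two clustering algorithms can be sketched for one-dimensional intensity values (a toy illustration; real MRI segmentation applies the same assignment step over all pixel intensities):

```python
def hard_assign(x, centers):
    """K-means step: each point belongs to exactly one (nearest) cluster."""
    return min(range(len(centers)), key=lambda j: abs(x - centers[j]))

def fuzzy_memberships(x, centers, m=2.0):
    """Fuzzy C-means step: degrees of membership in every cluster, summing to 1.

    m > 1 is the fuzzifier; m = 2 is the common default.
    """
    dists = [abs(x - c) for c in centers]
    if 0.0 in dists:                      # point coincides with a center
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    exp = 2.0 / (m - 1.0)
    # Standard FCM membership formula: u_j = 1 / sum_k (d_j / d_k)^(2/(m-1))
    return [1.0 / sum((dj / dk) ** exp for dk in dists) for dj in dists]
```

A pixel near one center gets a hard label from `hard_assign`, but `fuzzy_memberships` still records a small degree of belonging to the other clusters, which is why FCM handles ambiguous tumor boundaries differently (and at extra computational cost).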
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
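The simulation loop described above (generate random bits, modulate, add noise, measure BER) can be sketched in a few lines, shown here for Gray-mapped QPSK over an AWGN channel in Python rather than MATLAB. The SNR values and bit count are arbitrary choices for illustration.

```python
import math
import random

def qpsk_ber(snr_db, n_bits=20000, seed=0):
    """Monte-Carlo BER estimate for Gray-mapped QPSK over AWGN."""
    rng = random.Random(seed)
    es_n0 = 10 ** (snr_db / 10.0)           # symbol SNR (2 bits per symbol)
    sigma = math.sqrt(1.0 / (2.0 * es_n0))  # noise std dev per dimension
    errors = 0
    for _ in range(n_bits // 2):
        b0, b1 = rng.randint(0, 1), rng.randint(0, 1)
        # Each bit rides one quadrature component of the unit-energy symbol.
        i = (1.0 if b0 else -1.0) / math.sqrt(2)
        q = (1.0 if b1 else -1.0) / math.sqrt(2)
        ri = i + rng.gauss(0, sigma)         # add channel noise
        rq = q + rng.gauss(0, sigma)
        errors += (ri > 0) != (b0 == 1)      # hard-decision demodulation
        errors += (rq > 0) != (b1 == 1)
    return errors / n_bits
```

Sweeping `snr_db` and plotting `qpsk_ber` reproduces the familiar waterfall curve; a 256-QAM version follows the same loop with a 16x16 constellation and more bits per symbol.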
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probes-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
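The three region-dependent thresholding rules described above can be sketched as follows. The paper's modified-hard and semi-soft formulas may differ in detail; these are the standard textbook forms, given here as an assumption.

```python
def hard_threshold(c, t):
    """Keep a wavelet coefficient unchanged if |c| > t, else zero (voiced regions)."""
    return c if abs(c) > t else 0.0

def semi_soft_threshold(c, t1, t2):
    """Zero below t1, keep above t2, interpolate in between (unvoiced regions)."""
    a = abs(c)
    if a <= t1:
        return 0.0
    if a >= t2:
        return c
    sign = 1.0 if c >= 0 else -1.0
    return sign * t2 * (a - t1) / (t2 - t1)

def silence_threshold(c):
    """Silence regions: discard all coefficients."""
    return 0.0
```

After classifying each frame by short-time energy, the matching rule is applied to its stationary-wavelet coefficients before the inverse transform reconstructs the enhanced speech.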
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
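The Urdhva Tiryakbhyam ("vertically and crosswise") pattern can be sketched in software: every result column sums the cross products a[i]*b[j] with i + j equal to that column index, and in hardware all of these partial products are generated concurrently, one adder tree per column. This is an illustrative model of the formula, not the paper's VHDL design.

```python
def urdhva_multiply(a_digits, b_digits, base=2):
    """Multiply two equal-length little-endian digit lists column by column."""
    n = len(a_digits)
    assert len(b_digits) == n
    # Column k collects all cross products with index sum k; in the hardware
    # multiplier these are formed in parallel rather than sequentially.
    cols = [sum(a_digits[i] * b_digits[k - i]
                for i in range(max(0, k - n + 1), min(k, n - 1) + 1))
            for k in range(2 * n - 1)]
    # Ripple the carries to obtain the final digits.
    result, carry = [], 0
    for c in cols:
        total = c + carry
        result.append(total % base)
        carry = total // base
    while carry:
        result.append(carry % base)
        carry //= base
    return result  # little-endian digits of the product

def to_int(digits, base=2):
    return sum(d * base ** i for i, d in enumerate(digits))
```

Because no column waits on a previous partial-product row, the critical path is set by the carry-resolution stage alone, which is the source of the reported speedup over sequential Booth recoding.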
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 6
B1802030511
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 18, Issue 2, Ver. III (Mar-Apr. 2016), PP 05-11
www.iosrjournals.org
DOI: 10.9790/0661-1802030511
Simulation of Parallel and Distributed Computing: A Review
Mohd Ahsan Kabir Rizvi
Department of Electrical and Computer Engineering, Sultan Qaboos University, Oman
Abstract: Parallel and Distributed Computing has made it possible to simulate large infrastructures such as telecom networks and air traffic in an easy and effective way. It not only makes the processes faster and more reliable but also provides geographical distribution among the systems. These capabilities have led to extensive research in the field and to several forms of computing, such as grid computing, cluster computing, utility computing and cloud computing. This paper reviews advances in the modelling and simulation aspects of the field. The modelling techniques covered include parallel discrete event simulation (PES), system modelling, performance modelling and network modelling. Simulation techniques and software such as SimOS, SimJava and MicroGrid are also discussed and evaluated. The paper then goes deeper into the field's latest form, cloud computing.
Keywords: Parallel and Distributed Systems (PADS), Parallel and Distributed Computing, Parallel Discrete Event Simulation (PES), Modelling & Simulation.
I. Introduction
Parallel and Distributed Computing is a field that has been evolving for almost half a century. During these years it has driven intensive research and development, bringing steady advances to the field. In the last decade, as it found more and more applicability in various domains, research has flourished further. The applications of PADS are immense; some of the fields where it finds application are air traffic management, weather forecasting, ocean modelling, oil reservoir simulation, pollution tracking and semiconductor simulation.
Research in PADS has also led to the birth of many sub-fields such as grid computing, cluster computing and autonomic computing. Each of these has its unique applicability, making PADS ever more popular among the scientific community and the general public. This is also demonstrated by several projects based on PADS: ABC@home [2], which finds triples related to the ABC conjecture [1]; Bitcoin, a peer-to-peer electronic cash system [3]; Austrian Grid II [4], which developed a topology-aware supercomputing API for the grid; the Active Harmony project, a framework that supports the execution of computational objects [5], [6], [7]; MilkyWay@Home, which creates a highly accurate three-dimensional model of the Milky Way galaxy [8]; the Earth Simulator; and IBM's Blue Gene [16].
To understand their applicability, it is essential to understand the concept of PADS. The topic requires a broad knowledge of the hardware that is to be connected in parallel and distributed form, and of the software required to control that hardware efficiently. At the end of any research or invention, the basic demands are the applicability, efficiency and performance of the system. So, once the hardware and software issues are addressed, the system must be modelled to obtain the best possible configuration and performance.
This paper concentrates on the modelling and simulation aspects of the field rather than on hardware and software. It first introduces parallel computing, distributed computing and PADS. A few of the modelling techniques and models are then discussed, with special emphasis on PES since it is the most extensively used modelling technique. The paper then examines the various simulation techniques and the software developed for the purpose, and finally gives a detailed study of the latest sub-field of PADS, cloud computing.
II. Parallel and Distributed Computing
2.1 Parallel Computing
Parallel computing is a technique in which multiple processors share the same memory to execute the required computations. Processors built by companies like Intel, AMD and ARM in recent times implement parallel computing using multiple processor cores. This makes executions faster, more reliable and more efficient. Depending on the requirements, the number of processors, their orientation and other factors may vary. A visual representation is depicted in Fig. 1.
Fig. 1: Parallel Computing
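As a rough illustration of the shared-memory idea (a sketch, not a system discussed in this paper), several workers can read the same in-memory array and compute partial sums concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy shared-memory computation: four workers sum disjoint slices of
# the SAME in-memory list, then the partial sums are combined.
data = list(range(1000))

def partial_sum(lo, hi):
    # Every worker reads the same shared 'data' object.
    return sum(data[lo:hi])

with ThreadPoolExecutor(max_workers=4) as pool:
    bounds = [(i, i + 250) for i in range(0, 1000, 250)]
    futures = [pool.submit(partial_sum, lo, hi) for lo, hi in bounds]
    total = sum(f.result() for f in futures)

print(total)  # 499500
```

Note that CPython's global interpreter lock limits true CPU parallelism for threads; the same shared-memory pattern is realised more fully by multi-core languages and libraries such as C with OpenMP.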
2.2 Distributed Computing
This is a technique in which multiple systems of processors are connected using a network such as a LAN or WAN. Here, each processor has its own memory. The network creates a unified system that can break a large computation job into smaller parts and share them among the nodes for execution. This makes simulations faster, more reliable and more efficient. Another benefit is that it allows geographical distribution among the systems. A visual representation is depicted in Fig. 2.
Fig. 2: Distributed Computing
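The scatter-compute-gather pattern described above can be sketched with worker processes standing in for network nodes (each process has private memory, mimicking a node on a LAN/WAN; a real distributed system would ship the chunks over the network):

```python
from multiprocessing import Pool

def node_work(chunk):
    # A "node" computes on its local copy of the data only
    # (no shared memory between worker processes).
    return sum(x * x for x in chunk)

def run_job(data, nodes=4):
    # Coordinator splits the job, distributes chunks, merges results.
    size = len(data) // nodes
    chunks = [data[i * size:(i + 1) * size] for i in range(nodes)]
    with Pool(processes=nodes) as pool:
        partials = pool.map(node_work, chunks)   # scatter, compute
    return sum(partials)                         # gather, reduce

if __name__ == "__main__":
    print(run_job(list(range(100))))  # sum of squares 0..99 = 328350
```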
2.3 Parallel and Distributed Computing
This forms a hybrid of the two configurations described above, a unique amalgam that combines the strengths and benefits of both. Fig. 3 depicts a typical parallel and distributed system. Some of its advantages are:
- Capability of executing large simulations and computations
- Higher reliability and fault tolerance
- Geographical distribution
- Heterogeneous systems [27]
The classification of parallel computing is most widely done using Flynn's taxonomy [9]. The classification is based on the mode of execution of the instruction set:
- SISD: Single instruction stream, single data stream
- SIMD: Single instruction stream, multiple data streams
- MISD: Multiple instruction streams, single data stream
- MIMD: Multiple instruction streams, multiple data streams
Fig. 3: Parallel & Distributed System
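To make the SIMD category concrete: vectorised array operations, as in NumPy (an illustrative programming-level analogue, not a tool cited by this paper), apply a single operation to many data elements at once:

```python
import numpy as np

# SIMD-style data parallelism: one "instruction" (element-wise add)
# applied to multiple data elements in a single call.
a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30, 40])
c = a + b
print(c.tolist())  # [11, 22, 33, 44]
```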
Based on the orientation, design and application, PADS can be classified into several types [10].
2.3.1 Cluster Computing
Here, many small computers are locally interconnected to form, in effect, a single high-power computer.
2.3.2 Grid Computing
It can be considered an interconnection of clusters forming a geographically distributed system. It is generally used to interconnect the various branches of a single organization, or several organizations with similar interests.
2.3.3 Utility Computing
It is a mode of providing access to PADS for smaller consumers who cannot build their own systems. Utilities such as hardware, software and infrastructure support can be provided. It thus becomes a business field, where utility is provided at a cost. It generally does not involve the internet for its functioning.
2.3.4 Cloud Computing
It is a form of PADS where the interconnection is achieved over the internet. Since no extra cabling or configuration costs are involved, it is the most popular among its siblings. It is therefore also available to the general public and provides a great opportunity for business.
III. Modelling Parallel and Distributed Systems
Since the beginning of research and development on parallel and distributed systems (PADS), many models and modelling techniques have been developed to understand and apply the technology in a better and more efficient way. Some of the major models are discussed in this section.
3.1 System Models
A PADS system may contain various components such as processors, memory, buses, networks and switches. These components communicate with one another by sharing and executing smaller tasks so as to complete the bigger simulation as a whole. System modelling is the technique that helps to model the PADS system in various ways [18].
3.1.1 Hierarchical Model
Here, a hierarchical model of the overall system is created, which helps to deal with the various levels of the system in detail. Each entity is governed by another entity at a higher level, up to the top, where the central controller acts as the supervisor.
3.1.2 Layered system Model
In this type of model, the entities are built into a network of n layers. Each entity in layer n implements the functions of that layer and has peer protocols to communicate with other layer-n entities. The peer protocol in each layer is a set of rules that enforces syntax, semantics and timing. This model relates the entities with strict rules between peers and layer interactions. The information-hiding principle is also an important property of this model.
3.1.3 Object-based model
This model overcomes some of the flaws of the layered system model, with its strict rules and regulations. Objects are small elements whose logic is stored in the system; each controls a certain part of the system and may hide the hardware it serves and controls. The objects are generally divided into four classes: actors are objects that operate on other objects; servers are objects that come into operation only when used by other objects; agents are objects that operate on one or more objects on behalf of another object; and passive objects are only operated upon and cannot operate on others.
3.2 Network Based Model
Depending on the requirements and benefits, PADS can be executed in various kinds of network models, which can be static or dynamic. Some static models are the fully connected model, star topology, ring topology and 2D mesh with wraparound. Dynamic models consist of switches that can instantaneously switch the connections among the systems during runtime. Fig. 4 illustrates a static and a dynamic model. Some network topologies currently in use are bus-based networks, crossbar networks, multistage networks, omega networks, hypercube networks and tree networks.
Fig. 4: Static and Dynamic Networks
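As a small concrete example of one of these topologies: in a d-dimensional hypercube network, each node is labelled with a d-bit number and connects to the d nodes whose labels differ in exactly one bit. This can be computed directly:

```python
# Neighbours of a node in a d-dimensional hypercube network: flip each
# of the d bits of the node's label in turn.
def hypercube_neighbors(node, d):
    return [node ^ (1 << k) for k in range(d)]

print(hypercube_neighbors(0b000, 3))  # [1, 2, 4]
print(hypercube_neighbors(0b101, 3))  # [4, 7, 1]
```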
3.3 Performance Modelling
This is a method of modelling in which the performance of the PADS is assessed through the model development effort, model evaluation effort, efficiency and accuracy of the system. One such framework was developed at the Performance and Architecture Laboratory (PAL) at Los Alamos National Laboratory [11], [12]. To measure the performance of the PADS system, the executions in the system are represented by a mathematical model that depends on the problem size and the computation power of the machine. The parameters are program execution time, computation time, communication time and memory exchange time in the multiprocessor system [11]. The workload model depends on the overall source code of the program, and the machine model depends on the communication and computation among the nodes in the network. Performance Prophet is another tool, developed to provide a graphical user interface that tackles problems related to specification and modification [13]; it requires the user to specify the model graphically in the Unified Modeling Language. The Performance Oriented End-to-end Modelling System (POEMS) [14], developed between 1997 and 2000, is composed of various evaluation tools: once the model is created from component models, each component of the system is evaluated with its respective tool. This technique can be used as an automatic task-graph generator for High Performance Fortran. The Parallel Object-oriented Simulation Environment (POSE) [15] is used to observe the performance of programs on large systems like the IBM Blue Gene [14]; it uses Charm++, a parallel C++ library, for modelling.
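A minimal analytic model of the kind these tools embody can be sketched as follows; the decomposition into computation, communication and memory-exchange time follows the parameters listed above, while the function name, its arguments and the example values are purely illustrative:

```python
# Predicted execution time = computation time + communication time
# + memory-exchange time (illustrative parameter names and values).
def predicted_time(flops, flop_rate, bytes_moved, bandwidth,
                   mem_ops, mem_latency):
    t_comp = flops / flop_rate          # computation time
    t_comm = bytes_moved / bandwidth    # communication time
    t_mem = mem_ops * mem_latency       # memory-exchange time
    return t_comp + t_comm + t_mem

# e.g. 2 GFLOP of work on a 1 GFLOP/s node, 100 MB moved at 100 MB/s:
print(predicted_time(2e9, 1e9, 1e8, 1e8, 0, 0.0))  # 3.0
```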
3.4 Mathematical Modelling
This approach uses mathematical techniques [21] to model the given PADS. A typical system such as a flight simulator or an air traffic control system can be represented using differential equations. Once the equations are defined, it becomes possible to obtain the various possible outputs based on parameters such as time, situation and input values. Since the systems being modelled are complex, the calculations become complex as well, so the mathematical model itself may be evaluated using PADS, as the calculations require a parallel and distributed environment.
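For instance, a single first-order differential equation dx/dt = -kx (a toy stand-in for the far larger equation systems mentioned above; all parameter values here are illustrative) can be stepped forward numerically:

```python
import math

# Euler integration of dx/dt = -k*x; the exact solution is x0*exp(-k*t).
def simulate(x0=1.0, k=0.5, dt=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)   # one Euler step
    return x

# After t = steps*dt = 10 s the state should be close to exp(-5).
print(simulate(), math.exp(-5))
```

In a real PADS setting, the state vector would be huge and the integration step itself would be partitioned across processors.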
3.5 Parallel Discrete Event Simulations (PES)
This is the modelling approach that has become the most popular and extensively used technique for PADS. Modelling can be done in two ways: time-stepped models and event-stepped models. Event-stepped models have proved very efficient and reliable, and have thus become a hot research topic. In PES, the simulation proceeds in event-based order. Fig. 5 shows three different types of PES: the first is a single-processor model; the second is a two-processor model, where an event on one processor may lead to another event on the other processor. PES mainly consists of four aspects that lead to smooth functioning and modelling of the system [17].
3.5.1 Queuing
As seen in the figure, the events in the system must be ordered into a queue; the technique is called event ordering or queuing. This is achieved by time-stamping each event so that executions can be done in order.
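The time-stamping described above can be sketched with a priority queue ordered by timestamp (a minimal illustration, not any particular simulator's event list):

```python
import heapq

# Events carry a timestamp; the heap always yields the earliest one,
# so execution proceeds in time-stamp order regardless of insertion order.
events = []
heapq.heappush(events, (3.0, "arrival"))
heapq.heappush(events, (1.0, "departure"))
heapq.heappush(events, (2.0, "service"))

order = []
while events:
    timestamp, name = heapq.heappop(events)
    order.append(name)

print(order)  # ['departure', 'service', 'arrival']
```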
3.5.2 Timing
Timing is required to understand the duration of the simulation and of the actual process. It is of three types. Physical time is the actual time taken by an event in real life; this event may be a game of football or the building of a plane. When a real-life event is simulated, the physical time becomes the simulation time. The wall-clock time gives the output of the hardware clock; that is, wall-clock time gives the actual time taken for the whole simulation process.
3.5.3 Synchronization
Synchronization among the processors is an essential task in the system, and is mainly of two types. In the optimistic type of synchronization, an event can be executed even if there is a chance that an event that should have been executed earlier may arrive later. Once such a violation is detected, the system rolls back to the earlier state. This system always needs to store its current state in memory before moving to the next state, so that it can roll back whenever required. Although it avoids reliance on lookahead, it may incur overhead from too many rollbacks. In the conservative approach, an event is kept on hold if there is a chance that an event with an earlier timestamp still needs to be processed. Thus, events are executed only when it is certain that all events with a timestamp earlier than the current state have been executed. This may lead to a deadlock, where every processor goes into a hold state because it is waiting for an earlier event at another processor to be executed first. Its advantage over the optimistic approach is that it does not require extra memory to keep storing its current state.
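The optimistic scheme's save-and-roll-back cycle can be sketched as follows. This is a deliberately simplified, hypothetical logical process: it snapshots its state before each event and restores the snapshots when a straggler arrives (a full optimistic simulator such as Time Warp would also re-execute the rolled-back events and send anti-messages, which is omitted here):

```python
import copy

class OptimisticLP:
    """Toy logical process using optimistic synchronization."""
    def __init__(self):
        self.state = {"count": 0}
        self.clock = 0.0
        self.history = []   # (event_ts, prev_clock, state_before) snapshots

    def execute(self, ts):
        if ts < self.clock:          # straggler: causality violated
            self.rollback(ts)
        # Save state BEFORE executing, so this event can be undone later.
        self.history.append((ts, self.clock, copy.deepcopy(self.state)))
        self.clock = ts
        self.state["count"] += 1     # the "event" itself

    def rollback(self, ts):
        # Undo every event whose timestamp is at or after the straggler.
        while self.history and self.history[-1][0] >= ts:
            _, self.clock, self.state = self.history.pop()

lp = OptimisticLP()
for t in (1.0, 2.0, 4.0, 3.0):       # the event at 3.0 arrives late
    lp.execute(t)
print(lp.clock, lp.state)
```

The snapshot list is exactly the extra memory cost noted above; a conservative simulator avoids it by never executing an event until it is provably safe.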
3.5.4 Execution
Execution is the final product of the PES. Based on the user's requirements, execution may be of various types. As-fast-as-possible execution is employed when only the end results are required: the simulation is done in one go, as fast as possible, and the results are returned. Real-time execution is used when it is required to observe each step of the simulation from start to end. Scaled real time can be used to speed up or slow down the simulation so as to understand the processes in detail.
Since the beginning of PES, several algorithms and techniques have been developed for synchronization and execution, many of them aimed at avoiding deadlock in the conservative approach and overhead in the optimistic approach. Next, some of the simulation software and tools used for PADS are discussed.
Fig. 5: Parallel and Distributed Simulation
a) Single processor PES
b) Dual processor PES
c) N-processor PES
IV. Simulating Parallel and Distributed Systems
Simulation tools help users to obtain the final functionality and to evaluate the outcome and feasibility of a given model. Some of the most widely used simulators are discussed in this section.
4.1 SimOS
As the name suggests, SimOS [19] is capable of modelling and simulating a complete operating system; it can also simulate a complete computer system. It is made for studying complex systems such as operating systems, computer architecture and other complex workloads. It runs as a layer between the host (the hardware and software on which it runs) and the target (the OS and hardware being simulated).
4.2 SimJava
This is a Java-based tool built on DEVS that helps to simulate complex systems [20]. It can provide a visual animation of the objects in the simulation, and being Java software, it makes it easy to display these simulations or diagrams on web pages and provides compatibility among heterogeneous systems.
4.3 MicroGrid
This is a tool based on the C programming language. It enables the user to develop a virtual grid infrastructure that can then be implemented, and allows the user to experiment on the created system and evaluate parameters at different levels, such as the middleware, application and network layers. It is generally used in grid computing applications.
4.4 Others
There are many other application-specific tools used for PADS. GloMoSim [25] is used in mobile communication systems to simulate wireless communication protocols. Ns2 [26] is a very powerful tool capable of simulating a wide range of networks while allowing flexibility in the choice of coding language. Bricks [23], SimGrid [24] and GridSim [22] are tools, built by different organizations, that simulate the grid computing environment. Ptolemy [27] is a Java-based tool generally used for embedded systems; it employs a component-based design methodology.
V. Cloud Computing
Cloud computing is a branch of parallel and distributed systems where the processing is done over the internet. It forms a subscription-based service that allows users to use network-based storage and computing resources. Everyday examples of cloud computing are email and social networking sites such as Gmail, Hotmail and Facebook. It makes it possible to access the required resources at any time from any place; the only requirements are an internet connection and a suitable device such as a laptop or PDA. This makes it a cost-effective, efficient and reliable version of PADS.
Cloud computing evolved over time in three stages [28]: SPI is the oldest classification, followed by the IBM ontology and Hoff's cloud model. Initially the model was brief and simple, but as time progressed more classifications were required to provide greater flexibility, and the model grew larger. The three general parts of the system are [31]:
Software as a Service (SaaS) is the lowest form of cloud computing and requires the least skill to use. Services like email clients are an example.
Platform as a Service (PaaS) forms the intermediate level of the cloud computing paradigm. The user is provided with a platform to create products and services; an example is the Google App Engine, which provides an environment for the user to build Google Apps.
Infrastructure as a Service (IaaS) is the highest level of cloud service. Here the whole infrastructure is provided to the user, which makes it the most flexible but also requires the most skill on the user's side. Resources like memory and processors in the cloud can be controlled. Amazon EC2 is an example of IaaS.
One of the most popular simulation tools is CloudSim [29]. It is a Java-based tool built on the SimJava plat-
form; it is an event-based simulator that models the different entities in a cloud system. Other cloud simulators
are GDCSim (Green Data Center Simulator) [32] and the GreenCloud simulator [30], which is built on the ns2 platform.
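CloudSim inherits this event-driven style from SimJava: simulation entities exchange timestamped events through a future-event queue, and the kernel repeatedly delivers the earliest pending event while advancing the simulation clock. A minimal, self-contained sketch of that principle (the `Kernel`, `Event` and `Entity` names below are illustrative and are not CloudSim's actual API) might look like:

```java
import java.util.PriorityQueue;

// Minimal discrete-event kernel in the spirit of SimJava/CloudSim.
// All names here are illustrative, not the real simulator APIs.
public class Kernel {
    // An event carries a timestamp, a destination entity, and a tag.
    record Event(double time, Entity dest, String tag) {}

    // A simulation entity reacts to each event delivered to it.
    interface Entity {
        void process(Event ev, Kernel sim);
    }

    // Future-event queue, ordered by event timestamp.
    private final PriorityQueue<Event> futureQueue =
        new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));
    private double clock = 0.0;

    // Schedule an event 'delay' time units after the current clock.
    public void schedule(double delay, Entity dest, String tag) {
        futureQueue.add(new Event(clock + delay, dest, tag));
    }

    public double clock() { return clock; }

    // Main loop: pop the earliest event, advance the clock, deliver it.
    public void run() {
        while (!futureQueue.isEmpty()) {
            Event ev = futureQueue.poll();
            clock = ev.time();
            ev.dest().process(ev, this);
        }
    }

    public static void main(String[] args) {
        Kernel sim = new Kernel();
        // A toy entity that simply logs each event as it is delivered.
        Entity datacenter = (ev, s) ->
            System.out.printf("t=%.1f %s%n", s.clock(), ev.tag());
        sim.schedule(1.0, datacenter, "job-1 arrives");
        sim.schedule(3.0, datacenter, "job-2 arrives");
        sim.run();
    }
}
```

In CloudSim itself the entities would be datacenters, brokers and virtual machines rather than a single toy logger, but the underlying kernel loop is the same.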
VI. Conclusion
This paper has given the reader an overview of the various modelling and simulation techniques used in PADS.
As explained throughout the paper, PADS promises immense applicability, and because of this, further rigorous
research is inevitable. The applications and feasibility of cloud computing have also been discussed, along with
a brief description of the simulation tools for cloud computing. In the future, the author plans to evaluate and
compare the modelling and simulation techniques discussed here, and to apply this knowledge to provide
innovative and effective solutions for current cloud computing technology.
References
[1]. Elkies, Noam D. 2007. The ABC's of number theory. The Harvard College Mathematics Review 1(1): 57-76.
[2]. http://www.abcathome.com/, Mathematical Institute of Leiden University. Accessed 06/05/2015.
[3]. Satoshi Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System”, www.bitcoin.org
[4]. Karoly Bosa, Wolfgang Schreiner, Friedrich Priewasser, Report on the First Feature-Complete Prototype of a Distributed
Supercomputing API for the Grid. Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Linz,
Austria. Technical report no. AG-D4-1-2010_1, March 2010. Austrian Grid Deliverable.
7. Simulation of Parallel And Distributed Computing: A Review
DOI: 10.9790/0661-1802030511 www.iosrjournals.org
[5]. Peter Keleher and Jeffrey K. Hollingsworth, Prediction and Adaptation in Active Harmony, July 1998 IEEE International Sym-
posium on High-Performance Distributed Computing (HPDC)
[6]. Peter Keleher and Jeffrey K. Hollingsworth, Prediction and Adaptation in Active Harmony, July 1999 Cluster Computing, 2
(1999)
[7]. Jeffrey K. Hollingsworth and Ananta Tiwari, End-to-end Auto-tuning with Active Harmony, 2010 in Performance Tuning in
Scientific Computing, D. Bailey & S. Williams, ed.,
[8]. Nathan Cole, Travis Desell, Daniel Lombraña González, Francisco Fernandez de Vega, Malik Magdon-Ismail, Heidi Newberg,
Boleslaw Szymanski and Carlos Varela. Evolutionary Algorithms on Volunteer Computing Platforms: The MilkyWay@Home
Project. In F. Fernandez de Vega, E. Cantu-Paz (Eds.): Parallel and Distributed Computational Intelligence, SCI 269, pp. 63-90.
Springer-Verlag Berlin Heidelberg, 2010.
[9]. Flynn, Michael J. "Very high-speed computing systems." Proceedings of the IEEE 54.12 (1966): 1901-1909.
[10]. Indu Gandotra, Pawanesh Abrol, Pooja Gupta, Rohit Uppal and Sandeep Singh, Cloud Computing Over Cluster, Grid
Computing: a Comparative Analysis. In Journal of Grid and Distributed Computing, Volume 1, Issue 1, 2011, pp. 01-04.
[11]. D. Kerbyson, H. Alme, A. Hoisie, F. Petrini, H.Wasserman, and M. Gittings. Predictive Performance and Scalability Modeling of
a Large-scale Application. In ACM/IEEE conference on Supercomputing (SC 2001), Denver, CO, USA, November 2001. ACM.
[12]. D. Kerbyson, A. Hoisie, and H. Wasserman. Use of Predictive Performance Modeling During Large-Scale System Installation.
Parallel Processing Letters, 15(4), December 2005.
[13]. S. Pllana and T. Fahringer. PerformanceProphet: A Performance Modeling and Prediction Tool for Parallel and Distributed
Programs. In The 2005 International Conference on Parallel Processing (ICPP-05), Performance Evaluation of Networks for
Parallel, Cluster and Grid Computing Systems, Oslo, Norway, June 2005. IEEE Computer Society.
[14]. V. Adve, R. Bagrodia, J. Browne, E. Deelman, A. Dube, E. Houstis, J. Rice, R. Sakellariou, D. Sundaram-Stukel, P. Teller, and
M. Vernon. POEMS: End-to-End Performance Design of Large Parallel Adaptive Computational Systems. IEEE Transactions on
Software Engineering, 26:1027–1048, November 2000.
[15]. T. Wilmarth, G. Zheng, E. Bohm, Y. Mehta, N. Choudhury, P. Jagadishprasad, and L. Kale. Performance Prediction using
Simulation of Large-scale Interconnection Networks in POSE. In 2005 Workshop on Principles of Advanced and Distributed
Simulation (PADS), Monterey, California, June 2005. IEEE Computer Society.
[16]. IBM Research: Blue Gene Project. http://www.research.ibm.com/bluegene/. Accessed 7 May 2015.
[17]. Kalyan S. Perumalla. “Parallel and Distributed Simulation: Traditional Techniques and Recent Advances” in Proceedings of the
2006 Winter Simulation Conference
[18]. Shem-Tov Levi and Ashok K. Agrawala, Fault Tolerant System Design, McGraw-Hill Series on Computer Engineering, 1994
[19]. M. Rosenblum, S. A. Herrod, E. Witchel, A. Gupta. Complete Computer System Simulation: The SimOS Approach. IEEE
Parallel and Distributed Technology, 1995.
[20]. F. Howell and R. McNab. SimJava: A Discrete Event Simulation Package for Java with Applications in Computer Systems
Modelling. First International Conference on Web-based Modelling and Simulation, San Diego, CA, Society for Computer
Simulation, Jan 1998.
[21]. Kvasnica, Peter, Igor Kvasnica, and Mária Igazová. "Mathematical Model and its Parallel Simulation by Web Services."
Journal of Information, Control and Management Systems 7.2 (2009).
[22]. R. Buyya and M. Murshed. GridSim: A Toolkit for the Modelling and Simulation of Distributed Resource Management and
Scheduling for Grid Computing. The Journal of Concurrency and Computation: Practice and Experience (CCPE), Wiley Press,
May 2002.
[23]. Takefusa, Atsuko. "Bricks: A performance evaluation system for scheduling algorithms on the grids." JSPS Workshop on Applied
Information Technology for Science (JWAITS 2001). 2001.
[24]. Casanova H. SimGrid: A toolkit for the simulation of application scheduling. Proceedings 1st IEEE/ACM International
Symposium on Cluster Computing and the Grid (CCGrid2001), Brisbane, Australia, May 2001. IEEE Computer Society Press:
Los Alamitos, CA, 2001.
[25]. Zeng X, Bagrodia R, Gerla M. GloMoSim: A library for parallel simulation of large-scale wireless networks. Proceedings 12th
Workshop on Parallel and Distributed Simulations (PADS98), Banff, Alberta, Canada, May 1998. IEEE Computer Society Press:
Los Alamitos, CA, 1998; 154–161.
[26]. Fall K, Varadhan K (eds.). The NS manual. Technical Report, University of California at Berkeley, June 2003.
[27]. Buck, Joseph T., et al. "Ptolemy: A framework for simulating and prototyping heterogeneous systems." (1994).
[28]. Syed A. Ahson, Mohammed Ilyas, “Cloud Computing Software Service: Theory and Techniques”, CRC Press, Taylor & Francis
Group, 2011.
[29]. R. Calheiros, R. Ranjan, A. Beloglazov, C. De Rose, and R. Buyya, “CloudSim: a toolkit for modeling and simulation of cloud
computing environments and evaluation of resource provisioning algorithms,” Software: Practice and Experience, vol. 41, no. 1,
pp. 23–50, 2011.
[30]. D. Kliazovich, P. Bouvry, and S. Khan, “GreenCloud: a packet-level simulator of energy-aware cloud computing data centers,”
The Journal of Supercomputing, 2010.
[31]. Fujimoto, Richard M., Asad Waqar Malik, and A. Park. "Parallel and distributed simulation in the cloud." SCS M&S Magazine 3
(2010): 1-10.
[32]. Sandeep K.S. Gupta, Rose Robin Gilbert, Ayan Banerjee, Zahra Abbasi, Tridib Mukherjee, Georgios Varsamopoulos, “GDCSim:
A Tool for Analyzing Green Data Center Design and Resource Management Techniques,” in Latin American Network Operations
and Management Symposium, 2011.