IJERD (www.ijerd.com): International Journal of Engineering Research and Development
This document summarizes and compares three computing models: cluster computing, grid computing, and cloud computing. Cluster computing links multiple computers together to work as a single system for high-performance computing tasks. Grid computing divides and distributes large programs across interconnected computers. Cloud computing provides on-demand access to shared computing resources over the internet. The document discusses challenges and example projects and applications for each model, providing an overview of how they differ and where they are applied.
Scheduling in Virtual Infrastructure for High-Throughput Computing (IJCSEA Journal)
For the execution of scientific applications, different methods have been proposed to dynamically provide execution environments that hide the complexity of the underlying distributed and heterogeneous infrastructures. Recently, virtualization has emerged as a promising technology to provide such environments. Virtualization abstracts away the details of physical hardware and provides virtualized resources for high-level scientific applications, offering a cost-effective and flexible way to use and manage computing resources. Such an abstraction is appealing in Grid computing and Cloud computing for better matching jobs (applications) to computational resources. This work applies the virtualization concept to the Condor dynamic resource management system by using the Condor Virtual Universe to harvest existing virtual computing resources to their maximum utility. It allows computing resources to be provisioned dynamically at run-time by users based on application requirements, instead of statically at design-time, thereby laying the basis for efficient use of the available resources.
BUILDING A PRIVATE HPC CLOUD FOR COMPUTE AND DATA-INTENSIVE APPLICATIONS (ijccsa)
Traditional HPC (High Performance Computing) clusters are best suited for well-formed calculations. The
orderly batch-oriented HPC cluster offers maximal potential for performance per application, but limits
resource efficiency and user flexibility. An HPC cloud can host multiple virtual HPC clusters, giving the
scientists unprecedented flexibility for research and development. With the proper incentive model,
resource efficiency will be automatically maximized. In this context, there are three new challenges. The
first is the virtualization overheads. The second is the administrative complexity for scientists to manage
the virtual clusters. The third is the programming model. The existing HPC programming models were
designed for dedicated homogeneous parallel processors. The HPC cloud is typically heterogeneous and
shared. This paper reports on the practice and experiences in building a private HPC cloud using a subset
of a traditional HPC cluster. We report our evaluation criteria using Open Source software, and
performance studies for compute-intensive and data-intensive applications. We also report the design and
implementation of a Puppet-based virtual cluster administration tool called HPCFY. In addition, we show
that even if the overhead of virtualization is present, efficient scalability for virtual clusters can be achieved
by understanding the effects of virtualization overheads on various types of HPC and Big Data workloads.
We aim at providing a detailed experience report to the HPC community, to ease the process of building a
private HPC cloud using Open Source software.
A cluster is a type of parallel or distributed computer system, which consists of a collection of inter-connected stand-alone computers working together as a single integrated computing resource.
Data Distribution Handling on Cloud for Deployment of Big Data (ijccsa)
Cloud computing is a new, emerging model in the field of computer science. For varying workloads, cloud computing presents a large-scale, on-demand infrastructure. The primary practical use of clouds is to process massive amounts of data: processing large datasets has become crucial in research and business environments, and the big challenge associated with it is the vast infrastructure required. Cloud computing provides that infrastructure to store and process Big Data. VMs can be provisioned on demand in the cloud and formed into a cluster to process the data. The MapReduce paradigm can then be used, wherein the mapper assigns part of the task to particular VMs in the cluster and the reducer combines the individual output from each VM to produce the final result. We have proposed an algorithm to reduce the overall data distribution and processing time. We tested our solution in the CloudAnalyst simulation environment and found that our proposed algorithm significantly reduces the overall data processing time in the cloud.
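The map/reduce flow the abstract describes can be sketched with a toy word-count job. The VM cluster is only simulated here by plain function calls, and the data chunks are invented for the example:

```python
from collections import Counter
from functools import reduce

def mapper(chunk):
    # Each "VM" counts words in its assigned part of the data.
    return Counter(chunk.split())

def reducer(partial_a, partial_b):
    # Combine per-VM partial results into one final count.
    return partial_a + partial_b

data_chunks = ["cloud cloud grid", "grid cluster cloud"]  # parts handed to VMs
partials = [mapper(c) for c in data_chunks]               # map phase
total = reduce(reducer, partials)                         # reduce phase
# total["cloud"] is 3, total["grid"] is 2
```

In a real deployment each `mapper` call would run on a separate provisioned VM; the structure of the computation is the same.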
The concept of a genetic algorithm is particularly useful in load balancing, for finding the best distribution of virtual machines across servers. In this paper, we focus on load balancing and on efficient use of resources to reduce energy consumption without degrading cloud performance. Cloud computing is an on-demand service in which shared resources, information, software and other devices are provided according to the client's requirements at a specific time. It is a term generally used in the context of the Internet; the whole Internet can be viewed as a cloud. Capital and operational costs can be cut using cloud computing. Cloud computing is defined as a large-scale distributed computing paradigm, driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms and services is delivered on demand to external customers over the internet. It is a recent field in computational intelligence techniques which aims at surmounting computational complexity and dynamically provides services using very large, scalable, virtualized resources over the Internet. It can also be described as a distributed system containing a collection of computing and communication resources located in distributed data centers, shared by several end users. It has been widely adopted by industry, though many issues remain, such as load balancing, virtual machine migration, server consolidation and energy management.
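As a rough illustration of the genetic-algorithm idea for VM placement, the sketch below evolves an assignment of VMs to servers, minimizing the load spread. The loads, the elitist mutate-the-best scheme, and the fitness measure are invented for the sketch, not taken from the paper:

```python
import random

def fitness(assignment, vm_load, n_servers):
    # Lower spread of total load across servers = better balance.
    loads = [0.0] * n_servers
    for vm, server in enumerate(assignment):
        loads[server] += vm_load[vm]
    return max(loads) - min(loads)

def mutate(assignment, n_servers):
    # Move one randomly chosen VM to a random server.
    child = assignment[:]
    child[random.randrange(len(child))] = random.randrange(n_servers)
    return child

random.seed(0)
vm_load = [0.9, 0.4, 0.6, 0.1]                 # hypothetical VM loads
pop = [[random.randrange(2) for _ in vm_load] for _ in range(10)]
for _ in range(50):
    # Elitism: keep the fittest assignment, refill the population with mutants.
    best = min(pop, key=lambda a: fitness(a, vm_load, 2))
    pop = [best] + [mutate(best, 2) for _ in range(9)]
best = min(pop, key=lambda a: fitness(a, vm_load, 2))
```

A production GA would add crossover and a richer fitness term (energy, SLA violations); the perfectly balanced split here puts VMs 0 and 3 on one server and 1 and 2 on the other.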
An Algorithm to Synchronize the Local Database with Cloud Database (AM Publications)
Since the cloud computing [1] platform is widely accepted by industry, a variety of applications are designed to target cloud platforms. Database as a Service (DaaS) is one of the powerful platforms of cloud computing. There are many research issues on the DaaS platform, and one among them is data synchronization. Many approaches have been suggested in the literature to synchronize a local database from within the cloud environment. Unfortunately, very little work is available on synchronizing a cloud database from the local database. The aim of this paper is to provide an algorithm to solve the problem of data synchronization from the local database to the cloud database.
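A minimal sketch of one common local-to-cloud synchronization strategy is timestamp-based change pushing. The row layout, timestamps, and function name below are hypothetical; the paper's actual algorithm may differ:

```python
def sync_to_cloud(local_db, cloud_db, last_sync_ts):
    # Push every local row changed since the previous sync to the cloud copy.
    # Rows are dicts {row_id: (value, last_modified_timestamp)}.
    pushed = 0
    for row_id, (value, ts) in local_db.items():
        if ts > last_sync_ts:
            cloud_db[row_id] = (value, ts)
            pushed += 1
    # Remember the newest local timestamp for the next sync round.
    new_sync_ts = max((ts for _, ts in local_db.values()), default=last_sync_ts)
    return pushed, new_sync_ts

local = {1: ("alice", 100), 2: ("bob", 205), 3: ("carol", 210)}
cloud = {1: ("alice", 100), 2: ("bob-old", 90)}
pushed, ts = sync_to_cloud(local, cloud, last_sync_ts=200)
# rows 2 and 3 are pushed; cloud row 2 is overwritten with the newer value
```

A full solution would also handle deletions and conflicting concurrent edits, which this one-directional sketch ignores.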
Cloud computing is a realized wonder. It delights its users by providing applications, platforms and infrastructure without any initial investment, and the "pay as you use" strategy comforts them. Usage can be increased by adding infrastructure, tools or applications to the existing application. The realistic beauty of cloud computing is that no sophisticated tool is needed for access; a web browser or even a smartphone will do. Cloud computing is a windfall for small organizations with less sensitive information, but for large organizations the risks related to security may be daunting. Necessary steps have to be taken to manage issues such as confidentiality, integrity, privacy and availability. In this paper, availability is studied from a multi-dimensional perspective: it is taken as a key issue and the mechanisms that enable its enhancement are analyzed.
In the last few years, businesses and their computing capabilities and capacities have grown at a very large scale. Managing these business requirements demands high-performance computing with very large-scale resources. Businesses do not want to invest in and concentrate on managing these computing issues rather than their core business, so they move to service providers. Service providers such as data centers serve their clients by sharing resources for computing, storage and so on, and by maintaining all of them.
ANALYSIS OF ATTACK TECHNIQUES ON CLOUD BASED DATA DEDUPLICATION TECHNIQUES (neirew J)
ABSTRACT
Data in the cloud is increasing rapidly, and this huge amount of data is stored in data centers around the world. Data deduplication allows lossless compression by removing duplicate data, so these data centers can use their storage efficiently by discarding redundant copies. Attacks on cloud computing infrastructure are not new, but attacks based on the cloud's deduplication feature are relatively new and have gained urgency. Attacks on deduplication in the cloud environment can happen in several ways and can give away sensitive information. Though the deduplication feature enables efficient storage and bandwidth utilization, it has drawbacks. In this paper, data deduplication features are closely examined, and the behavior of data deduplication under its various parameters is explained and analyzed.
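The core mechanism the paper examines can be illustrated with content-addressed block storage; the block contents and the choice of SHA-256 are assumptions made for this sketch:

```python
import hashlib

def dedup_store(blocks, store):
    # Store each unique block once, keyed by its SHA-256 fingerprint;
    # the "file" is kept as the ordered list of fingerprints (a recipe).
    recipe = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # a duplicate block is not stored again
        recipe.append(digest)
    return recipe

store = {}
blocks = [b"aaaa", b"bbbb", b"aaaa", b"aaaa"]
recipe = dedup_store(blocks, store)
# 4 logical blocks, but only 2 unique blocks physically stored
```

This same property is what the attacks in the paper exploit: an uploader who can observe whether a block was already present (e.g. via upload time or bandwidth) learns that some other user holds identical data.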
Efficient Point Cloud Pre-processing using The Point Cloud Library (CSCJournals)
Robotics, video games, environmental mapping and medicine are some of the fields that use 3D data processing. In this paper we propose a novel optimization approach for the open-source Point Cloud Library (PCL), which is frequently used for processing 3D data. Three main aspects of the PCL are discussed: point cloud creation from the disparity of color image pairs; voxel grid downsample filtering to simplify point clouds; and passthrough filtering to adjust the size of the point cloud. Additionally, OpenGL shader-based rendering is examined. An optimization technique based on CPU cycle measurement is proposed and applied to optimize those parts of the pre-processing chain where measured performance is slowest. Results show that with the optimized modules the performance of the pre-processing chain increased 69-fold.
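The voxel-grid downsampling step can be sketched in plain Python. PCL implements this in C++; the `leaf` parameter below mirrors PCL's leaf-size setting, and the tiny test cloud is invented:

```python
from collections import defaultdict

def voxel_downsample(points, leaf):
    # Bucket points into cubic voxels of side `leaf`, then replace each
    # bucket by its centroid -- the idea behind PCL's VoxelGrid filter.
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // leaf), int(y // leaf), int(z // leaf))
        voxels[key].append((x, y, z))
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for pts in voxels.values()]

cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.5, 1.5, 1.5)]
down = voxel_downsample(cloud, leaf=1.0)
# the first two points share voxel (0, 0, 0) and merge: 3 points -> 2
```

The simplification is lossy but preserves the cloud's overall shape, which is why it is a standard pre-processing step before more expensive operations.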
Analyzing the Difference of Cluster, Grid, Utility & Cloud Computing (IOSRjournaljce)
Virtualization and cloud computing are creating a fundamental change in computer architecture, in software and tools development, and in the way we store, distribute and consume information. The recent era of autonomic computing highlights the importance of the basic concepts of having and sharing various hardware, software and other resources and applications that can manage themselves with minimal human guidance. Virtualization and autonomic computing are not new to the world, but they developed rapidly with cloud computing. This paper gives an overview of various types of computing, discussing cluster, grid, utility and cloud computing, and analyzes their architectures, the differences between them, their characteristics, how they work, and their advantages and disadvantages.
A Virtualization Model for Cloud Computing (Souvik Pal)
Cloud Computing is an emerging field in the IT industry as well as in research. Its advancement came about through the fast-growing usage of the internet. Cloud Computing is basically on-demand network access to a collection of physical resources which can be provisioned according to the needs of the cloud user under the supervision of the Cloud Service Provider. From a business perspective, the viable achievements of Cloud Computing and recent developments in Grid computing have brought virtualization technology into the era of high-performance computing. Virtualization technology is widely applied in modern data centers for cloud computing: virtualization uses computer resources to imitate other computer resources or whole computers. This paper provides a virtualization model for cloud computing that may lead to faster access and better performance, and may help to combine self-service capabilities and ready-to-use facilities for computing resources.
Achieving High Performance Distributed System: Using Grid, Cluster and Cloud ... (IJERA Editor)
To increase the efficiency of any task, we require a system that provides high performance along with flexibility and cost efficiency for the user. Distributed computing, as we are all aware, has become very popular over the past decade. It has three major types: cluster, grid and cloud. In order to develop a high-performance distributed system, we need to utilize all three. In this paper, we first introduce all three types of distributed computing. We then examine them and explore trends in computing and green, sustainable computing to enhance the performance of a distributed system. Finally, presenting the future scope, we conclude the paper by suggesting a path to achieve a green, high-performance distributed system using cluster, grid and cloud computing.
An Efficient MDC based Set Partitioned Embedded Block Image Coding (Dr. Amarjeet Singh)
In this paper, fast, efficient, simple and widely used Set Partitioned Embedded bloCK (SPECK) coding is applied to multiple descriptions of a transformed image. The maximum potential of this type of coding can be exploited with the discrete wavelet transform (DWT) of images. Two correlated descriptions are generated from a wavelet-transformed image to ensure meaningful transmission of the image over noise-prone wireless channels. These correlated descriptions are encoded by the set partitioning technique through SPECK coders and transmitted over wireless channels. The quality of the reconstructed image at the decoder side depends on the number of descriptions received: the more descriptions received at the output side, the better the quality of the reconstructed image. However, if any of the multiple descriptions is lost, the receiver can estimate it by exploiting the correlation between the descriptions. Simulations performed on an image in MATLAB give decent performance and results even after half of the descriptions are lost in transmission.
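The multiple-description idea can be illustrated on a 1-D signal: split the samples into two correlated descriptions and, if one description is lost, estimate it from its neighbours. This is a deliberate simplification of the wavelet-domain scheme in the paper, with an invented example signal:

```python
def make_descriptions(signal):
    # Two correlated descriptions: even-index and odd-index samples.
    return signal[0::2], signal[1::2]

def reconstruct(even, odd):
    # Interleave both descriptions when both arrive intact.
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

def conceal_lost_odd(even):
    # If the odd description is lost, estimate each missing sample from
    # its neighbours, exploiting the correlation between descriptions.
    out = []
    for i, e in enumerate(even):
        out.append(e)
        nxt = even[i + 1] if i + 1 < len(even) else e
        out.append((e + nxt) / 2)
    return out

sig = [10, 12, 14, 16, 18, 20]
even, odd = make_descriptions(sig)
assert reconstruct(even, odd) == sig          # both descriptions received
approx = conceal_lost_odd(even)               # odd description lost
# approx == [10, 12.0, 14, 16.0, 18, 18.0]: close to sig except at the edge
```

The real scheme splits wavelet coefficients rather than raw samples, but the receive-what-you-can, estimate-the-rest behaviour is the same.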
The swiftly increasing demand for computational calculations in business processes, file transfer under certain protocols, and data centers forces the development of an emerging technology that caters to services for computational needs and for highly manageable, secure storage. To fulfil these technological desires, cloud computing is the best answer, introducing various sorts of service platforms in a high-computation environment. Cloud computing is the most recent paradigm promising to turn the vision of "computing utilities" into reality. The term "cloud computing" is relatively new, and there is no universal agreement on its definition. In this paper, we go through different areas of research expertise and novelty in the cloud computing domain and its usefulness in the field of management. Even though cloud computing provides many distinguished features, it still has certain shortcomings, along with comparatively high cost for both private and public clouds. It is a way of congregating masses of information and resources stored in personal computers and other gadgets and putting them on the public cloud to serve users. Cloud computing is turning out to be one of the most explosively expanding technologies in the computing industry in this era. It authorizes users to transfer their data and computation to a remote location with minimal impact on system performance. With the evolution of virtualization technology, cloud computing has emerged to be distributed systematically and strategically on a full basis. The idea of cloud computing has not only revitalized the field of distributed systems but also fundamentally changed how business utilizes computing today. Resource management in a cloud environment is a hard problem, due to the scale of modern data centers, the variety of resource types and their interdependencies, the unpredictability of load, and the range of objectives of the different actors in a cloud ecosystem.
Efficient architectural framework of cloud computing (Souvik Pal)
Cloud computing enables adaptive, convenient, on-demand network access to a collective pool of adjustable and configurable physical computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible supervision effort or service provider interaction. From a business perspective, the viable achievements of Cloud Computing and recent developments in Grid computing have brought virtualization technology into the era of high-performance computing. Clouds are an Internet-based concept and try to hide complexity overheads from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
In this paper we study cloud computing, its types, and the need to use cloud computing. We also study the architecture of mobile cloud computing, and we include new techniques for backing up and restoring data from mobile to cloud. We propose applying a compression technique while backing up and restoring data from the smartphone to the cloud and from the cloud to the smartphone.
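The proposed compress-before-backup step might look like the sketch below, using zlib as a stand-in for whichever compression technique the paper applies; the payload and function names are illustrative:

```python
import zlib

def backup(payload: bytes) -> bytes:
    # Compress on the phone before sending to the cloud,
    # cutting upload size and bandwidth cost.
    return zlib.compress(payload, level=9)

def restore(blob: bytes) -> bytes:
    # Decompress after downloading from the cloud.
    return zlib.decompress(blob)

data = b"contact list, messages, settings " * 100  # repetitive phone data
blob = backup(data)
assert restore(blob) == data          # round-trip is lossless
assert len(blob) < len(data)          # repetitive data compresses well
```

The compression ratio depends heavily on the data; media files that are already compressed (JPEG, MP4) would gain little from this step.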
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 3, Issue 1 (August 2012), PP. 45-50
Study and Applications of Cluster Grid and Cloud Computing
Umesh Chandra Jaiswal
Department of Computer Science & Engineering, Madan Mohan Malaviya Engineering College, Gorakhpur – 273 010
(UP)
Abstract––Technology is changing very rapidly in the modern age to meet growing computing requirements. A cluster is a collection of parallel or distributed computers interconnected by high-speed networks. Clusters are used mainly for high availability, load balancing, and high-speed computation. After many years of research and development, the grid gained momentum and became an effective means of coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations. A grid harnesses the processing capabilities of many computing units for a single task, managed by a single primary machine. The primary machine divides the work into numerous subtasks and distributes them to the different machines in the grid; as each machine finishes, it sends its result back to the primary machine, which controls all the tasks, clubs the results together, and produces a single output. Grid computing has typically been used in environments where users make few but large allocation requests. Cloud computing provides a pool of computing resources which users can access through the Internet. This paper presents an end-to-end comparison among different computing environments: Cluster, Grid, and Cloud computing. It discusses all three models in detail so that they can be better understood and distinguished from one another, surveys ongoing projects and applications that use these models as execution platforms, and describes tools that can be used to design and develop applications on them. The paper may thereby provide a platform for innovative ideas on implementing these computing models.
Keywords––Cluster Computing, Grid Computing, Cloud Computing, Related Projects, Applications, Simulation Techniques
I. INTRODUCTION
The computing environment is changing rapidly, and modern computers have completely redefined the parameters of performance. High-performance computing (HPC) was once restricted to institutions that could afford the significantly expensive and dedicated supercomputers of the time. The demand for HPC at small scale and lower cost led to cluster computing. In the mid-1990s, grid computing originated in academia with the aim of exploiting idle computing power located in remote computing centres when the local machine is busy. Initially it was referred to simply as a computing grid and had a limited number of users. After long and dedicated research in the area, grid computing became an effective way of coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations. Cloud computing is a computing model that came into existence around the end of 2007. It provides a pool of computing resources which users can access through the Internet. The basic principle of cloud computing is to shift computing from the local computer into the network [4]. This lets an enterprise use the resources it requires, including network, server, storage, application, and service, without a huge investment in their purchase, implementation, and maintenance, freeing that investment for other significant purposes. Resources are requested on demand without any prior reservation, which eliminates over-provisioning and improves resource utilization. Cloud computing is an emerging technology that uses the Internet and central remote servers to maintain data and applications, and it allows much more efficient computing by centralizing storage, memory, processing, and bandwidth. This paper presents an end-to-end comparison among these computing environments: cluster, grid, and cloud computing.
The paper is organized in six sections. Section II briefly explains the cloud, grid, and cluster computing models. Issues and challenges related to these models are listed in Section III, which also compares the three models from different perspectives. Section IV discusses projects and applications. Tools and simulation environments are discussed in Section V. Concluding remarks are presented in Section VI.
II. THREE COMPUTING MODELS
This section briefly presents the three computing models: cluster, grid, and cloud computing.
Cluster Computing: For many years the supercomputer was the leader in the field of computing. But some problems in science, engineering, and business could not be dealt with effectively using supercomputers, so clusters emerged with the aim of overcoming these problems; they also offered a very cheap way of gaining access to potentially huge computing power.
Definition: A cluster is a collection of parallel or distributed computers interconnected by high-speed networks such as Gigabit Ethernet, SCI, Myrinet, and InfiniBand. They work together to execute compute-intensive and data-intensive tasks that would not be feasible to execute on a single computer. Clusters are used mainly for high availability, load balancing, and computation. They serve high availability by maintaining redundant nodes that provide service when system components fail: even if one node fails, a standby node carries on the task without hindrance, eliminating single points of failure. When multiple computers are linked together in a cluster, they share the computational workload as a single virtual computer. From the user's viewpoint they are multiple machines, yet they function as a single virtual machine: user requests are received and distributed among all the standalone computers that form the cluster. The result is a balanced computational load across the different machines, improving the performance of the cluster system.
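The request-distribution scheme just described can be sketched in a few lines of Python. The node names and the round-robin policy are illustrative assumptions; the paper does not prescribe a specific balancing algorithm.

```python
from itertools import cycle

# A minimal sketch of cluster-style load balancing: incoming requests are
# spread round-robin across standalone nodes that together act as one
# virtual machine. Node names are placeholders, not a real cluster API.

class ClusterBalancer:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._next = cycle(self.nodes)
        self.assigned = {n: [] for n in self.nodes}

    def dispatch(self, request):
        node = next(self._next)            # pick the next node in rotation
        self.assigned[node].append(request)
        return node

balancer = ClusterBalancer(["node-1", "node-2", "node-3"])
targets = [balancer.dispatch(f"req-{i}") for i in range(6)]
```

Each node ends up with an equal share of the six requests, which is the balanced workload the text refers to.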
Grid computing: This combines computers from multiple administrative domains to divide and apportion the pieces of a program among several computers. Grid computing involves computation in a distributed fashion, and may also involve the aggregation of large-scale cluster-based systems. The size of a grid may vary from a small network of computer workstations within a corporation to large collaborations across many companies and networks. A widely used definition of grid computing is given below:
Definition: Ian Foster defined a grid as a system that coordinates resources that are not subject to centralized control, using standard, open, general-purpose protocols and interfaces, to deliver nontrivial qualities of service.
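The divide-and-gather pattern attributed to grids above (a primary machine splitting a task, farming out subtasks, and clubbing the partial results into one output) can be sketched as follows. The sum-of-squares job and the chunking scheme are illustrative choices, not taken from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

# The "primary machine" splits a large job into subtasks, dispatches them
# to workers (standing in for remote grid machines), and aggregates the
# partial results into a single output.

def subtask(chunk):
    """One unit of work executed on a (simulated) remote machine."""
    return sum(x * x for x in chunk)

def run_on_grid(data, workers=4):
    # Divide: interleaved chunks so every worker gets an equal share.
    chunks = [data[i::workers] for i in range(workers)]
    # Distribute: run the subtasks concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(subtask, chunks)
    # Aggregate: club the partial results into one output.
    return sum(partials)

result = run_on_grid(list(range(1000)))
```

The aggregated result equals the sequential computation, which is the invariant the primary machine relies on.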
Cloud Computing: According to Buyya et al. [10], a cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [11]. Cloud computing basically provides three kinds of service: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), as shown in Fig. I.
SaaS is a kind of service where many users make use of software hosted by the service provider and pay only for the time it is used. This is better than buying the hardware and software outright, since it removes the burden of updating the software to the latest version and of licensing, and is of course more economical. Example providers are the Salesforce Customer Relationship Management (CRM) system and Google Apps.
PaaS provides a high-level integrated environment to design, build, test, deploy, and update custom online applications. Example providers are Google App Engine, Microsoft Azure, RightScale, and Salesforce.
IaaS provides the user with processing power, storage, network, and other computing resources on which to run any software, including operating systems and applications. Some IaaS providers are AWS, Eucalyptus, OpenStack, GoGrid, and FlexiScale. This structure is shown in Fig. I.
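The split of responsibility across the three service layers can be captured in a small sketch. The `user_manages` lists and provider examples simply restate the SaaS/PaaS/IaaS description above; the data-structure layout itself is an illustrative choice.

```python
# Service models ordered from least to most user-managed responsibility,
# following the SaaS/PaaS/IaaS description in the text.

SERVICE_MODELS = {
    "SaaS": {
        "user_manages": [],                              # provider runs everything
        "examples": ["Salesforce CRM", "Google Apps"],
    },
    "PaaS": {
        "user_manages": ["application"],                 # user deploys code only
        "examples": ["Google App Engine", "Microsoft Azure", "RightScale"],
    },
    "IaaS": {
        "user_manages": ["application", "operating system"],  # user runs any software
        "examples": ["AWS", "Eucalyptus", "OpenStack", "GoGrid", "FlexiScale"],
    },
}

def layers_managed_by_user(model):
    """Return what the customer, rather than the provider, administers."""
    return SERVICE_MODELS[model]["user_manages"]
```

For instance, `layers_managed_by_user("IaaS")` includes the operating system, whereas under SaaS the user manages nothing.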
III. CHALLENGES IN CLUSTER, GRID AND CLOUD COMPUTING
A. Challenges in the Cluster Computing:
The research challenges of cluster computing are as follows:
i) Middleware: Producing software environments that provide the illusion of a single system image rather than a collection of independent computers.
ii) Programming: Applications that run on clusters must be written explicitly to incorporate the division of tasks between nodes, and the communication between nodes must also be taken care of.
iii) Elasticity: The variance in real-time response time when the number of service requests changes dramatically.
iv) Scalability: Meeting growing resource requirements without degrading the performance of the system.
B. Challenges in the Grid Computing:
The research challenges faced in grids include:
i) Dynamicity: Resources in a grid are owned and managed by more than one organization and may enter and leave the grid at any time, placing a burden on the grid.
ii) Administration: Forming a unified resource pool raises a heavy system-administration burden, along with other maintenance work to coordinate local administration policies with global ones.
iii) Development: Problems concern ways of writing software to run on grid-computing platforms, including decomposing and distributing work to processing elements and then assembling the solutions.
iv) Accounting: Finding ways to support different accounting infrastructures, economic models, and application models that can cope well with tasks that communicate frequently and are interdependent.
v) Heterogeneity: Finding ways to create wide-area, data-intensive programming and scheduling frameworks over a heterogeneous set of resources.
vi) Programming: The loose coupling between nodes and the distributed nature of processing make programming applications over grids more complex.
C. Challenges in the Cloud Computing:
i) Dynamic scalability: Compute nodes are scaled up and down dynamically by the application according to the response time of users' queries. The scheduling delays involved are a real concern, which leads to the need for an effective, dynamic load-management system.
ii) Multi-tenancy: As the number of applications running on the same compute node increases, the bandwidth allocated to each application shrinks, which may lead to performance degradation.
iii) Querying and access: Scalable provenance querying and secure access to provenance information are open problems for both grid and cloud environments.
iv) Standardization: Every organization has its own APIs and protocols, which leads to user-data or vendor lock-in. Integration and interoperability of all the services and applications is therefore a challenge.
v) Reliability and fault tolerance: Tools for testing applications against fault tolerance and compute failures are required to help in developing reliable systems.
vi) Debugging and profiling: Parallel and remote debugging has always been a problem in developing HPC programs and is an issue in cloud computing as well.
vii) Security and privacy: The user has no idea where data is stored or who will use it, and there are more hackers than developers.
viii) Power: Though cloud computing offers many types of service to meet users' needs, it consumes an enormous amount of power, so autonomic, energy-aware resource management is very much required.
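The dynamic-scalability challenge above amounts to a scaling policy driven by observed response time. A minimal sketch follows; the target latency, thresholds, and node limits are illustrative assumptions, and a real load-management system would also damp oscillation between scale-up and scale-down.

```python
# Toy autoscaling rule: add a node when response time exceeds the target,
# remove one when the system is comfortably under it, otherwise hold.

def scale_decision(current_nodes, response_ms, target_ms=200,
                   min_nodes=1, max_nodes=32):
    if response_ms > target_ms and current_nodes < max_nodes:
        return current_nodes + 1          # scale up under load
    if response_ms < 0.5 * target_ms and current_nodes > min_nodes:
        return current_nodes - 1          # scale down when idle
    return current_nodes                  # hold steady in the comfort band
```

The "comfort band" between half the target and the target itself is what prevents the node count from flapping on every measurement.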
A comparison of cluster, grid, and cloud computing is given in Table I.
IV. PROJECTS AND APPLICATIONS IN CLUSTER, GRID AND CLOUD COMPUTING
A. Cluster computing projects and applications
Some of the projects in the field of cluster computing are:
i) Condor [4]: A multifaceted project engaged in five primary activities:
- Research in distributed computing.
- Participation in the scientific community.
- Engineering of complex software.
- Maintenance of production environments.
- Education of students.
ii) ShaRCS (Shared Research Computing Services) [5]: This pilot project has been designed to define, demonstrate, and measure how shared computing and storage clusters residing in regional data centers can provide computing services to investigators.
iii) The Weather Research and Forecast (WRF): This is a collaborative project by several institutions to develop a next-
generation regional forecast model and data assimilation system for operational numerical weather prediction and
atmospheric research.
iv) Hadoop [4]: This is an open-source framework for running data-intensive applications in a processing cluster built from
commodity server hardware. Some customers use Hadoop clustering to analyze customer search patterns for targeted
advertising. Other applications include filtering and indexing of web listings, facial recognition algorithms to search for
images in a large database.
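The data-intensive pattern Hadoop popularized can be shown in miniature: a map phase emits (word, 1) pairs per input record, and a reduce phase sums the counts per word. Hadoop runs this same pattern across a commodity cluster; this single-process sketch only illustrates the shape of the computation.

```python
from collections import Counter
from itertools import chain

# Map phase: each input record (e.g. one line of a web listing) is turned
# into a list of (key, value) pairs.
def map_phase(record):
    return [(word, 1) for word in record.lower().split()]

# Reduce phase: all pairs sharing a key are combined; here, counts are summed.
def reduce_phase(pairs):
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

records = ["grid and cloud", "cluster and grid"]
counts = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
```

Because reducers only see independent key groups, the same two phases parallelize naturally across the nodes of a Hadoop cluster.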
Clusters have also been used to solve grand-challenge applications such as weather modelling, automobile crash simulation, life sciences, computational fluid dynamics, nuclear simulation, image processing, electromagnetics, data mining, aerodynamics, and astrophysics. Clusters have served as a platform for data-mining applications, which involve both compute-intensive and data-intensive operations, and for commercial applications, such as achieving high availability and backup in the banking sector. Clusters also host many Internet service sites, such as Hotmail, as well as web applications, databases, and other commercial applications.
B. Grid Projects and Applications
Some of the projects in grid computing have been discussed in brief below.
i) Globus [5]: Open-source grid software that addresses the most challenging problems in distributed resource sharing. The Globus project has also released DemoGrid [6], a tool that builds an instructional grid environment deployable on virtual machines in a cloud or on physical resources. The main goal of DemoGrid is to provide easy access to an environment with various grid tools, without having to install the tools or needing an account on an existing grid.
ii) EGI-InSPIRE [7] (Integrated Sustainable Pan-European Infrastructure for Researchers in Europe): Started in 2010 and co-funded by the European Commission for four years, this is a collaborative effort involving more than 50 institutions in over 40 countries. Its mission is to establish a sustainable European Grid Infrastructure (EGI) joining together the new Distributed Computing Infrastructures (DCIs) such as clouds, supercomputing networks, and desktop grids, for the benefit of user communities within the European Research Area. The ultimate goal of EGI-InSPIRE is to provide European scientists and their international partners with a sustainable, reliable e-Infrastructure that can support their needs for large-scale data analysis.
iii) Some other grid projects are the NSF's National Technology Grid, NASA's Information Power Grid, GriPhyN, NEESgrid, the Particle Physics Data Grid, and the European Data Grid. Grid applications range from advanced manufacturing, numerical wind tunnels, oil reservoir simulation, particle physics research, High Energy Nuclear Physics (HENP), and weather modelling to bio-informatics, terrain analysis for nature observation, scientific databases, and popular science web services.
iv) MammoGrid [7]: A medical grid application based on a service-oriented architecture. The aim is to deliver a set of evolutionary prototypes that demonstrate mammogram analysis, so that specialist radiologists working in breast cancer screening can use the grid information infrastructure to resolve common image-analysis problems.
v) DDGrid (Drug Discovery Grid) [7]: This project aims to build a collaboration platform for drug discovery using state-of-the-art P2P and grid computing technology. It intends to solve large-scale, computation- and data-intensive scientific applications in medicinal chemistry and molecular biology with the help of grid middleware. A database of over 10^6 compounds with 3D structures and physicochemical properties is also provided to identify potential drug candidates.
C. Cloud Projects and Applications
An overview of the initiative projects in the area of cloud computing is presented below:
i) CERN: The European Organization for Nuclear Research is developing a mega computing cloud to distribute data to scientists around the world as part of the LHC project.
ii) Unified Cloud Interface (UCI): This project, proposed by the Cloud Computing Interoperability Forum (CCIF), works on the development of standard APIs that can be used by all cloud service providers to overcome interoperability issues.
iii) Cloud-Enabled Space Weather Platform (CESWP): The purpose is to bring the power and flexibility of cloud computing to space weather physicists. The goal is to lower the barriers for physicists to conduct their science: to make it easier to collaborate with other scientists, develop space weather models, run simulations, produce visualizations, and enable provenance.
iv) OpenNebula: This project has been successful with the release of one of the most advanced and flexible enterprise-ready cloud management tools, and with the growth of an active open-source community. Its aim is to develop the most advanced, highly scalable, and adaptable software toolkit for cloud computing management. Plans include an operations network to simplify the management of OpenNebula cloud instances, fault-tolerance functionality to maximize uptime in the cloud, enhanced management of images and templates, new security functionality, enhanced support for federation of data centers, and support for multi-tier architectures.
v) TClouds [5]: The goal of this project is to prototype an advanced cloud infrastructure that can deliver a new level of secure, private, and resilient computing and storage that is cost-efficient, simple, and scalable. To achieve the security, resiliency, and scalability needed when outsourcing critical IT systems to a cloud, the project's scientists are building an advanced Cloud-of-Clouds framework, which keeps multiple backups of the TClouds data and applications in case of hardware failure or intrusion. Applications that can be deployed on a cloud include web hosting, media hosting, multi-tenant services, HPC, distributed storage, and multi-enterprise integration.
vi) Cloud computing in libraries: With the expansion of cloud computing, establishing a public cloud among many university libraries can conserve library resources and improve user satisfaction. Although OPAC (Online Public Access Catalog) and ILL (Inter-Library Loan) services exist, access to the shared resources through a uniform access platform is difficult; the adoption of cloud computing in such libraries makes it possible.
vii) RoboEarth: A European project led by the Eindhoven University of Technology, Netherlands, to develop a World Wide Web for robots: a giant database where robots can share information about objects, environments, and tasks. Researchers at ASORO (the A*STAR Social Robotics Laboratory, Singapore) have built a cloud-computing infrastructure that allows robots to generate 3-D maps of their environments much faster than they could with their onboard computers.
viii) Cloudo: A free computer that lives on the Internet, right in the web browser. It allows access to documents, photos, music, movies, and all other files from any computer or mobile phone.
ix) Panda Cloud Antivirus: The first free antivirus from the cloud. Regular updates are not a problem; it occupies very little in the way of system resources, uses collective-intelligence servers for fast detection, has a simple interface, and protects the PC even offline.
V. TOOLS AND SIMULATION ENVIRONMENT
We highlight some of the tools and simulators that can be used to develop applications on the three computing models.
A. Tools for cluster computing
i) Nimrod: A tool for parametric computing on clusters that provides a simple declarative parametric modelling language for expressing a parametric experiment. With it, one can easily create a plan for a parametric computation and use the Nimrod runtime system to submit, run, and collect the results from multiple cluster nodes.
ii) PARMON: A tool that allows the monitoring of system resources and their activities at three levels: system, node, and component. It has been used to monitor the C-DAC PARAM 10000 supercomputer, a cluster of 48 Ultra-4 workstations running Solaris.
iii) Condor: A specialized job and resource management system for compute-intensive jobs. It provides a job management mechanism, scheduling policy, priority scheme, and resource monitoring and management. Users submit their jobs; the tool chooses when and where to run them based upon a policy, monitors their progress, and ultimately informs the user upon completion.
iv) MPI and OpenMP: Message-passing libraries provide a high-level means of passing data between processes executing on distributed-memory systems, and help to extract high performance from collections of individual cluster nodes. MPI is the de facto standard for parallel programming on clusters and consists of a rich set of library functions for both point-to-point and collective communication among parallel tasks.
Other cluster simulators include Flexi-Cluster, a simulator for a single compute cluster; VERITAS, a cluster server simulator; etc.
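The point-to-point message passing that MPI standardizes can be illustrated with Python's standard library. This is an analogy only: two OS processes on one machine exchange a message over a pipe, playing the roles of two MPI ranks; on a real cluster, mpi4py or C MPI would be the natural tools.

```python
from multiprocessing import Process, Pipe

# A worker process computes a partial result and sends it back over a pipe,
# analogous to MPI_Send/MPI_Recv between rank 1 and rank 0.

def worker(conn, data):
    conn.send(sum(data))      # "send": ship the partial sum to the parent
    conn.close()

def gather_partial(data):
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn, data))
    p.start()
    result = parent_conn.recv()   # "recv": block until the message arrives
    p.join()
    return result

if __name__ == "__main__":
    total = gather_partial([1, 2, 3, 4])
```

The blocking `recv` mirrors the synchronization built into MPI point-to-point calls: the receiving rank cannot proceed until the message has been delivered.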
B. Tools used in Grid Computing
The tools used in the field of grid computing for resource discovery, management and performance analysis are given as below:
i) Paradyn: A performance analysis tool that supports performance-experiment management through techniques for quantitatively comparing several experiments, and performance diagnosis based on dynamic instrumentation. Experiments have to be set up manually, whereas performance analysis is done automatically.
ii) Nimrod-G: This uses the Globus middleware services for dynamic resource discovery and for dispatching jobs over computational grids. It allows scientists and engineers to model parametric experiments, transparently stage the data and program at remote sites, run the program on each element of a data set on different machines, and finally gather the results from the remote sites back to the user's site.
iii) Condor-G: This combines work from the Globus and Condor projects, enabling the utilization of large collections of resources that span multiple domains as if they all belonged to the user's personal domain. From Globus comes the use of protocols for secure inter-domain communication and standardized access to remote batch systems; from Condor come job submission, job allocation, error recovery, and the creation of a friendly execution environment.
iv) Globus: An open-source software toolkit that facilitates the construction of computational grids and grid-based applications across corporate, institutional, and geographic boundaries.
v) Gridbus (GRID computing and BUSiness) toolkit project: This is associated with the design and development of cluster and grid middleware technologies for service-oriented computing. It provides end-to-end services to aggregate or lease the services of distributed resources depending on their availability, capability, performance, cost, and QoS requirements.
vi) Legion: An object-based metasystem that supports transparent scheduling, data management, fault tolerance, site autonomy, and a middleware with a wide range of security options.
Other simulators used in grid computing include Gridsim, ZENTURIO, Optorsim, Chicsim, etc.
C. Tools used in Cloud Computing
Various tools and products that assist in developing applications on cloud computing are given below:
i) Zenoss: A single, integrated product that monitors the entire IT infrastructure wherever it is deployed: physical, virtual, or in the cloud. It manages networks, servers, virtual devices, storage, and cloud deployments.
ii) Spring Roo: A next-generation rapid application development tool which, combined with the power of the Google Web Toolkit (GWT), enables developers to build rich browser apps in enterprise production environments.
iii) CloudSim and CloudAnalyst: These help developers evaluate the requirements of large-scale cloud applications in terms of the geographic distribution of both computing servers and user workloads. The former was developed to study the behaviour of applications under various deployment configurations, whereas the latter gives developers insight into how to distribute applications among cloud infrastructures and value-added services.
iv) Cloudera: This open-source Hadoop software framework is increasingly used in cloud computing deployments due to its flexibility with cluster-based, data-intensive queries and other tasks. It allows exploring complex, non-relational data in its native form.
VI. CONCLUSION
This paper presented a detailed comparison of the three computing models: cluster, grid, and cloud computing. The issues and challenges related to these models were highlighted, projects and applications in various fields were briefly discussed, and the tools and simulation environments useful for developing applications were reviewed. Such a comparison from different perspectives makes the computing models easier to understand, since their features seem conceptually similar, and helps in identifying their similarities and differences. Grid and cloud computing appear to be promising models, especially with respect to standardizing APIs, security, interoperability, new business models, and dynamic pricing systems for complex services. Hence there is scope for further research in these areas.
REFERENCES
[1]. "PARMON", http://www.cloudbus.org/course/parmon.php.
[2]. "Cluster Computing", http://en.wikipedia.org/wiki/Cluster_computing.
[3]. "Cloud Security Alliance (CSA)", http://www.cloudsecurityalliance.org/trustedcloud.html; "Globus", http://www.globus.org/ogsa.
[4]. R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility"; http://csrc.nist.gov/groups/SNS/cloud-computing.
[5]. "Panda Cloud", www.cloudantivirus.com.
[6]. "Cloudera", http://cloudera.com.
[7]. "Cluster, Grid and Cloud Computing: A Detailed Comparison", 6th International Conference on Computer Science & Education (ICCSE 2011).
[8]. "Cloud Computing: New Challenges to the Entire Computer Industry", 1st International Conference on Parallel, Distributed and Grid Computing, 2010.
Table I: Computing Comparison of Cluster, Grid and Cloud

PARAMETERS           CLUSTER           GRID              CLOUD
SLA                  Limited           Yes               Yes
Allocation           Centralized       Decentralized     Both
Resource handling    Centralized       Distributed       Both
Loose coupling       No                Both              Yes
Protocols/API        MPI, PVM          MPI, GRAM, GIS    TCP/IP, SOAP, AJAX
Reliability          No                Half              Full
Security             No                Half              No
User friendliness    No                Half              Yes
Virtualization       Half              Half              Yes
Interoperability     Yes               Yes               Half
Business model       No                No                Yes
Task size            Single large      Single large      Small & medium
SOA                  No                Yes               Yes
Multi-tenancy        No                Yes               Yes
System performance   Improved          Improved          Improved
Self service         No                Yes               Yes
Computation service  Computing         Max. computing    On demand
Heterogeneity        No                Yes               Yes
Inexpensive          No                No                Yes
Fig. I: The structure of cloud computing services (SaaS, PaaS and IaaS).