This document describes how a simple grid computing environment was built from existing computing resources at Taiz University in Yemen. It outlines:
1) Installing and configuring software such as the Globus Toolkit, Tomcat, and the OGCE portal on three machines to provide basic grid services: a certificate authority server, a MyProxy server, and a portal server.
2) Configuring the hardware nodes and setting up the portal server, the certificate authority server, and the MyProxy server.
3) Testing basic grid services: credential delegation to MyProxy, credential retrieval from MyProxy, and GridFTP file transfers.
The results indicate the proposed grid model is promising for teaching and research at Taiz University and could serve as a
A Comparative Study: Taxonomy of High Performance Computing (HPC) – IJECEIAES
Computer technologies have developed rapidly in both software and hardware. The complexity of software is increasing with market demand as manual systems become automated, while the cost of hardware keeps decreasing. High Performance Computing (HPC) is a demanding technology and an attractive area of computing because of the huge volumes of data processed in many applications. The paper focuses on the applications of HPC and its main types: cluster computing, grid computing, and cloud computing. It also surveys the classifications and applications of each of these types, all of which are active areas of computer science. Finally, the paper presents a comparative study of grid, cloud, and cluster computing in terms of benefits, drawbacks, key research areas, characteristics, issues, and challenges.
Dynamic Resource Provisioning with Authentication in Distributed Database – Editor IJCATR
Data centers are among the largest consumers of energy in shared power infrastructures, and public cloud workloads carry different priorities and performance requirements across applications [4]. Cloud data centers are capable of sensing opportunities to host different programs. The proposed construction addresses the security level of privacy leakage in a distributed cloud system: by dealing with persistent workload characteristics, there are substantial gains in information that can be used to increase profit, reduce overhead, or both. Data mining, the process of analyzing data from different perspectives and summarizing it into useful information, is applied here. Three empirical algorithms are proposed for the assignment and ratio-estimation tasks; they are analyzed theoretically and compared using real Internet latency data to assess the testing methods.
The advent of Big Data has brought new processing and storage challenges, which are often addressed by distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect some resources to fail during use. Load balancing and task scheduling are important determinants of the performance of parallel applications; hence the need for load balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load balancing strategy at two levels: intra-scheduler load balancing, which avoids the use of the large-scale communication network, and inter-scheduler load balancing, which regulates the load of the whole system. The strategy improves the average response time of CLOAK-Reduce application tasks with minimal communication. We focus on three performance indicators: response time, process latency, and running time of MapReduce tasks.
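The two-level strategy described above can be sketched in code. The following is a minimal, hypothetical illustration: the class names, the load threshold, and the migration rule are our own assumptions for demonstration, not the CLOAK-Reduce implementation.

```python
# Sketch of two-level load balancing: intra-scheduler balancing places each
# task on the least-loaded local worker; inter-scheduler balancing migrates
# tasks from overloaded schedulers to underloaded ones.

class Scheduler:
    def __init__(self, name, n_workers):
        self.name = name
        self.workers = [0] * n_workers  # task count per worker

    def submit(self, n_tasks=1):
        # Intra-scheduler balancing: always pick the least-loaded worker.
        for _ in range(n_tasks):
            i = min(range(len(self.workers)), key=self.workers.__getitem__)
            self.workers[i] += 1

    def avg_load(self):
        return sum(self.workers) / len(self.workers)

def inter_balance(schedulers, threshold=2.0):
    # Inter-scheduler balancing: migrate one task at a time from the most
    # loaded scheduler to the least loaded one until under the threshold.
    moved = 0
    while True:
        hi = max(schedulers, key=Scheduler.avg_load)
        lo = min(schedulers, key=Scheduler.avg_load)
        if hi is lo or hi.avg_load() <= threshold:
            return moved
        j = max(range(len(hi.workers)), key=hi.workers.__getitem__)
        hi.workers[j] -= 1
        lo.submit()
        moved += 1
```

For example, if scheduler A (two workers) receives eight tasks while scheduler B (two workers) is idle, `inter_balance([a, b], threshold=2.0)` migrates tasks to B until A's average load drops to the threshold.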
Ensuring secure transfer, access and storage over the cloud storage – eSAT Journals
Abstract: The main concern in today's growing IT sector is the storage and maintenance of data. As data keeps being updated to meet users' needs, companies face a huge overhead in maintaining hardware. One solution is to move this enormous volume of data to cloud storage, which uses huge, remotely located data centers. Besides easing storage, these data centers also reduce the cost of maintaining the data. However, this distinctive feature of cloud storage raises many security issues that IT organizations should clearly understand. One emerging issue is the integrity of the data stored in the data center, i.e. checking whether the cloud provider misuses the data. The provider can misuse data in many ways, for example by copying or modifying files. Because the data resides in the provider's data center, the user cannot access it physically, so there must be a way for the user to check the reliability of the data in the cloud. In this paper we provide a scheme for checking data reliability that both the user and the cloud provider can agree upon. Keywords: Cloud Security, Masking, Cloud Storage Security, Data Center
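A deliberately simplified illustration of such a reliability check: before uploading, the user computes keyed digests of randomly chosen blocks; later the user challenges the provider for those blocks and recomputes the digests. The block size, digest choice and key handling below are illustrative assumptions, not the paper's exact scheme.

```python
# Spot-check integrity verification: the user keeps only small digests,
# not the data itself, and can still detect provider-side tampering.
import hashlib
import hmac
import random

BLOCK = 4  # bytes per block; tiny on purpose for this demonstration

def split_blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def tag(key, index, block):
    # A keyed digest binds block content to its position in the file.
    return hmac.new(key, index.to_bytes(4, "big") + block,
                    hashlib.sha256).digest()

def make_challenge(key, data, n=3, seed=0):
    # User side, before upload: pick n random blocks and remember their tags.
    blocks = split_blocks(data)
    rng = random.Random(seed)
    return {i: tag(key, i, blocks[i])
            for i in rng.sample(range(len(blocks)), n)}

def provider_respond(stored, indices):
    # Provider side: return the requested blocks of the stored copy.
    blocks = split_blocks(stored)
    return {i: blocks[i] for i in indices}

def verify(key, challenge, response):
    # User side: recompute tags over the returned blocks and compare.
    return all(hmac.compare_digest(tag(key, i, response[i]), t)
               for i, t in challenge.items())
```

Because the user keeps only the tags, the check is cheap; checking more blocks raises the probability of catching a misbehaving provider.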
Ensuring secure transfer, access and storage over the cloud storage – eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of the related fields of engineering and technology.
Privacy Preserving Public Auditing and Data Integrity for Secure Cloud Storage – INFOGAIN PUBLICATION
Using cloud services, anyone can remotely store data and obtain on-demand, high-quality applications and services from a shared pool of computing resources without the burden of local data storage and maintenance. The cloud is a common place for storing data as well as sharing it. However, preserving privacy and maintaining data integrity during public auditing remain open challenges. In this paper, we introduce a third-party auditor (TPA) that keeps track of all files along with their integrity. The task of the TPA is to verify the data so that the user can be worry-free. Verification is performed on the aggregate authenticators sent by the user and the Cloud Service Provider (CSP). To this end, we propose a secure cloud storage system that supports privacy-preserving public auditing and blockless data verification over the cloud.
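A toy rendition of the TPA workflow can make the division of roles concrete. This sketch is our own simplification: it uses plain per-block hashes instead of the paper's aggregate authenticators, and all class and method names are invented for illustration.

```python
# Third-party auditing sketch: the user hands verification tags (metadata
# only) to the auditor, which can then challenge the provider at any time
# without involving the user.
import hashlib

def block_tag(index, block):
    return hashlib.sha256(index.to_bytes(4, "big") + block).hexdigest()

class CloudProvider:
    def __init__(self):
        self.files = {}

    def store(self, name, blocks):
        self.files[name] = list(blocks)

    def prove(self, name, indices):
        # Respond to an audit challenge with the requested blocks.
        return [self.files[name][i] for i in indices]

class ThirdPartyAuditor:
    def __init__(self):
        self.tags = {}

    def register(self, name, tags):
        self.tags[name] = tags  # metadata only, never the data itself

    def audit(self, csp, name, indices):
        # Recompute tags over the provider's proof and compare.
        proof = csp.prove(name, indices)
        return all(block_tag(i, b) == self.tags[name][i]
                   for i, b in zip(indices, proof))
```

The user is "worry-free" in the sense that after registering the tags, all subsequent audits run between the auditor and the provider only.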
Performing initiative data prefetching – Kamal Spring
Abstract: This paper presents an initiative data prefetching scheme for the storage servers in distributed file systems for cloud computing. In this prefetching technique, the client machines are not substantially involved in the process of data prefetching; instead, the storage servers directly prefetch data after analyzing the history of disk I/O access events and then proactively push the prefetched data to the relevant client machines. To put this technique to work, information about client nodes is piggybacked onto the real client I/O requests and forwarded to the relevant storage server. Two prediction algorithms are proposed to forecast future block access operations and direct which data should be fetched on the storage servers in advance. Finally, the prefetched data are pushed from the storage server to the relevant client machine. Through a series of evaluation experiments with a collection of application benchmarks, we demonstrate that the proposed initiative prefetching technique helps distributed file systems for cloud environments achieve better I/O performance. In particular, configuration-limited client machines in the cloud are not responsible for predicting I/O access operations, which contributes to better system performance on them.
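History-based access prediction of the kind the abstract relies on can be sketched with a first-order model: count observed block-to-block transitions and predict the most frequent successor of the current block. The paper's two prediction algorithms are more elaborate; this is only an illustrative stand-in.

```python
# First-order (Markov-style) next-block predictor for a storage server:
# observe the stream of accessed block IDs, then predict the most likely
# next block so it can be prefetched before the client asks for it.
from collections import Counter, defaultdict

class AccessPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, block):
        # Record the transition from the previously accessed block.
        if self.prev is not None:
            self.transitions[self.prev][block] += 1
        self.prev = block

    def predict_next(self, block):
        # Return the most frequently observed successor, or None if the
        # block has never been seen as a predecessor.
        succ = self.transitions.get(block)
        return succ.most_common(1)[0][0] if succ else None
```

Feeding the predictor the access trace 1, 2, 3, 1, 2, 3, 1, 2, 4 makes it predict block 2 after block 1, since that transition dominates the history.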
Service oriented cloud architecture for improved performance of smart grid applications – eSAT Journals
Abstract: An effective and flexible computational platform is needed for the data coordination and processing associated with real-time operational and application services in the smart grid. A server environment in which multiple applications are hosted by a common pool of virtualized server resources demands an open-source structure to ensure operational flexibility. In this paper, an open-source architecture is proposed for real-time services involving data coordination and processing. The architecture enables secure and reliable exchange of information and transactions with users over the internet to support various services. Prioritizing applications based on their complexity enhances the efficiency of resource allocation in such situations. A priority-based scheduling algorithm is proposed for application-level performance management in this structure, and an analytical model based on queuing theory is developed to evaluate the performance of the test bed. The implementation uses an OpenStack cloud, and the test results show a significant gain of 8% with the algorithm. Index Terms: Service Oriented Architecture, Smart grid, Mean response time, OpenStack, Queuing model
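The core of a priority-based scheduler like the one described is a priority queue with stable ordering among equal priorities. The sketch below is a generic illustration (application names and the numeric priority convention are our own assumptions); the paper's queuing-theoretic analysis and the reported 8% gain are specific to its test bed.

```python
# Priority-based dispatch: lower priority number = served first; a sequence
# counter breaks ties so equal-priority applications keep arrival order.
import heapq
import itertools

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def submit(self, priority, app):
        heapq.heappush(self._heap, (priority, next(self._seq), app))

    def dispatch(self):
        # Pop the highest-priority (lowest-numbered) application.
        return heapq.heappop(self._heap)[2]
```

Submitting "billing" at priority 2, then "metering" and "alarm" at priority 1, then "report" at priority 3 dispatches them in the order metering, alarm, billing, report.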
Abstract:
This paper is based on a study of grid computing and cloud computing technology. These two technologies are related to geographically defined network standards. The main aim of this paper is an in-depth study of the latest technologies and trends in the field of networking.
Keywords: Technology, Cloud Computing, Grid Computing
Load balancing in public cloud by division of cloud based on the geographical... – eSAT Publishing House
The past decade has seen increasingly ambitious and successful methods for outsourcing computing. Approaches such as utility computing, on-demand computing, grid computing, software as a service, and cloud computing all seek to free computer applications from the limiting confines of a single computer. Software that thus runs "outside the box" can be more powerful (think Google, TeraGrid), dynamic (think Animoto, caBIG), and collaborative (think FaceBook, myExperiment). It can also be cheaper, due to economies of scale in hardware and software. The combination of new functionality and new economics inspires new applications, reduces barriers to entry for application providers, and in general disrupts the computing ecosystem. I discuss the new applications that outside-the-box computing enables, in both business and science, and the hardware and software architectures that make these new applications possible.
Open source grid middleware packages – Globus Toolkit (GT4) architecture, configuration – usage of Globus – main components and programming model – introduction to the Hadoop framework – MapReduce: input splitting, map and reduce functions, specifying input and output parameters, configuring and running a job – design of the Hadoop file system: HDFS concepts, command-line and Java interfaces, dataflow of file read and file write.
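The MapReduce dataflow named in the syllabus line above (input splitting, map, shuffle, reduce) can be mirrored in a few lines of plain Python. Real Hadoop jobs define Mapper/Reducer classes, run on HDFS, and configure input/output paths through a job object; this in-process sketch only illustrates the flow of data between the phases.

```python
# Minimal in-process MapReduce word count: one input split per line,
# map emits (word, 1), shuffle groups values by key, reduce sums them.
from collections import defaultdict

def map_fn(line):
    # Map phase: emit a (word, 1) pair for every word in one input split.
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    # Reduce phase: combine all counts for one key.
    return word, sum(counts)

def run_job(lines):
    intermediate = defaultdict(list)
    for line in lines:                        # map over each input split
        for key, value in map_fn(line):
            intermediate[key].append(value)   # shuffle: group by key
    return dict(reduce_fn(k, v) for k, v in intermediate.items())
```

For example, `run_job(["a b a", "b a"])` yields `{"a": 3, "b": 2}`; in Hadoop the same logic would be split across a `Mapper`, a `Reducer`, and HDFS input/output paths.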
Centralized Data Verification Scheme for Encrypted Cloud Data Services – Editor IJMTER
A cloud environment supports data sharing between multiple users, but data integrity can be violated by hardware/software failures and human errors. Data owners and public verifiers are therefore involved in efficiently auditing cloud data integrity without retrieving the entire data set from the cloud server. File and block signatures are used in the integrity verification process.
The "One Ring to Rule Them All" (Oruta) scheme is used for the privacy-preserving public auditing process. In Oruta, homomorphic authenticators are constructed using ring signatures, which are used to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block in the shared data is kept private from public verifiers. The homomorphic authenticable ring signature (HARS) scheme provides identity privacy with blockless verification, and a batch auditing mechanism supports performing multiple auditing tasks simultaneously. Oruta is compatible with random masking to preserve data privacy from public verifiers, and dynamic data management is handled with index hash tables. However, traceability is not supported in the Oruta scheme, the sequence of data dynamics is not managed by the system, and the system incurs high computational overhead.
The proposed system is designed to perform public data verification with privacy. Traceability features are provided alongside identity privacy: the group manager or data owner can be allowed to reveal the identity of the signer based on the verification metadata. A data version management mechanism is also integrated into the system.
Grid Computing - Collection of computer resources from multiple locations – Dibyadip Das
Grid computing is the collection of computer resources from multiple locations to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files.
The aquatic environment encompasses a wide variety of interrelated physicochemical and biological factors, which together give rise to what is called water quality.
Architectural figures of urban metabolism. A research and a project | Seminar – Saverio Massaro
The presentation was shown during the seminar "Cities in the metabolic loop", promoted by Metrolab at ULB, Brussels.
The first part belongs to my personal PhD research at Sapienza University. The second part is related to the Albula project, developed by deltastudio.
Ant Colony Optimization: A Solution for Load Balancing in the Cloud – dannyijwest
Cloud computing is a new style of computing over the internet. It has many advantages, along with some crucial issues that must be resolved to improve the reliability of the cloud environment. These issues relate to load management, fault tolerance and various security concerns. In this paper the main concern is load balancing in cloud computing. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing load among the various nodes of a distributed system to improve both resource utilization and job response time, while avoiding situations in which some nodes are heavily loaded and others are idle or doing very little work. Load balancing ensures that every processor in the system, or every node in the network, does approximately the same amount of work at any instant. Many methods have been proposed to solve this problem, such as Particle Swarm Optimization, hash methods, genetic algorithms and several scheduling-based algorithms. In this paper we propose a method based on Ant Colony Optimization to solve the load balancing problem in the cloud environment.
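The ant-colony idea applied to load balancing can be illustrated with a toy model: each node carries a pheromone value that is reinforced when the node has spare capacity and evaporates over time, so future tasks probabilistically favour lightly loaded nodes. The parameter names and update rules below are our own illustrative assumptions, not the paper's algorithm.

```python
# Toy ant-colony-style balancer: node selection is proportional to
# pheromone; pheromone evaporates each step and is reinforced on nodes
# that still have spare capacity after the assignment.
import random

class AntColonyBalancer:
    def __init__(self, capacities, evaporation=0.1):
        self.capacity = list(capacities)       # max comfortable load per node
        self.load = [0] * len(capacities)
        self.pheromone = [1.0] * len(capacities)
        self.rho = evaporation

    def pick_node(self, rng):
        # Roulette-wheel selection proportional to pheromone levels.
        total = sum(self.pheromone)
        r = rng.uniform(0, total)
        acc = 0.0
        for i, p in enumerate(self.pheromone):
            acc += p
            if r <= acc:
                return i
        return len(self.pheromone) - 1

    def assign(self, rng):
        i = self.pick_node(rng)
        self.load[i] += 1
        # Evaporate everywhere, then reinforce the chosen node in
        # proportion to its remaining spare capacity.
        self.pheromone = [p * (1 - self.rho) for p in self.pheromone]
        spare = self.capacity[i] - self.load[i]
        self.pheromone[i] += max(spare, 0) / self.capacity[i]
        return i
```

Over many assignments, nodes that fill up stop receiving reinforcement, so the pheromone trail steers new tasks toward nodes with capacity to spare.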
Tiarrah Computing: The Next Generation of Computing – IJECEIAES
The evolution of the Internet of Things (IoT) has brought several challenges for existing hardware, network and application development, among them handling real-time streaming and batch big data, real-time event handling, dynamic cluster resource allocation for computation, and wired and wireless networks of things. To address these issues, many new technologies and strategies are being developed. Tiarrah Computing integrates the concepts of cloud computing, fog computing and edge computing. Its main objectives are to decouple application deployment and to achieve high performance, flexible application development, high availability, ease of development and ease of maintenance. Tiarrah Computing focuses on using existing open-source technologies to overcome the challenges that evolve along with IoT. This paper gives an overview of these technologies and of how to design applications, and elaborates on how to overcome most of the existing challenges.
Grid computing, or network computing, was developed to make computing power available in much the same way electric power is available from the power grid: one simply plugs in, and whoever needs power may use it. In grid computing, if a system needs more power than it has available, it can share the computation with other machines connected to the grid. In this way the power of a supercomputer can be used without the huge cost, and CPU cycles that were previously wasted can be utilized. To perform grid computation on computers joined through the Internet, software that supports grid computation must be installed on each computer inside the virtual organization (VO). This software handles information queries, storage management, processing scheduling, authentication, and data encryption to ensure information security.
Analyzing the Difference of Cluster, Grid, Utility & Cloud Computing (IOSRjournaljce)
Virtualization and cloud computing are creating a fundamental change in computer architecture, in software and tools development, and in the way we store, distribute, and consume information. The recent era of autonomic computing brings out the importance of, and need for, sharing various hardware, software, and other resources and applications that can manage themselves with a high level of human guidance. Virtualization and autonomic computing are not new to the world, but they have developed rapidly with cloud computing. This paper gives an overview of various types of computing, discussing cluster, grid, utility, and cloud computing and analyzing their architectures, the differences between them, their characteristics, how they work, and their advantages and disadvantages.
Efficient Architectural Framework of Cloud Computing (Souvik Pal)
Cloud computing enables adaptive, convenient, on-demand network access to a shared pool of adjustable and configurable physical computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible supervision effort or service-provider interaction. From a business perspective, the commercial success of cloud computing and recent developments in grid computing have produced a platform that has brought virtualization technology into the era of high-performance computing. However, clouds are an Internet-based concept and try to disguise complexity from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the Internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
An Exploration of Grid Computing to be Utilized in Teaching and Research at TU
Int. J. Advanced Networking and Applications, Volume: 6, Issue: 3, Pages: 2291-2299 (2014), ISSN: 0975-0290
An Exploration of Grid Computing to be
Utilized in Teaching and Research at TU
Dr. Mohammed A.M. Ibrahim & Group of Grid Computing
Department of Information Technology, Faculty of Engineering and Information Technology, Taiz University
Email: sabri1966@yahoo.com
-------------------------------------------------------------------ABSTRACT---------------------------------------------------------------
Taiz University (TU) has hundreds of computing resources on its different campuses, used for everything from office
work to general-access student labs. However, these resources are not used to their full potential. Grid computing
is a technology capable of unifying these resources and utilizing them in a very significant way. The difficulty
of funding a complete grid computing environment, together with the complexity of grid tools, has prevented
teachers and researchers at TU from becoming involved in teaching and research on grid or distributed computing.
These problems motivated us to mitigate the situation by building a simple grid computing environment from
resources already available at TU, an environment we can use for teaching and research. The objective of this
paper is to build, implement and test a grid computing environment based on the Globus Toolkit. To achieve this
objective we built the hardware and software parts, and configured several basic grid services accessed through
both the command line and a web portal. The test results for the basic grid services indicate that our proposed
grid computing model is promising and can be used for teaching and research at TU. The paper takes a look at how
grid computing realizes this aim and creates remarkable opportunities for students, teachers and researchers at
TU; in addition, the results of this paper will make TU a pilot for other universities throughout Yemen in the
field of grid and distributed computing.
Keywords - Grid Computing, Network Computing, Software and Hardware of Grid Computing,
Grid Evaluation.
--------------------------------------------------------------------------------------------------------------------------------------------------
Date of Submission: October 20, 2014 Date of Acceptance: November 20, 2014
--------------------------------------------------------------------------------------------------------------------------------------------------
1. INTRODUCTION
Grid computing is an approach in which the end user can be offered any of the services provided by a grid: a
network of computer systems located either locally or across a geographical area. In grid computing, a user can
dynamically select and locate any resources such as processing power, disk storage and applications. Put another
way, grid computing integrates hardware and software over a network so that the resources of multiple different
organizations become available to the users of any of those organizations. Grid computing has many goals, such as
providing remote access to IT assets and aggregating processing power; more detail is given in [1]. A huge number
of studies have examined grid computing in different areas, and many researchers have defined it. Foster and
Kesselman [2] define a computational grid "as a hardware and software infrastructure that provides dependable,
consistent, pervasive, and inexpensive access to high-end computational capabilities." Grid computing is concerned
with coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations. TU
suffers shortages both in funding and in skills for grid computing technology. To mitigate these shortages we
produced this paper, which aims to implement a grid computing environment based on the Globus Toolkit. We chose to
implement a grid and its tools because grids can more easily deal with enormously increasing amounts of data, and
they provide tools to access and securely share data between trusted sources and storage, and to solve other
problems. To achieve this objective we built the hardware and software parts and configured several basic grid
services. This grid is not targeted at a specific domain; we followed a general, simplified architecture that
leads to general practical results and builds broad skills and knowledge in grid computing technology. We
separated the different components and functionalities of the grid into smaller groups distributed over different
machines.
2. TYPES OF GRID
Grid computing addresses various kinds of applications used in many areas. Grids are
categorized by type as summarized below [3]:
• Computational: A computational grid is
focused on setting aside resources specifically for
compute power. In this type of grid most of the
machines are high-performance servers.
• Scavenging: A scavenging grid is most
commonly used with large numbers of desktop
machines. In [4, 5], machines are scavenged for
available CPU cycles and other resources.
Owners of the desktop machines are usually
given control over when their resources are
available to participate in the grid.
• Data grid: The main task of a data grid is storing
and providing access to data across multiple
organizations. As an example, if multiple
organizations use an application, each
organization has unique data. A data grid would
allow these organizations to share their data,
manage the data, and manage security issues such
as who has access to what data [6].
3. GRID ARCHITECTURE
A grid should have a number of characteristics
and features in order to provide users with a complete
computing environment; the details of these
characteristics can be found in [7, 8]. Foster,
Kesselman, & Tuecke proposed a grid architecture,
explored in [9]. This architecture identifies the basic
components of a grid system, defines the purpose and
functions of those components, and indicates how the
components interact with one another; it is shown in
Figure 1.
Figure 1. Grid Architecture (layers: Application, Collective, Resource, Connectivity, Fabric)
4. GRID COMPONENTS
The major components necessary to form a grid, and
the details of those components, can be found in [10].
5. IMPLEMENTATION REQUIREMENTS
To implement a grid computing model (Globus Toolkit),
the hardware and software requirements should be
introduced as follows:
1. Hardware
The implementation test bed of the grid computing model
consisted of three machines. Each machine can be
regarded as a source of computing power and data
storage capacity, and all machines are identical in
specification, as listed below:
• Intel® Core™ 2 Duo Processor (2 CPUs), 2.40GHz, 2M cache
• Integrated Broadcom 802.11a/b/g/n Wi-Fi Adapter
• 2GB DDR2 Memory
• 250GB Serial ATA Hard Drive
These three machines play the roles of servers as
follows:
a) CA Server: the digital certificate authority
server, used for issuing digital certificates (X.509
certificates) to grid users, resources and services.
b) MyProxy Server: an online credential repository
used to store X.509 proxy credentials, protected by a
passphrase, for later retrieval over the network, so
that you and other applications can access your
credentials remotely. This eliminates the need to
manually copy private key and certificate files between
machines. The MyProxy server can also be used for
authentication to grid portals and for credential
renewal with job managers.
c) Portal Server: the server where the grid
portal resides. The grid portal is the access point to
the web system; it provides an environment where the
user can access the resources and services of the grid,
run and monitor network applications, and collaborate
with other users.
2. Software Requirements
The basic software requirements that must be
satisfied by any grid implementation are:
• Ubuntu Desktop Edition 11.04
We use Ubuntu as the main platform. Development
tools and various other packages were installed as
needed on each of the machines.
• Globus Toolkit Version 4.2.1 (GT4)
The open source Globus® Toolkit is a fundamental
enabling technology for the "Grid," letting people
share computing power, databases, and other tools
securely online across corporate, institutional, and
geographic boundaries without sacrificing local
autonomy. The toolkit includes software services
and libraries for resource monitoring, discovery,
and management, plus security and file
management [11, 12].
• Grid Portal
We used the OGCE portal v2.5; it includes
everything you need to get started building a Java-
based Grid portal, including the Tomcat 5.5 web
server, the GridSphere 2.1 portlet container, and
the OGCE Grid portlets.
• Apache Tomcat 5.5
Apache Tomcat is an open source web server and
servlet container developed by the Apache
Software Foundation (ASF). Tomcat implements
the Java Servlet and the JavaServer Pages (JSP)
specifications from Sun Microsystems, and
provides a "pure Java" HTTP web server
environment for Java code to run in.
• MyProxy v4.2
MyProxy is open source software for managing
X.509 Public Key Infrastructure (PKI) security
credentials (certificates and private keys).
• GridSphere 2.1 portlet container
The GridSphere portal framework provides an
open-source portlet-based web portal. GridSphere
enables developers to quickly develop and package
third-party portlet web applications that can be run
and administered within the GridSphere portlet
container.
• OGCE Portal v2.5
OGCE portal v2.5 includes everything you need to
get started building a Java-based Grid portal,
including the Tomcat 5.5 web server, the GridSphere
2.1 portlet container, and the OGCE Grid portlets.
6. GRID COMPUTING MODEL DESIGN AND ARCHITECTURE
The main focus of this section is building the
hardware and software parts and configuring several
basic grid services, which we then implement and test.
Our proposed grid computing model is not targeted at a
specific domain; we follow a general, simplified
architecture that leads to general practical results.
We separate the different components and
functionalities of the grid into smaller groups
distributed over different machines, so that we can
examine and get a clear view of each one's
functionality. Figure 2 shows our proposed
architecture.
Figure 2. Proposed Grid Architecture Model
7. ACCESSING THE GRID
In order to access the proposed grid architecture, a
user should create a set of keys for public-key
cryptography, request a certificate from the
Certificate Authority, and obtain a copy of the CA's
public key.
1. Obtaining signed certificates from the CA server
Before users are able to request their certificates
from a specific CA, they first have to configure their
hosts to trust that CA by copying the CA's public key
into the GSI setup on their hosts (Figure 3). The
following procedure describes the steps to establish
the GSI communication [13]:
1. Copy the Certificate Authority's public key to
our grid host to set up GSI.
2. Create a private key and a certificate request.
3. Send the certificate request to the CA, by e-mail
or another more secure way if you are running a
production system and need to positively identify
the sender.
Figure 3. Obtaining Signed Certificates from the CA Server
When this procedure has been completed and the signed
digital certificate has been received, the user will
have three important files on his own grid host:
• The CA's public key
• The grid host's private key
• The grid host's digital certificate
Finally, the global name of a grid user is mapped to a
local user name in each of the grid resources that the
user is going to use.
2. Accessing the grid through the grid portal
Certain things must be done before the user can access
and use the grid resources and services through the
grid portal from any machine that has access to the
portal web server; before anything, a kind of trust
relationship between the portal web server and the
underlying grid architecture must exist. If a user
wants to access the grid through the portal, the portal
should be able to access the grid on that user's
behalf. This is achieved with the help of the MyProxy
server, which manages grid users' credentials and
delegates user privileges to the portal server or any
other machine that has access to the grid. The
following subsections describe the whole process in
order.
a) Delegation of credentials to the MyProxy server:
After the user has acquired his grid credential, he can
use this credential to delegate his privileges to the
grid portal. To delegate (store) a proxy credential in
the MyProxy server, the user runs the myproxy-init
client program (contained in the MyProxy server
package) on the machine where he can access his X.509
credential (or is logged in in a secure manner, e.g. an
encrypted Secure Shell session), supplies the
passphrase that encrypts the private key associated
with the credential, and delegates a proxy credential
to the MyProxy server repository along with
authentication information.
Figure 4. Delegation of Privileges to the MyProxy Server
b) Retrieval of credentials from the MyProxy server:
To obtain a proxy credential from the MyProxy server,
the user must provide the username and the passphrase
under which the proxy credential was delegated to the
MyProxy server. Figure 5 shows the steps for retrieval
of a proxy credential from the MyProxy server.
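The two subsections above reduce to a pair of MyProxy client commands. The sketch below is a dry run (each command is echoed rather than executed, since the real commands need a working Globus/MyProxy install); the username griduser3 and the host myproxy-server.grid.local follow our testbed naming and are illustrative:

```shell
#!/bin/sh
# Dry-run sketch of credential delegation (a) and retrieval (b).
# 'run' echoes each command instead of executing it; on a real grid
# node, replace it with direct execution.
LOG=""
run() { echo "+ $*"; LOG="$LOG $*"; }

# a) On the machine holding the user's X.509 credential: delegate a
#    proxy credential to the MyProxy repository, protected by a
#    passphrase chosen at the prompt.
run myproxy-init -s myproxy-server.grid.local -l griduser3

# b) On any machine with access to the MyProxy server (e.g. the portal
#    host): retrieve a short-lived proxy with username and passphrase.
run myproxy-logon -s myproxy-server.grid.local -l griduser3
```

In practice, myproxy-init prompts first for the private-key passphrase and then for the MyProxy passphrase; myproxy-logon asks only for the latter.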
Figure 5. Retrieval of a Proxy Credential from the MyProxy Server
2. Data Management
GridFTP: GridFTP is a high-performance, secure,
reliable data transfer protocol optimized for high-
bandwidth wide-area networks. The GridFTP protocol is
based on FTP [14]. One of the major features of GridFTP
is that it enables third-party transfer. Third-party
transfer is suitable for an environment where there is
a large file in remote storage and the client wants to
copy it to another remote server; our work in this
section is similar to that in [15], as illustrated in
Figure 6.
Figure 6. GridFTP Third-Party Transfer
• Reliable File Transfer Service (RFT): a web service
that provides interfaces for controlling and monitoring
third-party file transfers using GridFTP servers. The
client controlling the transfer is hosted inside a Grid
service, so it can be managed using the soft-state
model and queried using the Service Data interfaces
available to all Grid services. Figure 7 shows RFT and
GridFTP [16].
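A third-party transfer of the kind sketched in Figure 6 is driven from a client machine with globus-url-copy, naming two remote GridFTP servers. The hostnames and file path below are hypothetical testbed values, and the script is a dry run:

```shell
#!/bin/sh
# Dry-run sketch of a GridFTP third-party transfer: a client on one
# machine moves a large file directly between two other nodes.
LOG=""
run() { echo "+ $*"; LOG="$LOG $*"; }

# A valid proxy credential is needed first (see the MyProxy section).
run myproxy-logon -s myproxy-server.grid.local -l griduser3

# Neither URL is local to the client: the data flows node2 -> node3
# directly, without passing through the client machine.
run globus-url-copy \
  gsiftp://node2.grid.local/home/globus/large-file.dat \
  gsiftp://node3.grid.local/home/globus/large-file.dat
```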
Figure 7. RFT and GridFTP
8. INSTALLATION AND CONFIGURATION STEPS OF THE GRID COMPUTING MODEL
Step 1: Install and configure the software required to
build our grid computing environment on all machines.
Some of this software is as follows:
• Globus Toolkit installer, from Globus Toolkit; Java
J2SE 1.5.0+ SDK from Sun, IBM, HP, or BEA (do not use
GCJ); iODBC (compile requirement for RLS); Tomcat
(required by WebMDS, optional for other services);
gLite Java VOMS parsing.
• Setting up the resource or machine: these settings
apply to all node machines that are considered resource
donors. In our infrastructure these machines are:
node1.grid.local
node2.grid.local
node3.grid.local
Step 2: Install the portal server.
Step 3: Set up the Certificate Authority server. This
step includes the following:
1. Create users: a user account and a generic globus
account.
2. Run the setup script.
3. Configure the subject name (CA name components): it
identifies the particular certificate as the CA
certificate, and identifies the CA among other CAs
created by SimpleCA.
4. Configure the CA's e-mail.
5. Configure the expiration date.
6. Enter a passphrase.
7. Confirm the generated certificate.
8. Complete the setup of GSI.
Step 4:
1. Configure MyProxy: version 4.0 of the MyProxy
server can be configured automatically by the
myproxy-server-setup command.
2. Manage user credentials:
A. Initialize credentials.
B. Retrieve credentials.
Step 5:
1. Distribute service credentials:
A. Initialize credentials.
B. Retrieve credentials.
9. RESULT OVERVIEW
1) After finishing the installation and configuration
steps for the proposed grid computing model, Figure 8
shows the result of the work that has been done.
Figure 8. System after Installation and Configuration
2) Testing Grid Nodes
a) Testing Java WS Core
After setting up, installing and configuring our grid
model, we started testing whether it works properly. We
found promising and encouraging results indicating that
our grid model can be used as a grid computing
environment for research and teaching at TU. Figure 9
shows a promising result: on each of the grid's nodes
that is configured to run the Globus container, the
globus user can run all the commands used for grid
computing. For example, globus-start-container is used
once the security requirements have been set up; if the
security requirements have not yet been set up, the
user can run the following command instead:
globus-start-container -nosec
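The two container invocations mentioned above can be summarized as follows. The sketch is a dry run, since the real commands require a GT4 installation on the node:

```shell
#!/bin/sh
# Dry-run sketch of starting the Java WS Core container on a node.
LOG=""
run() { echo "+ $*"; LOG="$LOG $*"; }

# Once host/container certificates and GSI are in place:
run globus-start-container

# Before security has been configured (testing only, no GSI):
run globus-start-container -nosec
```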
Figure 9. Starting the Java WS Core Container
3. Testing GridFTP
To test GridFTP, the user should first get user
credentials from the MyProxy server; he can then use
the MyProxy client tools by typing: myproxy-logon -l
griduser3 -s myproxy-server.grid.local. Figure 10
illustrates the result of the previous command, which
also indicates the capability of high-performance,
secure data transfer between the grid's nodes.
Figure 10. Transfer with the globus-url-copy Command
4. Testing the Reliable File Transfer Service (RFT)
Figure 11 shows the result of testing RFT by submitting
a transfer to the Reliable File Transfer Service, which
prints out the status of the transfer on the console.
Figure 11. RFT File Transfer Result
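In GT4 such a transfer is typically submitted with the rft client and a plain-text transfer file listing source and destination GridFTP URLs. The host and file name below are illustrative assumptions, and the sketch is a dry run:

```shell
#!/bin/sh
# Dry-run sketch of submitting a transfer request to the RFT service.
LOG=""
run() { echo "+ $*"; LOG="$LOG $*"; }

# transfers.xfr is a hypothetical transfer file containing RFT options
# followed by source/destination GridFTP URL pairs; the command prints
# the transfer status on the console as it progresses.
run rft -h node1.grid.local -f transfers.xfr
```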
5. Testing WS GRAM: WS GRAM services provide secure job
submission in grid computing; WS GRAM enables the
client to add a self-generated resource. Figure 12
shows the result of running a simple WS GRAM command.
Figure 12. Running Simple WS GRAM Command
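A minimal WS GRAM submission of the kind shown in Figure 12 can be sketched as follows. The executable and the target hostname are illustrative, and the script is a dry run (globusrun-ws is part of GT4):

```shell
#!/bin/sh
# Dry-run sketch of simple WS GRAM job submissions.
LOG=""
run() { echo "+ $*"; LOG="$LOG $*"; }

# Submit a trivial executable to the local GRAM service and wait:
run globusrun-ws -submit -c /bin/hostname

# Target a specific node's GRAM factory endpoint (testbed hostname):
run globusrun-ws -submit -F node2.grid.local -c /bin/hostname
```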
6. Testing Grid Servers
1. Testing the MyProxy Server:
In this test we tested the MyProxy server for two
processes:
• The first process is storing user credentials on the
MyProxy server through the command line (server:
myproxy-server.grid.local). Figure 13 shows the process
of storing the credentials of user griduser3 on the
MyProxy server. This result is a strong indication that
our grid model can be used for different applications,
such as cultural heritage material available in digital
form, which will be a test bed on the proposed grid
model.
Figure 13. Storing User Credentials on the MyProxy Server
• The second process is retrieving user credentials
from the MyProxy server through the command line
(server: myproxy-server.grid.local). Figure 14 shows
the process of retrieving the credentials of user
griduser3 from the MyProxy server.
Figure 14. Retrieving User Credentials
2. Testing the Grid Portal Server
A web portal provides a pool of services and
information that users can access. A grid portal is a
web server that provides the framework in which grid
services are housed and accessed; a user can submit
compute jobs, transfer files, and query grid
information services from a standard web browser. We
have configured and installed the Open Grid Computing
Environment (OGCE) portal. We have previously shown how
to access our grid computing model using the command
line; now we access the proposed grid computing model
using the grid web portal. Figure 15 shows the login
page; we can access it from any machine by pointing the
browser to
http://portal-server.grid.local:8080/gridsphere. We
have already registered a portal user with login name
gridadmin, with which we will get grid user credentials
from the MyProxy server and start using the grid with
those credentials.
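Before opening the browser, reachability of the portal can be checked from any machine on the network. The hostname is our testbed's, and the check is shown as a dry run:

```shell
#!/bin/sh
# Dry-run sketch: confirm the GridSphere portal answers over HTTP.
LOG=""
run() { echo "+ $*"; LOG="$LOG $*"; }

# Expect an HTTP status line from Tomcat on port 8080.
run curl -sI http://portal-server.grid.local:8080/gridsphere
```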
7. Int. J. Advanced Networking and Applications
Volume: 6 Issue: 3 Pages: 2291-2299 (2014) ISSN : 0975-0290
2297
Figure 15. Login page of the grid portal
When we log in, all the basic deployed portlets are
displayed in tabs, as shown in Figure 16.
Figure 16. Tabs of the Deployed Portlets
3. Using the Grid Portal to Get User Proxy Credentials
from the MyProxy Server:
Figure 17 shows the File Management portlet. Files can
be uploaded or downloaded through the browser. The
MyProxy portlet provides us with the credentials to use
the grid: after logging in to the grid portal, the user
is not able to use the grid services until he gets the
proxy credentials of an already registered grid user
from the MyProxy server, using a grid user name and
password via the address myproxy-server.grid.local.
Figure 17. File Management Portal
4. Grid Security Infrastructure (GSI): GSI is
responsible for providing APIs and tools for
authentication, authorization and certificate
management in our proposed grid model. Figure 18 shows
the GSI proxy credentials loaded into an account, valid
for two hours.
Figure 18. GSI Proxy Credentials Loaded from the MyProxy Server
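The short-lived proxy shown in Figure 18 can also be created locally from the command line; "-valid 2:00" matches the two-hour lifetime. As above, the sketch is a dry run:

```shell
#!/bin/sh
# Dry-run sketch: create and inspect a two-hour GSI proxy credential.
LOG=""
run() { echo "+ $*"; LOG="$LOG $*"; }

run grid-proxy-init -valid 2:00   # prompts for the private-key passphrase
run grid-proxy-info               # show subject, strength and time left
```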
5. File Transfer and Job Submission Through the
Grid Portal:
Figure 19 shows the Job Submission portlet. Information
for the available fields, and the resources sending and
receiving the job, must be entered as a GRAM resource
in a configuration file in order to show up in the host
list, as one can see in Figures 19 and 20.
Figure 19. Job Submission Portlet
Figure 20. File Transfer Portlet (not Connected)
10. CONCLUSION & FUTURE WORK
TU's campuses have more than one thousand computing
resources in computer labs and offices, most of them
connected to networks and idle most of the time. Grid
computing is a mechanism to integrate or unify all of
these resources and harness their computing power
effectively; these features of the grid motivated us to
bring TU into the grid computing field. The Globus
Toolkit provides the necessary features, such as the
ability to utilize idle time on machines around the
campuses, and this will give TU powerful
high-performance computing resources without any
additional cost. In addition, the results we gained
from adopting and testing the proposed grid computing
model in two ways, command-line and web-portal
interfaces, mitigate the shortage of grid computing use
at TU. After the success of this work, which was a
starting point for getting grid computing technology
working in place, and with the skills and knowledge in
grid computing that we have gained, we are about to
start scaling up our grid model to integrate all
computing resources in labs and offices, to build
large-scale grid computing and make it available to
students, faculty, and researchers. We built our grid
model and installed Globus Toolkit Version 4.2.1 on
three machines, including the security components, Apon
GridPortal, Ubuntu Desktop Edition 11.04, OGCE portal
v2.5, Apache Tomcat 5.5, MyProxy v4.2, the GridSphere
2.1 portlet container, and other grid software as
needed.
Many tasks involved in effective use of the grid were
discussed with example executions in section 6. Among
these were credential management by requesting
credentials with grid-cert-request, creating proxy
credentials with grid-proxy-init, and using the MyProxy
server. Results gained from the command-line and portal
interfaces were shown: Java WS Core, GridFTP, RFT, WS
GRAM, the MyProxy server, the portal server, using the
grid portal to get user proxy credentials, GSI, and
file transfer and job submission through the grid
portal. Both mechanisms worked correctly. With the
final deployment of GridSphere and GridPort, proxy
management, file management, and the resource
monitoring services work without a problem.
After a successful testing period for the proposed grid
computing model, and having gained promising and
encouraging results, we are motivated to start
integrating the computing resources at TU into what we
plan to call the campus grid. We believe our future
large-scale campus grid will help to further unite the
campus's computing power and give students, faculty and
researchers a stronger system with which to work.
REFERENCES
[1] R. Al-Khannak, B. Bitzer, "Load Balancing for
Distributed and Integrated Power Systems using Grid
Computing," ICCEP 07, Capri, Italy, 23 November 2009,
from IEEE database.
[2] S. Tarun and N. Sharma, "Grid Computing: A
Collaborative Approach in Distributed Environment for
Achieving Parallel Performance and Better Resource
Utilization," International Journal on Computer Science
and Engineering (IJCSE), Vol. 3, Issue 01, January
2011, ISSN 0975-3397.
[3] B. Jacob, L. Ferreira, N. Bieberstein, C. Gilzean,
Enabling Applications for Grid Computing with Globus,
Springer, CSA2011 & WCC2011 Proceedings.
[4] J. Xiao, D. Lin, "Survey of Security in Grid
Services," The Fourth International Conference on
Electronic Business (ICEB), Beijing, May 1st, 200.
[5] P. Dabas, A. Arya, "Grid Computing: An
Introduction," International Journal of Advanced
Research in Computer Science and Software Engineering,
Volume 3, Issue 3, March 2013, ISSN 2277-128X.
[6] N. Thenmozhi, M. Madheswaran, "Content Based Data
Transfer Mechanism for Efficient Bulk Data Transfer in
Grid Computing Environment," International Journal of
Grid Computing & Applications (IJGCA), Vol. 2, No. 4,
December 2011.
[7] X. Shi, H. Jin, S. Wu, W. Zhu, L. Qi, "Adapting
grid computing environments dependable with virtual
machines: design, implementation, and evaluations," The
Journal of Supercomputing, ISSN 0920-8542, DOI
10.1007/s11227-011-0664-7.
[8] M.A. Baker, R. Buyya, and D. Laforenza, "The Grid:
International Efforts in Global Computing," SSGRR 2000,
The Computer & eBusiness Conference, l'Aquila, Italy,
July 31 - August 6, 2000.
[9] C. Germain, V. Néri, et al., "Building an
experimental platform for global computing," Grid2000,
December 2000, IEEE Press.
[10] A. Chervenak, I. Foster, C. Kesselman, C.
Salisbury, "The Data Grid: Towards an Architecture for
the Distributed Management and Analysis of Large
Scientific Data Sets," J. Network and Computer
Applications (23), 187-200, 2001.
[11] Globus Toolkit 4.2.1 Download:
https://www.globus.org/toolkit/downloads/4.2.1/
[12] Globus Toolkit 4.2.1 Release Manuals:
https://www.globus.org/toolkit/docs/4.2/4.2.1/