Nebula is NASA's open-source cloud computing platform, built using OpenStack, that provides on-demand access to computing resources and storage for large datasets. It allows NASA researchers to run computationally intensive tasks in virtual machines and to store huge datasets over 100 terabytes in size. The document discusses Nebula's architecture, services, and case studies of its use at various NASA research centers to support activities such as processing images from Mars missions.
Extending Grids with Cloud Resource Management for Scientific Computing (Bharat Kalia)
Grid computing gained high popularity in the field of scientific computing through the idea of distributed resource sharing among institutions and scientists. Scientific computing is traditionally a high-utilization workload, with production Grids often running at over 80% utilization (generating high and often unpredictable latencies), and with smaller national Grids offering a rather limited amount of high-performance resources. Running large-scale simulations in such overloaded Grid environments often becomes latency bound or suffers from well-known Grid reliability problems. Today, a new research direction, coined by the term Cloud computing, proposes an attractive alternative for computational scientists, primarily because of four main advantages.
Modeling and Optimization of Resource Allocation in Cloud [PhD Thesis Progres...] (Atakan Aral)
The magnitude of data being stored and processed in the cloud is quickly increasing due to advancements in areas that rely on cloud computing, e.g. Big Data, Internet of Things and computation offloading. Efficient management of limited computing and network resources is necessary to handle such an increase in cloud workload. Some of the critical issues in resource management for cloud computing are modeling resources/requirements and allocating resources to users. Potential benefits of tackling these issues include increases in utilization, scalability, Quality of Service (QoS) and throughput, as well as decreases in latency and costs.
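Allocation policies of the kind discussed above can be illustrated with a minimal sketch. The greedy first-fit-decreasing heuristic below is a common baseline for bin-packing-style VM placement, not the specific method studied in the thesis; all names and capacities are illustrative assumptions.

```python
# Sketch of one simple resource-allocation policy: first-fit-decreasing
# placement of VM requests onto hosts (illustrative baseline only).

def first_fit_decreasing(requests, hosts):
    """Assign each CPU request (in cores) to the first host with room.

    requests: list of (vm_id, cores); hosts: dict host_id -> free cores.
    Returns a dict vm_id -> host_id (unplaced VMs map to None).
    """
    free = dict(hosts)
    placement = {}
    # Placing the largest requests first tends to raise utilization.
    for vm_id, cores in sorted(requests, key=lambda r: -r[1]):
        placement[vm_id] = None
        for host_id in free:
            if free[host_id] >= cores:
                free[host_id] -= cores
                placement[vm_id] = host_id
                break
    return placement

placement = first_fit_decreasing(
    [("vm1", 4), ("vm2", 2), ("vm3", 3)],
    {"hostA": 4, "hostB": 5},
)
```

Heuristics like this trade optimality for speed, which is why research such as the thesis above turns allocation into an explicit optimization problem.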
Survey on Division and Replication of Data in Cloud for Optimal Performance a... (IJSRD)
Outsourcing data to third-party administrative control, as is done in cloud computing, gives rise to security concerns. Data may be compromised due to attacks by other users and nodes within the cloud. Therefore, strong security measures are required to protect data within the cloud. However, the security technique employed must also consider the optimization of data retrieval time. In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which collectively addresses the security and performance issues. In the DROPS methodology, we divide a file into fragments and replicate the fragmented data over the cloud nodes. Each node stores only a single fragment of a particular data file, which ensures that even in the case of a successful attack, no meaningful information is revealed to the attacker. Moreover, the nodes storing the fragments are separated by a certain distance by means of graph T-coloring to prevent an attacker from guessing the locations of the fragments. Furthermore, the DROPS methodology does not rely on traditional cryptographic techniques for data security, thereby relieving the system of computationally expensive operations. We show that the probability of locating and compromising all of the nodes storing the fragments of a single file is extremely low. We also compare the performance of the DROPS methodology with ten other schemes. A higher level of security with only slight performance overhead was observed.
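The core DROPS idea of fragmenting a file and spreading the fragments across separated nodes can be sketched as follows. Real DROPS enforces separation with graph T-coloring on the data-centre topology; the linear node distance used here is a simplifying assumption, and all node names are hypothetical.

```python
# Minimal sketch of the DROPS idea: split a file into fragments and place
# each fragment on a different node, keeping placed fragments at least
# `min_sep` positions apart (a stand-in for T-coloring separation).

def fragment(data: bytes, n: int):
    """Split `data` into n nearly equal fragments."""
    size = -(-len(data) // n)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(n)]

def place(fragments, node_ids, min_sep=2):
    """Greedily assign fragments to nodes so that any two used nodes
    differ by at least `min_sep` positions in `node_ids`."""
    used = []
    placement = {}
    for i, frag in enumerate(fragments):
        for pos, node in enumerate(node_ids):
            if all(abs(pos - u) >= min_sep for u in used):
                used.append(pos)
                placement[i] = node
                break
        else:
            raise ValueError("not enough separated nodes")
    return placement

frags = fragment(b"confidential-record", 3)
placement = place(frags, ["n0", "n1", "n2", "n3", "n4", "n5"], min_sep=2)
```

Because each node holds only one fragment, compromising a single node reveals no complete record, which is the security property the abstract describes.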
Neuro-Fuzzy System Based Dynamic Resource Allocation in Collaborative Cloud C... (neirew J)
Cloud collaboration is an emerging technology that enables sharing of computer files using cloud computing: cloud resources are pooled and cloud services are provided using these resources. Cloud collaboration technologies allow users to share documents. Resource allocation in the cloud is challenging because resources offer differing Quality of Service (QoS), and services running on these resources are risky with respect to user demands. We propose a solution for resource allocation based on multi-attribute QoS scoring, considering parameters such as the distance to the resource from the user site, the reputation of the resource, the task completion time, the task completion ratio, and the load at the resource. The proposed algorithm, referred to as Multi-Attribute QoS Scoring (MAQS), uses a neuro-fuzzy system. We have also included a speculative manager to handle fault tolerance. In this paper it is shown that the proposed algorithm performs better than others, including PowerTrust reputation-based algorithms and the harmony method, which use a single attribute to compute the reputation score of each allocated resource.
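The multi-attribute scoring step can be sketched with a plain weighted sum. The paper learns the weighting with a neuro-fuzzy system; the fixed weights below are an illustrative simplification, and the attribute names and values are assumptions.

```python
# Sketch of multi-attribute QoS scoring: combine several attributes into
# one score and pick the best resource. Weights are assumed, not learned.

WEIGHTS = {
    "distance": -0.2,         # negative: farther resources score lower
    "reputation": 0.3,
    "completion_time": -0.2,  # negative: slower resources score lower
    "completion_ratio": 0.2,
    "load": -0.1,             # negative: loaded resources score lower
}

def qos_score(resource: dict) -> float:
    """Weighted multi-attribute QoS score; attributes assumed in [0, 1]."""
    return sum(w * resource[attr] for attr, w in WEIGHTS.items())

def select_resource(resources: dict) -> str:
    """Return the resource id with the highest QoS score."""
    return max(resources, key=lambda rid: qos_score(resources[rid]))

best = select_resource({
    "r1": {"distance": 0.9, "reputation": 0.4, "completion_time": 0.8,
           "completion_ratio": 0.5, "load": 0.9},
    "r2": {"distance": 0.2, "reputation": 0.8, "completion_time": 0.3,
           "completion_ratio": 0.9, "load": 0.4},
})
```

A single-attribute reputation scheme, by contrast, would collapse all of these dimensions into one number, which is the limitation the abstract argues against.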
The magnitude of data being stored and processed in the Cloud is quickly increasing due to advancements in areas that rely on cloud computing, e.g. Big Data, Internet of Things and mobile code offloading. Concurrently, cloud services are getting more global and geographically distributed. To handle such changes in its usage scenario, the Cloud needs to transform into a completely decentralized, federated and ubiquitous environment, similar to the historical transformation of the Internet. Indeed, research ideas for this transformation have already started to emerge, including but not limited to Cloud Federations, Multi-Clouds, Fog Computing, Edge Computing, Cloudlets, nano data centers, etc.
Standardization and resource management come up as the most significant issues for the realization of the distributed cloud paradigm. The focus in this thesis is the latter: efficient management of limited computing and network resources to adapt to the decentralization. Specifically, cloud services that consist of several virtual machines, dedicated network connections and databases are mapped to a multi-provider, geographically distributed and dynamic cloud infrastructure. The objective of the mapping is to improve quality of service in a cost-effective way. To that end, network latency and bandwidth, as well as the cost of storage and computation, are subjected to a multi-objective optimization.
The first phase of the resource mapping optimization is topology mapping. In this phase, the virtual machines and network connections (i.e. the virtual cluster) of the cloud service are mapped to the physical cloud infrastructure. The hypothesis is that mapping the virtual cluster to a group of data centers with a similar topology would be the optimal solution.
Replication management is the second phase, where the focus is on data storage. Data objects that constitute the database are replicated and mapped to storage-as-a-service providers and end devices. The hypothesis for this phase is that an objective function adapted from the facility location problem optimizes the replica placement.
Detailed experiments under real-world as well as synthetic workloads confirm the hypotheses of both phases.
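The facility-location style objective mentioned above can be sketched as choosing replica sites to minimize opening cost plus each client's distance to its nearest replica. The exhaustive search below is only viable for tiny instances and stands in for the thesis's optimization machinery; site and client names are hypothetical.

```python
# Toy facility-location objective for replica placement: minimize
# (cost of opening replica sites) + (each client's latency to its
# nearest open replica), by brute force over all site subsets.

from itertools import combinations

def placement_cost(sites, open_cost, dist, clients):
    """open_cost: site -> cost; dist[client][site]: access latency."""
    return (sum(open_cost[s] for s in sites) +
            sum(min(dist[c][s] for s in sites) for c in clients))

def best_placement(all_sites, open_cost, dist, clients):
    best, best_cost = None, float("inf")
    for k in range(1, len(all_sites) + 1):
        for sites in combinations(all_sites, k):
            cost = placement_cost(sites, open_cost, dist, clients)
            if cost < best_cost:
                best, best_cost = sites, cost
    return set(best), best_cost

sites, cost = best_placement(
    ["s1", "s2"],
    {"s1": 3, "s2": 3},
    {"c1": {"s1": 1, "s2": 9}, "c2": {"s1": 9, "s2": 1}},
    ["c1", "c2"],
)
```

In this tiny example, opening both sites (total cost 3 + 3 + 1 + 1 = 8) beats either single site (3 + 1 + 9 = 13), which is the trade-off the facility location problem captures.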
Distributed Framework for Data Mining As a Service on Private Cloud (IJERA Editor)
Data mining research faces two great challenges: (i) automated mining, and (ii) mining of distributed data. Conventional mining techniques are centralized, and the data needs to be accumulated at a central location. A mining tool needs to be installed on the computer before performing data mining; thus, extra time is incurred in collecting the data. Mining is done by specialized analysts who have access to mining tools. This approach is not optimal when the data is distributed over the network. To perform data mining in a distributed scenario, we need to design a different framework to improve efficiency. Also, the size of accumulated data grows exponentially with time and is difficult to mine using a single computer. Personal computers have limitations in terms of computation capability and storage capacity.
Cloud computing can be exploited for compute-intensive and data-intensive applications. Data mining algorithms are both compute and data intensive; therefore, cloud-based tools can provide an infrastructure for distributed data mining. This paper is intended to use cloud computing to support distributed data mining. We propose a cloud-based data mining model that provides mass data storage along with distributed data mining. The paper provides a solution for distributed data mining on the Hadoop framework using an interface to run the algorithm on a specified number of nodes without any user-level configuration. Hadoop is configured over private servers, and clients can process their data through a common framework from anywhere in the private network. Data to be mined can either be chosen from the cloud data server or uploaded from private computers on the network. It is observed that the framework is helpful in processing large data in less time compared to a single system.
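The Hadoop-style processing the framework relies on can be sketched with an in-memory map/reduce word count; a real deployment would ship these functions to Hadoop Streaming or a similar runner rather than run them locally.

```python
# In-memory sketch of the MapReduce pattern underlying Hadoop-based
# distributed mining: a map phase emits key/value pairs, a reduce phase
# groups by key and aggregates.

from collections import defaultdict

def map_phase(records):
    """Emit (word, 1) pairs, as a Hadoop mapper would."""
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Group by key and sum counts, as a Hadoop reducer would."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

counts = reduce_phase(map_phase(["big data big clusters", "data mining"]))
```

The same map/group/reduce shape carries over to mining algorithms such as frequent-itemset counting, which is why Hadoop is a natural substrate for the framework described above.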
Private cloud storage implementation using OpenStack Swift (TELKOMNIKA Journal)
The use of distributed and parallel computer systems is growing rapidly, requiring an appropriate system to support their work processes. One technology that supports distributed computer systems is cloud computing. This gives rise to the need to maximize the use of existing computing resources, one form of which is cloud-based storage. The computer laboratory of the Informatics Department of Petra Christian University has very large resources, but the utilization of its existing storage devices has not been optimized. This condition gave rise to the idea of utilizing the computers in the laboratory as a cloud, so that the storage can be used well. The implementation used the OpenStack cloud framework, which provides IaaS services. Among the existing OpenStack services, storage management used OpenStack Swift. OpenStack Swift is a cloud-based storage service that leverages various computing resources. After the implementation process, testing was done through data management, verifying that the storage could store, retrieve, and delete data. In addition, testing was also done by turning off some physical machines to ensure that cloud services remained accessible, and by measuring the speed of data transfer in cloud storage. The resulting data was used to evaluate the cloud storage system that had been created.
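OpenStack Swift locates objects with a hash ring rather than a central index, which is what lets it survive individual machines being turned off, as in the test above. The toy ring below captures that idea; Swift's real ring additionally handles partitions, zones and replica counts, and the device names here are assumptions.

```python
# Toy consistent-hash ring in the spirit of Swift's object ring: each
# device gets many virtual nodes on a hash circle, and an object lives
# on the first virtual node clockwise from its own hash.

import hashlib
from bisect import bisect

class HashRing:
    def __init__(self, devices, points=64):
        """Place `points` virtual nodes per device on the circle."""
        self.ring = sorted(
            (self._hash(f"{dev}-{i}"), dev)
            for dev in devices for i in range(points)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def device_for(self, obj_name: str) -> str:
        """First virtual node clockwise from the object's hash."""
        idx = bisect(self.keys, self._hash(obj_name)) % len(self.keys)
        return self.ring[idx][1]

ring = HashRing(["disk1", "disk2", "disk3"])
dev = ring.device_for("photos/cat.jpg")
```

Because placement is a pure function of the object name and the ring, any node can resolve an object's location without consulting a central catalogue.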
An Efficient Cloud based Approach for Service Crawling (IDES Editor)
In this paper, we have designed a crawler that searches for services provided by different clouds connected in a network. The proposed method provides details of the freshness and age of cloud clusters. The crawler checks each router available in a network providing services. On the basis of the search criteria, our design generates output guiding users to access the requested cloud services in an efficient manner. We have planned to store the result in an m-way tree and to use a traversal technique for extraction of specific data from the crawling result. We have compared the result with other typical search techniques.
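The m-way tree storage and traversal described above can be sketched as nodes holding up to m children, with a preorder traversal filtering crawl records by a search criterion. The record fields (`service`, `freshness`) are illustrative assumptions, not the paper's actual schema.

```python
# Toy m-way tree for crawl results: each node holds one record and up to
# m children; traversal extracts records matching a predicate.

class MWayNode:
    def __init__(self, record, m=3):
        self.record = record      # e.g. {"service": ..., "freshness": ...}
        self.m = m
        self.children = []

    def insert(self, record):
        """Insert breadth-first into the first node with a free slot."""
        queue = [self]
        while queue:
            node = queue.pop(0)
            if len(node.children) < node.m:
                node.children.append(MWayNode(record, node.m))
                return
            queue.extend(node.children)

    def traverse(self, predicate):
        """Preorder traversal yielding records matching `predicate`."""
        if predicate(self.record):
            yield self.record
        for child in self.children:
            yield from child.traverse(predicate)

root = MWayNode({"service": "storage", "freshness": 0.9})
root.insert({"service": "compute", "freshness": 0.4})
root.insert({"service": "storage", "freshness": 0.7})
fresh = list(root.traverse(lambda r: r["service"] == "storage"))
```

Keeping the branching factor m bounded limits the depth of the tree, which keeps such extraction traversals cheap as the crawl result grows.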
International Refereed Journal of Engineering and Science (IRJES) is a peer-reviewed online journal for professionals and researchers in the field of computer science. The main aim is to resolve emerging and outstanding problems revealed by recent social and technological change. IRJES provides a platform for researchers to present and evaluate their work from both theoretical and technical aspects and to share their views.
Scheduling in Virtual Infrastructure for High-Throughput Computing (IJCSEA Journal)
For the execution of scientific applications, different methods have been proposed to dynamically provide execution environments that hide the complexity of the underlying distributed and heterogeneous infrastructures. Recently, virtualization has emerged as a promising technology to provide such environments. Virtualization is a technology that abstracts away the details of physical hardware and provides virtualized resources for high-level scientific applications. Virtualization offers a cost-effective and flexible way to use and manage computing resources. Such an abstraction is appealing in Grid computing and Cloud computing for better matching jobs (applications) to computational resources. This work applies the virtualization concept to the Condor dynamic resource management system by using the Condor Virtual Universe to harvest the existing virtual computing resources to their maximum utility. It allows existing computing resources to be dynamically provisioned at run-time by users based on application requirements, instead of statically at design-time, thereby laying the basis for efficient use of the available resources.
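The job-to-resource matching that Condor performs can be sketched as comparing a job's resource requirements against each machine's advertised attributes. This toy matcher is an illustration, not Condor's actual ClassAd language, and the attribute names are assumptions.

```python
# Toy matchmaking in the spirit of Condor: a job matches a machine when
# the machine meets every numeric requirement; each machine takes one job.

def matches(requirements: dict, machine: dict) -> bool:
    """A machine matches if it meets every numeric requirement."""
    return all(machine.get(k, 0) >= v for k, v in requirements.items())

def matchmake(jobs, machines):
    """Greedily give each job the first matching idle machine."""
    idle = dict(machines)
    assignment = {}
    for job_id, req in jobs.items():
        for m_id, attrs in list(idle.items()):
            if matches(req, attrs):
                assignment[job_id] = m_id
                del idle[m_id]   # one job per machine in this sketch
                break
    return assignment

assignment = matchmake(
    {"job1": {"cpus": 2, "memory_gb": 4}, "job2": {"cpus": 8}},
    {"vm-a": {"cpus": 4, "memory_gb": 8}, "vm-b": {"cpus": 8, "memory_gb": 16}},
)
```

With run-time provisioning, the `machines` side of this matching is no longer fixed at design-time: new virtual machines can be added to the pool as application requirements demand.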
1. How to make the passive
2. When we want to give more importance to the action instead of the person who performs it, we use the passive voice.
3. Use of the passive voice:
To transform an active sentence to passive we consider the following points: -The subject of a verb in the passive voice corresponds to the direct object of a verb in the active voice. -The passive voice is formed using the verb “to be” + the past participle of the main verb. -The subject of the main sentence becomes the agent complement of the passive. -To say which person does the action we use “by”.
4. Present Simple: I write a letter to Charles → A letter is written to Charles (by me). Past Simple: You ate some fruit yesterday → Some fruit was eaten yesterday (by you). Present Perfect: She has drunk a lot of tea → A lot of tea has been drunk by her. Past Perfect Simple: Anna had sung an emotional song → An emotional song had been sung by Anna. Future (Will): We will take a family picture → A family picture will be taken by us. Future (Going to): They are going to have a trip → A trip is going to be had by them. Modal verb: I must/can/should write a letter → The letter must/can/should be written.
5. They buy the house → The house is bought (by them).
He ate all of the cookies → All of the cookies were eaten.
Peter mailed the letter → The letter was mailed by Peter.
Anna has made a mistake → A mistake has been made by Anna.
They use the yen in Japan → The yen is used in Japan.
A gentle introduction to the use of implicit values and conversions in Scala.
It also introduces some design patterns for which implicits are the building blocks.
http://blog.stratio.com/developers-guide-scala-implicit-values-part/
Implementation of the Open Source Virtualization Technologies in Cloud Computing (neirew J)
“Virtualization and Cloud Computing” is a recent buzzword in the digital world. Behind this fancy poetic phrase lies a true picture of future computing, in both technical and social perspectives. Though virtualization and cloud computing are recent, the idea of centralizing computation and storage in distributed data centres maintained by third-party companies is not new; it dates back to the 1990s, along with distributed computing approaches like grid computing, clustering and network load balancing. Cloud computing provides IT as a service to users on an on-demand basis. This service has greater flexibility, availability, reliability and scalability with a utility computing model. This new concept of computing has immense potential to be used in the field of e-governance and in the overall IT development perspective in developing countries like Bangladesh.
Implementation of the Open Source Virtualization Technologies in Cloud Computingijccsa
The “Virtualization and Cloud Computing” is a recent buzzword in the digital world. Behind this fancy
poetic phrase there lies a true picture of future computing for both in technical and social perspective.
Though the “Virtualization and Cloud Computing are recent but the idea of centralizing computation and
storage in distributed data centres maintained by any third party companies is not new but it came in way
back in 1990s along with distributed computing approaches like grid computing, Clustering and Network
load Balancing. Cloud computing provide IT as a service to the users on-demand basis. This service has
greater flexibility, availability, reliability and scalability with utility computing model. This new concept of
computing has an immense potential in it to be used in the field of e-governance and in the overall IT
development perspective in developing countries like Bangladesh.
CONTAINERIZED SERVICES ORCHESTRATION FOR EDGE COMPUTING IN SOFTWARE-DEFINED W...IJCNCJournal
As SD-WAN disrupts legacy WAN technologies and becomes the preferred WAN technology adopted by corporations, and Kubernetes becomes the de-facto container orchestration tool, the opportunities for deploying edge-computing containerized applications running over SD-WAN are vast. Service orchestration in SD-WAN has not been provided with enough attention, resulting in the lack of research focused on service discovery in these scenarios. In this article, an in-house service discovery solution that works alongside Kubernetes’ master node for allowing improved traffic handling and better user experience when running micro-services is developed. The service discovery solution was conceived following a design science research approach. Our research includes the implementation of a proof-ofconcept SD-WAN topology alongside a Kubernetes cluster that allows us to deploy custom services and delimit the necessary characteristics of our in-house solution. Also, the implementation's performance is tested based on the required times for updating the discovery solution according to service updates. Finally, some conclusions and modifications are pointed out based on the results, while also discussing possible enhancements.
Improved Utilization of Infrastructure of Clouds by using Upgraded Functional...AM Publications
This paper discusses a propose cloud infrastructure that combines On-Demand allocation of resources with
improved utilization, opportunistic provisioning of cycles from idle cloud nodes to other processes. Because for cloud
computing to avail all the demanded services to the cloud consumers is very difficult. It is a major issue to meet cloud
consumer’s requirements. Hence On-Demand cloud infrastructure using Hadoop configuration with improved CPU
utilization and storage utilization is proposed using splitting algorithm by using Map-Reduce. Hence all cloud nodes which
remains idle are all in use and also improvement in security challenges and achieves load balancing and fast processing of
large data in less amount of time. Here we compare the FTP and HDFS for file uploading and file downloading; and
enhance the CPU utilization and storage utilization. Cloud computing moves the application software and databases to the
large data centres, where the management of the data and services may not be fully trustworthy. Therefore this security
problem is solve by encrypting the data using encryption/decryption algorithm and Map-Reducing algorithm which solve
the problem of utilization of all idle cloud nodes for larger data.
An advanced ensemble load balancing approach for fog computing applicationsIJECEIAES
Fog computing has emerged as a viable concept for expanding the capabilities of cloud computing to the periphery of the network allowing for efficient data processing and analysis from internet of things (IoT) devices. Load balancing is essential in fog computing because it ensures optimal resource utilization and performance among distributed fog nodes. This paper proposed an ensemble-based load-balancing approach for fog computing environments. An advanced ensemble load balancing approach (AELBA) uses real-time monitoring and analysis of fog node metrics, such as resource utilization, network congestion, and service response times, to facilitate effective load distribution. Based on the ensemble's collective decision-making, these metrics are fed into a centralized load-balancing controller, which dynamically adjusts the load distribution across fog nodes. Performance of the proposed ensemble load-balancing approach is evaluated and compared it to traditional load-balancing techniques in fog using extensive simulation experiments. The results demonstrate that our ensemble-based approach outperforms individual load-balancing algorithms regarding response time, resource utilization, and scalability. It adapts to dynamic fog environments, providing efficient load balancing even under varying workload conditions.
In supporting its large scale, multidisciplinary scientific research efforts across all the university campuses and by the research personnel spread over literally every corner of the state, the state of Nevada needs to build and leverage its own Cyber infrastructure. Following the well-established as-a-service model, this state-wide Cyber infrastructure that consists of data acquisition, data storage, advanced instruments, visualization, computing and information processing systems, and people, all seamlessly linked together through a high-speed network, is designed and operated to deliver the benefits of Cyber infrastructure-as-aService (CaaS).There are three major service groups in this CaaS, namely (i) supporting infrastructural services that comprise sensors, computing/storage/networking hardware, operating system, management tools, virtualization and message passing interface (MPI); (ii) data transmission and storage services that provide connectivity to various big data sources, as well as cached and stored datasets in a distributed storage backend; and (iii) processing and visualization services that provide user access to rich processing and visualization tools and packages essential to various scientific research workflows. Built on commodity hardware and open source software packages, the Southern Nevada Research Cloud(SNRC)and a data repository in a separate location constitute a low cost solution to deliver all these services around CaaS. The service-oriented architecture and implementation of the SNRC are geared to encapsulate as much detail of big data processing and cloud computing as possible away from end users; rather scientists only need to learn and access an interactive web-based interface to conduct their collaborative, multidisciplinary, dataintensive research. The capability and easy-to-use features of the SNRC are demonstrated through a use case that attempts to derive a solar radiation model from a large data set by regression analysis.
CYBER INFRASTRUCTURE AS A SERVICE TO EMPOWER MULTIDISCIPLINARY, DATA-DRIVEN S...ijcsit
In supporting its large scale, multidisciplinary scientific research efforts across all the university campuses and by the research personnel spread over literally every corner of the state, the state of Nevada needs to build and leverage its own Cyber infrastructure. Following the well-established as-a-service model, this state-wide Cyber infrastructure that consists of data acquisition, data storage, advanced instruments, visualization, computing and information processing systems, and people, all seamlessly linked together through a high-speed network, is designed and operated to deliver the benefits of Cyber infrastructure-as-aService (CaaS).There are three major service groups in this CaaS, namely (i) supporting infrastructural
services that comprise sensors, computing/storage/networking hardware, operating system, management tools, virtualization and message passing interface (MPI); (ii) data transmission and storage services that provide connectivity to various big data sources, as well as cached and stored datasets in a distributed
storage backend; and (iii) processing and visualization services that provide user access to rich processing and visualization tools and packages essential to various scientific research workflows. Built on commodity hardware and open source software packages, the Southern Nevada Research Cloud(SNRC)and a data repository in a separate location constitute a low cost solution to deliver all these services around CaaS. The service-oriented architecture and implementation of the SNRC are geared to encapsulate as much detail of big data processing and cloud computing as possible away from end users; rather scientists only need to learn and access an interactive web-based interface to conduct their collaborative, multidisciplinary, dataintensive research. The capability and easy-to-use features of the SNRC are demonstrated through a use case that attempts to derive a solar radiation model from a large data set by regression analysis.
3
rd International Conference on Signal Processing, VLSI Design & Communication
Systems (SVC 2022) will provide an excellent international forum for sharing knowledge
and results in theory, methodology and applications of on Signal Processing, VLSI Design &
Communication Systems. The aim of the conference is to provide a platform to the
researchers and practitioners from both academia as well as industry to meet and share
cutting-edge development in the field.
In supporting its large scale, multidisciplinary scientific research efforts across all the university campuses and by the research personnel spread over literally every corner of the state, the state of Nevada needs to build and leverage its own Cyber infrastructure. Following the well-established as-a-service model, this state-wide Cyber infrastructure that consists of data acquisition, data storage, advanced instruments, visualization, computing and information processing systems, and people, all seamlessly linked together through a high-speed network, is designed and operated to deliver the benefits of Cyber infrastructure-as-aService (CaaS).There are three major service groups in this CaaS, namely (i) supporting infrastructural services that comprise sensors, computing/storage/networking hardware, operating system, management tools, virtualization and message passing interface (MPI); (ii) data transmission and storage services that provide connectivity to various big data sources, as well as cached and stored datasets in a distributed storage backend; and (iii) processing and visualization services that provide user access to rich processing and visualization tools and packages essential to various scientific research workflows. Built on commodity hardware and open source software packages, the Southern Nevada Research Cloud(SNRC)and a data repository in a separate location constitute a low cost solution to deliver all these services around CaaS. The service-oriented architecture and implementation of the SNRC are geared to encapsulate as much detail of big data processing and cloud computing as possible away from end users; rather scientists only need to learn and access an interactive web-based interface to conduct their collaborative, multidisciplinary, dataintensive research. The capability and easy-to-use features of the SNRC are demonstrated through a use case that attempts to derive a solar radiation model from a large data set by regression analysis.
Implementing K-Out-Of-N Computing For Fault Tolerant Processing In Mobile and...IJERA Editor
Despite the advances in hardware for hand-held mobile devices, resource-intensive applications (e.g., video and imagestorage and processing or map-reduce type) still remain off bounds since they require large computation and storage capabilities.Recent research has attempted to address these issues by employing remote servers, such as clouds and peer mobile devices.For mobile devices deployed in dynamic networks (i.e., with frequent topology changes because of node failure/unavailability andmobility as in a mobile cloud), however, challenges of reliability and energy efficiency remain largely unaddressed. To the best of ourknowledge, we are the first to address these challenges in an integrated manner for both data storage and processing in mobilecloud, an approach we call k-out-of-n computing. In our solution, mobile devices successfully retrieve or process data, in the mostenergy-efficient way, as long as k out of n remote servers are accessible. Through a real system implementation we prove the feasibilityof our approach. Extensive simulations demonstrate the fault tolerance and energy efficiency performance of our framework in largerscale networks.
2. Saumya Kumari et al NEBULA: Cloud Computer for Universe of Big Data
4013 | International Journal of Current Engineering and Technology, Vol.4, No.6 (Dec 2014)
sensor network connected to the cloud periodically by a satellite connection.
Elastic incorporation of ad-hoc resources in the cloud
Virtual machines hosted on ad-hoc nodes participate in the cloud service, so the computational resources of one device can be pooled by a secondary device.
Nebula built on the DTN (Delay Tolerant Network) application layer
DTN is a set of protocols based on store-and-forward communication; for example, the Bundle protocol extension is required at the session and application layers.
A DTN environment includes:
Low propagation delay - DTN bundle agents use the Internet protocol to negotiate connectivity in real time, e.g. in a planetary surface environment.
High propagation delay - DTN bundle agents use some form of scheduling to enable connectivity between two agents, e.g. in deep space.
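The store-and-forward behaviour described above can be sketched as follows. This is an illustrative sketch only; the `Bundle` and `BundleAgent` names are assumptions, not part of any real DTN implementation.

```python
from collections import deque

class Bundle:
    """A self-contained message unit, as in the DTN Bundle protocol."""
    def __init__(self, payload, destination):
        self.payload = payload
        self.destination = destination

class BundleAgent:
    """Stores bundles in custody until a contact with the next hop opens."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()   # persistent custody store

    def receive(self, bundle):
        # Store first; forwarding happens only when a contact exists.
        self.queue.append(bundle)

    def contact(self, next_hop):
        """Called when a (possibly scheduled) contact window opens."""
        while self.queue:
            next_hop.receive(self.queue.popleft())

# A low-delay agent can forward almost immediately; a deep-space agent
# holds bundles until its scheduled contact window opens.
ground = BundleAgent("ground-station")
orbiter = BundleAgent("orbiter")
ground.receive(Bundle(b"image-tile-42", "earth"))
ground.contact(orbiter)   # bundle handed over during the contact window
```

The point of the sketch is that delivery never depends on an end-to-end path existing at any single moment: each agent keeps custody of a bundle until the next hop is reachable.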
NASA's cloud computing platform Nebula supports NASA in viewing and exploring the Moon and Mars by hosting hundreds of high-resolution images, over 100 terabytes of data. Nebula is designed to port data sets and code, saving time and labour. Nebula's services give NASA flexibility across mission stages and needs: extensions, timelines with delays, and cancellations. Nebula also supports Federal government websites with storage as their data grows.
About Nebula
Nebula deploys private cloud computing infrastructures. Nebula has developed a hardware appliance that allows businesses to build a private computing cloud from computers. Nebula was founded in April 2011 by Chris C. Kemp at NASA Ames Research Center. Nebula's mission is to ignite a new era of global innovation by laying the foundation for the coming industrial revolution of big data.
Nebula Architecture
Nebula requirements

  Characteristic         Value
  Type of Environment    Cloud
  Main Memory            96 GB
  Network Interconnect   Cisco Nexus 7000, 10 GigE switch
  Network Topology       Cisco proprietary
  Number of Sockets      2
  Cores per Socket       6
  Cores per Node         12
  Compiler               Intel 11.1
  Processor Type         Intel Westmere (Xeon X5660)
  Processor Speed        2.80 GHz
  Hypervisor             Kernel-based Virtual Machine (KVM)
Some Case Studies
Jet Propulsion Lab (JPL) focus on Mars
The portal BeAMartian.jpl.nasa.gov, developed on Microsoft Azure, uses an API to connect website visitors with pictures of Mars without any additional data storage on JPL computers. Users can see pictures and videos, post questions, read responses, and send messages.
Enterprise data centre strategy
NASA re-evaluated its enterprise data centre strategy for outsourced data centre services through the use of Nebula, e.g. for Message Passing Interface (MPI) workloads by the Flight Vehicle Research and Technology Division.
Data Sources for Nebula
(a) Internal Data - Nessus data produces detailed vulnerability information about hosts on the network; it is the basis of CVSS scores. PatchLink data produces detailed information about the patch status of hosts on the network; it is the basis for patch status scores.
(b) External Data - A Nessus external scanner dumps the DNS tree and scans all ports on all hosts from an external posture, a form of self-discovery of exposure to the outside world. Other sources include TCP/NetFlow data (SYN/ACK) and Google Search results (search API).
(c) Operational Data - Dumps of the asset database, DHCP log files, and MAC associations.
(d) Intel Data - Determines threat sources; IR (incident response) tools look up the hostname and sysadmin, which feed into an increased risk factor.
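A hypothetical sketch of how these four data sources might be combined into a single per-host risk factor. The function, its parameters, and all weights are illustrative assumptions, not taken from the paper:

```python
def host_risk(nessus_cvss, patch_score, externally_exposed, threat_intel_hit):
    """Combine the four data-source signals into one risk factor.

    nessus_cvss       -- internal data: CVSS base score, 0-10
    patch_score       -- internal data: patch status, 1.0 = fully patched
    externally_exposed -- external data: host visible in an outside scan
    threat_intel_hit  -- intel data: host matches a known threat source
    Weights below are illustrative only.
    """
    risk = nessus_cvss / 10.0          # normalize CVSS contribution
    risk += (1.0 - patch_score)        # penalty for missing patches
    if externally_exposed:             # exposure multiplies the risk
        risk *= 1.5
    if threat_intel_hit:               # threat-intel match doubles it
        risk *= 2.0
    return round(risk, 2)

print(host_risk(8.0, 0.5, False, False))   # → 1.3
```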
Uses
Nebula is used to create virtual workstations that give software developers more control over the development environment, for sharing modules and libraries over the cloud.
Nebula is used for collaboration with non-NASA partners (Microsoft, Amazon) via FTP and for running web-based applications that help analyze data produced by NASA's Airspace Concept Evaluation System.
Benefits of working with Huge Data
(a) Security - A hybrid cloud offers protection through security services such as intrusion prevention, web application firewalls, file integrity monitoring, and event management. This environment allows adding layers of security.
(b) Performance - The varying nature of big data requires infrastructure flexibility and elasticity. The main draw here is cloud bursting: it allows spinning up new workloads when information from the system signals the need for additional resources, avoiding jeopardized workload performance. A hybrid cloud allows cloud bursting at any scale and offers an adaptable solution for managing and storing big data.
(c) Saving - A hybrid cloud environment allows resources to be added. The public cloud component gives the financial flexibility of spinning up additional resources, resulting in savings, while the private cloud component provides the resources required for data processing.
Thus, the hybrid cloud architecture provides security through layers of security services, optimal performance through cloud bursting, and financial savings through flexible resource offerings.
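The bursting behaviour described in (b) can be sketched as a simple placement decision. The threshold and function name are assumptions for illustration, not part of Nebula:

```python
def place_workload(cpu_load, private_capacity, demand):
    """Decide where to run new work: keep it in the private cloud,
    and burst to the public cloud only when the private side signals
    resource pressure or cannot fit the demand."""
    BURST_THRESHOLD = 0.8   # illustrative utilization cut-off
    if cpu_load < BURST_THRESHOLD and demand <= private_capacity:
        return "private"    # normal case: data stays on-premises
    return "public"         # burst: spin up extra capacity on demand

print(place_workload(0.55, 100, 40))   # → private
print(place_workload(0.92, 100, 40))   # → public (CPU pressure)
```

The design choice mirrored here is that bursting is signal-driven: new public-cloud capacity is provisioned only when monitoring indicates the private side would otherwise jeopardize workload performance.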
Some advantages and disadvantages of the Nebula cloud computing platform

Advantages:
1. Scalability - on-demand provisioning of computing resources.
2. Accessibility - location and device independence.
3. Redundancy - redundancy of sites is easier to implement.
4. Multi-tenancy - several customers share the same infrastructure.
5. Maintenance - upgrades are applied centrally by IT experts.
6. Cost - transformation of capital expenditure for servers into an operating expense.

Disadvantages:
1. Security - loss of control over sensitive data.
2. Integration - difficult integration with other systems.
3. Dependency - tied to the cloud service provider.
4. Cost - opaque cost structure.
5. Knowledge - most knowledge is held by the cloud service provider.
6. Flexibility - special customization of computing resources is not possible.
Nebula provides 3 classes of storage
(a) Local Storage - Nebula uses swappable commodity drives in a hardware RAID configuration; virtual machines use local storage to run applications.
(b) Persistent Block Drive (iSCSI) - Nebula uses iSCSI to provide a persistent network storage block device usable by conventional applications, decoupling the storage from the connected server.
(c) Object Store - Easy storage of petabytes of data and millions of files; an open-source object store implementation is used, with custom code adding an Access Control List (ACL).
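A minimal sketch of how a workload's needs might map onto these three storage classes. The selection rules and size cut-off are illustrative assumptions, not Nebula policy:

```python
def choose_storage(persistent, size_bytes, block_device_needed):
    """Map a workload's needs onto Nebula's three storage classes."""
    ONE_TB = 1 << 40
    if not persistent:
        return "local"    # (a) ephemeral VM disk on RAID-backed drives
    if block_device_needed:
        return "iscsi"    # (b) persistent network block device
    if size_bytes >= ONE_TB:
        return "object"   # (c) petabyte-scale object store behind an ACL
    return "iscsi"        # small persistent data fits a block device too

print(choose_storage(False, 10 << 30, False))  # → local
print(choose_storage(True, 2 << 40, False))    # → object
```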
Research Challenges
Universal adoption - Stakeholders object to the openness of the tool; not everyone is open to the idea of openness.
Fairness - Stakeholders must be convinced there is no bias in the scoring system.
False positives - A robust system is needed for dealing with these in the cloud.
DHCP/NAT (Dynamic Host Configuration Protocol / Network Address Translation) - Consistent attribution of hosts across various IPs on various dates is required.
IPv6 - To run discovery scans across hosts, the infrastructure must include network monitoring and aggregation of the IPv6 auto-configuration logs.
Scan on demand - using an API.
Scan on connect - tying DHCP/IPv6 auto-configuration logs to scan initiation.
Score-based and status-based situational gaming for sysadmins.
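The scan-on-connect and DHCP/NAT attribution points above can be sketched together: when a lease appears, record the MAC-to-IP binding (so the host is tracked consistently as its IP changes) and enqueue a scan. Function and field names are hypothetical:

```python
def on_dhcp_lease(mac, ip, known_hosts, scan_queue):
    """Scan-on-connect handler: on a new DHCP lease, record the
    MAC-to-IP binding for consistent host attribution across
    changing IPs, and enqueue a vulnerability scan of the address."""
    known_hosts.setdefault(mac, []).append(ip)   # attribution by MAC
    scan_queue.append(ip)                        # trigger scan on connect

hosts, queue = {}, []
on_dhcp_lease("aa:bb:cc:00:11:22", "10.0.0.5", hosts, queue)
on_dhcp_lease("aa:bb:cc:00:11:22", "10.0.0.9", hosts, queue)
print(hosts["aa:bb:cc:00:11:22"])   # → ['10.0.0.5', '10.0.0.9']
```

Keying history by MAC rather than IP is what makes scores attributable to the same physical host across dates, the DHCP/NAT challenge noted above.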
Conclusion
The Nebula cloud supports middleware, i.e. the design and implementation of the virtual Nebula Node (NN) and the lightweight Nebula Node, integration of the Nebula Node with SOA technology such as OSGi (Open Service Gateway initiative), and integration of the Nebula Node with event technology such as JMS (Java Message Service). Nebula promotes machine-to-machine intelligence and location-based, personalised services. The findings are two-fold:
(a) A virtualisation layer is utilised in the cloud computing platform to support on-demand access.
(b) The 10 GigE network used in cloud computing systems performs poorly compared to the low-latency, high-bandwidth interconnects used in supercomputers.
Current technology suffers from resource poverty and a lack of maturity.