This document proposes a method called DROPS (Division and Replication of Data in Cloud for Optimal Performance and Security) that divides files into fragments and replicates the fragments across cloud nodes for improved security and performance. The method stores each file fragment on a separate node to prevent attackers from accessing full files even if some nodes are compromised. It also separates nodes storing fragments using graph coloring to obscure fragment locations. The method aims to improve retrieval time by selecting central nodes and replicating fragments on high-traffic nodes. The document compares DROPS to 10 replication strategies and evaluates it using 3 data center network architectures.
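To make the fragment-placement idea concrete, here is a minimal sketch of centrality-driven placement under a T-coloring-style separation constraint. The grid topology, the hop-distance metric, and the greedy ordering are illustrative assumptions of this summary, not the DROPS paper's actual algorithm:

```python
from collections import deque

def hop_distances(adj, src):
    """BFS hop distance from src to every reachable node (undirected graph)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def place_fragments(adj, fragments, forbidden=frozenset({0, 1})):
    """Greedily place each fragment on the most central unused node whose hop
    distance to every node already holding a fragment lies outside `forbidden`
    (a T-coloring-style separation constraint)."""
    dists = {n: hop_distances(adj, n) for n in adj}
    centrality = {n: (len(d) - 1) / sum(d.values()) if sum(d.values()) else 0.0
                  for n, d in dists.items()}
    order = sorted(adj, key=centrality.get, reverse=True)
    placement, used = {}, []
    for frag in fragments:
        for node in order:
            if node in used:
                continue
            if all(dists[node].get(u, float("inf")) not in forbidden for u in used):
                placement[frag] = node
                used.append(node)
                break
    return placement

# hypothetical 3x3 grid topology (node 4 is the most central)
adj = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4, 6], 4: [1, 3, 5, 7],
       5: [2, 4, 8], 6: [3, 7], 7: [4, 6, 8], 8: [5, 7]}
print(place_fragments(adj, ["f1", "f2", "f3"]))
```

In this toy run the most central node is used first, and no two chosen nodes are identical or adjacent, which is the separation effect the T-coloring constraint is meant to provide.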
1) The document discusses quality of service (QoS)-aware data replication for data-intensive applications in cloud computing systems. It aims to minimize data replication cost and number of QoS violated replicas.
2) It presents a mathematical model and algorithm to optimally place QoS-satisfied and QoS-violated data replicas. The algorithm uses minimum-cost maximum flow to obtain the optimal placement (a minimal sketch follows this list).
3) The algorithm takes as input a set of requested nodes and outputs the optimal placement for QoS-satisfied and QoS-violated replicas by modeling the problem as a network flow graph and applying existing polynomial-time algorithms.
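As a rough illustration of the flow-graph modeling in points 2) and 3), candidate sites can hang off a demand node whose capacity equals the number of replicas to place, with edge weights standing in for replication cost or a QoS-violation penalty; an off-the-shelf min-cost max-flow routine then yields a placement. The instance and costs below are made up:

```python
import networkx as nx

# Hypothetical instance: place 3 replicas among 4 candidate sites.
G = nx.DiGraph()
replicas_needed = 3
sites = {"dc1": 5, "dc2": 9, "dc3": 3, "dc4": 7}   # per-replica cost (illustrative)

G.add_edge("src", "demand", capacity=replicas_needed, weight=0)
for site, cost in sites.items():
    # each site may host at most one replica of this object; the weight models
    # storage/transfer expense or a QoS-violation penalty
    G.add_edge("demand", site, capacity=1, weight=cost)
    G.add_edge(site, "sink", capacity=1, weight=0)

flow = nx.max_flow_min_cost(G, "src", "sink")
chosen = [s for s in sites if flow["demand"][s] > 0]
print("replica sites:", chosen, "total cost:", nx.cost_of_flow(G, flow))
```

Here the three cheapest sites receive the flow, i.e. the replicas.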
Survey on Division and Replication of Data in Cloud for Optimal Performance a... (IJSRD)
Outsourcing data to third-party administrative control, as is done in cloud computing, gives rise to security concerns. Data may be compromised by attacks from other users and nodes within the cloud. High security measures are therefore required to protect data within the cloud; however, the security technique used must also consider optimization of the data retrieval time. In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which collectively addresses the security and performance issues. In the DROPS methodology, we divide a file into fragments and replicate the fragmented data over the cloud nodes. Each node stores only a single fragment of a particular data file, which ensures that even in the event of a successful attack, no meaningful information is revealed to the attacker. Moreover, the nodes storing the fragments are separated by a certain distance by means of graph T-coloring, to prevent an attacker from guessing the locations of the fragments. Furthermore, DROPS does not rely on traditional cryptographic techniques for data security, thereby relieving the system of computationally expensive operations. We show that the probability of locating and compromising all of the nodes storing the fragments of a single file is extremely low. We also compare the performance of DROPS with ten other schemes; a higher level of security with only a slight performance overhead was observed.
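The claim that locating and compromising every fragment of a file is extremely unlikely can be given a back-of-the-envelope form. Assuming, as a simplification of my own, that an attacker compromises c of M nodes uniformly at random and a file's k fragments sit on k distinct nodes, the probability that all fragment nodes are hit is C(M-k, c-k) / C(M, c):

```python
from math import comb

def p_all_fragments_compromised(M, k, c):
    """Probability that all k fragment-holding nodes are among c nodes
    compromised uniformly at random out of M total nodes."""
    if c < k:
        return 0.0
    return comb(M - k, c - k) / comb(M, c)

# e.g. a 1000-node cloud, a file split into 10 fragments, 50 nodes compromised
print(p_all_fragments_compromised(M=1000, k=10, c=50))   # roughly 4e-14
```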
IRJET - Improving Data Availability by using VPC Strategy in Cloud Environ... (IRJET Journal)
This document discusses improving data availability in cloud environments using virtual private cloud (VPC) strategies and data replication strategies (DRS). It proposes using VPC to define private networks in public clouds and deploying cloud resources into those private networks for improved security and control. It also proposes using DRS to store multiple copies of data across different nodes to increase data availability, reduce bandwidth usage, and provide fault tolerance. The proposed approach identifies popular data files for replication, selects the best storage sites based on factors like request frequency, failure probability, and storage usage, and decides when to replace replicas to optimize resource usage. A simulation showed this hybrid VPC and DRS approach improved performance metrics like response time, network usage, and load balancing.
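The site-selection step described above, which weighs request frequency, failure probability and storage usage, could be expressed as a simple per-site score. The weights and field names below are illustrative assumptions, not the paper's model:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    request_freq: float    # requests/sec routed to this site for the file
    failure_prob: float    # estimated probability the site is unavailable
    storage_used: float    # fraction of local storage already in use (0..1)

def rank_sites(sites, w_freq=0.5, w_fail=0.3, w_store=0.2):
    """Higher score = better replica host: favour frequently requesting,
    reliable, lightly loaded sites (weights are arbitrary assumptions)."""
    max_freq = max(s.request_freq for s in sites) or 1.0
    def score(s):
        return (w_freq * s.request_freq / max_freq
                - w_fail * s.failure_prob
                - w_store * s.storage_used)
    return sorted(sites, key=score, reverse=True)

candidates = [Site("edge-a", 120, 0.02, 0.60),
              Site("edge-b", 45, 0.01, 0.20),
              Site("edge-c", 200, 0.15, 0.90)]
print([s.name for s in rank_sites(candidates)])   # ranking of candidate hosts
```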
An asynchronous replication model to improve data available into a heterogene... (Alexander Decker)
This document summarizes a research paper that proposes an asynchronous replication model to improve data availability in heterogeneous systems. The proposed model uses a loosely coupled architecture between main and replication servers to reduce dependencies. It also supports heterogeneous systems, allowing different parts of an application to run on different systems for better performance. This makes it a cost-effective solution for data replication across different system types.
IJERA (International Journal of Engineering Research and Applications) is an international, online, peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Survey on cloud backup services of personal storage (eSAT Journals)
Abstract: In the widespread cloud environment, cloud services are growing tremendously due to the large amount of personal computing data. Deduplication is used to avoid redundant data. A cloud storage environment for data backup of personal computing devices faces various challenges of source deduplication for cloud backup services, chiefly low deduplication efficiency. The challenges in the deduplication process for a cloud backup service are: 1) low deduplication efficiency, due to exclusive access to a large amount of data and the limited system resources of the PC-based client site; 2) low data transfer efficiency, because the deduplicated data transferred from the source to the backup server is typically small but must often cross the WAN. Keywords: cloud computing, deduplication, cloud backup, application awareness
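Source deduplication of the kind surveyed here typically fingerprints chunks before upload and sends only unseen ones. A minimal client-side sketch, assuming fixed-size chunking and an in-memory index (both simplifications), looks like this:

```python
import hashlib

CHUNK_SIZE = 4 * 1024   # fixed-size chunking; real systems often chunk by content

def chunks(data, size=CHUNK_SIZE):
    for i in range(0, len(data), size):
        yield data[i:i + size]

def backup(data, remote_index, send):
    """Upload only chunks whose fingerprint the backup server has not seen."""
    sent = skipped = 0
    for chunk in chunks(data):
        fp = hashlib.sha256(chunk).hexdigest()
        if fp in remote_index:
            skipped += 1            # duplicate: WAN transfer avoided
        else:
            send(fp, chunk)         # new chunk crosses the WAN
            remote_index.add(fp)
            sent += 1
    return sent, skipped

index = set()
payload = b"A" * 20_000 + b"B" * 4_096 + b"A" * 20_000     # highly repetitive data
print(backup(payload, index, send=lambda fp, c: None))      # few chunks actually sent
```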
A survey of various scheduling algorithm in cloud computing environment (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Cloud computing: Review over various scheduling algorithms (IJEEE)
Cloud computing has taken an important position in the field of research as well as in government organisations. Cloud computing uses virtual network technology to provide computing resources to end users and customers. Due to the complex computing environment, the use of high-level logic and task-scheduler algorithms increases, which results in costly operation of the cloud network. Researchers are attempting to build job-scheduling algorithms that are compatible with and applicable to the cloud computing environment. In this paper, we review research work recently proposed on energy-saving scheduling techniques. We also study various scheduling algorithms and the issues related to them in cloud computing.
This document summarizes a study on a new dynamic load balancing approach in cloud environments. It begins by outlining some of the major challenges of load balancing in cloud systems, including uneven distribution of workloads across CPUs. It then proposes a new approach with three main components: 1) A queueing and job assignment process that prioritizes assigning jobs to faster CPUs, 2) A timeout chart to determine when jobs should be migrated or terminated to avoid delays, and 3) Use of a "super node" to act as a proxy and backup in case other nodes fail. The approach is intended to more efficiently distribute jobs and help cloud systems maintain optimal performance. Finally, the document discusses how this approach could be integrated into existing cloud architectures.
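One way to read the three components summarized above is sketched below: jobs go to the node that would finish them soonest, and a timeout check migrates overflow to a designated super node. The structure and constants are this summary's own illustration, not the paper's code:

```python
class Node:
    def __init__(self, name, speed):
        self.name, self.speed, self.queue = name, speed, []

def assign(job_cost, nodes):
    """Queueing step: send the job to the node that would finish it soonest."""
    def finish_time(n):
        return (sum(n.queue) + job_cost) / n.speed
    target = min(nodes, key=finish_time)
    target.queue.append(job_cost)
    return target

def check_timeouts(nodes, super_node, limit=10.0):
    """Timeout step: migrate work from overloaded nodes to the super node."""
    for n in nodes:
        while n.queue and sum(n.queue) / n.speed > limit:
            super_node.queue.append(n.queue.pop())   # proxy/backup absorbs overflow

nodes = [Node("cpu-fast", 2.0), Node("cpu-slow", 1.0)]
backup = Node("super-node", 4.0)
for cost in [4, 4, 6, 8, 12]:
    assign(cost, nodes)
check_timeouts(nodes, backup)
print({n.name: n.queue for n in nodes + [backup]})
```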
Coupling-Based Internal Clock Synchronization for Large Scale Dynamic Distrib... (Angelo Corsaro)
This paper studies the problem of realizing a common software clock among a large set of nodes without an external time reference (i.e., internal clock synchronization), without any centralized control, and where nodes can join and leave the distributed system at will. The paper proposes an internal clock synchronization algorithm that combines the gossip-based paradigm with a nature-inspired approach, derived from the coupled-oscillators phenomenon, to cope with scale and churn. The algorithm works on top of an overlay network and uses a uniform peer-sampling service to fill each node's local view. Therefore, unlike clock synchronization protocols for small-scale, static distributed systems, each node synchronizes regularly with only the neighbours in its local view rather than with the whole system. Theoretical and empirical evaluations of the convergence speed and of the synchronization error of the coupling-based internal clock synchronization algorithm have been carried out, showing how the convergence time and the synchronization error depend on the coupling factor and on the local view size. Moreover, the variation of the synchronization error with respect to churn, and the impact of a sudden variation in the number of nodes, have been analyzed to show the stability of the algorithm. In all these contexts the algorithm shows good performance and very good self-organizing properties. Finally, we show how the assumption of a uniform peer-sampling service is instrumental to the good behavior of the algorithm.
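The coupling idea, where each node nudges its clock toward a few sampled neighbours by a coupling factor, can be sketched as a small simulation; uniform peer sampling is approximated by random choice and all constants are illustrative:

```python
import random
random.seed(7)

def synchronize(clocks, coupling=0.3, view_size=3, rounds=30):
    """Each round, every node moves a fraction `coupling` toward the mean of a
    randomly sampled local view of peers."""
    clocks = list(clocks)
    for _ in range(rounds):
        nxt = clocks[:]
        for i, c in enumerate(clocks):
            peers = random.sample([x for j, x in enumerate(clocks) if j != i], view_size)
            nxt[i] = c + coupling * (sum(peers) / view_size - c)
        clocks = nxt
    return clocks

initial = [random.uniform(0, 5) for _ in range(20)]     # skewed software clocks
final = synchronize(initial)
print("spread before: %.3f  after: %.3f" % (max(initial) - min(initial),
                                            max(final) - min(final)))
```

The printed spread shrinks with more rounds, a larger view, or a larger coupling factor, mirroring the dependence the abstract describes.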
Task Scheduling methodology in cloud computing (Qutub-ud-Din)
This document outlines a proposed methodology for developing efficient task scheduling strategies in cloud computing. It begins with introductions to cloud computing and task scheduling. It then reviews several relevant existing task scheduling algorithms from literature that focus on objectives like reducing costs, minimizing completion time, and maximizing resource utilization. The problem statement indicates the goals are to reduce costs, minimize completion time, and maximize resource allocation. An overview of the proposed methodology's flow is then provided, followed by references.
A Hybrid Cloud Approach for Secure Authorized De-Duplication (Editor IJMTER)
Cloud backup is used for people's personal storage, reducing the maintenance effort and managing the structure and storage space. The challenging part is the deduplication process, in both local and global backup de-duplication. Prior work provides only local storage de-duplication or, conversely, only global storage de-duplication when improving storage capacity and processing time. In this paper, the proposed system is called ALG-Dedupe, an Application-aware Local-Global source de-duplication scheme that provides efficient de-duplication with low system load, a shortened backup window, and increased power efficiency in the user's personal storage. In the proposed system the large data is partitioned into smaller parts called chunks; any redundancy in the data is eliminated before it is stored in the storage area.
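The local-then-global lookup implied by ALG-Dedupe's name can be sketched as a two-tier fingerprint check on the client; the application-aware part is reduced here to a single file-type tag, and all names are illustrative assumptions rather than the paper's design:

```python
import hashlib

local_index = {}    # small per-client index, checked first (cheap, no WAN round trip)
global_index = {}   # index held by the cloud backup service (expensive to query)

def dedupe_chunk(app_type, chunk):
    """Return True if the chunk must actually be uploaded."""
    fp = (app_type, hashlib.sha1(chunk).hexdigest())   # application-aware key
    if fp in local_index:              # local hit: redundant within this client
        return False
    if fp in global_index:             # global hit: redundant across clients
        local_index[fp] = True
        return False
    local_index[fp] = global_index[fp] = True
    return True                        # genuinely new data -> upload it

print(dedupe_chunk("docs", b"quarterly report"))   # True, first sighting
print(dedupe_chunk("docs", b"quarterly report"))   # False, caught locally
```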
Data Distribution Handling on Cloud for Deployment of Big Data (ijccsa)
Cloud computing is an emerging model in the field of computer science. For varying workloads, cloud computing presents a large-scale, on-demand infrastructure. The primary use of clouds in practice is to process massive amounts of data, and processing large datasets has become crucial in research and business environments. The big challenge associated with processing large datasets is the vast infrastructure required, and cloud computing provides that infrastructure to store and process big data. VMs can be provisioned on demand in the cloud to process the data by forming a cluster of VMs. The MapReduce paradigm can then be used, wherein the mapper assigns part of the task to particular VMs in the cluster and the reducer combines the individual outputs from each VM to produce the final result. We have proposed an algorithm to reduce the overall data distribution and processing time. We tested our solution in the CloudAnalyst simulation environment and found that the proposed algorithm significantly reduces the overall data processing time in the cloud.
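The mapper/reducer flow described above, where slices of the dataset go to VMs in a cluster and partial outputs are combined, can be mimicked with a process pool standing in for the VM cluster; this is purely illustrative and does not reproduce the paper's distribution algorithm:

```python
from multiprocessing import Pool
from collections import Counter
from functools import reduce

def mapper(lines):
    """Each 'VM' counts words in its assigned slice of the data."""
    return Counter(word for line in lines for word in line.split())

def reducer(a, b):
    """Combine the partial counts coming back from the VMs."""
    return a + b

if __name__ == "__main__":
    data = ["cloud data cloud", "big data processing", "cloud processing"] * 1000
    slices = [data[i::4] for i in range(4)]          # distribute the data to 4 workers
    with Pool(4) as pool:
        partials = pool.map(mapper, slices)
    print(reduce(reducer, partials).most_common(3))
```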
iaetsd Controlling data deduplication in cloud storage (Iaetsd Iaetsd)
This document discusses controlling data deduplication in cloud storage. It proposes an architecture that provides duplicate check procedures with minimal overhead compared to normal cloud storage operations. The key aspects of the proposed system are:
1) It uses convergent encryption to encrypt data for privacy while still allowing for deduplication of duplicate files (see the sketch after this list).
2) It introduces a private cloud that manages user privileges and generates tokens for authorized duplicate checking in a hybrid cloud architecture.
3) It evaluates the overhead of the proposed authorized duplicate checking scheme and finds it incurs negligible overhead compared to normal cloud storage operations.
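Point 1) above rests on convergent encryption, where the key is derived from the content itself, so identical plaintexts encrypt to identical ciphertexts and can still be deduplicated. A toy demonstration follows; the SHA-256-derived keystream is for illustration only and is not a secure cipher:

```python
import hashlib

def convergent_encrypt(data):
    key = hashlib.sha256(data).digest()            # key derived from the content
    keystream = b""
    counter = 0
    while len(keystream) < len(data):              # toy keystream, not a real cipher
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    ciphertext = bytes(a ^ b for a, b in zip(data, keystream))
    return key, ciphertext

k1, c1 = convergent_encrypt(b"same file contents")
k2, c2 = convergent_encrypt(b"same file contents")
assert c1 == c2          # identical plaintexts -> identical ciphertexts -> dedupable
print("duplicate-check tag:", hashlib.sha256(c1).hexdigest()[:16])
```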
This document discusses scheduling in cloud computing. It proposes a priority-based scheduling protocol to improve resource utilization, server performance, and minimize makespan. The protocol assigns priorities to jobs, allocates jobs to processors based on completion time, and processes jobs in parallel queues to efficiently schedule jobs in cloud computing. Future work includes analyzing time complexity and completion times through simulation to validate the protocol's efficiency.
An Analysis Of Cloud Reliability Approaches Based on Cloud Components And Reli... (ijafrc)
This document discusses several techniques for improving reliability in cloud computing systems. It analyzes approaches based on cloud components and reliability techniques, including adaptive fault tolerance, cloud service reliability modeling, fault-tolerant and reliable computation, fault tolerance and resilience, fault tolerance middleware, and a system-level approach. The key components of cloud reliability are discussed, such as virtual machines, cloud managers, and fault tolerance methods. Comparisons are made between the techniques based on their methodology for measuring and ensuring reliability.
Dynamic Resource Provisioning with Authentication in Distributed Database (Editor IJCATR)
Data centers have the largest energy consumption when sharing power, and public cloud workloads carry different priorities and performance requirements for various applications [4]. Cloud data centers are capable of sensing an opportunity to present different programs. The proposed construction addresses the security level of privacy leakage in a distributed cloud system; by dealing with its persistent characteristics there is a substantial increase in information that can be used to augment profit, reduce overhead, or both. Data mining is the process of analysing data from different perspectives and summarizing it into useful information. Three empirical algorithms have been proposed; their assignment estimation ratios are dissected theoretically and compared using real Internet latency data in the evaluation of the testing methods.
A Survey on Neural Network Based Minimization of Data Center in Power Consump... (IJSTA)
This document summarizes a survey on using neural networks to minimize data center power consumption. It discusses how load prediction using neural networks can balance server loads and optimize energy usage by selectively powering down unused servers. The document reviews several related works applying neural network prediction to schedule virtual machines and transition servers to low-power sleep states. The survey concludes neural network models trained on historical load data can accurately predict future loads to enable reliable and optimized load balancing across servers.
Modeling and Optimization of Resource Allocation in Cloud [PhD Thesis Progres... (AtakanAral)
The magnitude of data being stored and processed in the cloud is quickly increasing due to advancements in areas that rely on cloud computing, e.g. Big Data, Internet of Things and computation offloading. Efficient management of limited computing and network resources is necessary to handle such an increase in cloud workload. Some of the critical issues in resource management for cloud computing are \emph{modeling resources / requirements} and \emph{allocating resources to users}. Potential benefits of tackling these issues include increases in utilization, scalability, Quality of Service (QoS) and throughput as well as decreases in latency and costs.
International Refereed Journal of Engineering and Science (IRJES) is a peer-reviewed online journal for professionals and researchers in the field of computer science. The main aim is to resolve emerging and outstanding problems revealed by recent social and technological change. IRJES provides a platform for researchers to present and evaluate their work from both theoretical and technical aspects and to share their views.
This document summarizes a research paper that proposes a system for privacy-preserving public auditing of cloud data storage. The system allows a third-party auditor (TPA) to verify the integrity of data stored with a cloud service provider on behalf of users, without learning anything about the actual data contents. The system uses a public key-based homomorphic linear authenticator technique that enables the TPA to perform audits without having access to the full data. This technique allows the TPA to efficiently audit multiple users' data simultaneously. The document describes the system components, methodology used involving key generation and auditing protocols, and concludes the proposed system provides security and performance guarantees for privacy-preserving public auditing of cloud data
Provable Multicopy Dynamic Data Possession in Cloud Computing Systems (1crore projects)
REVIEW PAPER on Scheduling in Cloud Computing (Jaya Gautam)
This document reviews scheduling algorithms for workflow applications in cloud computing. It discusses characteristics of cloud computing, deployment and service models, and the importance of scheduling in cloud computing. The document analyzes several scheduling algorithms proposed in literature that consider parameters like makespan, cost, load balancing, and priority. It finds that algorithms like Max-Min, Min-Min, and HEFT perform better than traditional algorithms in optimizing these parameters for workflow scheduling in cloud environments.
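Among the algorithms the review singles out, Min-Min is simple to sketch: repeatedly take the unscheduled task with the smallest minimum completion time and bind it to the machine achieving that time. The execution-time matrix below is made up for illustration:

```python
def min_min(etc):
    """Min-Min scheduling. etc[t][m] = expected execution time of task t on machine m."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines                 # when each machine becomes free
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        # smallest completion time over all (unscheduled task, machine) pairs
        finish, task, machine = min((ready[m] + etc[t][m], t, m)
                                    for t in unscheduled for m in range(n_machines))
        schedule[task] = machine
        ready[machine] = finish
        unscheduled.remove(task)
    return schedule, max(ready)                # task-to-machine map and makespan

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17], [12, 13, 10]]
print(min_min(etc))
```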
Comparative Analysis, Security Aspects & Optimization of Workload in Gfs Base... (IOSR Journals)
This document summarizes a research paper that proposes using Google File System (GFS) and MapReduce in a cloud computing environment to improve resource utilization and processing of large datasets. The paper discusses GFS architecture with a master node and chunk servers, and how MapReduce can split large files into chunks and process them in parallel across idle cloud nodes. It also proposes encrypting data for security and using a third party to audit client files. The goal is to provide fault tolerance, optimize workload processing time, and maximize utilization of cloud resources for data-intensive applications.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Fragmentation of Data in Large-Scale System For Ideal Performance and Security (Editor IJCATR)
Cloud computing is becoming a prominent trend that offers a number of significant advantages. One of the foundational advantages of cloud computing is pay-per-use, where the customer pays according to the services used. At present, users' storage availability drives data generation, and such large amounts of data need to be farmed out; there is an indefinitely large number of Cloud Service Providers (CSPs). Using CSPs is an increasing trend for many organizations as well as for customers, as it decreases the burden of maintenance and local data storage. In cloud computing, transferring data to third-party administrative control gives rise to security concerns. Within the cloud, data may be compromised due to attacks by unauthorized users and nodes. So, in order to protect the data in the cloud, higher security measures are required, together with optimization of the data retrieval time. The proposed system addresses the issues of security and performance. Initially in the DROPS methodology, files are divided into fragments and the fragmented data is replicated over the cloud nodes. A single fragment of a particular file is stored on each node, which ensures that no meaningful information is revealed to an attacker even after a successful attack. The separation of the nodes is done by T-coloring in order to prevent an attacker from guessing the fragments' locations. Complete data security is ensured by the DROPS methodology.
Evaluation Of The Data Security Methods In Cloud Computing Environments (ijfcstjournal)
This document discusses methods for ensuring data security in cloud computing environments. It begins by introducing cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). The main goals of data security - confidentiality, integrity, and availability - are then described. Several methods for data security are proposed, including data fragmentation where sensitive data is divided and distributed across different domains. Encryption techniques are also discussed as ways to protect confidential data during storage and transmission. Overall, the document aims to evaluate approaches for addressing key issues around securing user data in cloud systems.
DISTRIBUTED SCHEME TO AUTHENTICATE DATA STORAGE SECURITY IN CLOUD COMPUTING (ijcsit)
Cloud computing is the revolution in the current generation of IT enterprise. Cloud computing displaces databases and application software to large data centres, where the management of services and data may not be predictable, whereas conventional solutions for IT services are under proper logical, physical and personnel controls. This aspect, however, brings different security challenges which have not been well understood. This work concentrates on cloud data storage security, which has always been an important aspect of quality of service (QoS). In this paper, we designed and simulated an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, along with some prominent features. A homomorphic token is used for distributed verification of erasure-coded data; by using this scheme, we can identify misbehaving servers. Unlike past works, our scheme supports effective and secure dynamic operations on data blocks such as data insertion, deletion and modification. In contrast to traditional solutions, where IT services are under proper physical, logical and personnel controls, cloud computing moves application software and databases to large data centres, where the data management and services may not be absolutely trustworthy. The security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, convoluted failures and server-colluding attacks.
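The erasure-coded storage that this scheme verifies can be illustrated with the simplest possible code, a single XOR parity block spread across the servers, which already lets any one failed or misbehaving server's block be rebuilt. The homomorphic-token verification itself is considerably more involved and is not reproduced here:

```python
from functools import reduce

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k=4):
    """Split data into k equal blocks plus one XOR parity block (RAID-5 style)."""
    size = -(-len(data) // k)                    # ceiling division
    blocks = [data[i*size:(i+1)*size].ljust(size, b"\0") for i in range(k)]
    parity = reduce(xor_blocks, blocks)
    return blocks + [parity]

def recover(blocks, lost_index):
    """Rebuild one missing block from the surviving blocks."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index and b is not None]
    return reduce(xor_blocks, survivors)

shares = encode(b"user file stored across multiple cloud servers")
original_block2 = shares[2]
shares[2] = None                                  # one server fails or misbehaves
assert recover(shares, 2) == original_block2      # the data block is reconstructed
print("block 2 recovered from surviving servers")
```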
IRJET - Analysis of Cloud Security and Performance for Leakage of Critical... (IRJET Journal)
1) The DROPS methodology divides files into fragments and replicates each fragment to different cloud nodes to improve security and performance of outsourced data. Each node stores only a single fragment, so a successful attack would not reveal meaningful information.
2) Nodes storing fragments are separated using graph T-coloring to restrict an attacker's ability to guess fragment locations.
3) The methodology aims to increase the effort required for an attacker to retrieve data after intrusion while minimizing data loss. It evaluates both security and performance when securing and managing outsourced data in the cloud.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document summarizes a research paper that proposes a scheme for ensuring security and reliability of data stored in the cloud. The scheme utilizes erasure coding to redundantly store encrypted data fragments across multiple cloud servers. It generates homomorphic tokens that allow auditing of the data storage and identification of any misbehaving servers. The scheme supports secure dynamic operations like modification, deletion and append of cloud data files. Analysis shows the scheme is efficient and resilient against various security threats like server compromises or failures. It ensures storage correctness and fast localization of data errors in the cloud.
Secure Channel Establishment Techniques for Homomorphic Encryption in Cloud C...IRJET Journal
This document proposes a new cloud-manager-based encryption scheme (CMReS) to address key management and sharing issues in fully homomorphic encryption. CMReS distributes encryption, decryption, and re-encryption tasks between a trusted Encryption/Decryption Service Provider (EDSP) module and a Re-encryption Service Provider (RSP) module hosted on the cloud. The scheme uses Diffie-Hellman key exchange to generate session keys and one-time passwords for authentication between users and cloud services. Experimental results show the proposed technique reduces delay compared to previous approaches by distributing computational tasks between user devices, the EDSP, and RSP modules.
Review and Analysis of Self Destruction of Data in Cloud ComputingIRJET Journal
This document proposes and reviews a system for self-destruction of data in cloud computing. The system aims to enhance security by shuffling user-uploaded data between directories in the cloud after a set time interval, and by taking timely backups. The key aspects are:
1) Data is divided into fragments and distributed across different nodes in the cloud using coloring algorithms, while also being replicated for availability.
2) Advanced encryption and hashing techniques are used to secure the data fragments.
3) Algorithms are provided for initial fragment placement and subsequent replication to balance load and improve retrieval time.
The system is designed to enhance both security and usability of cloud storage by preventing unauthorized access to data over time.
This document discusses the risks, countermeasures, costs and benefits of cloud computing. It identifies key risks like cyberattacks, lack of data location control, complex trust boundaries that make investigations difficult, and privacy issues. It recommends solutions like well-defined policies, service level agreements, continuous risk assessments, encryption, and guidance from NIST. While cloud computing offers cost savings and flexibility, users are ultimately responsible for security and must approach cloud adoption with care given its immature nature and risks.
This document discusses effective modular order preserving encryption on cloud using multivariate hypergeometric distribution (MHGD). It begins with an abstract that describes how order preserving encryption allows efficient range queries on encrypted data. It then provides background on cloud computing security concerns and discusses existing approaches to searchable encryption, including probabilistic encryption, deterministic encryption, homomorphic encryption, and order preserving encryption. The key proposed approach is to improve the security of existing modular order preserving encryption approaches by utilizing MHGD.
Guaranteed Availability of Cloud Data with Efficient CostIRJET Journal
This document discusses efficient and cost-effective methods for hosting data across multiple cloud storage providers (multi-cloud) to ensure high data availability and reduce costs. It proposes distributing data among different cloud providers using replication and erasure coding techniques. This approach guarantees data availability even if one cloud provider fails and minimizes monetary costs by taking advantage of varying cloud pricing models and data access patterns. The technique is shown to save around 20% of costs while providing high flexibility to handle data and pricing changes over time.
This document summarizes recent research on security issues related to single cloud and multi-cloud storage models. It finds that relying on a single cloud service provider poses risks to data availability and integrity if the provider experiences an outage. Storing data across multiple cloud providers (a multi-cloud model) can help address these issues but may increase costs. The document surveys various techniques proposed in recent research to improve security, availability, and integrity in single and multi-cloud environments, such as homomorphic tokens, file division, and the Depsky model. It concludes that while single cloud has been more widely researched, multi-cloud is an important area of ongoing work to help overcome the security and cost challenges of cloud storage.
Iaetsd secured and efficient data scheduling of intermediate data setsIaetsd Iaetsd
This document discusses securing and efficiently scheduling intermediate data sets in cloud computing. It proposes using an upper bound constraint approach to identify sensitive intermediate data sets for encryption. Suppression techniques like semi-suppression and full-suppression are applied to sensitive data sets to reduce time and costs while the Value Generalization Hierarchy protocol is used to provide security during data access. Optimized balanced scheduling is also used to balance system loads and minimize costs. The goal is to efficiently manage intermediate data sets while preserving privacy.
An approach for secured data transmission at client end in cloud computingIAEME Publication
This document summarizes a research paper that proposes an algorithm for securing data transmission between a client and cloud server in cloud computing environments. The algorithm uses an authentication function and key that are updated during transmission to verify authorization and detect any modifications by potential attackers. When a client connects to a server, they both initialize the key to the same value. Then, the key is incremented by one for each packet sent or received. If a client wants to verify security, it can send a packet with the current key value to the server for matching. This helps prevent man-in-the-middle attacks by making it difficult for attackers to modify packets without knowing the updated key values. The approach aims to securely transmit sensitive data from cloud servers
A scalabl e and cost effective framework for privacy preservation over big d...amna alhabib
This document proposes a scalable and cost-effective framework called SaC-FRAPP for preserving privacy over big data on the cloud. The key idea is to leverage cloud-based MapReduce to anonymize large datasets before releasing them to other parties. Anonymized datasets are then managed using HDFS to avoid re-computation costs. A prototype system is implemented to demonstrate that the framework can anonymize and manage anonymized big data sets in a highly scalable, efficient and cost-effective manner.
EFFECTIVE METHOD FOR MANAGING AUTOMATION AND MONITORING IN MULTI-CLOUD COMPUT...IJNSA Journal
Multi-cloud is an advanced version of cloud computing that allows its users to utilize different cloud systems from several Cloud Service Providers (CSPs) remotely. Although it is a very efficient computing
facility, threat detection, data protection, and vendor lock-in are the major security drawbacks of this infrastructure. These factors act as a catalyst in promoting serious cyber-crimes of the virtual world. Privacy and safety issues of a multi-cloud environment have been overviewed in this research paper. The
objective of this research is to analyze some logical automation and monitoring provisions, such as monitoring Cyber-physical Systems (CPS), home automation, automation in Big Data Infrastructure (BDI), Disaster Recovery (DR), and secret protection. The results of this research investigation indicate that it is possible to avoid security snags of a multi-cloud interface by adopting these scientific solutions methodically.
Enhancing Data Storage Security in Cloud Computing Through SteganographyIDES Editor
This document summarizes a research paper that proposes a method for enhancing data security in cloud computing through steganography. The method hides user data in digital images stored on cloud servers. When data needs to be accessed, it is extracted from the images. The document outlines the cloud architecture and security issues addressed. It then describes the proposed system architecture, security model, and data storage and retrieval process. Data is partitioned and hidden in multiple images to improve security. The goal is to prevent unauthorized access to user data stored on cloud servers.
Similar to Drops division and replication of data in cloud for optimal performance and security (20)
This document proposes a reduced latency list decoding algorithm and high throughput decoder architecture for polar codes. The reduced latency list decoding algorithm visits fewer nodes in the decoding tree and considers fewer possibilities of information bits than existing successive cancellation list decoding algorithms, significantly reducing decoding latency and improving throughput with little performance degradation. An implementation of the proposed decoder architecture in a 90nm CMOS technology demonstrates significant latency reduction and area efficiency improvement compared to other list polar decoders.
A researcher developed a radix-8 divider for binary64 division units to improve energy efficiency at high clock rates. Simulation results showed the radix-8 divider requires less energy per division than radix-4 or radix-16 approaches. The researcher used Xilinx 10.1 and ModelSim 6.4b tools and VHDL/Verilog languages to develop and test the minimally redundant radix-8 divider.
Hybrid FPGA architectures containing a mixture of lookup tables (LUTs) and hardened multiplexers are evaluated to improve logic density and reduce chip area. Simulation results show that non-fracturable hybrid architectures naturally save up to 8% area after placement and routing without optimizing the technology mapper, and additional area is saved with architecture-aware mapping optimizations. Fracturable hybrid architectures only provide marginal area gains of up to 2% after placement and routing. For the most area-efficient hybrid architectures, timing performance is minimally impacted.
Input-Based Dynamic Reconfiguration of Approximate Arithmetic Units for Video...Pvrtechnologies Nellore
This document proposes a reconfigurable approximate architecture for MPEG encoders that optimizes power consumption while maintaining a particular Peak Signal-to-Noise Ratio threshold for any input video. It designs reconfigurable adder/subtractor blocks that can modulate their degree of approximation, and integrates them into the motion estimation and discrete cosine transform modules of the MPEG encoder. Experimental results show the approach dynamically adjusts the degree of hardware approximation based on the input video to respect the given quality bound across different videos while achieving up to a 38% power saving over a conventional non-approximated MPEG encoder architecture.
This document lists 69 MATLAB projects related to various domains including image processing, biometrics, medical image processing, computer vision, and more. It provides the project titles, domains or sub-domains, programming languages and years. It also lists the support that will be provided to registered students including the IEEE base paper, documentation, source code, execution help, and publication support. The projects involve techniques such as image segmentation, super resolution, steganography, action recognition, and license plate detection. Support includes documentation templates, software installation guides, and assistance with conferences or journal publications.
This document lists 50 VLSI projects from 2016-2017 across various domains such as low power, high speed data transmission, area efficient/timing and delay reduction, audio/video processing, networking, verification, and Tanner/Microwind. The projects cover a range of topics including ECG acquisition systems, modular multiplication, adaptive radios, arrhythmia prediction, electrical capacitance tomography, approximate computing using memoization, FFT processors, error correction coding, carry skip adders, dynamic voltage and frequency scaling, code compression, turbo decoding, variable digital filters, parallel FFTs, MIMO communications, timing error correction, LU decomposition, FPGA logic architectures, LDPC decoding, minimum energy systems, elliptic curve multiplication, NB
This document lists 97 potential final year projects for students in various domains including embedded systems, Internet of Things, robotics, ZigBee, GSM and GPS, consumer electronics, automation, and more. It provides contact information for the project coordinator and describes the domain and programming language/year for each potential project. Students who register will receive support including an IEEE base paper, documentation, source code, and assistance with publication.
This document describes a high-speed FPGA implementation of an elliptic curve cryptography processor based on redundant signed digit representation. The processor employs pipelining techniques to achieve high throughput for Karatsuba–Ofman multiplication. It also includes an efficient modular adder without comparison and a high throughput modular divider to maximize frequency. The processor supports the NIST P256 curve and performs single-point multiplication in 2.26 ms at a maximum frequency of 160 MHz on a Xilinx Virtex 5 FPGA.
Retiming of digital circuits is conventionally based on the estimates of propagation delays across different paths in the data-flow graphs (DFGs) obtained by discrete component
timing model, which implicitly assumes that operation of a node can begin only after the completion of the operation(s) of its preceding node(s) to obey the data dependence requirement. Such a discrete component timing model very often gives much higher
estimates of the propagation delays than the actuals particularly when the computations in the DFG nodes correspond to fixed point arithmetic operations like additions and multiplications
Pre encoded multipliers based on non-redundant radix-4 signed-digit encodingPvrtechnologies Nellore
This paper introduces an architecture for pre-encoded multipliers used in digital signal processing based on offline encoding of coefficients using a non-redundant radix-4 signed-digit encoding technique. Experimental analysis shows the proposed multipliers using this encoding technique, along with a coefficients memory, are more efficient in terms of area and power compared to conventional Modified Booth encoding schemes.
Quality of-protection-driven data forwarding for intermittently connected wir...Pvrtechnologies Nellore
The document proposes a quality-of-protection (QoP)-driven data forwarding strategy for intermittent wireless networks. The existing system relies on collaborative data delivery, but non-cooperative behavior impairs network QoP and user quality of experience (QoE). The proposed system evaluates process-based and relationship-based credibility of nodes in a distributed manner using locally recorded forwarding behavior information. An intrusion detection mechanism helps select reliable relay nodes for transmission, improving QoP, QoE, reducing network load, and enhancing resource utilization. Numerical results show the proposed approach provides reliable data transmission with high QoP and improved user QoE.
The document provides information about an IT services company called Coalesce Technologies. It discusses Coalesce's services, commitment to client satisfaction, growing network, and customized solutions. It also describes the library management system project, including the problems with existing systems, proposed new system features, and UML diagrams for modeling the system. Key aspects of the proposed system include automating transactions, providing a simple GUI, efficient database updating, and restricting administrative access for security.
The document discusses an e-voting system project that aims to provide a secure and user-friendly online voting system. It outlines the existing paper-based voting system and proposes an online voting system where voters can cast their votes from anywhere in India through a database that stores voter information and votes. The proposed system is designed to accurately record and retrieve voter information and votes in a planned, reliable, and redundant manner. It requires hardware like a PC and Windows OS along with Java programming language to develop the online voting software.
PVR Technology lists 26 new web-based projects including an airline automation system, attendance management system, automated college admission system, clustering datasets using machine learning algorithms, corporate transportation system, courier information system, e-voting system, e-complaints system in PHP, e-learning system in PHP, exam result application, hairstyle salon system, inventory management system, library management system, medical reference system, online placement and training system, online admission system, online bank system, vehicle investigation system, resource management system in PHP, online vegetables purchase system, online shopping for gadgets system, online seat booking system, online recruitment system, online quiz system, online placement and training cell system, online movie ticket system,
The document proposes a new medium access control (MAC) protocol called PoCMAC that uses distributed power control to manage interference in full-duplex WiFi networks. PoCMAC allows simultaneous uplink and downlink transmissions between an access point and half-duplex clients. It identifies regimes where power control provides throughput gains and develops a full 802.11-based protocol for distributed selection of three-node topologies. Simulations and software-defined radio experiments show PoCMAC achieves higher capacity and throughput compared to half-duplex networks, while maintaining similar fairness.
This document contains information about PVR Technology, including their head office location, website, email, and phone number. It then lists 20 project titles related to power electronics and renewable energy systems. The projects cover topics like power quality improvement, single-stage converters, grid-connected inverters, maximum power point tracking, stand-alone energy sources, power factor correction, resonant inverters, DC-DC converters, AC-DC converters, filters, and induction heating applications.
Control cloud-data-access-privilege-and-anonymity-with-fully-anonymous-attrib...Pvrtechnologies Nellore
This document proposes a scheme called AnonyControl to address privacy concerns with cloud data access. It aims to control access privileges while preserving user identity privacy. AnonyControl decentralizes the central authority across multiple attribute authorities to limit identity leakage and achieve semi-anonymity. It also generalizes file access control to privilege control to manage cloud data operations in a fine-grained way. The document describes existing access control schemes, disadvantages related to identity privacy, and how AnonyControl improves on previous work by protecting user identities through attribute distribution across authorities.
Control cloud data access privilege and anonymity with fully anonymous attrib...Pvrtechnologies Nellore
This document proposes two schemes, AnonyControl and AnonyControl-F, to address privacy issues in existing access control schemes for cloud storage. AnonyControl decentralizes the central authority to limit identity leakage and achieve semi-anonymity. It also generalizes file access control to privilege control. AnonyControl-F fully prevents identity leakage and achieves full anonymity. The schemes use attribute-based encryption and a multi-authority system to securely control user access privileges without revealing identity information. Security analysis shows the schemes are secure under certain cryptographic assumptions.
The document proposes CloudKeyBank, a key management framework that addresses the confidentiality, search privacy, and owner authorization of outsourced encryption keys. It does so using a new cryptographic primitive called Searchable Conditional Proxy Re-Encryption (SC-PRE) that combines Hidden Vector Encryption and Proxy Re-Encryption. The framework allows key owners to encrypt keys for outsourcing while maintaining privacy and granting controlled authorization. It aims to solve security issues not addressed by traditional outsourced data solutions.
Circuit ciphertext policy attribute-based hybrid encryption with verifiablePvrtechnologies Nellore
The document proposes a scheme for circuit ciphertext-policy attribute-based hybrid encryption with verifiable delegation in cloud computing. It aims to ensure data confidentiality, fine-grained access control, and verifiability of delegated computation results. The scheme uses a combination of ciphertext-policy attribute-based encryption, symmetric encryption, and verifiable computation. It is proven secure based on computational assumptions and simulations show it is practical for cloud computing applications.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Design and optimization of ion propulsion dronebjmsejournal
Electric propulsion technology is widely used in many kinds of vehicles in recent years, and aircrafts are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibrations. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion technology is proven to be feasible in the earth’s atmosphere. The study presented in this article shows the design of EHD thrusters and power supply for ion propulsion drones along with performance optimization of high-voltage power supply for endurance in earth’s atmosphere.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Drops division and replication of data in cloud for optimal performance and security
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TCC.2015.2400460, IEEE Transactions on Cloud Computing.
DROPS: Division and Replication of Data in
Cloud for Optimal Performance and Security
Mazhar Ali, Student Member, IEEE, Kashif Bilal, Student Member, IEEE, Samee U. Khan, Senior
Member, IEEE, Bharadwaj Veeravalli, Senior Member, IEEE, Keqin Li, Senior Member, IEEE, and
Albert Y. Zomaya, Fellow, IEEE
Abstract—Outsourcing data to a third-party administrative control, as is done in cloud computing, gives rise to security concerns.
The data compromise may occur due to attacks by other users and nodes within the cloud. Therefore, high security measures are
required to protect data within the cloud. However, the employed security strategy must also take into account the optimization
of the data retrieval time. In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and
Security (DROPS) that collectively approaches the security and performance issues. In the DROPS methodology, we divide a
file into fragments, and replicate the fragmented data over the cloud nodes. Each of the nodes stores only a single fragment
of a particular data file that ensures that even in case of a successful attack, no meaningful information is revealed to the
attacker. Moreover, the nodes storing the fragments, are separated with certain distance by means of graph T-coloring to prohibit
an attacker of guessing the locations of the fragments. Furthermore, the DROPS methodology does not rely on the traditional
cryptographic techniques for the data security; thereby relieving the system of computationally expensive methodologies. We
show that the probability to locate and compromise all of the nodes storing the fragments of a single file is extremely low. We
also compare the performance of the DROPS methodology with ten other schemes. The higher level of security with slight
performance overhead was observed.
Index Terms—Centrality, cloud security, fragmentation, replication, performance.
1 INTRODUCTION
THE cloud computing paradigm has reformed the
usage and management of the information tech-
nology infrastructure [7]. Cloud computing is char-
acterized by on-demand self-services, ubiquitous net-
work accesses, resource pooling, elasticity, and mea-
sured services [22, 8]. The aforementioned character-
istics of cloud computing make it a striking candidate
for businesses, organizations, and individual users
for adoption [25]. However, the benefits of low-cost,
negligible management (from a user's perspective),
and greater flexibility come with increased security
concerns [7].
Security is one of the most crucial aspects among
those prohibiting the wide-spread adoption of cloud
computing [14, 19]. Cloud security issues may stem
M. Ali, K. Bilal, and S. U. Khan are with the Department
of Electrical and Computer Engineering, North Dakota
State University, Fargo, ND 58108-6050, USA. E-mail:
{mazhar.ali,kashif.bilal,samee.khan}@ndsu.edu
B. Veeravallii is with the Department of Electrical and Com-
puter Engineering, The National University of Singapore. E-mail:
elebv@nus.edu.sg
K. Li is with the Department of Computer Science, State University
of New York , New Paltz, NY 12561. E-mail: lik@ndsu.edu
A.Y. Zomaya is with the School of Information Technologies, The
University of Sydney, Sydney, NSW 2006, Australia. E-mail: al-
bert.zomaya@sydney.edu.au
due to the core technology's implementation (virtual
machine (VM) escape, session riding, etc.), cloud ser-
vice offerings (structured query language injection,
weak authentication schemes, etc.), and arising from
cloud characteristics (data recovery vulnerability, In-
ternet protocol vulnerability, etc.) [5]. For a cloud to be
secure, all of the participating entities must be secure.
In any given system with multiple units, the highest level of the system's security is equal to the security
level of the weakest entity [12]. Therefore, in a cloud,
the security of the assets does not solely depend on
an individual’s security measures [5]. The neighboring
entities may provide an opportunity to an attacker to
bypass the user's defenses.
The off-site data storage cloud utility requires users
to move data in cloud’s virtualized and shared envi-
ronment that may result in various security concerns.
Pooling and elasticity of a cloud allow the physi-
cal resources to be shared among many users [22].
Moreover, the shared resources may be reassigned to
other users at some instance of time that may result
in data compromise through data recovery method-
ologies [22]. Furthermore, a multi-tenant virtualized
environment may allow a VM to escape the bounds
of virtual machine monitor (VMM). The escaped VM
can interfere with other VMs to have access to unau-
thorized data [9]. Similarly, cross-tenant virtualized
network access may also compromise data privacy
and integrity. Improper media sanitization can also
leak customer's private data [5].
Fig. 1: The DROPS methodology
The data outsourced to a public cloud must be
secured. Unauthorized data access by other users and
processes (whether accidental or deliberate) must be
prevented [14]. As discussed above, any weak entity
can put the whole cloud at risk. In such a scenario,
the security mechanism must substantially increase
an attacker’s effort to retrieve a reasonable amount
of data even after a successful intrusion in the cloud.
Moreover, the probable amount of loss (as a result of
data leakage) must also be minimized.
A cloud must ensure throughput, reliability, and
security [15]. A key factor determining the throughput
of a cloud that stores data is the data retrieval time
[21]. In large-scale systems, the problems of data re-
liability, data availability, and response time are dealt
with data replication strategies [3]. However, placing replicas of data over a number of nodes increases the attack surface for that particular data. For instance, storing m replicas of a file in a cloud instead of one replica increases the probability of a node holding the file to be chosen as an attack victim from 1/n to m/n, where n is the total number of nodes.
From the above discussion, we can deduce that
both security and performance are critical for the
next generation large-scale systems, such as clouds.
Therefore, in this paper, we collectively approach
the issue of security and performance as a secure
data replication problem. We present Division and
Replication of Data in the Cloud for Optimal Perfor-
mance and Security (DROPS) that judicially fragments
user files into pieces and replicates them at strategic
locations within the cloud. The division of a file into
fragments is performed based on a given user criteria
such that the individual fragments do not contain any
meaningful information. Each of the cloud nodes (we
use the term node to represent computing, storage,
physical, and virtual machines) contains a distinct
fragment to increase the data security. A successful
attack on a single node must not reveal the loca-
tions of other fragments within the cloud. To keep
an attacker uncertain about the locations of the file
fragments and to further improve the security, we
select the nodes in a manner that they are not adjacent
and are at certain distance from each other. The node
separation is ensured by the means of the T-coloring
[6]. To improve data retrieval time, the nodes are se-
lected based on the centrality measures that ensure an
improved access time. To further improve the retrieval
time, we judicially replicate fragments over the nodes
that generate the highest read/write requests. The
selection of the nodes is performed in two phases.
In the first phase, the nodes are selected for the initial
placement of the fragments based on the centrality
measures. In the second phase, the nodes are selected
for replication. The working of the DROPS methodol-
ogy is shown as a high-level work flow in Fig. 1. We
implement ten heuristics based replication strategies
as comparative techniques to the DROPS methodol-
ogy. The implemented replication strategies are: (a)
A-star based searching technique for data replication
problem (DRPA-star), (b) weighted A-star (WA-star),
(c) Aε-star, (d) suboptimal A-star1 (SA1), (e) subop-
timal A-star2 (SA2), (f) suboptimal A-star3 (SA3), (g)
Local Min-Min, (h) Global Min-Min, (i) Greedy algo-
rithm, and (j) Genetic Replication Algorithm (GRA).
The aforesaid strategies are fine-grained replication
techniques that determine the number and locations
of the replicas for improved system performance. For
our studies, we use three Data Center Network (DCN)
architectures, namely: (a) Three tier, (b) Fat tree, and
(c) DCell. We use the aforesaid architectures because
they constitute the modern cloud infrastructures and
the DROPS methodology is proposed to work for the
cloud computing paradigm.
Our major contributions in this paper are as follows:
• We develop a scheme for outsourced data that takes into account both the security and the performance. The proposed scheme fragments and replicates the data file over cloud nodes.
• The proposed DROPS scheme ensures that even in the case of a successful attack, no meaningful information is revealed to the attacker.
• We do not rely on traditional cryptographic techniques for data security. The non-cryptographic nature of the proposed scheme makes it faster to perform the required operations (placement and retrieval) on the data.
• We ensure a controlled replication of the file fragments, where each of the fragments is replicated only once for the purpose of improved security.
The remainder of the paper is organized as follows.
Section 2 provides an overview of the related work in
the field. In Section 3, we present the preliminaries.
The DROPS methodology is introduced in Section 4.
Section 5 explains the experimental setup and results,
and Section 6 concludes the paper.
2 RELATED WORK
Juels et al. [10] presented a technique to ensure the
integrity, freshness, and availability of data in a cloud.
The data migration to the cloud is performed by the
Iris file system. A gateway application is designed
and employed in the organization that ensures the
integrity and freshness of the data using a Merkle
tree. The file blocks, MAC codes, and version numbers
are stored at various levels of the tree. The proposed
technique in [10] heavily depends on the user's employed scheme for data confidentiality. Moreover, the probable amount of loss in case of data tampering as a
result of intrusion or access by other VMs cannot be
decreased. Our proposed strategy does not depend
on the traditional cryptographic techniques for data
security. Moreover, the DROPS methodology does
not store the whole file on a single node to avoid
compromise of all of the data in case of successful
attack on the node.
The authors in [11] approached the virtualized and
multi-tenancy related issues in the cloud storage by
utilizing the consolidated storage and native access
control. The Dike authorization architecture is pro-
posed that combines the native access control and
the tenant name space isolation. The proposed system
is designed and works for object based file systems.
However, the leakage of critical information in case of
improper sanitization and malicious VM is not han-
dled. The DROPS methodology handles the leakage
of critical information by fragmenting data file and
using multiple nodes to store a single file.
The use of a trusted third party for providing
security services in the cloud is advocated in [22]. The
authors used the public key infrastructure (PKI) to en-
hance the level of trust in the authentication, integrity,
and confidentiality of data and the communication
between the involved parties. The keys are generated
and managed by the certification authorities. At the
user level, the use of tamper-proof devices, such as smart cards, was proposed for the storage of the keys. Similarly, Tang et al. have utilized the public key
cryptography and trusted third party for providing
data security in cloud environments [20]. However,
the authors in [20] have not used the PKI infrastruc-
ture to reduce the overheads. The trusted third party
is responsible for the generation and management of
public/private keys. The trusted third party may be a
single server or multiple servers. The symmetric keys
are protected by combining the public key cryptogra-
phy and the (k, n) threshold secret sharing schemes.
Nevertheless, such schemes do not protect the data
files against tampering and loss due to issues arising
from virtualization and multi-tenancy.
A secure and optimal placement of data objects in a
distributed system is presented in [21]. An encryption
key is divided into n shares and distributed on differ-
ent sites within the network. The division of a key into
n shares is carried out through the (k, n) threshold
secret sharing scheme. The network is divided into
clusters. The number of replicas and their placement
is determined through heuristics. A primary site is
selected in each of the clusters that allocates the repli-
cas within the cluster. The scheme presented in [21]
combines the replication problem with security and
access time improvement. Nevertheless, the scheme
focuses only on the security of the encryption key.
The data files are not fragmented and are handled as
a single file. The DROPS methodology, on the other
hand, fragments the file and stores the fragments on
multiple nodes. Moreover, the DROPS methodology
focuses on the security of the data within the cloud
computing domain that is not considered in [21].
3 PRELIMINARIES
Before we go into the details of the DROPS methodol-
ogy, we introduce the related concepts in the follow-
ing for the ease of the readers.
3.1 Data Fragmentation
The security of a large-scale system, such as cloud de-
pends on the security of the system as a whole and the
security of individual nodes. A successful intrusion
into a single node may have severe consequences, not
only for data and applications on the victim node, but
also for the other nodes. The data on the victim node
may be revealed fully because of the presence of the
whole file [17]. A successful intrusion may be a result
of some software or administrative vulnerability [17].
In the case of homogeneous systems, the same flaw can
be utilized to target other nodes within the system.
The success of an attack on the subsequent nodes
will require less effort as compared to the effort on
the first node. Comparatively, more effort is required
for heterogeneous systems. However, compromising
a single file will require the effort to penetrate only
a single node. The amount of compromised data can
be reduced by making fragments of a data file and
storing them on separate nodes [17, 21]. A successful
intrusion on a single or few nodes will only provide
access to a portion of data that might not be of
any significance. Moreover, if an attacker is uncertain
about the locations of the fragments, the probability
of finding fragments on all of the nodes is very low.
Let us consider a cloud with M nodes and a file
with z number of fragments. Let s be the number of
successful intrusions on distinct nodes, such that s>z.
The probability that s number of victim nodes contain
all of the z sites storing the file fragments (represented
by P(s,z)) is given as:
P(s,z) = \frac{\binom{s}{z}\,\binom{M-s}{s-z}}{\binom{M}{s}}.   (1)
If M = 30, s = 10, and z = 7, then P(10,7) = 0.0046.
However, if we choose M = 50, s = 20, and z = 15,
then P(20,15) = 0.000046. With the increase in M, the
probability reduces further. Therefore, we
can say that the greater the value of M, the less prob-
able that an attacker will obtain the data file. In cloud
systems with thousands of nodes, the probability for
an attacker to obtain a considerable amount of data reduces significantly. However, placing each fragment
once in the system will increase the data retrieval
time. To improve the data retrieval time, fragments
can be replicated in a manner that reduces retrieval
time to an extent that does not increase the aforesaid
probability.
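To make the figures above easy to reproduce, the following minimal Python sketch evaluates Eq. (1) with the standard library's math.comb; the function name is illustrative and not taken from the paper.

```python
from math import comb

def p_compromise(M: int, s: int, z: int) -> float:
    """Eq. (1): probability that s compromised nodes (out of M)
    include all z nodes holding the fragments of one file."""
    return comb(s, z) * comb(M - s, s - z) / comb(M, s)

print(round(p_compromise(30, 10, 7), 4))    # ~0.0046, as quoted above
print(round(p_compromise(50, 20, 15), 6))   # ~0.00005, the same order as quoted above
```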
3.2 Centrality
The centrality of a node in a graph provides the
measure of the relative importance of a node in the
network. The objective of improved retrieval time
in replication makes the centrality measures more
important. There are various centrality measures; for
instance, closeness centrality, degree centrality, be-
tweenness centrality, eccentricity centrality, and eigen-
vector centrality. We only elaborate on the closeness,
betweenness, and eccentricity centralities because we
are using the aforesaid three centralities in this work.
For the remainder of the centralities, we encourage
the readers to review [24].
3.2.1 Betweenness Centrality
The betweenness centrality of a node n is the number
of the shortest paths, between other nodes, passing
through n [24]. Formally, the betweenness centrality
of any node v in a network is given as:
C_b(v) = \sum_{a \neq v \neq b} \frac{\delta_{ab}(v)}{\delta_{ab}},   (2)
where δab is the total number of shortest paths be-
tween a and b, and δab(v) is the number of shortest
paths between a and b passing through v. The variable
Cb(v) denotes the betweenness centrality for node v.
3.2.2 Closeness Centrality
A node is said to be closer with respect to all of
the other nodes within a network, if the sum of the
distances from all of the other nodes is lower than
the sum of the distances of other candidate nodes
from all of the other nodes [24]. The lower the sum
of distances from the other nodes, the more central is
the node. Formally, the closeness centrality of a node
v in a network is defined as:
C_c(v) = \frac{N - 1}{\sum_{a \neq v} d(v,a)},   (3)
where N is total number of nodes in a network and
d(v,a) represents the distance between node v and
node a.
TABLE 1: Notations and their meanings

Symbol     Meaning
M          Total number of nodes in the cloud
N          Total number of file fragments to be placed
O_k        k-th fragment of the file
o_k        Size of O_k
S_i        i-th node
s_i        Size of S_i
cen_i      Centrality measure for S_i
colS_i     Color assigned to S_i
T          A set containing distances by which assignment of fragments must be separated
r_k^i      Number of reads for O_k from S_i
R_k^i      Aggregate read cost of r_k^i
w_k^i      Number of writes for O_k from S_i
W_k^i      Aggregate write cost of w_k^i
NN_k^i     Nearest neighbor of S_i holding O_k
c(i,j)     Communication cost between S_i and S_j
P_k        Primary node for O_k
R_k        Replication schema of O_k
RT         Replication time
3.2.3 Eccentricity
The eccentricity of a node n is the maximum distance
to any node from a node n [24]. A node is more central
in the network, if it is less eccentric. Formally, the
eccentricity can be given as:
E(v_a) = \max_{b} d(v_a, v_b),   (4)
where d(va,vb) represents the distance between node
va and node vb. It may be noted that in our evaluation
of the strategies, the centrality measures introduced above are more meaningful and relevant than simple hop-count metrics.
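As an illustration only, the three centrality measures above can be computed on a node graph with the networkx library; the paper does not prescribe any particular tool, and the toy topology below is hypothetical.

```python
import networkx as nx

# Hypothetical toy topology standing in for the cloud's node graph.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)])

betweenness  = nx.betweenness_centrality(G)   # Eq. (2), normalized by networkx
closeness    = nx.closeness_centrality(G)     # Eq. (3)
eccentricity = nx.eccentricity(G)             # Eq. (4); lower means more central

# One possible "most central first" ordering for initial fragment placement.
print(sorted(G.nodes, key=closeness.get, reverse=True))
```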
3.3 T-coloring
Suppose we have a graph G = (V,E) and a set T
containing non-negative integers including 0. The T-
coloring is a mapping function f from the vertices of V
to the set of non-negative integers, such that |f(x) − f(y)| ∉ T, where (x, y) ∈ E. The mapping function f assigns
a color to a vertex. In simple words, the distance
between the colors of the adjacent vertices must not
belong to T. Formulated by Hale [6], the T-coloring
problem for channel assignment assigns channels to
the nodes, such that the channels are separated by a
distance to avoid interference.
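A minimal sketch of the T-coloring condition, assuming colors are integer labels such as hop distances; the helper function and the example values are hypothetical.

```python
def is_t_coloring(coloring, edges, T):
    """Hale's condition: for every edge (x, y), the difference of the
    assigned colors must not belong to the set T."""
    return all(abs(coloring[x] - coloring[y]) not in T for x, y in edges)

edges = [(0, 1), (1, 2), (2, 3)]
print(is_t_coloring({0: 0, 1: 2, 2: 4, 3: 6}, edges, T={0, 1}))  # True
print(is_t_coloring({0: 0, 1: 1, 2: 4, 3: 6}, edges, T={0, 1}))  # False: |0 - 1| is in T
```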
4 DROPS
4.1 System Model
Consider a cloud that consists of M nodes, each with its own storage capacity. Let Si represent the name of the i-th node and si denote the total storage capacity of Si. The communication time between Si and Sj is the total time of all of the links within a selected path
from Si to Sj, represented by c(i, j). We consider N number of file fragments such that Ok denotes the k-th fragment of a file while ok represents the size of the k-th fragment. Let the total read and write requests from Si for Ok be represented by r_k^i and w_k^i, respectively. Let Pk denote the primary node that stores the primary copy of Ok. The replication scheme for Ok, denoted by Rk, is also stored at Pk. Moreover, every Si contains a two-field record, storing Pk for Ok and NN_k^i that represents the nearest node storing Ok. Whenever there is an update in Ok, the updated version is sent to Pk that broadcasts the updated version to all of the nodes in Rk. Let b(i, j) and t(i, j) be the total bandwidth of the link and the traffic between sites Si and Sj, respectively. The centrality measure for Si is represented by ceni. Let colSi store the value of the color assigned to Si. The colSi can have one out of two
values, namely: open color and close color. The value
open color represents that the node is available for
storing the file fragment. The value close color shows
that the node cannot store the file fragment. Let T be
a set of integers starting from zero and ending on a
prespecified number. If the selected number is three,
then T = {0,1,2,3}. The set T is used to restrict the
node selection to those nodes that are at hop-distances
not belonging to T. For the ease of reading, the most
commonly used notations are listed in Table 1.
Our aim is to minimize the overall total network transfer time, or replication time (RT), also termed the replication cost (RC). The RT is composed of two factors: (a) time due to read requests and (b) time due to write requests. The total read time of Ok by Si from NN_k^i is denoted by R_k^i and is given by:

R_k^i = r_k^i \, o_k \, c(i, NN_k^i).   (5)

The total time due to the writing of Ok by Si, addressed to Pk, is represented as W_k^i and is given by:

W_k^i = w_k^i \, o_k \left( c(i, P_k) + \sum_{j \in R_k,\, j \neq i} c(P_k, j) \right).   (6)

The overall RT is represented by:

RT = \sum_{i=1}^{M} \sum_{k=1}^{N} \left( R_k^i + W_k^i \right).   (7)
The storage capacity constraint states that a file frag-
ment can only be assigned to a node if the storage capacity of the node is greater than or equal to the size of the fragment. The bandwidth constraint states that b(i, j) ≥ t(i, j) ∀ i, j. The DROPS methodology as-
signs the file fragments to the nodes in a cloud that
minimizes the RT, subject to capacity and bandwidth
constraints.
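The replication cost of Eqs. (5)-(7) can be evaluated directly once the read/write counts, fragment sizes, communication costs, and replication schemas are known. The sketch below is illustrative only; the container names are hypothetical and not taken from the paper.

```python
def replication_time(nodes, fragments, reads, writes, size, cost, nn, primary, replicas):
    """Evaluate RT of Eq. (7) from the per-node read time (Eq. 5) and write time (Eq. 6).
    reads[(i, k)] is r_k^i, writes[(i, k)] is w_k^i, size[k] is o_k,
    cost[(i, j)] is c(i, j), nn[(i, k)] is NN_k^i, primary[k] is P_k,
    and replicas[k] is the set R_k of nodes holding O_k."""
    rt = 0.0
    for i in nodes:
        for k in fragments:
            read_time = reads.get((i, k), 0) * size[k] * cost[(i, nn[(i, k)])]
            write_time = writes.get((i, k), 0) * size[k] * (
                cost[(i, primary[k])]
                + sum(cost[(primary[k], j)] for j in replicas[k] if j != i))
            rt += read_time + write_time
    return rt
```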
4.2 DROPS
In a cloud environment, a file in its totality, stored
at a node leads to a single point of failure [17]. A
successful attack on a node might put the data con-
fidentiality or integrity, or both at risk. The aforesaid
scenario can occur both in the case of intrusion or
accidental errors. In such systems, performance in
terms of retrieval time can be enhanced by employing
replication strategies. However, replication increases
the number of file copies within the cloud. Thereby,
increasing the probability of the node holding the file
to be a victim of attack as discussed in Section 1.
Security and replication are essential for a large-scale
system, such as cloud, as both are utilized to provide
services to the end user. Security and replication must
be balanced such that one service must not lower the
service level of the other.
In the DROPS methodology, we propose not to
store the entire file at a single node. The DROPS
methodology fragments the file and makes use of the
cloud for replication. The fragments are distributed
such that no node in a cloud holds more than a
single fragment, so that even a successful attack on
the node leaks no significant information. The DROPS
methodology uses controlled replication where each
of the fragments is replicated only once in the cloud to
improve the security. Although controlled replication does not improve the retrieval time to the level of full-scale replication, it significantly improves the security.
In the DROPS methodology, the user sends the data file to the cloud. The cloud manager system (a user-facing server in the cloud that entertains the user's requests), upon receiving the file, performs: (a) fragmentation, (b) a first cycle of node selection, storing one fragment on each of the selected nodes, and (c) a second cycle of node selection for fragment replication. The cloud manager keeps a record of the fragment placement and is assumed to be a secure entity.
The fragmentation threshold of the data file is specified by the file owner. The file owner can specify the fragmentation threshold either as a percentage or as the number and size of different fragments. The percentage fragmentation threshold, for instance, can dictate that each fragment will be 5% of the total size of the file. Alternatively, the owner may generate a separate file containing information about the fragment number and size, for instance, fragment 1 of size 5,000 Bytes, fragment 2 of size 8,749 Bytes. We argue that the owner of the file is the best candidate to generate the fragmentation threshold. The owner can best split the file such that each fragment does not contain a significant amount of information, as the owner is cognizant of all the facts pertaining to the data. A default percentage fragmentation threshold can be made a part of the Service Level Agreement (SLA) if the user does not specify the fragmentation threshold while uploading the data file. We primarily focus on the storage system security in this work, with the assumption that the communication channel between the user and the cloud
is secure.
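A percentage fragmentation threshold of the kind described above could be realized, for example, as in the following sketch; the function split_by_percentage is hypothetical and only illustrates cutting a byte string into pieces whose size is the requested percentage of the file.

# Sketch: percentage-based fragmentation (illustrative only, not the paper's implementation).
import math

def split_by_percentage(data: bytes, percent: float):
    """Split `data` into fragments, each roughly `percent` % of the total size."""
    frag_size = max(1, math.ceil(len(data) * percent / 100.0))
    return [data[i:i + frag_size] for i in range(0, len(data), frag_size)]

# Example: a 5% threshold yields about 20 fragments.
fragments = split_by_percentage(b"x" * 100_000, percent=5)
assert b"".join(fragments) == b"x" * 100_000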
Algorithm 1 Algorithm for fragment placement
Inputs and initializations:
  O = {O_1, O_2, ..., O_N}
  o = {sizeof(O_1), sizeof(O_2), ..., sizeof(O_N)}
  col = {open_color, close_color}
  cen = {cen_1, cen_2, ..., cen_M}
  col_Si ← open_color ∀ i
  cen_i ← centrality of S_i ∀ i
Compute:
for each O_k ∈ O do
  select S_i such that S_i = indexof(max(cen_i))
  if col_Si = open_color and s_i ≥ o_k then
    S_i ← O_k
    s_i ← s_i − o_k
    col_Si ← close_color
    S_i' ← distance(S_i, T)   /* returns all nodes at a hop-distance in T from S_i and stores them in the temporary set S_i' */
    col_Si' ← close_color
  end if
end for
Once the file is split into fragments, the DROPS
methodology selects the cloud nodes for fragment
placement. The selection is made by keeping an equal
focus on both security and performance in terms of
the access time. We choose the nodes that are most
central to the cloud network to provide better access
time. For the aforesaid purpose, the DROPS method-
ology uses the concept of centrality to reduce access
time. The centralities determine how central a node is
based on different measures as discussed in Section
3.2. We implement DROPS with three centrality mea-
sures, namely: (a) betweenness, (b) closeness, and (c)
eccentricity centrality. However, if all of the fragments
are placed on the nodes based on the descending
order of centrality, then there is a possibility that
adjacent nodes are selected for fragment placement.
Such a placement can provide clues to an attacker as
to where other fragments might be present, reducing
the security level of the data. To deal with the security
aspects of placing fragments, we use the concept of
T-coloring that was originally used for the channel
assignment problem [6]. We generate a non-negative
random number and build the set T starting from
zero to the generated random number. The set T is
used to restrict the node selection to those nodes that
are at hop-distances not belonging to T. For the said
purpose, we assign colors to the nodes, such that,
initially, all of the nodes are given the open color.
Once a fragment is placed on the node, all of the nodes
within the neighborhood at a distance belonging to
T are assigned close color. In the aforesaid process,
we lose some of the central nodes that may increase
the retrieval time but we achieve a higher security
level. If somehow the intruder compromises a node
and obtains a fragment, then the location of the
other fragments cannot be determined. The attacker
can only keep on guessing the location of the other
fragments. However, as stated previously in Section
3.1, the probability of a successful coordinated attack
is extremely minute. The process is repeated until all
of the fragments are placed at the nodes. Algorithm
1 represents the fragment placement methodology.
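The placement step can be sketched as follows; this is a minimal illustration of the centrality-plus-T-coloring idea, assuming networkx is available for the centrality and hop-distance computations, and treating the inverse of eccentricity as the eccentricity-based score (an assumption for the sketch, not a definition taken from the paper).

# Sketch of centrality-driven placement with T-coloring, in the spirit of Algorithm 1.
import networkx as nx

def place_fragments(graph, fragments, capacity, T, centrality="closeness"):
    """Place each fragment on the most central open node; close every node
    whose hop-distance to the chosen node lies in T."""
    if centrality == "closeness":
        cen = nx.closeness_centrality(graph)
    elif centrality == "betweenness":
        cen = nx.betweenness_centrality(graph)
    else:  # eccentricity-based score: prefer nodes with small eccentricity (assumption)
        ecc = nx.eccentricity(graph)
        cen = {v: 1.0 / ecc[v] for v in graph}

    open_nodes = set(graph.nodes)          # nodes still carrying open_color
    placement = {}                         # fragment -> node
    for frag, frag_size in fragments.items():
        candidates = [v for v in open_nodes if capacity[v] >= frag_size]
        if not candidates:
            break                          # no open node can hold this fragment
        best = max(candidates, key=lambda v: cen[v])
        placement[frag] = best
        capacity[best] -= frag_size
        # T-coloring: close the chosen node and every node at a hop-distance in T.
        dist = nx.single_source_shortest_path_length(graph, best)
        open_nodes -= {v for v, d in dist.items() if d in T}
    return placement

Since 0 belongs to T, the chosen node itself is closed in the same step, which matches the rule that no node holds more than one fragment.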
In addition to placing the fragments on the central nodes, we also perform controlled replication to increase the data availability and reliability and to improve the data retrieval time. We place a replica of a fragment on the node that provides the lowest access cost, with the objective of improving the retrieval time when the fragments are accessed to reconstruct the original file. While replicating a fragment, the separation of fragments enforced through T-coloring in the placement technique is also taken care of. In the case of a large number of fragments or a small number of nodes, it is possible that some of the fragments are left without being replicated because of the T-coloring. As discussed previously, T-coloring prohibits storing a fragment in the neighborhood of a node that already stores a fragment, which eliminates a number of nodes from being used for storage. In such a case, only for the remaining fragments, the nodes that do not hold any fragment are selected randomly for storage. The replication strategy is presented in Algorithm 2.
To handle a download request from the user, the cloud manager collects all of the fragments from the nodes and re-assembles them into a single file. Afterwards, the file is sent to the user.
Algorithm 2 Algorithm for fragment replication
for each O_k ∈ O do
  select S_i that has max(R_k^i + W_k^i)
  if col_Si = open_color and s_i ≥ o_k then
    S_i ← O_k
    s_i ← s_i − o_k
    col_Si ← close_color
    S_i' ← distance(S_i, T)   /* returns all nodes at a hop-distance in T from S_i and stores them in the temporary set S_i' */
    col_Si' ← close_color
  end if
end for
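The replication step can be sketched as below, reusing the per-node, per-fragment read and write times from Equations (5) and (6); the access_time helper and the surrounding data structures are assumptions made purely for illustration.

# Sketch of Algorithm 2: replicate each fragment once, on the open node
# that currently generates the highest read + write time for it.
def replicate_fragments(fragments, open_nodes, capacity, access_time, close_neighbors):
    """access_time(i, k)   : R_k^i + W_k^i for node i and fragment k (Eqs. 5-6)
    close_neighbors(i)  : nodes at a hop-distance in T from i (T-coloring)"""
    replica_of = {}
    for frag, frag_size in fragments.items():
        candidates = [i for i in open_nodes if capacity[i] >= frag_size]
        if not candidates:
            continue                       # fragment left unreplicated because of T-coloring
        best = max(candidates, key=lambda i: access_time(i, frag))
        replica_of[frag] = best
        capacity[best] -= frag_size
        open_nodes -= {best} | set(close_neighbors(best))
    return replica_of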
4.3 Discussion
A node is compromised with a certain amount of an attacker's effort. If the compromised node stores the data file in its totality, then a successful attack on a cloud node results in the compromise of the entire data file.
However, if the node stores only a fragment of a file,
then a successful attack reveals only a fragment of
a data file. Because the DROPS methodology stores
fragments of data files over distinct nodes, an attacker
has to compromise a large number of nodes to obtain
meaningful information. The number of compromised
TABLE 2: Various attacks handled by the DROPS methodology
Attack: Description
Data recovery: Rollback of a VM to some previous state; may expose previously stored data.
Cross-VM attack: A malicious VM attacking a co-resident VM, which may lead to a data breach.
Improper media sanitization: Data exposure due to improper sanitization of storage devices.
E-discovery: Data exposure of one user due to hardware seized for investigations related to some other users.
VM escape: A malicious user or VM escapes from the control of the VMM, providing access to storage and compute devices.
VM rollback: Rollback of a VM to some previous state; may expose previously stored data.
nodes must be greater than n because each compromised node may not yield a fragment in the DROPS methodology, as the nodes are separated based on the T-coloring. Alternatively, an attacker has to compromise the authentication system of the cloud [23]. The effort required by an attacker to compromise the data confidentiality (in systems dealing with fragments/shares of data) is given in [23] as:

$$E_{Conf} = \min\left(E_{Auth},\; n \times E_{BreakIn}\right), \qquad (8)$$

where E_Conf is the effort required to compromise the confidentiality, E_Auth is the effort required to compromise the authentication, and E_BreakIn is the effort required to compromise a single node. Our focus in
this paper is on the security of the data in the cloud
and we do not take into account the security of the
authentication system. Therefore, we can say that to
obtain n fragments, the effort of an attacker increases
by a factor of n. Moreover, in the case of the DROPS methodology, the attacker must correctly guess the nodes storing the fragments of the file. Therefore, in the worst-case scenario, the set of nodes compromised by the attacker will contain all of the nodes storing the file fragments. From Equation (1), we observe that the probability of the worst case being successful is very low. The probability that some of the machines (average case) storing the file fragments will be selected is high in comparison to the worst-case probability.
However, the compromised fragments will not be
enough to reconstruct the whole data. In terms of
the probability, the worst, average, and best cases are
dependent on the number of nodes storing fragments
that are selected for an attack. Therefore, all of the
three cases are captured by Equation (1).
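As a purely illustrative reading of Equation (8) (the choice of n = 10 below is hypothetical, not a value used in the paper), splitting a file into ten fragments placed on ten distinct nodes gives

$$E_{Conf} = \min\left(E_{Auth},\; 10 \times E_{BreakIn}\right),$$

so unless the authentication system is easier to compromise than ten individual nodes, the attacker's break-in effort grows roughly tenfold compared to storing the whole file on a single node.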
Besides the general attack of a compromised node, the DROPS methodology can handle attacks in which the attacker gets hold of user data by avoiding or disrupting the security defenses. Table 2 presents some of the attacks that are handled by the DROPS methodology. The presented attacks are cloud-specific and stem from the cloud's core technologies. Table 2 also provides a brief description of the attacks. It is noteworthy that even in the case of the successful attacks mentioned, the DROPS methodology ensures that the attacker obtains only a fragment of a file, as the DROPS methodology stores only a single fragment on a node. Moreover, the successful attack has to be on a node that stores a fragment.
5 EXPERIMENTAL SETUP AND RESULTS
The communication backbone of cloud computing is the Data Center Network (DCN) [2]. In this paper, we use three DCN architectures, namely: (a) Three tier, (b) Fat tree, and (c) Dcell [1]. The Three tier is the legacy DCN architecture. However, to meet the growing demands of cloud computing, the Fat tree and Dcell architectures were proposed [2]. Therefore, we use the aforementioned three architectures to evaluate the performance of our scheme on legacy as well as state-of-the-art architectures. The Fat tree and Three tier architectures are switch-centric networks. The nodes are connected to the access layer switches, multiple access layer switches are connected using aggregate layer switches, and core layer switches interconnect the aggregate layer switches. The Dcell is a server-centric network architecture that uses servers in addition to switches to perform the communication process within the network [1]. A server in the Dcell architecture is connected to other servers and a switch. The lower-level dcells recursively build the higher-level dcells, and the dcells at the same level are fully connected. For details about the aforesaid architectures and their performance analysis, the readers are encouraged to read [1] and [2].
5.1 Comparative techniques
We compared the results of the DROPS methodology with fine-grained replication strategies, namely: (a) DRPA-star, (b) WA-star, (c) Aε-star, (d) SA1, (e) SA2, (f) SA3, (g) Local Min-Min (LMM), (h) Global Min-Min (GMM), (i) Greedy algorithm, and (j) Genetic Replication Algorithm (GRA). The DRPA-star is a data replication algorithm based on the A-star best-first search algorithm. The DRPA-star starts from the null solution that is called the root node. The communication cost at each node n is computed as cost(n) = g(n) + h(n), where g(n) is the path cost for reaching n and h(n), called the heuristic cost, is the estimate of the cost from n to the goal node. The DRPA-star searches all of the solutions of allocating a fragment to a node. The solution that minimizes the cost within the constraints is explored, while the others are discarded. The selected solution is inserted into a list called
the OPEN list. The list is ordered in ascending order so that the solution with the minimum cost is expanded first. The heuristic used by the DRPA-star is given as h(n) = max(0, mmk(n) − g(n)), where mmk(n) is the least-cost replica allocation, or the max-min RC. Readers are encouraged to see the details about DRPA-star in [13]. The WA-star is a refinement of the DRPA-star that implements a weighted function to evaluate the cost. The function is given as: f(n) = g(n) + h(n) + ε(1 − d(n)/D)h(n). The variable d(n) represents the depth of the node n and D denotes the expected depth of the goal node [13]. The Aε-star is also a variation of the DRPA-star that uses two lists, OPEN and FOCAL. The FOCAL list contains only those nodes from the OPEN list whose f does not deviate from the lowest f by a factor greater than 1 + ε. The node expansion is performed from the FOCAL list instead of the OPEN list. Further details about WA-star and Aε-star can be found in [13]. The SA1 (sub-optimal assignments), SA2, and SA3 are DRPA-star based heuristics. In SA1, at level R or below, only the best successors of node n having the least expansion cost are selected. The SA2 selects the best successors of node n only the first time it reaches the depth level R; all other successors are discarded. The SA3 works similarly to SA2, except that all nodes are removed from the OPEN list except the one with the lowest cost. Readers are encouraged to read [13] for further details about SA1, SA2, and SA3. The LMM can be considered a special case of the bin packing algorithm. The LMM sorts the file fragments based on the RC of the fragments to be stored at a node and then assigns the fragments in ascending order. In case of a tie, the file fragment with the minimum size is selected for assignment (the name Local Min-Min is derived from this policy). The GMM selects the file fragment with the global minimum of all the RCs associated with a file fragment. In case of a tie, the file fragment is selected at random. The Greedy algorithm first iterates through all of the M cloud nodes to find the best node for allocating a file fragment; the node with the lowest replication cost is selected. The second node for the fragment is selected in the second iteration; however, in the second iteration the node is selected that produces the lowest RC in combination with the node already selected. The process is repeated for all of the file fragments. Details of the Greedy algorithm can be found in [18]. The GRA consists of chromosomes representing various schemes for storing file fragments over cloud nodes. Every chromosome consists of M genes, each representing a node, and every gene is an N-bit string. If the k-th file fragment is to be assigned to S_i, then the k-th bit of the i-th gene holds the value of one. Genetic algorithms perform the operations of selection, crossover, and mutation. The value for the crossover rate (µc) was selected as 0.9, while the value for the mutation rate (µm) was 0.01. The use of these values for µc and µm is advocated in [16]. The best chromosome represents the solution. The GRA utilizes a mix-and-match strategy to reach the solution. More details about the GRA can be obtained from [16].
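The Greedy strategy described above can be approximated by the short sketch below; rc(chosen, frag, node) is a hypothetical helper returning the replication cost of adding node to the already chosen replica set of frag, and the sketch is a simplification of the algorithm in [18] rather than a faithful reimplementation.

# Rough sketch of the Greedy replica placement described above (simplified from [18]).
def greedy_placement(fragments, nodes, replicas_per_fragment, rc):
    """For each fragment, iteratively pick the node that yields the lowest
    replication cost in combination with the nodes already selected."""
    placement = {frag: [] for frag in fragments}
    for frag in fragments:
        for _ in range(replicas_per_fragment):
            remaining = [n for n in nodes if n not in placement[frag]]
            if not remaining:
                break
            best = min(remaining, key=lambda n: rc(placement[frag], frag, n))
            placement[frag].append(best)
    return placement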
5.2 Workload
The sizes of the files were generated using a uniform distribution between 10 Kb and 60 Kb. The primary nodes were randomly selected for the replication algorithms. For the DROPS methodology, the S_i's selected during the first cycle of node selection by Algorithm 1 were considered as the primary nodes.
The capacity of a node was generated using a uniform distribution between (1/2 CS)C and (3/2 CS)C, where 0 ≤ C ≤ 1. For instance, for CS = 150 and C = 0.6, the capacities of the nodes were uniformly distributed between 45 and 135. The mean value of g in the OPEN and FOCAL lists was selected as the value of ε for WA-star and Aε-star, respectively. The value for the level R was set to ⌊d/2⌋, where d is the depth of the search tree (the number of fragments).
The read/write (R/W) ratio for the simulations that used a fixed value was selected to be 0.25 (an R/W ratio reflecting 25% reads and 75% writes within the cloud). The reason for choosing a high workload (lower percentage of reads and higher percentage of writes) was to evaluate the performance of the techniques under extreme cases. The simulations that studied the impact of change in the R/W ratio used various workloads in terms of R/W ratios. The R/W ratios selected were in the range of 0.10 to 0.90. The selected range covered the effect of high, medium, and low workloads with respect to the R/W ratio.
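A workload of the kind described above could be generated with a small helper such as the sketch below; the function and parameter names are illustrative, and the size unit simply follows the text.

# Sketch: generating the simulation workload described in Section 5.2 (illustrative only).
import random

def generate_workload(num_fragments, num_nodes, CS=150, C=0.6, seed=0):
    rng = random.Random(seed)
    # Fragment sizes: uniform between 10 Kb and 60 Kb.
    sizes = [rng.uniform(10, 60) for _ in range(num_fragments)]
    # Node capacities: uniform between (1/2 * CS) * C and (3/2 * CS) * C,
    # e.g. CS = 150, C = 0.6 gives capacities between 45 and 135.
    capacities = [rng.uniform(0.5 * CS * C, 1.5 * CS * C) for _ in range(num_nodes)]
    return sizes, capacities

sizes, capacities = generate_workload(num_fragments=50, num_nodes=100)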
5.3 Results and Discussion
We compared the performance of the DROPS method-
ology with the algorithms discussed in Section 5.1.
The behavior of the algorithms was studied by: (a) increasing the number of nodes in the system, (b) increasing the number of objects while keeping the number of nodes constant, (c) changing the nodes' storage capacity, and (d) varying the read/write ratio. The aforesaid parameters are significant as they affect the problem size and the performance of the algorithms [13].
5.3.1 Impact of increase in number of cloud nodes
We studied the performance of the placement tech-
niques and the DROPS methodology by increasing the
number of nodes. The performance was studied for
the three discussed cloud architectures. The numbers
of nodes selected for the simulations were 100, 500,
1,024, 2,400, and 30,000. The number of nodes in
the Dcell architecture increases exponentially [2]. For
a Dcell architecture with two nodes in the Dcell0, the architecture consists of 2,400 nodes. However, adding a single node to the Dcell0 increases the total number of nodes to 30,000 [2]. The number of file fragments
Fig. 2: (a) RC versus number of nodes (Three tier) (b) RC versus number of nodes (Fat tree)
Fig. 3: (a) RC versus number of nodes (Dcell) (b) RC versus number of nodes for DROPS variations with
maximum available capacity constraint (Three tier)
Fig. 4: RC versus number of nodes for DROPS variations with maximum available capacity constraints (a) Fat
tree (b) Dcell
was set to 50. For the first experiment we used C = 0.2. Fig. 2 (a), Fig. 2 (b), and Fig. 3 (a) show the results for the Three tier, Fat tree, and Dcell architectures, respectively. The reduction in the network transfer time for a file is reported as the RC savings. In the figures, BC stands for betweenness centrality, CC stands for closeness centrality, and EC stands for eccentricity centrality. The interesting observation is that although all of the algorithms showed a similar trend in performance within a specific architecture, the performance of the algorithms was better in the Dcell architecture as compared to the Three tier and Fat tree architectures. This is because the Dcell architecture exhibits better inter-node connectivity and robustness [2]. The DRPA-star gave the best solutions as compared to the other techniques and registered consistent performance with the increase in the number of nodes. Similarly, WA-star, Aε-star, GRA, Greedy, and SA3 showed almost consistent performance with various numbers of nodes. The performance of LMM and GMM gradually increased with the increase in the number of nodes, since the increase in the number of nodes increased the number of bins. The SA1 and SA2 also showed almost constant performance in all of the three architectures. However, it is important to note that SA2 ended up with a decrease in performance
Fig. 5: (a) RC versus number of file fragments (Three tier) (b) RC versus number of file fragments (Fat tree)
as compared to its initial performance. This may be due to the fact that SA2 only expands the node with the minimum cost when it reaches a certain depth for the first time. Such one-time pruning might have purged nodes that would have provided better global access time. The DROPS methodology did not employ full-scale replication; every fragment is replicated only once in the system. The smaller number of replicas of any fragment and the separation of nodes by T-coloring decreased the probability of an attacker finding that fragment. Therefore, the increase in the security level of the data is accompanied by a drop in performance as compared to the comparative techniques discussed in this paper. It is important to note that the DROPS methodology was implemented using three centrality measures, namely: (a) betweenness, (b) closeness, and (c) eccentricity. However, Fig. 2(a) and Fig. 2(b) show only a single DROPS curve. Due to the inherent structure of the Three tier and Fat tree architectures, all of the nodes in the network are at the same distance from each other or exist at the same level. Therefore, the centrality measure is the same for all of the nodes. This results in the selection of the same node for storing the file fragment. Consequently, the performance showed the same value and all three lines lie on the same points. However, this is not the case for the Dcell architecture. In the Dcell architecture, nodes have different centrality measures, resulting in the selection of different nodes. It is noteworthy that in Fig. 3(a) the eccentricity centrality performs better as compared to the closeness and betweenness centralities, because the nodes with higher eccentricity centrality are located closer to all other nodes within the network. To check the effect of the closeness and betweenness centralities, we modified the heuristic presented in Algorithm 1. Instead of selecting the node with the criterion of maximum centrality only, we selected the node with: (a) maximum centrality and (b) maximum available storage capacity. The results are presented in Fig. 3 (b), Fig. 4 (a), and Fig. 4 (b). It is evident that the eccentricity centrality resulted in the highest performance, while the betweenness centrality showed the lowest performance. The reason for this is that nodes with higher eccentricity centrality are closer to all other nodes in the network, which results in a lower RC value for accessing the fragments.
5.3.2 Impact of increase in number of file fragments
The increase in the number of file fragments can strain the storage capacity of the cloud, which, in turn, may affect the selection of the nodes. To study the impact on performance due to the increase in the number of file fragments, we set the number of nodes to 30,000. The numbers of file fragments selected were 50, 100, 200, 300, 400, and 500. The workload was generated with C = 45% to observe the effect of an increased number of file fragments with a fairly reasonable amount of memory and to discern the performance of all the algorithms. The results are shown in Fig. 5 (a), Fig. 5 (b), and Fig. 6 (a) for the Three tier, Fat tree, and Dcell architectures, respectively. It can be observed from the plots that the increase in the number of file fragments reduced the performance of the algorithms in general. However, the Greedy algorithm showed the most improved performance. The LMM showed the highest loss in performance, a little above 16%. The loss in performance can be attributed to the storage capacity constraints that prohibited the placement of some fragments at nodes with optimal retrieval time. As discussed earlier, the DROPS methodology produced similar results in the Three tier and Fat tree architectures. However, for the Dcell architecture, it is clear that the DROPS methodology with eccentricity centrality maintains its supremacy over the other two centralities.
5.3.3 Impact of increase in storage capacity of nodes
Next, we studied the effect of a change in the nodes' storage capacity. A change in the storage capacity of the nodes may affect the number of replicas on a node due to storage capacity constraints. Intuitively, a lower node storage capacity may result in the elimination of some optimal nodes from being selected for replication because of the violation of storage capacity constraints. The elimination of some nodes may degrade the performance to some extent, because a node offering a lower access time might be pruned due to the non-availability
Fig. 6: (a) RC versus number of file fragments (Dcell) (b) RC versus nodes storage capacity (Three tier)
Fig. 7: (a) RC versus nodes storage capacity (Fat tree) (b) RC versus nodes storage capacity (Dcell)
Fig. 8: (a) RC versus R/W ratio (Three tier) (b) RC versus R/W ratio (Fat tree)
Fig. 9: RC versus R/W ratio (Dcell)
of enough storage space to store the file fragment. A higher node storage capacity allows full-scale replication of fragments, increasing the performance gain. However, node capacity above a certain level will not change the performance significantly, as replicating already replicated fragments does not produce a considerable performance increase. If the storage nodes have enough capacity to store the allocated file fragments, then a further increase in the storage capacity of a node cannot cause the fragments to be stored again. Moreover, the T-coloring allows only a single replica to be stored on any node. Therefore, after a certain point, the increase in storage capacity might not affect the performance.
We increased the nodes' storage capacity incrementally from 20% to 40%. The results are shown in Fig. 6 (b), Fig. 7 (a), and Fig. 7 (b). It is observable from
Fig. 10: Fault tolerance level of DROPS
the plots that, initially, all of the algorithms showed a significant increase in performance with an increase in the storage capacity. Afterwards, the marginal increase in performance reduces with the increase in the storage capacity. The DRPA-star, Greedy, WA-star, and Aε-star showed nearly similar behavior and recorded the highest performance. The DROPS methodology did not show any considerable change in results when compared to the previously discussed experiments (change in the number of nodes and files). This is because the DROPS methodology does not go for a full-scale replication of file fragments; rather, they are replicated only once and a single node stores only a single fragment. Single-time replication does not require high storage capacity. Therefore, the change in the nodes' storage capacity did not affect the performance of DROPS to a notable extent.
5.3.4 Impact of increase in the read/write ratio
The change in R/W ratio affects the performance of
the discussed comparative techniques. An increase in the number of reads would lead to a need for more replicas of the fragments in the cloud. The increased
number of replicas decreases the communication cost
associated with the reading of fragments. However,
the increased number of writes demands that the
replicas be placed closer to the primary node. The
presence of replicas closer to the primary node results
in decreased RC associated with updating replicas.
The higher write ratios may increase the traffic on the
network for updating the replicas.
Fig. 8 (a), Fig. 8 (b), and Fig. 9 show the perfor-
mance of the comparative techniques and the DROPS
methodology under varying R/W ratios. It is ob-
served that all of the comparative techniques showed
an increase in the RC savings up to the R/W ratio of
0.50. The decrease in the number of writes caused the
reduction of cost associated with updating the replicas
of the fragments. However, all of the comparative techniques showed some decrease in RC savings for R/W ratios above 0.50. This may be attributed to the fact that an increase in the number of reads causes more replicas of the fragments to be placed, resulting in an increased cost of updating the replicas. Therefore, at R/W ratios above 0.50, the increased cost of updating the replicas outweighs the advantage of the decreased reading cost obtained from a higher number of replicas. It is also important to mention that even at higher R/W ratio values the DRPA-star, WA-star, Aε-star, and Greedy algorithms almost maintained their initial RC savings values. The high performance of the aforesaid algorithms is due to the fact that these algorithms focus on the global RC value while replicating the fragments. Therefore, the global perception of these algorithms resulted in high performance. Alternatively, the LMM and GMM did not show substantial performance due to their local RC view while assigning a fragment to a node. The SA1, SA2, and SA3 suffered due to their restricted search trees that probably ignored some globally high-performing nodes during expansion. The DROPS methodology maintained almost consistent performance, as is observable from the plots. The reason for this is that the DROPS methodology replicates the fragments only once, so varying the R/W ratio did not affect the results considerably. However, slight changes in the RC value were observed. This might be because different nodes generate a high cost for the R/W of fragments at different R/W ratios.
As discussed earlier, the comparative techniques
focus on the performance and try to reduce the RC
as much as possible. The DROPS methodology, on
the other hand, is proposed to collectively approach
the security and performance. To increase the security level of the data, the DROPS methodology sacrifices the performance to a certain extent. Therefore, we see a drop in the performance of the DROPS methodology as compared to the discussed comparative techniques. However, the drop in performance is accompanied by a much needed increase in the security level.
Moreover, it is noteworthy that the difference in the performance level of the DROPS methodology and the comparative techniques is smallest with the reduced storage capacity of the nodes (see Fig. 6 (b), Fig. 7 (a), and Fig. 7 (b)). The reduced storage capacity prevents the comparative techniques from placing as many replicas as are required for optimized performance. A further reduction in the storage capacity will tend to lower the performance of the comparative techniques even further. Therefore, we conclude that the difference in the performance level of the DROPS methodology and the comparative techniques is smallest when the comparative techniques reduce the extensiveness of replication for any reason.
Because the DROPS methodology reduces the number of replicas, we have also investigated the fault tolerance of the DROPS methodology. If the two nodes storing the same file fragment fail, the result will be an incomplete or faulty file. We randomly picked and failed the nodes to check what percentage of failed nodes results in the loss of data, or the
TABLE 3: Average RC (%) savings for increase in number of nodes
Architecture | DRPA-star | LMM | WA-star | GMM | Aε-star | SA1 | SA2 | SA3 | Greedy | GRA | DROPS-BC | DROPS-CC | DROPS-EC
Three tier | 74.70 | 36.23 | 72.55 | 45.62 | 71.82 | 59.86 | 49.09 | 64.38 | 69.10 | 66.10 | 24.41 | 24.41 | 24.41
Fat tree | 76.76 | 38.95 | 75.22 | 45.77 | 73.33 | 60.89 | 52.67 | 68.33 | 71.64 | 70.54 | 23.28 | 23.28 | 23.28
Dcell | 79.60 | 44.32 | 76.51 | 46.34 | 76.43 | 62.03 | 54.90 | 71.53 | 73.09 | 72.34 | 23.06 | 25.16 | 30.20

TABLE 4: Average RC (%) savings for increase in number of fragments
Architecture | DRPA-star | LMM | WA-star | GMM | Aε-star | SA1 | SA2 | SA3 | Greedy | GRA | DROPS-BC | DROPS-CC | DROPS-EC
Three tier | 74.63 | 40.08 | 69.69 | 48.67 | 68.82 | 60.29 | 49.65 | 62.18 | 71.25 | 64.44 | 23.93 | 23.93 | 23.93
Fat tree | 75.45 | 44.33 | 70.90 | 52.66 | 70.58 | 61.12 | 51.09 | 64.64 | 71.73 | 66.90 | 23.42 | 23.42 | 23.42
Dcell | 76.08 | 45.90 | 72.49 | 52.78 | 72.33 | 62.12 | 50.02 | 64.66 | 70.92 | 69.50 | 23.17 | 25.35 | 28.17

TABLE 5: Average RC (%) savings for increase in storage capacity
Architecture | DRPA-star | LMM | WA-star | GMM | Aε-star | SA1 | SA2 | SA3 | Greedy | GRA | DROPS-BC | DROPS-CC | DROPS-EC
Three tier | 72.37 | 28.26 | 71.99 | 40.63 | 71.19 | 59.29 | 48.67 | 61.83 | 72.09 | 63.54 | 19.89 | 19.89 | 19.89
Fat tree | 69.19 | 28.34 | 70.73 | 41.99 | 66.20 | 60.28 | 51.29 | 61.83 | 69.33 | 62.16 | 21.60 | 21.60 | 21.60
Dcell | 73.57 | 31.04 | 71.37 | 42.41 | 67.70 | 60.79 | 50.42 | 63.78 | 69.64 | 64.03 | 21.91 | 22.88 | 24.68

TABLE 6: Average RC (%) savings for increase in R/W ratio
Architecture | DRPA-star | LMM | WA-star | GMM | Aε-star | SA1 | SA2 | SA3 | Greedy | GRA | DROPS-BC | DROPS-CC | DROPS-EC
Three tier | 77.28 | 32.54 | 76.32 | 53.20 | 75.38 | 55.13 | 49.61 | 59.74 | 73.64 | 58.27 | 24.08 | 24.08 | 24.08
Fat tree | 76.29 | 31.47 | 74.81 | 52.08 | 73.37 | 53.33 | 49.35 | 57.87 | 71.61 | 57.47 | 23.68 | 23.68 | 23.68
Dcell | 78.72 | 33.66 | 78.03 | 55.82 | 76.47 | 57.44 | 52.28 | 61.94 | 74.54 | 60.16 | 23.32 | 23.79 | 24.23
selection of two nodes storing the same file fragment. The numbers of nodes used in the aforesaid experiment were 500, 1,024, 2,400, and 30,000. The number of file fragments was set to 50. The results are shown in Fig. 10. As can be seen in Fig. 10, the increase in the number of nodes increases the fault tolerance level. The random failures produced an acceptable fault tolerance percentage for a sufficiently large number of nodes.
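The random-failure experiment can be sketched as below; fragment_nodes is assumed to map each fragment to the nodes holding it, and the loop estimates the failed-node percentage at which some fragment loses every node that stores it. The structure is an illustration, not the simulation code used for Fig. 10.

# Sketch: estimating the fault tolerance level of the kind reported in Fig. 10.
import random

def failure_threshold(fragment_nodes, num_nodes, trials=100, seed=0):
    """Average percentage of randomly failed nodes at which at least one
    fragment loses every node that stores it (i.e., the file becomes faulty)."""
    rng = random.Random(seed)
    thresholds = []
    for _ in range(trials):
        order = list(range(num_nodes))
        rng.shuffle(order)                      # random failure order
        failed = set()
        for count, node in enumerate(order, start=1):
            failed.add(node)
            if any(set(nodes) <= failed for nodes in fragment_nodes.values()):
                thresholds.append(100.0 * count / num_nodes)
                break
    return sum(thresholds) / len(thresholds)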
We report the average RC (%) savings in Table 3, Ta-
ble 4, Table 5, and Table 6. The averages are computed
over all of the RC (%) savings within a certain class of
experiments. Table 3 reveals the average results of all
of the experiments conducted to observe the impact of
increase in the number of nodes in the cloud for all of
the three discussed cloud architectures. Table 4 depicts
the average RC (%) savings for the increase in the
number of fragments. Table 5 and Table 6 describe the
average results for the increase the storage capacity
and R/W ratio, respectively. It is evident from the
average results that the Dcell architecture showed
better results due to its higher connectivity ratio.
6 CONCLUSIONS
We proposed the DROPS methodology, a cloud storage security scheme that collectively deals with security and performance in terms of retrieval time. The data file was fragmented and the fragments were dispersed over multiple nodes. The nodes were separated by means of T-coloring. The fragmentation and dispersal ensured that no significant information was obtainable by an adversary in the case of a successful attack. No node in the cloud stored more than a single fragment of the same file. The performance of the DROPS methodology was compared with full-scale replication techniques. The results of the simulations revealed that the simultaneous focus on security and performance resulted in an increased security level of the data, accompanied by a slight performance drop.
Currently, with the DROPS methodology, a user has to download the file, update the contents, and upload it again. It is strategic to develop an automatic update mechanism that can identify and update only the required fragments. The aforesaid future work will save
the time and resources utilized in downloading, updating, and uploading the file again. Moreover, the implications of TCP incast on the DROPS methodology, which are relevant to distributed data storage and access, need to be studied.
REFERENCES
[1] K. Bilal, S. U. Khan, L. Zhang, H. Li, K. Hayat, S. A. Madani,
N. Min-Allah, L. Wang, D. Chen, M. Iqbal, C. Z. Xu, and A. Y.
Zomaya, “Quantitative comparisons of the state of the art data
center architectures,” Concurrency and Computation: Practice and
Experience, Vol. 25, No. 12, 2013, pp. 1771-1783.
[2] K. Bilal, M. Manzano, S. U. Khan, E. Calle, K. Li, and A.
Zomaya, “On the characterization of the structural robustness
of data center networks,” IEEE Transactions on Cloud Computing,
Vol. 1, No. 1, 2013, pp. 64-77.
[3] D. Boru, D. Kliazovich, F. Granelli, P. Bouvry, and A. Y. Zomaya,
“Energy-efficient data replication in cloud computing datacen-
ters,” In IEEE Globecom Workshops, 2013, pp. 446-451.
[4] Y. Deswarte, L. Blain, and J-C. Fabre, “Intrusion tolerance in dis-
tributed computing systems,” In Proceedings of IEEE Computer
Society Symposium on Research in Security and Privacy, Oakland
CA, pp. 110-121, 1991.
[5] B. Grobauer, T.Walloschek, and E. Stocker, “Understanding
cloud computing vulnerabilities,” IEEE Security and Privacy, Vol.
9, No. 2, 2011, pp. 50-57.
[6] W. K. Hale, “Frequency assignment: Theory and applications,”
Proceedings of the IEEE, Vol. 68, No. 12, 1980, pp. 1497-1514.
[7] K. Hashizume, D. G. Rosado, E. Fernández-Medina, and E. B.
Fernandez, “An analysis of security issues for cloud comput-
ing,” Journal of Internet Services and Applications, Vol. 4, No. 1,
2013, pp. 1-13.
[8] M. Hogan, F. Liu, A. Sokol, and J. Tong, “NIST cloud computing
standards roadmap,” NIST Special Publication, July 2011.
[9] W. A. Jansen, “Cloud hooks: Security and privacy issues in
cloud computing,” In 44th Hawaii IEEE International Conference
on System Sciences (HICSS), 2011, pp. 1-10.
[10] A. Juels and A. Oprea, “New approaches to security and
availability for cloud data,” Communications of the ACM, Vol.
56, No. 2, 2013, pp. 64-73.
[11] G. Kappes, A. Hatzieleftheriou, and S. V. Anastasiadis, “Dike:
Virtualization-aware Access Control for Multitenant Filesys-
tems,” University of Ioannina, Greece, Technical Report No.
DCS2013-1, 2013.
[12] L. M. Kaufman, “Data security in the world of cloud comput-
ing,” IEEE Security and Privacy, Vol. 7, No. 4, 2009, pp. 61-64.
[13] S. U. Khan, and I. Ahmad, “Comparison and analysis of
ten static heuristics-based Internet data replication techniques,”
Journal of Parallel and Distributed Computing, Vol. 68, No. 2, 2008,
pp. 113-136.
[14] A. N. Khan, M. L. M. Kiah, S. U. Khan, and S. A. Madani,
“Towards Secure Mobile Cloud Computing: A Survey,” Future
Generation Computer Systems, Vol. 29, No. 5, 2013, pp. 1278-1299.
[15] A. N. Khan, M. L. M. Kiah, S. A. Madani, and M. Ali, “Enhanced dynamic credential generation scheme for protection of user identity in mobile-cloud computing,” The Journal of Supercomputing, Vol. 66, No. 3, 2013, pp. 1687-1706.
[16] T. Loukopoulos and I. Ahmad, “Static and adaptive dis-
tributed data replication using genetic algorithms,” Journal of
Parallel and Distributed Computing, Vol. 64, No. 11, 2004, pp.
1270-1285.
[17] A. Mei, L. V. Mancini, and S. Jajodia, “Secure dynamic frag-
ment and replica allocation in large-scale distributed file sys-
tems,” IEEE Transactions on Parallel and Distributed Systems, Vol.
14, No. 9, 2003, pp. 885-896.
[18] L. Qiu, V. N. Padmanabhan, and G. M. Voelker, “On the
placement of web server replicas,” In Proceedings of INFOCOM
2001, Twentieth Annual Joint Conference of the IEEE Computer and
Communications Societies, Vol. 3, pp. 1587-1596, 2001.
[19] D. Sun, G. Chang, L. Sun, and X. Wang, “Surveying and
analyzing security, privacy and trust issues in cloud computing
environments,” Procedia Engineering, Vol. 15, 2011, pp. 2852-2856.
[20] Y. Tang, P. P. Lee, J. C. S. Lui, and R. Perlman, “Secure overlay
cloud storage with access control and assured deletion,” IEEE
Transactions on Dependable and Secure Computing, Vol. 9, No. 6,
Nov. 2012, pp. 903-916.
[21] M. Tu, P. Li, Q. Ma, I-L. Yen, and F. B. Bastani, “On the
optimal placement of secure data objects over Internet,” In
Proceedings of 19th IEEE International Parallel and Distributed
Processing Symposium, pp. 14-14, 2005.
[22] D. Zissis and D. Lekkas, “Addressing cloud computing secu-
rity issues,” Future Generation Computer Systems, Vol. 28, No. 3,
2012, pp. 583-592.
[23] J. J. Wylie, M. Bakkaloglu, V. Pandurangan, M. W. Bigrigg,
S. Oguz, K. Tew, C. Williams, G. R. Ganger, and P. K. Khosla,
“Selecting the right data distribution scheme for a survivable
storage system,” Carnegie Mellon University, Technical Report
CMU-CS-01-120, May 2001.
[24] M. Newman, Networks: An introduction, Oxford University
Press, 2009.
[25] A. R. Khan, M. Othman, S. A. Madani, S. U. Khan,
“A survey of mobile cloud computing application
models,” IEEE Communications Surveys and Tutorials, DOI:
10.1109/SURV.2013.062613.00160.
Mazhar Ali is currently a PhD student at North Dakota State Uni-
versity, Fargo, ND, USA. His research interests include information
security, formal verification, modeling, and cloud computing systems.
Kashif Bilal did his PhD in Electrical and Computer Engineering
from the North Dakota State University, USA. His research interests
include data center networks, distributed computing, and energy
efficiency.
Samee U. Khan is an assistant professor at the North Dakota State University. His research interests include topics such as sustainable computing, social networking, and reliability. He is a senior member of the IEEE, and a fellow of the IET and BCS.
Bharadwaj Veeravalli is an associate professor at the National University of Singapore. His mainstream research interests include scheduling problems, cloud/cluster/grid computing, green storage, and multimedia computing. He is a senior member of the IEEE.
Keqin Li is a SUNY distinguished professor. His research interests are mainly in the areas of the design and analysis of algorithms, parallel and distributed computing, and computer networking. He is a senior member of the IEEE.
Albert Y. Zomaya is currently the chair professor of high per-
formance computing and networking in the School of Information
Technologies, The University of Sydney. He is a fellow of IEEE, IET,
and AAAS.