This document proposes a scheme for ensuring data integrity and dependability in cloud storage systems. The scheme uses erasure coding to distribute data across multiple servers and generates verification tokens to enable lightweight auditing. It allows users to efficiently verify that their data is intact and to identify any misbehaving servers. The scheme also supports secure dynamic operations such as updates, deletes and appends while maintaining the same level of integrity assurance. Analysis shows the scheme is efficient and resilient to a range of security threats from malicious servers.
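As a rough illustration of the verification-token idea (not the paper's actual construction; the block layout, key handling, and choice of SHA-256 here are assumptions), an owner can precompute keyed digests over its data blocks before outsourcing them, and later compare a server's recomputed answer against the stored token:

```python
import hashlib
import os

def make_tokens(blocks, secret, rounds):
    """Precompute one verification token per audit round.
    Each token is a keyed hash over all blocks, so the owner can later
    check a server's answer without keeping the data locally."""
    tokens = []
    for r in range(rounds):
        h = hashlib.sha256(secret + r.to_bytes(4, "big"))
        for b in blocks:
            h.update(b)
        tokens.append(h.digest())
    return tokens

def server_response(blocks, secret, r):
    """What an honest server returns when challenged for audit round r."""
    h = hashlib.sha256(secret + r.to_bytes(4, "big"))
    for b in blocks:
        h.update(b)
    return h.digest()

blocks = [os.urandom(64) for _ in range(4)]
secret = b"owner-key"
tokens = make_tokens(blocks, secret, rounds=3)

# Intact data passes the audit; a tampered block is caught.
assert server_response(blocks, secret, 0) == tokens[0]
tampered = [blocks[0], b"X" * 64, blocks[2], blocks[3]]
assert server_response(tampered, secret, 0) != tokens[0]
```

The owner stores only the small tokens, so the audit cost is independent of the data size; a real scheme would derive per-round keys and sample random block positions rather than hashing everything.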
Effective & Flexible Cryptography Based Scheme for Ensuring User's Data Secur... - ijsrd.com
Cloud computing has been envisioned as the next-generation architecture of the IT enterprise. In contrast to traditional solutions, where IT services are under proper physical, logical and personnel controls, cloud computing moves application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible cryptography-based scheme. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against malicious data modification attacks.
Enhancement of the Cloud Data Storage Architectural Framework in Private Cloud - INFOGAIN PUBLICATION
Data storage in the cloud typically resides in a service-provider environment, collocated with data from other clients. Institutions or organizations moving sensitive or regulated data into the cloud must therefore account for the means by which access to the data is controlled and the data is kept secure. Data can take many forms. For cloud-based application development, it includes application programs, scripts, and configuration settings, along with the development tools. For deployed applications, it includes records and other content created or used by the applications, as well as account information about the applications' users. Access controls are one means to keep data away from unauthorized users; encryption is another. Access controls are typically identity-based, which makes authentication of the user's identity an important issue in cloud computing. This research paper focuses on a cloud data storage architectural framework for encrypted data.
DISTRIBUTED SCHEME TO AUTHENTICATE DATA STORAGE SECURITY IN CLOUD COMPUTING - ijcsit
Cloud computing is a revolution in the current generation of IT enterprise. It displaces databases and application software to large data centres, where the management of services and data may not be fully trustworthy, whereas conventional IT services remain under proper logical, physical and personnel controls. This attribute poses security challenges that have not been well understood. This paper concentrates on cloud data storage security, which has always been an important aspect of quality of service (QoS). We design and simulate an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, with several prominent features. A homomorphic token is used for distributed verification of erasure-coded data, which allows the scheme to identify misbehaving servers. Unlike prior work, our scheme supports effective and secure dynamic operations on data blocks, such as insertion, deletion and modification. Security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, complex failures and server colluding attacks.
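A minimal sketch of the erasure-coding-plus-token combination described above, assuming a toy single-parity code and plain SHA-256 block digests in place of the paper's homomorphic tokens: each server holds one block, the audit pinpoints the misbehaving server, and the corrupted block is rebuilt from the survivors.

```python
import hashlib
import os
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks):
    """Append one XOR parity block, so any single lost or corrupted
    block can be rebuilt from the rest (a minimal erasure code)."""
    return blocks + [reduce(xor, blocks)]

def audit(stored, tokens):
    """Compare each server's block digest against its precomputed token;
    return the indices of misbehaving servers."""
    return [i for i, blk in enumerate(stored)
            if hashlib.sha256(blk).digest() != tokens[i]]

def repair(stored, bad):
    """Rebuild one corrupted block by XOR-ing the surviving ones."""
    return reduce(xor, [b for i, b in enumerate(stored) if i != bad])

data = [os.urandom(32) for _ in range(3)]
stored = encode(data)
tokens = [hashlib.sha256(b).digest() for b in stored]

stored[1] = bytes(32)                # server 1 silently corrupts its block
assert audit(stored, tokens) == [1]  # the audit pinpoints the bad server
assert repair(stored, 1) == data[1]  # and its block is reconstructed
```

A real deployment would use a Reed-Solomon style (m, k) code tolerating multiple failures, and tokens that can be checked over random block samples; the single-parity code here only tolerates one bad server.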
Data Partitioning Technique in Cloud: A Survey on Limitations and Benefits - IJERA Editor
In recent years, the growth and popularity of cloud services have led enterprises to an increased capability to handle, store and retrieve critical data. The technology provides access to a shared pool of configurable computing resources: servers, storage and applications. Cloud computing is a next-generation architecture of the IT enterprise that moves application software and databases to large data hubs. Data security and storage are essential functions of cloud services, allowing data to be stored on cloud servers efficiently and without worry. Cloud services offer on-demand service, broad web access, measured service, single-click ease of use, pay-per-use pricing and location independence. All of these features pose security challenges. Data partitioning techniques are used in the literature for privacy preservation and data security, often together with a third-party auditor (TPA). The objective of the current work is to review the partitioning techniques available in the literature and analyze them; through this work the authors compare and identify the limitations and benefits of the widely used techniques.
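One of the simplest partitioning schemes in this space is n-of-n XOR splitting, where the data is divided into shares so that no single server's share reveals anything about the plaintext; the sketch below is a generic illustration (names and layout are our own), not any specific technique from the survey.

```python
import os
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(data, n):
    """n-of-n XOR sharing: n-1 uniformly random shares plus one share
    that XORs with them to the plaintext. Any n-1 shares are
    statistically independent of the data."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, data))
    return shares

def combine(shares):
    """Recover the plaintext by XOR-ing all shares together."""
    return reduce(xor, shares)

secret = b"patient record #1723"
parts = split(secret, 4)          # distribute one part per server
assert combine(parts) == secret   # all four parts recover the data
```

The trade-off, which a survey like this one would weigh against other partitioning techniques, is availability: losing any single share makes the data unrecoverable, which is why threshold schemes (e.g. Shamir secret sharing) are often preferred.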
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of engineering and technology.
Survey on Securing Outsourced Storages in Cloud - eSAT Journals
Abstract: Cloud computing is one of the buzzwords of technological development in the IT industry and service sectors. By widening the services available to a user on the internet while reducing the need to store information and provide facilities locally, computing interest is shifting toward cloud services. Although cloud services bring major advantages, they also raise major security issues. This paper discusses those issues and the approaches that can be taken to minimise or even eliminate their effects, as a step toward more secure storage services on the cloud. Keywords: Cloud computing, Cloud Security, Outsourced Storages, Storage as a Service
International Journal of Computational Engineering Research (IJCER) - ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
Unit 3 - Data Storage and Cloud Computing - MonishaNehkal
Data storage
Cloud storage
Cloud storage from LANs to WANs
Cloud computing services
Cloud computing at work
File system
Data management
Management services
Enhancing Data Storage Security in Cloud Computing Through Steganography - IDES Editor
In cloud computing, data storage is a significant issue because the entire data resides over a set of interconnected resource pools that enable the data to be accessed through virtual machines. The cloud moves application software and databases to large data centers, where the data is actually managed. As the resource pools are situated in various corners of the world, the management of data and services may not be fully trustworthy, so various issues need to be addressed with respect to data management, data service, data privacy and data security; the privacy and security of data are especially challenging. To ensure the privacy and security of data-at-rest in cloud computing, we propose an effective and novel approach to data security that hides data within images, following the concept of steganography. The main objective of this paper is to prevent unauthorized users from accessing data in cloud storage centers. The scheme stores data at cloud data storage centers and retrieves it when needed.
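The image-hiding idea can be sketched with least-significant-bit (LSB) embedding over a raw byte buffer; this is a generic illustration of steganography rather than the paper's exact scheme, and the 4-byte length header is our own convention.

```python
import os

def embed(cover, message):
    """Hide the message in the least-significant bit of each cover byte,
    prefixed with a 4-byte big-endian length header."""
    payload = len(message).to_bytes(4, "big") + message
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the LSB
    return out

def extract(stego):
    """Read back the length header, then the message, from the LSBs."""
    def read_bytes(start, count):
        val = bytearray()
        for i in range(count):
            b = 0
            for j in range(8):
                b = (b << 1) | (stego[start + i * 8 + j] & 1)
            val.append(b)
        return bytes(val)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)

cover = bytearray(os.urandom(4096))   # stands in for raw image pixels
stego = embed(cover, b"secret")
assert extract(stego) == b"secret"
```

With real images, the cover would be the decoded pixel array of a lossless format such as PNG or BMP; lossy formats like JPEG destroy LSBs on re-encoding, so practical schemes embed in transform coefficients instead.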
A Study of Data Storage Security Issues in Cloud Computing - vivatechijri
Cloud computing provides on-demand services to its clients, and data storage is among the primary services it provides. The cloud service provider hosts the data owner's data on its servers, and users can access that data from those servers. As data owners and servers are different identities, this storage paradigm brings up many security challenges, and an independent mechanism is required to make sure that data is correctly hosted on the cloud storage server. In this paper, we discuss the different techniques used for secure data storage on the cloud. Cloud computing is a functional paradigm that is evolving and making IT utilization easier by the day for consumers. It offers standardized applications to users online, in a manner that can be accessed regularly by as many persons as permitted within an organization, without concern for the maintenance of the application. The cloud also provides a channel to design and deploy user applications, including storage space and databases, without concern for the underlying operating system; applications run without consideration for on-premise infrastructure. In addition, the cloud makes massive storage available for both data and databases. Storage of data on the cloud is one of the core activities in cloud computing, and it utilizes infrastructure spread across several geographical locations.
Cloud computing is internet-based computing, also known as the "pay-per-use model": users pay only for the resources they actually use. The key barrier to widespread uptake of cloud computing is potential customers' lack of trust in clouds. While preventive controls for security and privacy are actively being researched, there is still little focus on detective controls related to cloud accountability and auditability. The complexity resulting from the sheer amount of virtualization and data distribution in current clouds has revealed an urgent need for research in cloud accountability, as has the shift in customer concerns from server health and utilization to the integrity and safety of end-users' data. In this paper we propose a method to store data provenance using Amazon S3 and SimpleDB.
Preserving Privacy Policy - Preserving Public Auditing for Data in the Cloud - inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Secure Auditing and Deduplicating Data in Cloud - 1crore projects
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
Privacy Preserving External Auditing for Data Storage Security in Cloud - eSAT Publishing House
Enhancing Availability of Data in Mixed Homomorphic Encryption in Cloud - ijtsrd
For information technology companies, cloud computing has emerged as the structural model of the coming years. Cloud computing brings many benefits, both technical and organizational, but it still raises new challenges, for example data security in cloud storage. Many approaches to cloud storage security are available, such as encryption with obfuscation, watermark security and data partitioning. In all of these approaches, cloud data centers cannot perform computation on encrypted data, so every time a user wants to modify data, it must first be decrypted. The most widely used technique for providing security in cloud storage is homomorphic encryption, in which there is no need to decrypt the whole dataset whenever the user wants to update it. The existing system uses a mixed homomorphic scheme that reduces the noise level of the homomorphic encryption technique. The existing system focuses on data corruption and data modification, but not on system or power failures: user data may be lost for many reasons, and the user may not have a copy, so data loss is not addressed. The proposed work therefore focuses on data availability through erasure codes: if user data is lost, it can be reconstructed, which provides more protection than the existing system. Bhargavi Patel, "Enhancing Availability of Data in Mixed Homomorphic Encryption in Cloud", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd25104.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/25104/enhancing-availability-of-data-in-mixed-homomorphic-encryption-in-cloud/bhargavi-patel
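The homomorphic property this abstract relies on can be illustrated with textbook RSA, which is multiplicatively homomorphic when used without padding. This is a toy example with tiny, insecure parameters to show the idea of computing on ciphertexts, not the paper's mixed homomorphic scheme.

```python
# Textbook RSA (no padding) satisfies E(a) * E(b) mod n = E(a*b mod n),
# so a server can multiply ciphertexts without ever decrypting them.
p, q = 61, 53
n = p * q                  # 3233 (far too small for real use)
phi = (p - 1) * (q - 1)    # 3120
e = 17
d = pow(e, -1, phi)        # modular inverse (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 9
product_ct = (enc(a) * enc(b)) % n     # computed on ciphertexts only
assert dec(product_ct) == (a * b) % n  # decrypts to 63
```

Fully and "somewhat" homomorphic schemes extend this to richer operations but accumulate noise with each one, which is the problem the mixed scheme in the paper above aims to reduce.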
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more detail or to submit your article, please visit www.ijera.com
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Survey on securing outsourced storages in cloudeSAT Journals
Abstract Cloud computing is one of the buzzwords of technological developments in the IT industry and service sectors. Widening the social capabilities of servicing for a user on the internet while narrowing the insufficiency to store information and provide facilities locally, computing interests are shifting towards cloud services. Cloud services although contributes to major advantages for servicing also incurs notification to major security issues. The issues and the approaches that can be taken to minimise or even eliminate their effects are discussed in this paper to progress toward more secure storage services on the cloud. Keywords: Cloud computing, Cloud Security, Outsourced Storages, Storage as a Service
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
Unit 3 -Data storage and cloud computingMonishaNehkal
Data storage
Cloud storage
Cloud storage from LANs to WANs
Cloud computing services
Cloud computing at work
File system
Data management
Management services
Enhancing Data Storage Security in Cloud Computing Through SteganographyIDES Editor
in cloud computing data storage is a significant issue
because the entire data reside over a set of interconnected
resource pools that enables the data to be accessed through
virtual machines. It moves the application software’s and
databases to the large data centers where the management of
data is actually done. As the resource pools are situated over
various corners of the world, the management of data and
services may not be fully trustworthy. So, there are various
issues that need to be addressed with respect to the
management of data, service of data, privacy of data, security
of data etc. But the privacy and security of data is highly
challenging. To ensure privacy and security of data-at-rest in
cloud computing, we have proposed an effective and a novel
approach to ensure data security in cloud computing by means
of hiding data within images following is the concept of
steganography. The main objective of this paper is to prevent
data access from cloud data storage centers by unauthorized
users. This scheme perfectly stores data at cloud data storage
centers and retrieves data from it when it is needed.
A Study of Data Storage Security Issues in Cloud Computingvivatechijri
Cloudcomputingprovidesondemandservicestoitsclients.Datastorageisamongoneoftheprimaryservices providedbycloudcomputing.Cloudserviceproviderhoststhedataofdataownerontheirserverandusercan accesstheirdatafromtheseservers.Asdata,ownersandserversaredifferentidentities,theparadigmofdata storagebringsupmanysecuritychallenges.Anindependentmechanismisrequiredtomakesurethatdatais correctlyhostedintothecloudstorageserver.Inthispaper,wewilldiscussthedifferenttechniquesthatare usedforsecuredatastorageoncloud. Cloud computing is a functional paradigm that is evolving and making IT utilization easier by the day for consumers. Cloud computing offers standardized applications to users online and in a manner that can be accessed regularly. Such applications can be accessed by as many persons as permitted within an organization without bothering about the maintenance of such application. The Cloud also provides a channel to design and deploy user applications including its storage space and database without bothering about the underlying operating system. The application can run without consideration for on premise infrastructure. Also, the Cloud makes massive storage available both for data and databases. Storage of data on the Cloud is one of the core activities in Cloud computing. Storage utilizes infrastructure spread across several geographical locations.
Cloud computing is the internet based computing it is also known as "pay per use model"; we can pay only for the resources that are in the use. The key barrier to widespread uptake of cloud computing is the lack of trust in clouds by potential customers. While preventive controls for security and privacy measures are actively being researched, there is still little focus on detective controls related to cloud accountability and auditability. The complexity resulting from the sheer amount of virtualization and data distribution carried out in current clouds has also revealed an urgent need for research in cloud accountability, as has the shift in focus of customer concerns from server health and utilization to the integrity and safety of end-users' data. In this paper we purpose the method to store data provenance using Amazon S3 and simple DB.
Preserving Privacy Policy- Preserving public auditing for data in the cloudinventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Secure Auditing and Deduplicating Data in Cloud (1crore projects)
International Journal of Engineering Research and Development (IJERD), IJERD Editor
Privacy Preserving External Auditing for Data Storage Security in Cloud (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Enhancing Availability of Data in Mixed Homomorphic Encryption in Cloud (ijtsrd)
In the forthcoming era of information technology, cloud computing has emerged as the structural model. Cloud computing offers many benefits, both technical and organizational, but it also raises new challenges, for example data security in cloud storage. Many approaches are available for securing data in cloud storage, such as encryption with obfuscation, watermark security, and data partitioning. In all of these approaches, cloud data centers cannot operate on encrypted data, so every time a user wants to modify data it must first be decrypted. The most widely used technique for providing security in cloud storage is homomorphic encryption, with which there is no need to decrypt the whole data set whenever the user wants to update it. The existing system uses a mixed homomorphic scheme that reduces the noise level of the homomorphic encryption technique. It focuses on data corruption and data modification, but not on system failures or power failures: user data may be lost for any of these reasons, and the user may not have a copy. Since the existing system does not address data loss, the proposed work focuses on availability of data through erasure coding. By applying an erasure code, user data lost for any reason can be reconstructed, providing more protection than the existing system. Bhargavi Patel, "Enhancing Availability of Data in Mixed Homomorphic Encryption in Cloud", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd25104.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/25104/enhancing-availability-of-data-in-mixed-homomorphic-encryption-in-cloud/bhargavi-patel
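The erasure-coding idea above can be illustrated with the simplest possible code: a single XOR parity block (as in RAID-5), which lets any one lost block be rebuilt from the survivors. This is a minimal sketch, not the stronger code a real system would use:

```python
import os

def encode(blocks):
    """Single-parity erasure code: parity = XOR of all k data blocks."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def recover(surviving, parity):
    """Rebuild the one missing block: XOR parity with every surviving block."""
    missing = parity
    for b in surviving:
        missing = bytes(x ^ y for x, y in zip(missing, b))
    return missing

blocks = [os.urandom(16) for _ in range(4)]  # file split into k = 4 blocks
parity = encode(blocks)                      # stored on a fifth server

surviving = [b for i, b in enumerate(blocks) if i != 2]  # block 2 is lost
assert recover(surviving, parity) == blocks[2]
```

A production deployment would use an (n, k) code such as Reed-Solomon, which tolerates up to n - k lost blocks rather than one.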
Cloud computing is the revolution in the current generation of IT enterprise. Cloud computing displaces databases and application software to large data centres, where the management of services and data may not be predictable, whereas in conventional solutions IT services are under proper logical, physical and personnel controls. This attribute, however, introduces different security challenges which have not been well understood. We concentrate on cloud data storage security, which has always been an important aspect of quality of service (QoS). In this paper, we designed and simulated an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, with some prominent features. A homomorphic token is used for distributed verification of erasure-coded data; by using this scheme, we can identify misbehaving servers. Unlike past works, our scheme supports effective and secure dynamic operations on data blocks such as data insertion, deletion and modification. Security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, Byzantine failures and server colluding attacks.
IJERA (International Journal of Engineering Research and Applications) is an international, online, peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Dynamic Resource Allocation and Data Security for Cloud (AM Publications)
Cloud computing is the next generation of IT organization. Cloud computing moves software and databases to large centres where the management of services and data may not be fully trusted. In this system, we focus on cloud data storage security, which has been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective scheme based on the Advanced Encryption Standard (AES) and the MD5 algorithm. Extensive security and performance analysis shows that the proposed scheme is highly efficient. In the proposed work we have developed efficient parallel data processing in clouds and present our research project for parallel security, a data processing framework that explicitly exploits dynamic storage along with data security. We have proposed a strong, formal model for data security on the cloud and corruption detection.
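The abstract above only names AES and MD5; as a hedged sketch of the MD5 integrity half (the AES encryption step would come from a cryptography library and is omitted here; note that MD5 is no longer collision-resistant and serves only to detect accidental or naive corruption):

```python
import hashlib

def md5_digest(data: bytes) -> str:
    """Integrity fingerprint the client keeps before outsourcing the data."""
    return hashlib.md5(data).hexdigest()

original = b"quarterly-report-v3"
stored_digest = md5_digest(original)        # remembered client-side

# Later, verify whatever the cloud returns against the remembered digest:
assert md5_digest(b"quarterly-report-v3") == stored_digest   # intact
assert md5_digest(b"quarterly-report-vX") != stored_digest   # corruption detected
```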
Excellent Manner of Using a Secure Way of Data Storage in Cloud Computing (Editor IJMTER)
The major challenging issue in cloud computing is security: protecting data both from third parties and on the Internet. This paper deals with how that security is provided. Various types of services are available in cloud computing, such as Software as a Service (SaaS), Platform as a Service (PaaS) and Hardware as a Service (HaaS). Cloud computing is the use of computing resources (hardware and software) delivered as a service over the Internet. Cloud computing moves application software and databases to large data centres, where the administration of the data and services may not be fully trustworthy; the third party involved has to be certified and authorized. Since cloud computing shares distributed resources via the network in an open environment, it introduces new security risks towards the correctness of the data in the cloud. In this paper I propose a flexible data storage mechanism for the distributed environment using homomorphic token generation. In the proposed system, users are able to audit the cloud storage with lightweight communication. Encryption and decryption are a heavy burden for a single processor, but the processing capabilities of cloud computing can be utilized for them.
Enhanced Data Partitioning Technique for Improving Cloud Data Storage Security (Editor IJMTER)
Cloud computing is a model for enabling on-demand network access to shared configurable computing resources (e.g. networks, servers, storage, applications, and services). It is based on virtualization and distributed computing technologies. Cloud data storage systems enable users to store data efficiently on a server without any trouble over data resources; users can easily store and retrieve their data remotely. The two biggest concerns about cloud data storage are reliability and security. Clients are unlikely to entrust their data to another third party or company without a guarantee that they will be able to access their information whenever they want. In the existing system, the data are stored in the cloud using dynamic data operations with computation, which requires the user to keep a copy for further updating and for verification against data loss. Different distributed storage auditing techniques are used to overcome the problem of data loss. Recent work has shown a data partitioning technique for data storage that provides a digital signature for every partition and user; this technique allows users to upload or retrieve data by matching the digital signatures provided to them. This method ensures high cloud storage integrity, enhanced error localization, and easy identification of misbehaving servers and unauthorized access to the cloud server. Hence this work aims to store data securely in reduced space with less time and computational cost.
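A minimal sketch of the per-partition tags that make error localization possible. HMAC stands in here for the digital signatures named in the abstract (a true signature scheme such as RSA or ECDSA would need a crypto library); the key name and partition contents are illustrative:

```python
import hmac
import hashlib

KEY = b"user-secret-key"   # hypothetical per-user key, kept off the server

def tag(partition: bytes) -> str:
    """Integrity tag per partition (HMAC-SHA256)."""
    return hmac.new(KEY, partition, hashlib.sha256).hexdigest()

partitions = [b"part-0", b"part-1", b"part-2", b"part-3"]
tags = [tag(p) for p in partitions]          # kept by the verifier

stored = list(partitions)
stored[2] = b"part-2-tampered"               # server corrupts one partition

# Checking every tag localizes exactly which partition misbehaves:
bad = [i for i, p in enumerate(stored)
       if not hmac.compare_digest(tag(p), tags[i])]
assert bad == [2]
```

Because each partition carries its own tag, a failed check points at the specific partition (and hence server) that misbehaved, rather than only signalling that the file as a whole is corrupt.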
International Journal of Computational Engineering Research (IJCER) is an international online monthly journal published in English. This journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Crypto Multi-Tenant: An Environment of Secure Computing Using Cloud SQL (ijdpsjournal)
Today's most modern research area of computing is cloud computing, due to its ability to diminish the costs associated with virtualization, its high availability and dynamic resource pools, and the increased efficiency of computing. But it still has drawbacks such as privacy and security. This paper focuses on the security of the data of the multi-tenant model obtained from the virtualization feature of cloud computing. We use the AES-128 bit algorithm and Cloud SQL to protect sensitive data before storing it in the cloud. When an authorized customer requests the data, the data is first decrypted and then provided to the customer. Multi-tenant infrastructure is supported by Google, which prefers pushing content in short iteration cycles. As customers are distributed and their demands can arise anywhere, anytime, data cannot be stored at one particular site; it must be available at different sites as well. For this faster access by different users from different places, Google is the best fit. To obtain high reliability and availability, data is encrypted before being stored in the database and updated every time after usage. It is very easy to use without requiring any software. An authenticated user can recover their encrypted and decrypted data, affording efficient data storage security in the cloud.
Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage (1crore projects)
Cloud computing is a technology which enables obtaining resources such as services, software and hardware over the Internet. With cloud storage, users can store their data remotely and enjoy on-demand services and applications from the configurable resources. Cloud data storage has many benefits over local data storage. Users should be able to use the cloud storage as if it were local, without worrying about the need to verify its integrity. The problem is ensuring the security and integrity of users' data. Here, I consider public auditability for cloud storage, in which users can resort to a third-party auditor (TPA) to check the integrity of their data. This paper presents the various privacy issues that arise while storing users' data in cloud storage during TPA auditing; without appropriate security and privacy solutions designed for clouds, this computing paradigm could become a big failure. I present privacy-preserving public auditing using a ring signature process for a secure cloud storage system, and analyze various techniques to solve these issues and to provide privacy and security for data in the cloud.
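The TPA interaction above can be reduced to a toy challenge-response exchange. This is a heavily simplified sketch: real public auditing uses homomorphic authenticators so the auditor never needs its own copy of the data, whereas here the auditor recomputes the expected proof from a reference copy; the ring-signature privacy layer is omitted entirely.

```python
import hashlib
import secrets

data = b"outsourced file contents"   # what the server is supposed to store

def prove(stored: bytes, nonce: bytes) -> str:
    """Server's proof: hash over a fresh nonce plus its stored copy.
    The fresh nonce stops the server from replaying an old proof."""
    return hashlib.sha256(nonce + stored).hexdigest()

def audit(server_copy: bytes) -> bool:
    """Auditor's check (simplified: recomputed from a reference copy here)."""
    nonce = secrets.token_bytes(16)
    expected = hashlib.sha256(nonce + data).hexdigest()
    return prove(server_copy, nonce) == expected

assert audit(data)                    # honest server passes
assert not audit(b"corrupted copy")   # corruption fails the audit
```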
Security in Multi-Cloud Data Storage with SIC Architecture (eSAT Journals)
Abstract: Cloud computing is becoming an important thing to deal with in many organizations around the world. It provides many benefits, such as low cost, reliability, and ease of data retrieval. Security in cloud computing is gaining more and more importance, as organizations often store sensitive and important data in the cloud; it is an issue which should be considered carefully. Customers do not want to lose their sensitive data to malicious insiders and hackers in the cloud. In addition, loss of service availability has caused many problems for a large number of customers recently, and data intrusion techniques create further problems for users of cloud computing. Other issues such as data theft and data loss must also be overcome to provide better services to customers. It is observed that research into the use of inter-cloud providers to maintain security has received less attention from the research community than the use of single clouds. A multi-cloud environment has the ability to reduce security risks while ensuring security and reliability. The system aims to provide a framework to deploy a secure cloud database that avoids the security risks facing the cloud computing community. This paper suggests a new architecture for the cloud environment which will help in reducing security threats. The efficient and secure use of cloud computing will provide many benefits to organizations in terms of cost and ease of access to data. Keywords: cloud computing, single cloud, multi-clouds, cloud storage, data integrity, data intrusion, service availability, performance, cost reduction.
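One concrete way a multi-cloud deployment can reduce the risk from a malicious insider at any single provider, sketched here under the assumption of two non-colluding clouds, is 2-of-2 XOR secret sharing. This is an illustration of the multi-cloud idea, not the SIC architecture itself:

```python
import os

def split(secret: bytes):
    """2-of-2 XOR secret sharing: each share alone is uniformly random,
    so a single compromised cloud learns nothing about the data."""
    share_a = os.urandom(len(secret))
    share_b = bytes(x ^ y for x, y in zip(secret, share_a))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(share_a, share_b))

secret = b"customer-record-42"
a, b = split(secret)             # store a on cloud A, b on cloud B
assert combine(a, b) == secret   # both clouds together restore the record
```

Splitting this way trades availability for confidentiality (both clouds must be reachable to read); threshold schemes such as Shamir's secret sharing recover both properties at the cost of more shares.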
S.A. Kalaiselvan: Toward Secure and Dependable Storage Services in Cloud Computing

National Conference on Advances in Computer Science and Information Technology, ISBN: 978-81-924031-8-2

Kalaiselvan S.A. (1), Jai Maharaj G. (2)
(1) Assistant Professor / CSE, Arignar Anna Institute of Science & Technology, Sriperumbudur
(2) II Year M.Tech (IT), Bharath University, Chennai
(1) kalaiselvan_be@ymail.com, (2) vkarthick86@gmail.com
Abstract— Cloud storage enables users to remotely store their data and enjoy on-demand, high-quality cloud applications without the burden of local hardware and software management. Though the benefits are clear, such a service also relinquishes users' physical possession of their outsourced data, which inevitably poses new security risks towards the correctness of the data in the cloud. In order to address this new problem and further achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed storage integrity auditing mechanism, utilizing the homomorphic token and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures a strong cloud storage correctness guarantee, but also simultaneously achieves fast data error localization, i.e., the identification of misbehaving servers. Considering the cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
Key Words – SaaS, CSP, TPA
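As a rough illustration of the homomorphic-token idea in the abstract (the paper's exact construction differs), a token can be a secret pseudorandom linear combination of block values. Its linearity is the property that lets tokens precomputed by the user be checked later against erasure-coded data held by the servers. The field size, block representation, and challenge key below are illustrative assumptions:

```python
import secrets

P = (1 << 61) - 1   # a Mersenne prime; all token arithmetic is mod P

def token(blocks, alpha):
    """Pseudorandom linear combination sum(alpha^i * b_i) mod P.
    Linearity is the 'homomorphic' property: the token of a combination
    of blocks equals the same combination of the blocks' tokens."""
    t, coeff = 0, alpha
    for b in blocks:
        t = (t + coeff * b) % P
        coeff = (coeff * alpha) % P
    return t

data = [secrets.randbelow(P) for _ in range(8)]  # file as 8 block values
alpha = secrets.randbelow(P - 2) + 2             # user's secret challenge key

precomputed = token(data, alpha)    # computed once, before outsourcing

assert token(data, alpha) == precomputed   # honest server reproduces it

tampered = list(data)
tampered[3] = (tampered[3] + 1) % P        # even one modified block...
assert token(tampered, alpha) != precomputed   # ...changes the token
```

Because alpha is secret, a server cannot forge a matching token for modified blocks; and because the combination is linear, the same check extends across the parity blocks of an erasure code, which is what enables pinpointing the misbehaving server.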
1. INTRODUCTION
Several trends are opening up the era of cloud computing, which is an Internet-based development and use of computer technology. The ever cheaper and more powerful processors, together with the Software as a Service (SaaS) computing architecture, are transforming data centers into pools of computing service on a huge scale. The increasing network bandwidth and reliable yet flexible network connections make it even possible that users can now subscribe to high-quality services from data and software that reside solely on remote data centers.
Moving data into the cloud offers great
convenience to Users since they don’t have to care
about the complexities Of direct hardware
Toward Secure and Dependable Storage Services
In Cloud Computing
2. National Conference on Advances in Computer Science and Information Technology ISBN:978-81-924031-8-2
2
9 7 8 8 1 9 2 4 0 3 1 8 2
management. The pioneer of cloud Computing
vendors, Amazon Simple Storage Service (S3), and
Amazon Elastic Compute Cloud (EC2) [2] are both
well-known examples.

• C. Wang is with the Department of Electrical and Computer Engineering, Illinois Institute of Technology, 1451 East 55th St., Apt. 1017 N, Chicago, IL 60616. E-mail: cwang55@iit.edu.
• Q. Wang is with the Department of Electrical and Computer Engineering, Illinois Institute of Technology, 500 East 33rd St., Apt. 602, Chicago, IL 60616. E-mail: qwang38@iit.edu.
• K. Ren is with the Department of Electrical and Computer Engineering, Illinois Institute of Technology, 3301 Dearborn St., Siegel Hall 319, Chicago, IL 60616. E-mail: kren@ece.iit.edu.
• N. Cao is with the Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, 100 Institute Road, Worcester, MA 01609. E-mail: ncao@wpi.edu.
• W. Lou is with the Department of Computer Science, Virginia Polytechnic Institute and State University, Falls Church, VA 22043. E-mail: wjlou@vt.edu.

Manuscript received 4 Apr. 2010; revised 14 Sept. 2010; accepted 25 Dec. 2010; published online 6 May 2011. For information on obtaining reprints of this article, please send e-mail to tsc@computer.org and reference IEEECS Log Number TSCSI-2010-04-0033. Digital Object Identifier no. 10.1109/TSC.2011.24.

While these internet-based online services do
provide huge amounts of storage space and
customizable computing resources, this computing
platform shift, however, is
eliminating the responsibility of local machines for
data maintenance at the same time. As a result,
users are at the mercy of their cloud service
providers (CSP) for the availability and integrity of
their data [4]. On the one hand, although the cloud
infrastructures are much more powerful and reliable
than personal computing devices, a broad range of
both internal and external threats to data integrity
still exists. Examples of outages and data loss
incidents of noteworthy cloud storage services
appear from time to time [5], [6], [7], [8], [9]. On
the other hand, since users may not retain a local
copy of outsourced data, there exist various
incentives for CSP to behave unfaithfully toward
the cloud users regarding the status of their
outsourced data. For example, to increase the profit
margin by reducing cost, it is possible for CSP to
discard rarely accessed data without being detected
in a timely fashion [10]. Similarly, CSP may even
attempt to hide data loss incidents so as to maintain
a reputation [11], [12], [13]. Therefore, although
outsourcing data into the cloud is economically
attractive for the cost and complexity of long-term
large-scale data storage, its lack of strong
assurance of data integrity and availability
may impede its wide adoption by both enterprise
and individual cloud users.
In order to achieve the assurances of cloud
data integrity and availability and enforce the
quality of cloud storage service, efficient methods
that enable on-demand data correctness verification
on behalf of cloud users have to be designed.
However, the fact that users no longer have physical
possession of data in the cloud prohibits the direct
adoption of traditional cryptographic primitives for
the purpose of data integrity protection. Hence, the
verification of cloud storage correctness must be
conducted without explicit knowledge of the whole
data files. Meanwhile, cloud storage is not just a
third-party data warehouse. The data stored in the
cloud may not only be accessed but also be
frequently updated by the users, including insertion,
deletion, modification, appending, etc. Thus, it is
also imperative to support the integration of this
dynamic feature into the cloud storage correctness
assurance, which makes the system design even
more challenging. Last but not least, the
deployment of cloud computing is powered by data
centers running in a simultaneous, cooperative, and
distributed manner [3]. It is more advantageous for
individual users to store their data redundantly
across multiple physical servers so as to reduce the
data integrity and availability threats. Thus,
distributed protocols for storage correctness
assurance will be of the utmost importance in achieving
robust and secure cloud storage systems. However,
this important area remains to be fully explored in
the literature.
Recently, the importance of ensuring the
remote data integrity has been highlighted by the
following research works under different system
and security models. These techniques, while
useful to ensure the storage correctness without
having users possess local data, all focus
on the single-server scenario. They may be useful for
quality-of-service testing, but do not guarantee
data availability in case of server failures.
Although directly applying these techniques to
distributed storage (multiple servers) could be
straightforward, the resulting storage verification
overhead would be linear in the number of servers.
As a complementary approach, researchers have
also proposed distributed protocols for ensuring
storage correctness across multiple servers or peers.
However, while providing efficient cross-server
storage verification and data availability insurance,
these schemes all focus on static or archival
data. As a result, their capabilities of handling
dynamic data remain unclear, which inevitably
limits their full applicability in cloud storage
scenarios.
In this paper, we propose an effective and
flexible distributed storage verification scheme with
explicit dynamic data support to ensure the
correctness and availability of users’ data in the
cloud. We rely on erasure correcting code in the file
distribution preparation to provide redundancies and
guarantee the data dependability against Byzantine
servers [26], where a storage server may fail in
arbitrary ways. This construction drastically reduces
the communication and storage overhead as
compared to the traditional replication-based file
distribution techniques. By utilizing the
homomorphic token with distributed verification of
erasure-coded data, our scheme achieves the storage
correctness insurance as well as data error
localization: whenever data corruption has been
detected during the storage correctness verification,
our scheme can almost guarantee the simultaneous
localization of data errors, i.e., the identification of
the misbehaving server(s). In order to strike a good
balance between error resilience and data dynamics,
we further explore the algebraic property of our
token computation and erasure-coded data, and
demonstrate how to efficiently support dynamic
operation on data blocks, while maintaining the
same level of storage correctness assurance. In
order to save the time, computation resources, and
even the related online burden of users, we also
provide the extension of the proposed main scheme
to support third-party auditing, where users can
safely delegate the integrity-checking tasks to
third-party auditors (TPA) and be worry-free to use
the cloud storage services.

Fig. 1. Cloud storage service architecture.

Our
work is among the first few in this field to
consider distributed data storage security in cloud
computing. Our contribution can be summarized in
the following three aspects: 1) Compared to many
of its predecessors, which only provide binary
results about the storage status across the distributed
servers, the proposed scheme achieves the
integration of storage correctness insurance and
data error localization, i.e., the identification of
misbehaving server(s). 2) Unlike most prior works
for ensuring remote data integrity, the new scheme
further supports secure and efficient dynamic
operations on data blocks, including: update, delete,
and append. 3) The experiment results demonstrate
the proposed scheme is highly efficient. Extensive
security analysis shows our scheme is resilient
against Byzantine failure, malicious data
modification attack, and even server colluding
attacks.
The rest of the paper is organized as follows:
Section 2 introduces the system model, adversary
model, our design goal, and notations. Then we
provide the detailed description of our scheme in
Sections 3 and 4. Section 5 gives the security
analysis and performance evaluations, followed by
Section 6, which overviews the related work. Finally,
Section 7 concludes the whole paper.
2. PROBLEM STATEMENT
2.1 System Model
A representative network architecture for cloud
storage service architecture is illustrated in Fig.
1.Three different network entities can be identified
as follows:
• User: an entity who has data to be stored in the
cloud and relies on the cloud for data storage and
computation; it can be either an enterprise or an
individual customer.
• Cloud Server (CS): an entity, which is managed
by the cloud service provider (CSP) to provide data
storage service and has significant storage space
and computation resources (we will not
differentiate CS and CSP hereafter).
• Third-Party Auditor: an optional TPA, who has
expertise and capabilities that users may not
have, is trusted to assess and expose risk of
cloud storage services on behalf of the users
upon request.
In cloud data storage, a user stores his data
through a CSP into a set of cloud servers, which are
running in a simultaneous, cooperative, and
distributed manner. Data redundancy can be
employed with the technique of erasure-correcting
code to further tolerate faults or server crashes as
the user's data grows in size and importance.
Thereafter, for application purposes, the user
interacts with the cloud servers via CSP to access or
retrieve his data. In some cases, the user may need
to perform block level operations on his data. The
most general forms of these operations we are
considering are block update, delete, insert, and
append. Note that in this paper, we put more focus
on the support of file-oriented cloud applications
rather than non-file application data, such as social
networking data. In other words, the cloud data we
are considering are not expected to change rapidly
within a relatively short period.
As users no longer possess their data locally,
it is of critical importance to ensure users that their
data are being correctly stored and maintained. That
is, users should be equipped with security means so
that they can make continuous correctness
assurance (to enforce cloud storage service-level
agreement) of their stored data even without the
existence of local copies. In case users do not
necessarily have the time, feasibility, or resources to
monitor their data online, they can delegate the data
auditing tasks to an optional trusted TPA of their
respective choices. However, to securely introduce
such a TPA, any possible leakage of user’s
outsourced data toward TPA through the auditing
protocol should be prohibited.
In our model, we assume that the point-to-
point communication channels between each cloud
server and the user are authenticated and reliable,
which can be achieved in practice with little
overhead. These authentication handshakes are
omitted in the following presentation.
2.2 Adversary Model
From the user’s perspective, the adversary model has to
capture all kinds of threats toward his cloud data
integrity. Because cloud data do not reside at the user’s
local site but in the CSP’s address domain, these threats
can come from two different sources: internal and
external attacks. For internal attacks, a CSP can be
self-interested, untrusted, and possibly malicious.
Not only does it desire to move data that has not
been or is rarely accessed to a lower tier of storage
than agreed for monetary reasons, but it may also
attempt to hide a data loss incident due to
management errors, Byzantine failures, and so on.
For external attacks, data integrity threats may
come from outsiders who are beyond the control
domain of CSP, for example, the economically
motivated attackers. They may compromise a
number of cloud data storage servers in different
time intervals and subsequently be able to modify
or delete users’ data while remaining undetected by
CSP.
Therefore, we consider that the adversary in our
model has the following capabilities, which
capture both external and internal threats toward
the cloud data integrity. Specifically, the adversary
is interested in continuously corrupting the user’s
data files stored on individual servers. Once a server
is compromised, an adversary can pollute the original
data files by modifying or introducing its own
fraudulent data to prevent the original data from
being retrieved by the user. This corresponds to the
threats from external attacks. In the worst case
scenario, the adversary can compromise all the
storage servers so that he can intentionally modify
the data files as long as they are internally
consistent. In fact, this is equivalent to internal
attack case, where all servers are assumed to be colluding
together from the early stages of the application or
service deployment to hide a data loss or corruption
incident.
2.3 Design Goals
To ensure the security and dependability for cloud
data storage under the aforementioned adversary
model, we aim to design efficient mechanisms for
dynamic data verification and operation and achieve
the following goals:
1. Storage correctness: to ensure users that their
data are indeed stored appropriately and kept intact
all the time in the cloud.
2. Fast localization of data error: to effectively
locate the malfunctioning server when data
corruption has been detected.
3. Dynamic data support: to maintain the same level
of storage correctness assurance even if users
modify, delete, or append their data files in the
cloud.
4. Dependability: to enhance data availability
against Byzantine failures, malicious data
modification and server colluding attacks, i.e.,
minimizing the effect brought by data errors or
server failures.
5. Lightweight: to enable users to perform storage
correctness checks with minimum overhead.
3. ENSURING CLOUD DATA
STORAGE
In a cloud data storage system, users store their data
in the cloud and no longer possess the data locally.
Thus, the correctness and availability of the data
files being stored on the distributed cloud servers
must be guaranteed. One of the key issues is to
effectively detect any unauthorized data
modification and corruption, possibly due to server
compromise and/or random Byzantine failures.
Besides, in the distributed case, when such
inconsistencies are successfully detected, finding
which server the data error lies in is also of great
significance, since it is often the first step toward
fast recovery from storage errors and/or identification of
potential threats from external attacks.
To address these problems, our main scheme
for ensuring cloud data storage is presented in this
section. The first part of the section is devoted to a
review of basic tools from coding theory that are
needed in our scheme for file distribution across
cloud servers. Subsequently, it is shown how to
derive a challenge-response protocol for verifying
the storage correctness as well as identifying
misbehaving servers. The procedure for file
retrieval and error recovery based on erasure-
correcting code is also outlined. Finally, we
describe how to extend our scheme to third party
auditing with only slight modification of the main
design.
3.1 File Distribution Preparation
It is well known that erasure-correcting code may
be used to tolerate multiple failures in distributed
storage systems. In cloud data storage, we rely on
this technique to disperse the
data file F redundantly across a set of n = m + k
distributed servers. By placing each of the m + k
vectors on a different server, the original data file
can survive the failure of any k of the m + k servers
without any data loss, with a space overhead of
k/m. For support of efficient sequential I/O to the
original file, our file layout is systematic, i.e., the
unmodified m data file vectors together with the k
parity vectors are distributed across m + k different
servers.
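The layout above can be illustrated in a few lines. The following is a minimal sketch, not the paper's implementation: it works over the prime field GF(257) instead of the GF(2^w) arithmetic a production Reed-Solomon library such as Jerasure would use, and a Cauchy matrix stands in for the secret parity generation matrix P. It shows that any m of the m + k stored vectors suffice to rebuild the original file.

```python
# Minimal sketch of the systematic (m + k, m) layout, assuming arithmetic
# over the prime field GF(257) rather than GF(2^w). The Cauchy parity
# matrix is an illustrative stand-in for the paper's secret matrix P.
PRIME = 257

def cauchy_parity(m, k):
    # Every square submatrix of a Cauchy matrix is nonsingular, so the
    # systematic generator [I | P] tolerates any k erasures (MDS property).
    return [[pow(i + m + j + 1, -1, PRIME) for j in range(k)]
            for i in range(m)]

def encode(data_vectors, P):
    # Servers 0..m-1 keep the unmodified data vectors (systematic layout);
    # server m+j stores parity_j = sum_i f_i * P[i][j]  (mod 257).
    m, length = len(data_vectors), len(data_vectors[0])
    parity = [[sum(data_vectors[i][t] * P[i][j] for i in range(m)) % PRIME
               for t in range(length)] for j in range(len(P[0]))]
    return data_vectors + parity

def recover(available, P, m):
    # Rebuild the m data vectors from any m surviving servers by solving
    # an m x m linear system over GF(257) (Gauss-Jordan elimination).
    servers = sorted(available)[:m]
    A = [[1 if i == s else 0 for i in range(m)] if s < m
         else [P[i][s - m] for i in range(m)] for s in servers]
    B = [list(available[s]) for s in servers]
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] % PRIME)
        A[col], A[piv], B[col], B[piv] = A[piv], A[col], B[piv], B[col]
        inv = pow(A[col][col], -1, PRIME)
        A[col] = [a * inv % PRIME for a in A[col]]
        B[col] = [b * inv % PRIME for b in B[col]]
        for r in range(m):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * c) % PRIME for a, c in zip(A[r], A[col])]
                B[r] = [(b - f * c) % PRIME for b, c in zip(B[r], B[col])]
    return B
```

For example, with m = 4 and k = 2, the encoded file survives the loss of any two of the six servers; recovery simply inverts the corresponding 4 x 4 submatrix of the generator.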
3.2 Challenge Token Precomputation
In order to achieve assurance of data storage
correctness and data error localization
simultaneously, our scheme entirely relies on the
precomputed verification tokens. The main idea is
as follows: before file distribution, the user
precomputes a certain number of short verification
tokens on each individual vector, each token covering
a random subset of data blocks. Later, when the
user wants to verify the storage correctness for
the data in the cloud, he challenges the cloud
servers with a set of randomly generated block
indices. Upon receiving the challenge, each cloud
server computes a short “signature” over the
specified blocks and returns them to the user. The
values of these signatures should match the
corresponding tokens precomputed by the user.
Meanwhile, as all servers operate over the same
subset of the indices, the requested response values
for integrity check must also be a valid codeword
determined by the secret matrix P.
Suppose the user wants to challenge the
cloud servers t times to ensure the correctness of
data storage. After token generation, the user has
the choice of either keeping the precomputed tokens
locally or storing them in encrypted form on the
cloud servers. In our case here, the user stores them
locally to obviate the need for encryption and lower
the bandwidth overhead during dynamic data
operation, which will be discussed shortly.
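The precomputation and challenge-response steps above can be sketched as follows. This is a simplified illustration, not the paper's construction: a seeded `random.Random` stands in for the scheme's PRF and pseudorandom permutation keys, and the modulus is an illustrative prime.

```python
# Simplified sketch of token precomputation and the challenge-response
# check. A seeded random.Random stands in for the PRF/permutation keys
# of the actual scheme; Q is an illustrative prime modulus.
import random

Q = 2**31 - 1

def precompute_tokens(vector, t, r, key):
    # Before distribution, the user derives, for each of t future rounds,
    # a random coefficient alpha and r distinct block indices, and stores
    # the resulting short linear combination as that round's token.
    rng = random.Random(key)
    tokens = []
    for _ in range(t):
        alpha = rng.randrange(1, Q)
        idxs = rng.sample(range(len(vector)), r)
        tokens.append(sum(pow(alpha, q + 1, Q) * vector[i]
                          for q, i in enumerate(idxs)) % Q)
    return tokens

def server_response(vector, alpha, idxs):
    # On a challenge, the server computes the same short "signature"
    # over the specified blocks and returns it to the user.
    return sum(pow(alpha, q + 1, Q) * vector[i]
               for q, i in enumerate(idxs)) % Q
```

Because the user can regenerate alpha and the index set for any round from the key alone, only the short tokens themselves need to be kept, which is what makes the local-storage option above cheap.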
3.3 Correctness Verification and Error
Localization
Error localization is a key prerequisite for
eliminating errors in storage systems. It is also of
critical importance to identify potential threats from
external attacks. However, many previous schemes
[23], [24] do not explicitly consider the problem of
data error localization, thus only providing binary
results for the storage verification. Our scheme
outperforms those by integrating the correctness
verification and error localization (misbehaving
server identification) in our challenge-response
protocol: the response values from servers for each
challenge not only determine the correctness of the
distributed storage, but also contain information to
locate potential data error(s). Specifically, the
procedure of the ith challenge-response performs a cross-
check over the n servers. Once the inconsistency
among the storage has been successfully detected,
we can rely on the precomputed verification tokens
to further determine where the potential data
error(s) lies in.
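Concretely, the localization step can be as simple as a per-server comparison of the received response against the precomputed token for that round; the data shapes below are hypothetical.

```python
# Sketch of the localization step: for one challenge round, flag every
# server whose response disagrees with the token precomputed for it.
# (Hypothetical data shapes: one value per server.)
def locate_misbehaving(round_tokens, responses):
    # round_tokens[j] / responses[j]: expected vs. received value for
    # server j in this challenge round.
    return [j for j, (tok, resp) in enumerate(zip(round_tokens, responses))
            if tok != resp]
```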
3.4 File Retrieval and Error Recovery
Since our layout of file matrix is systematic, the
user can reconstruct the original file by
downloading the data vectors from the first m
servers, assuming that they return the correct
response values. Notice that our verification scheme
is based on random spot-checking, so the storage
correctness assurance is a probabilistic one.
However, by choosing the system parameters (e.g., r, l,
t) appropriately and conducting enough rounds of
verification, we can guarantee the successful file
retrieval with high probability. On the other hand,
whenever the data corruption is detected, the
comparison of precomputed tokens and received
response values can guarantee the identification of
misbehaving server(s) (again with high probability),
which will be discussed shortly. Therefore, the user
can always ask servers to send back blocks of the r
rows specified in the challenge and regenerate the
correct blocks by erasure correction, shown in
Algorithm 3, as long as the number of identified
misbehaving servers is less than k. (Otherwise, there
is no way to recover the corrupted blocks due to
lack of redundancy, even if we know the positions of the
misbehaving servers.) The newly recovered blocks
can then be redistributed to the misbehaving servers
to maintain the correctness of storage.
3.5 Toward Third Party Auditing
As discussed in our architecture, in case the user
does not have the time, feasibility, or resources to
perform the storage correctness verification, he can
optionally delegate this task to an independent
third-party auditor, making the cloud storage
publicly verifiable. However, as pointed out by the
recent work [30], [31], to securely introduce an
effective TPA, the auditing process should bring in
no new vulnerabilities toward user data privacy.
Namely, TPA should not learn user’s data content
through the delegated data auditing.
The correctness validation and misbehaving server
identification for the TPA are similar to those in the previous
scheme. The only difference is that the TPA does not
have to take away the blinding values in the servers’
response but verifies directly. As TPA does not
know the secret blinding key, there is no way for
TPA to learn the data content information during
the auditing process. Therefore, privacy-preserving
third-party auditing is achieved. Note that compared
to the previous scheme, we only change the sequence of
file encoding, token precomputation, and blinding.
Thus, the overall computation and communication
overhead remains roughly the same.
4. PROVIDING DYNAMIC DATA
OPERATION SUPPORT
So far, we assumed that F represents static or
archived data. This model may fit some application
scenarios, such as libraries and scientific data sets.
However, in cloud data storage, there are many
potential scenarios where data stored in the cloud are
dynamic, such as electronic documents, photos, or log
files. Therefore, it is crucial to consider the
dynamic case, where a user may wish to perform
various block-level operations of update, delete, and
append to modify the data file while maintaining
the storage correctness assurance.
Since data do not reside at users’ local site
but at cloud service provider’s address domain,
supporting dynamic data operation can be quite
challenging. On the one hand, CSP needs to process
the data dynamics request without knowing the
secret keying material. On the other hand, users
need to ensure that all dynamic data operation
requests have been faithfully processed by the CSP. To
address this problem, we briefly explain our
methodology here and provide the details
later. For any data dynamic operation, the user must
first generate the corresponding resulting file blocks
and parities. This part of the operation has to be carried
out by the user, since only he knows the secret
matrix P. Besides, to ensure that the changes of data
blocks are correctly reflected in the cloud address
domain, the user also needs to modify the
corresponding storage verification tokens to
accommodate the changes on data blocks. Only
with the accordingly changed storage verification
tokens can the previously discussed challenge-response
protocol be carried out successfully even after
data dynamics.
4.1 Update Operation
In cloud data storage, a user may need to modify
some data block(s) stored in the cloud, from its
current value f_ij to a new one, f_ij + Δf_ij. We refer to
this operation as data update. Fig. 2 gives the high
level logical representation of data block update.
Due to the linear property of Reed-Solomon code, a
user can perform the update operation and generate
the updated parity blocks by using Δf_ij only, without
involving any other unchanged blocks. To maintain
the corresponding parity vectors as well as be
consistent with the original file layout, the updated
parity information also has to be blinded with fresh
random values. In other words, the cloud servers cannot simply abstract
away the random blinding information on parity
blocks by subtracting the old and newly updated
parity blocks. As a result, the secret matrix P is still
being well protected, and the guarantee of storage
correctness remains.
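The linearity argument can be seen in a few lines. The sketch below omits the blinding step and uses GF(257) with a small illustrative matrix (not the paper's parameters); it checks that updating one block requires only the delta Δf_ij and row i of P, never the unchanged blocks.

```python
# Illustration of the Reed-Solomon linearity property (blinding omitted):
# a parity update needs only the delta of the changed block and row i of
# P. Field GF(257) and the matrix values are illustrative assumptions.
PRIME = 257

def parities(F, P):
    # parity_j[t] = sum_i F[i][t] * P[i][j]  (mod 257)
    m, length, k = len(F), len(F[0]), len(P[0])
    return [[sum(F[i][t] * P[i][j] for i in range(m)) % PRIME
             for t in range(length)] for j in range(k)]

def apply_update(par, P, i, t, delta):
    # In-place incremental update: parity_j[t] += delta * P[i][j].
    for j in range(len(par)):
        par[j][t] = (par[j][t] + delta * P[i][j]) % PRIME
```

The incremental update and a full re-encode of the modified file produce identical parity vectors, which is exactly why the user never has to touch the unchanged blocks.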
4.2 Delete Operation
Sometimes, after being stored in the cloud, certain
data blocks may need to be deleted. The delete
operation we are considering is a general one, in
which the user replaces a data block with zero or
some special reserved data symbol. From this point
of view, the delete operation is actually a special
case of the data update operation, where the original
data blocks can be replaced with zeros or some
predetermined special blocks. Therefore, we can
rely on the update procedure to support the delete
operation. Also, all the affected tokens have to be
modified and the updated parity information has to
be blinded using the same method specified in an
update operation. In practice, it is possible that only
a fraction of the tokens need amendment, since the
updated blocks may not be covered by all the
tokens.

Fig. 2. Logical representation of data dynamics, including block update, append, and delete.
4.3 Append Operation
In some cases, the user may want to increase the
size of his stored data by adding blocks at the end of
the data file, which we refer to as data append. We
in cloud data storage is bulk append, in which the
user needs to upload a large number of blocks (not a
single block) at one time. Given the file matrix F
illustrated in file distribution preparation, appending
blocks toward the end of a data file is equivalent to
concatenating corresponding rows at the bottom of the
matrix layout. In order to fully support the block
insertion operation, recent work suggests utilizing an
additional data structure.
On the other hand, using the block index
mapping information, the user can still access or
retrieve the file as it is. Note that as a tradeoff, the
extra data structure information has to be
maintained locally on the user side.
5. SECURITY ANALYSIS AND
PERFORMANCE EVALUATION
In this section, we analyze our proposed
scheme in terms of correctness, security, and
efficiency. Our security analysis focuses on the
adversary model defined in Section 2. We also
evaluate the efficiency of our scheme via
implementation of both file distribution preparation
and verification token precomputation.
5.1 Security Strength
5.1.1 Detection Probability against Data Modification
In our scheme, servers are required to
operate only on specified rows in each challenge-
response protocol execution. We will show that this
“sampling” strategy on selected rows instead of all
can greatly reduce the computational overhead on
the server, while maintaining high detection
probability for data corruption. We leave the
explanation of the collusion resistance of our scheme
against this worst-case scenario to a later section.
Assume the adversary modifies the data blocks in z
rows out of the l rows in the encoded file matrix.
Let r be the number of different rows for which the
user asks for checking in a challenge. Let X be a
discrete random variable that is defined to be the
number of rows chosen by the user that matches the
rows modified by the adversary.
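Under this sampling model, the detection probability has the standard hypergeometric form P{X >= 1} = 1 - C(l - z, r) / C(l, r), which the following sketch evaluates (the parameter values in the usage note are chosen for illustration, not taken from the paper's experiments):

```python
# Detection probability of the sampling check: with z of the l rows
# modified and r rows challenged, P{X >= 1} = 1 - C(l - z, r) / C(l, r).
from math import comb

def detection_probability(l, z, r):
    if z == 0:
        return 0.0  # nothing was modified, so nothing can be detected
    return 1.0 - comb(l - z, r) / comb(l, r)
```

For example, with l = 1,000 rows of which z = 10 are corrupted, challenging r = 460 rows detects the corruption with probability above 0.99, and the probability reaches 1 once r > l - z, since the sample must then hit a modified row.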
5.1.2 Identification Probability for Misbehaving
Servers
We have shown that, if the adversary
modifies the data blocks among any of the data
storage servers, our sampling checking scheme can
successfully detect the attack with high probability.
As long as the data modification is caught, the user
will further determine which server is
malfunctioning. The probability for error
localization or identifying misbehaving server(s)
can be computed in a similar way. It is the product
of the matching probability of the sampling check and
the probability of the complementary event of a false
negative result.
5.1.3 Security Strength against Worst Case Scenario
We now explain why it is a must to blind the parity
blocks and how our proposed schemes achieve
collusion resistance against the worst case scenario
in the adversary model. Recall that in the file
distribution preparation, the redundancy parity
vectors are calculated via multiplying the file matrix
F by P, where P is the secret parity generation
matrix we later rely on for storage correctness
assurance. If we disperse all the generated vectors
directly after token precomputation, i.e., without
blinding, malicious servers that collaborate can
reconstruct the secret P matrix easily: they can pick
blocks from the same rows among the data and
parity vectors to establish a set of m · k linear
equations and solve for the m · k entries of the
parity generation matrix P. Once they have the
knowledge of P, those malicious servers can
consequently modify any part of the data blocks and
calculate the corresponding parity blocks, and vice
versa, making their codeword relationship always
consistent. Therefore, our storage correctness
challenge scheme would be undermined—even if
those modified blocks are covered by the specified
rows, the storage correctness check equation would
always hold.
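To see why blinding is necessary, consider a toy instance of this collusion attack with m = 2 data vectors and k = 1 parity vector over GF(257); all values below are illustrative. Two rows of unblinded (data, parity) values give two linear equations that reveal the secret column of P exactly as described above.

```python
# Toy demonstration of the collusion attack on unblinded parity:
# with parity = f1*a + f2*b (mod 257) for secret entries (a, b) of P's
# column, two block rows give two equations the servers can solve.
PRIME = 257

def recover_parity_column(rows):
    # rows: two tuples (f1, f2, parity) observed for two block rows.
    (f11, f21, p1), (f12, f22, p2) = rows
    det = (f11 * f22 - f21 * f12) % PRIME
    inv = pow(det, -1, PRIME)  # modular inverse; det must be nonzero
    a = (f22 * p1 - f21 * p2) * inv % PRIME
    b = (f11 * p2 - f12 * p1) * inv % PRIME
    return a, b
```

Once the servers hold (a, b) they can forge consistent data/parity pairs at will, which is exactly the failure mode the blinding prevents.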
5.2 Performance Evaluation
We now assess the performance of the proposed
storage auditing scheme. We focus on the cost of
file distribution preparation as well as the token
generation. Our experiment is conducted on a
system with an Intel Core 2 processor running at
1.86 GHz, 2,048 MB of RAM, and a 7,200 RPM
Western Digital 250 GB Serial ATA drive.
Algorithms are implemented using the open-source
erasure coding library Jerasure, written in C. All
results represent the mean of 20 trials.
5.2.1 File Distribution Preparation
As discussed, file distribution preparation includes
the generation of parity vectors (the encoding part)
as well as the corresponding parity blinding part.
We consider two sets of different parameters for the
(m, k) Reed-Solomon encoding. The results
show the total cost for preparing a 1
GB file before outsourcing. In the figure on the left,
we set the number of data vectors m constant at 10,
while decreasing the number of parity vectors k
from 10 to 2.
5.2.2 Challenge Token Computation
Although in our scheme the number of verification
tokens t is fixed a priori before file
distribution, we can overcome this issue by
choosing a sufficiently large t in practice. For example,
when t is selected to be 7,300 and 14,600, the data
file can be verified every day for the next 20 years
and 40 years, respectively, which should be of
enough use in practice. Note that instead of directly
computing each token, our implementation uses the
Horner algorithm to calculate each token
from the back, and achieves a slightly faster
performance.
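The token value is a polynomial in alpha (sum over q of alpha^(q+1) * b_q for the challenged blocks b_q), so Horner's rule evaluates it from the back with one multiply-add per block instead of a separate exponentiation each. A small sketch, with an illustrative modulus and values:

```python
# Horner's rule for the token value sum_q alpha^(q+1) * b_q:
# fold from the last block backward, then multiply once more by alpha
# to account for the extra factor of alpha on every term.
def token_horner(alpha, blocks, p):
    acc = 0
    for b in reversed(blocks):
        acc = (acc * alpha + b) % p
    return acc * alpha % p
```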
6. RELATED WORK
Juels and Kaliski Jr. [10] described a formal “proof
of retrievability” (POR) model for ensuring the
remote data integrity. Their scheme combines spot-
checking and error-correcting code to ensure both
possession and retrievability of files on archive
service systems. Shacham and Waters built on this
model and constructed a random linear function-
based homomorphic authenticator, which enables an
unlimited number of challenges and requires less
communication overhead due to its usage of a
relatively small BLS signature. Later, in their
subsequent work, Bowers et al. extended the POR model to
distributed systems. However, all these schemes are
focusing on static data. The effectiveness of their
schemes rests primarily on the preprocessing steps
that the user conducts before outsourcing the data
file F. Any change to the contents of F, even a few
bits, must propagate through the error-correcting
code and the corresponding random shuffling
process, thus introducing significant computation
and communication complexity. Recently, Dodis et
al. gave theoretical studies on a generalized
framework for different variants of existing POR
work. Ateniese et al. [11] defined the “provable
data possession” (PDP) model for ensuring
possession of file on untrusted storages. Their
scheme utilized public key-based homomorphic
tags for auditing the data file. However, the
precomputation of the tags imposes heavy
computation overhead that can be expensive for an
entire file. In their subsequent work, Ateniese et al.
described a PDP scheme that uses only symmetric
key-based cryptography. This method has lower
overhead than their previous scheme and allows for
block updates, deletions, and appends to the stored
file, which are also supported in our work.
However, their scheme focuses on the single-server
scenario and does not provide a data availability
guarantee against server failures, leaving both the
distributed scenario and the data error recovery
issue unexplored. The explicit support of data
dynamics has been further studied in two recent works.
Wang et al. proposed combining the BLS-based
homomorphic authenticator with the Merkle hash tree
to support fully dynamic data, while Erway et al.
developed a skip list-based scheme to enable
provable data possession with full dynamics
support. The incremental cryptography work by
Bellare et al. also provides a set of
cryptographic building blocks, such as hash, MAC,
and signature functions, that may be employed for
storage integrity verification while supporting
dynamic operations on data. However, this line
of work falls into the traditional data integrity
protection setting, where a local copy of the data
has to be maintained for verification. It is not yet
clear how this work can be adapted to the cloud
storage scenario, where users no longer have the
data at local sites but still need to ensure storage
correctness in the cloud efficiently.
In other related work, the PDP scheme has been
extended to ensure data possession of multiple
replicas across a distributed storage system
without encoding each replica separately, providing
a guarantee that multiple copies of the data are
actually maintained. Lillibridge et al. presented a
P2P backup scheme in which blocks of a data file
are dispersed across m + k peers using an erasure
code. Peers can request random blocks from their
backup peers and verify their integrity using
separate keyed cryptographic hashes attached to
each block. Their scheme can detect data loss from
free-riding peers, but does not ensure that all data
remain unchanged. Filho and Barreto proposed
verifying data integrity using an RSA-based hash to
demonstrate uncheatable data possession in
peer-to-peer file-sharing networks. However, their
proposal requires exponentiation over the entire
data file, which is clearly impractical for the
server whenever the file is large. Shah et al.
proposed allowing a TPA to keep online storage
honest by first encrypting the data and then
sending a number of precomputed symmetric-keyed
hashes over the encrypted data to the auditor.
However, their
scheme only works for encrypted files, and auditors
must maintain long-term state. Schwarz and Miller
proposed ensuring static file integrity across
multiple distributed servers using erasure coding
and block-level file integrity checks; we adopt
some ideas of their distributed storage verification
protocol.
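Several of the schemes surveyed above (e.g., those of Lillibridge et al. and Shah et al.) rest on attaching a keyed cryptographic hash to each outsourced block so that an auditor can spot-check randomly chosen blocks. The following is a minimal sketch of that idea; the key handling, block layout, and function names are illustrative assumptions, not any one paper's exact protocol:

```python
import hashlib
import hmac
import os

def attach_tags(blocks, key):
    """Owner side: compute an HMAC-SHA256 tag per block before outsourcing."""
    return [(b, hmac.new(key, b, hashlib.sha256).digest()) for b in blocks]

def spot_check(stored, index, key):
    """Auditor side: fetch one randomly chosen block and recompute its tag."""
    block, tag = stored[index]
    return hmac.compare_digest(tag, hmac.new(key, block, hashlib.sha256).digest())

key = os.urandom(32)
stored = attach_tags([b"block-0", b"block-1", b"block-2"], key)
assert spot_check(stored, 1, key)        # an intact block passes the check
stored[2] = (b"forged", stored[2][1])    # a misbehaving peer substitutes data
assert not spot_check(stored, 2, key)    # ...and the keyed hash catches it
```

This also makes the survey's limitation concrete: spot-checking detects tampering on the sampled blocks with high probability, but it cannot certify that every block is unchanged unless all blocks are checked.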
7. CONCLUSION
In this paper, we investigate the problem of data
security in cloud data storage, which is essentially
a distributed storage system. To achieve the
assurances of cloud data integrity and availability
and to enforce the quality of dependable cloud
storage service for users, we propose an effective
and flexible distributed scheme with explicit
dynamic data support, including block update,
delete, and append operations. We rely on
erasure-correcting code in the file distribution
preparation to provide redundancy parity vectors
and guarantee data dependability.
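As a minimal illustration of how parity redundancy yields dependability: the scheme itself distributes m data vectors plus k parity vectors via an erasure-correcting code, but even the deliberately simplified single XOR parity below (m = 3, k = 1) already tolerates the loss of any one vector.

```python
def xor_vectors(vectors):
    """Byte-wise XOR of equal-length data vectors."""
    out = bytearray(len(vectors[0]))
    for v in vectors:
        for i, byte in enumerate(v):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]  # vectors stored on three data servers
parity = xor_vectors(data)          # stored on a fourth, parity server

# If the second server fails, its vector is rebuilt from the survivors.
recovered = xor_vectors([data[0], data[2], parity])
assert recovered == data[1]
```

A full (m + k) code generalizes this: any m of the m + k stored vectors suffice to rebuild the file, which is what lets the scheme survive up to k misbehaving or failed servers.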
By utilizing the homomorphic token with
distributed verification of erasure-coded data, our
scheme achieves the integration of
storage-correctness assurance and data error
localization, i.e., whenever data corruption is
detected during the storage correctness verification
across the distributed servers, we can almost
guarantee the simultaneous identification of the
misbehaving server(s). Considering the time,
computation resources, and related online burden of
users, we also provide an extension of the proposed
main scheme to support third-party auditing, where
users can safely delegate the integrity-checking
tasks to third-party auditors and use the cloud
storage services worry-free. Through detailed
security analysis and extensive experimental
results, we show that our scheme is highly efficient
and resilient to Byzantine failure, malicious data
modification attack, and even server colluding
attacks.
ACKNOWLEDGMENTS
This work was supported in part by the US National
Science Foundation under grants CNS-1054317,
CNS- 1116939, CNS-1156318, and CNS-1117111,
and by an Amazon Web Services research grant. A
preliminary version [1] of this paper was presented
at the 17th IEEE International Workshop on Quality
of Service (IWQoS ’09).
REFERENCES
[1] C. Wang, Q. Wang, K. Ren, and W. Lou,
“Ensuring Data Storage Security in Cloud
Computing,” Proc. 17th Int’l Workshop Quality of
Service (IWQoS ’09), pp. 1-9, July 2009.
[2] Amazon.com, “Amazon Web Services (AWS),”
http://aws.amazon.com, 2009.
[3] Sun Microsystems, Inc., “Building Customer
Trust in Cloud Computing with Transparent
Security,”
https://www.sun.com/offers/details/sun_transparency.xml, Nov. 2009.
[4] K. Ren, C. Wang, and Q. Wang, “Security
Challenges for the Public Cloud,” IEEE Internet
Computing, vol. 16, no. 1, pp. 69-73, 2012.
[5] M. Arrington, “Gmail Disaster: Reports of Mass
Email Deletions,”
http://www.techcrunch.com/2006/12/28/gmail-disaster-reports-of-mass-email-deletions, Dec. 2006.
[6] J. Kincaid, “MediaMax/TheLinkup Closes Its
Doors,”
http://www.techcrunch.com/2008/07/10/mediamaxthelinkup-closes-its-doors, July 2008.
[7] Amazon.com, “Amazon S3 Availability Event:
July 20, 2008,” http://status.aws.amazon.com/s3-
20080720.html, July 2008.
[8] S. Wilson, “Appengine Outage,”
http://www.cio-weblog.com/50226711/appengine_outage.php, June 2008.
[9] B. Krebs, “Payment Processor Breach May Be
Largest Ever,”
http://voices.washingtonpost.com/securityfix/2009/01/payment_processor_breach_may_b.html, Jan. 2009.
[10] A. Juels and B.S. Kaliski Jr., “PORs: Proofs of
Retrievability for Large Files,” Proc. 14th ACM
Conf. Computer and Comm. Security (CCS ’07),
pp. 584-597, Oct. 2007.
[11] G. Ateniese, R. Burns, R. Curtmola, J. Herring,
L. Kissner, Z. Peterson, and D. Song, “Provable
Data Possession at Untrusted Stores,” Proc. 14th
ACM Conf. Computer and Comm. Security (CCS
’07), pp. 598-609, Oct. 2007.
[12] M.A. Shah, M. Baker, J.C. Mogul, and R.
Swaminathan, “Auditing to Keep Online Storage
Services Honest,” Proc. 11th USENIX Workshop
Hot Topics in Operating Systems (HotOS ’07), pp.
1-6, 2007.
[13] M.A. Shah, R. Swaminathan, and M. Baker,
“Privacy-Preserving Audit and Extraction of Digital
Contents,” Cryptology ePrint Archive, Report
2008/186, http://eprint.iacr.org, 2008.
[14] G. Ateniese, R.D. Pietro, L.V. Mancini, and G.
Tsudik, “Scalable and Efficient Provable Data
Possession,” Proc. Fourth Int’l Conf. Security and
Privacy in Comm. Networks (SecureComm ’08),
pp. 1-10, 2008.
[15] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou,
“Enabling Public Verifiability and Data Dynamics
for Storage Security in Cloud Computing,” Proc.
14th European Conf. Research in Computer Security
(ESORICS ’09), pp. 355-370, 2009.