1) The document proposes a privacy-preserving public auditing scheme for cloud storage. It allows a third-party auditor (TPA) to efficiently check the integrity of outsourced data in the cloud without learning anything about the data contents.
2) The scheme utilizes homomorphic linear authenticators to generate proofs of data storage correctness, enabling TPA to audit multiple tasks simultaneously in a batch manner with minimal overhead.
3) Extensive analysis shows the scheme is provably secure and efficient, addressing key issues of public auditability, data privacy, and lightweight verification.
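The scheme summarized above is built on homomorphic linear authenticators. As a rough illustration of the core property, that an auditor can verify an aggregated proof without seeing any individual block, here is a simplified RSA-based sketch. All parameters are toy values chosen for readability (tiny primes, a fixed base `g`), not the paper's actual pairing-based construction and not secure in practice.

```python
import hashlib
import random

# --- toy RSA setup (insecure sizes, for illustration only) ---
p, q = 1000003, 1000033          # a real scheme uses >= 2048-bit moduli
N = p * q
phi = (p - 1) * (q - 1)
e = 65537                        # public verification exponent
d = pow(e, -1, phi)              # private tagging exponent (owner's secret)
g = 7                            # public base

def H(i):
    """Hash a block index into Z_N (models the random oracle H(i))."""
    return int.from_bytes(hashlib.sha256(str(i).encode()).digest(), "big") % N

# Data owner: tag each block m_i with sigma_i = (H(i) * g^{m_i})^d mod N
blocks = [12, 99, 7, 42, 3]
tags = [pow(H(i) * pow(g, m, N) % N, d, N) for i, m in enumerate(blocks)]

# TPA: challenge a random subset of indices with random coefficients
challenge = [(i, random.randrange(1, 100)) for i in random.sample(range(len(blocks)), 3)]

# Server: aggregate challenged blocks and tags into a constant-size proof;
# the TPA never receives the individual blocks (blockless verification).
mu = sum(nu * blocks[i] for i, nu in challenge)
sigma = 1
for i, nu in challenge:
    sigma = sigma * pow(tags[i], nu, N) % N

# TPA: check sigma^e == prod(H(i)^nu_i) * g^mu mod N using only public values
rhs = pow(g, mu, N)
for i, nu in challenge:
    rhs = rhs * pow(H(i), nu, N) % N
assert pow(sigma, e, N) == rhs
print("audit passed")
```

Because the tag is multiplicatively homomorphic, any linear combination of blocks can be verified against the matching combination of tags, which is what makes batch auditing with minimal overhead possible.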
International Journal of Computational Engineering Research (IJCER) is an international, monthly, online journal published in English. It publishes original research that contributes significantly to scientific knowledge in engineering and technology.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low-Power VLSI Design, etc.
Cloud computing is a technology that enables obtaining resources such as services, software, and hardware over the internet. With cloud storage, users can store their data remotely and enjoy on-demand services and applications from a pool of configurable resources. Cloud data storage has many benefits over local storage: users should be able to use it as if it were local, without worrying about verifying its integrity. The problem is ensuring the security and integrity of the user's data. This work therefore provides public auditability for cloud storage, so that users can resort to a third-party auditor (TPA) to check the integrity of their data. The paper surveys the privacy issues that arise when a user's data is stored in the cloud and audited by a TPA; without appropriate security and privacy solutions designed for clouds, this computing paradigm could become a big failure. A privacy-preserving public auditing scheme based on a ring-signature process is presented for a secure cloud storage system, and various techniques for providing privacy and security for data in the cloud are analyzed.
Data Partitioning Technique in Cloud: A Survey on Limitations and Benefits (IJERA Editor)
In recent years, the growth and popularity of cloud services have given enterprises an increased capability to handle, store, and retrieve critical data. The technology provides access to a shared pool of configurable computing resources: servers, storage, and applications. Cloud computing is a next-generation architecture for the IT enterprise, moving application software and databases to large data hubs. Data security and data storage are essential functions of cloud services, allowing data to be stored on the cloud server efficiently and without worry. Cloud services feature on-demand service, broad network access, measured service, ease of use, pay-per-use pricing, and location independence; all of these features pose security challenges. Data partitioning techniques are used in the literature for privacy preservation and data security, together with a third-party auditor (TPA). The objective of the current work is to review the partitioning techniques available in the literature and analyze them; through this work the authors compare and identify the limitations and benefits of the widely used partitioning techniques.
Enhanced Data Partitioning Technique for Improving Cloud Data Storage Security (Editor IJMTER)
Cloud computing is a model for enabling on-demand network access to shared configurable computing resources (e.g., networks, servers, storage, applications, and services). It is based on virtualization and distributed computing technologies. Cloud data storage systems enable users to store data efficiently on a server without managing the underlying resources, and to store and retrieve their data remotely with ease. The two biggest concerns about cloud data storage are reliability and security: clients are unlikely to entrust their data to a third party without a guarantee that they will be able to access their information whenever they want. In the existing system, data is stored in the cloud using dynamic data operations with computation, which forces the user to keep a copy for later updating and for verifying data loss. Different distributed storage auditing techniques are used to overcome the problem of data loss. Recent work has shown a data partitioning technique for data storage that attaches a digital signature to each data partition and user; this technique allows a user to upload or retrieve data by matching the digital signatures provided to them. The method ensures high cloud storage integrity, enhanced error localization, and easy identification of misbehaving servers and of unauthorized access to the cloud server. Hence this work aims to store data securely in reduced space, with less time and computational cost.
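The per-partition signature idea described above can be sketched as follows. The fixed 16-byte partition size and the toy RSA parameters are illustrative assumptions, not the paper's actual construction; what the sketch shows is how a signature per partition localizes a corrupted block instead of merely flagging the whole file.

```python
import hashlib

# Toy RSA signing keypair (illustrative, insecure sizes)
p, q = 1000003, 1000033
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def partition(data: bytes, size: int):
    """Split the file into fixed-size partitions."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def sign(part: bytes) -> int:
    """Hash-then-sign one partition with the owner's private exponent."""
    h = int.from_bytes(hashlib.sha256(part).digest(), "big") % N
    return pow(h, d, N)

def verify(part: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(part).digest(), "big") % N
    return pow(sig, e, N) == h

# Owner uploads partitions together with their signatures
data = b"sensitive cloud payload that must stay intact"
parts = partition(data, 16)
sigs = [sign(pt) for pt in parts]

# Verification pass: a mismatch pinpoints exactly which partition
# a misbehaving server has altered (error localization).
parts[1] = b"tampered block!!"            # simulate server misbehaviour
bad = [i for i, (pt, s) in enumerate(zip(parts, sigs)) if not verify(pt, s)]
print("corrupted partitions:", bad)       # -> [1]
```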
Security Check in Cloud Computing through Third Party Auditor (ijsrd.com)
In cloud computing, data owners host their data on cloud servers and users (data consumers) can access it from those servers. Because of this outsourcing, an independent auditing service is required to check the integrity of the data in the cloud. Some existing remote integrity checking methods can only serve static, archived data and therefore cannot be used by the auditing service, since data in the cloud can be dynamically updated. Thus, an efficient and secure dynamic auditing protocol is needed to convince data owners that their data is correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems and a privacy-preserving auditing protocol. We then extend the protocol to support dynamic data operations while remaining efficient and secure in the random oracle model.
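Dynamic auditing protocols of this kind typically maintain an authenticated structure over the blocks so an update does not force reprocessing the whole file. A common choice, assumed here only for illustration (the paper's own structure may differ), is a Merkle hash tree whose root digest the owner or auditor retains:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Hash the leaves, then fold each level pairwise until one root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)                 # auditor stores only this digest

# Dynamic update: modify one block and publish the new root
blocks[2] = b"block2-v2"
new_root = merkle_root(blocks)
assert new_root != root                    # any change is detectable

# Audit round: the server recomputes the root from its stored copy;
# a match convinces the auditor without transferring the raw file.
assert merkle_root(blocks) == new_root
print("dynamic audit ok")
```

Only one O(log n) authentication path (not shown) needs to be exchanged per challenged block, which is why such trees suit dynamically updated cloud data.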
Improve HLA-based Encryption Process using Fixed-Size Aggregate Key Generation (Editor IJMTER)
Cloud computing is an innovative idea for the IT industry that provides several services to users. In cloud computing, secure authentication and data integrity are major challenges due to internal and external threats, and various techniques are used to improve data security in the cloud. MAC-based authentication is one of them, but it suffers from systematic demerits: bounded usage and insecure verification, which may impose additional online load on users in a public auditing setting. Reliable and secure auditing is also challenging in the cloud. Existing cloud audit systems are based on an aggregate-key HLA algorithm that generates aggregate keys of variable size, which leads to security issues at the decryption level; the current scheme produces a long decryption key and thus suffers from high space complexity. To overcome these issues, the HLA algorithm can be improved with aggregate key generation based on a fixed key size. The improved algorithm generates a constant-size aggregate key, which overcomes the problems of key sharing, security, and space complexity.
International Journal of Engineering Research and Development (IJERD Editor)
Survey on securing outsourced storages in cloud (eSAT Journals)
Abstract: Cloud computing is one of the buzzwords of technological development in the IT industry and service sectors. By widening the possibilities for servicing users over the internet while reducing the need to store information and provide facilities locally, computing interest is shifting toward cloud services. Although cloud services bring major advantages, they also raise major security issues. This paper discusses those issues and the approaches that can be taken to minimise or even eliminate their effects, so as to progress toward more secure storage services on the cloud. Keywords: Cloud computing, Cloud Security, Outsourced Storages, Storage as a Service
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in engineering and technology. It brings together scientists, academicians, field engineers, scholars, and students of related fields.
Effective & Flexible Cryptography Based Scheme for Ensuring User's Data Secur... (ijsrd.com)
Cloud computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible cryptography based scheme. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against malicious data modification attack.
Centralized Data Verification Scheme for Encrypted Cloud Data Services (Editor IJMTER)
A cloud environment supports data sharing between multiple users, but data integrity can be violated by hardware/software failures and human errors. Data owners and public verifiers therefore audit cloud data integrity without retrieving the entire data set from the cloud server; file and block signatures are used in the integrity verification process.
The "One Ring to Rule Them All" (Oruta) scheme is used for privacy-preserving public auditing. In Oruta, homomorphic authenticators are constructed from ring signatures, which compute the verification metadata needed to audit the correctness of shared data while keeping the identity of the signer of each block private from public verifiers. The homomorphic authenticable ring signature (HARS) scheme provides this identity privacy with blockless verification. A batch auditing mechanism supports performing multiple auditing tasks simultaneously, and Oruta is compatible with random masking to preserve data privacy from public verifiers. Dynamic data management is handled with index hash tables. However, Oruta does not support traceability, does not manage the sequence of dynamic data operations, and incurs high computational overhead.
The proposed system performs public data verification with privacy and adds traceability alongside identity privacy: the group manager or data owner can be allowed to reveal the identity of a signer based on the verification metadata. A data version management mechanism is also integrated into the system.
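The index-hash-table and version-management ideas mentioned above can be sketched as a small table keyed by block index, carrying a version counter and a digest of the current block contents. The field names and operations here are illustrative assumptions, not the scheme's exact layout:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class IHTEntry:
    index: int      # logical block position
    version: int    # bumped on every modification
    digest: str     # SHA-256 of the current block contents

class IndexHashTable:
    """Minimal index hash table supporting insert/modify/verify."""
    def __init__(self):
        self.entries = {}

    def put(self, index: int, block: bytes):
        """Insert a new block, or modify an existing one (version += 1)."""
        ver = self.entries[index].version + 1 if index in self.entries else 1
        self.entries[index] = IHTEntry(index, ver, hashlib.sha256(block).hexdigest())

    def verify(self, index: int, block: bytes) -> bool:
        """Check a served block against the latest recorded digest."""
        e = self.entries.get(index)
        return e is not None and e.digest == hashlib.sha256(block).hexdigest()

iht = IndexHashTable()
iht.put(0, b"alpha")
iht.put(0, b"alpha-v2")            # dynamic modify: version goes 1 -> 2
print(iht.entries[0].version)      # -> 2
print(iht.verify(0, b"alpha-v2"))  # -> True
print(iht.verify(0, b"alpha"))     # -> False (stale version rejected)
```

Because the table records versions, a server replaying an old block is detected, which is the data-version-management gap the proposed system closes.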
Insuring Security for Outsourced Data Stored in Cloud Environment (Editor IJCATR)
Cloud storage offers users infrastructure flexibility, faster deployment of applications and data, cost control, adaptation of cloud resources to real needs, improved productivity, etc. In spite of these advantages, several deterrents to the widespread adoption of cloud computing remain; among them, security for the correctness of outsourced data and issues of privacy play a major role. To avoid security risks for outsourced data, we propose dynamic audit services that enable integrity verification of untrusted, outsourced storage. An interactive proof system (IPS) with the zero-knowledge property is introduced to provide public auditability without downloading the raw data, and to protect the data's privacy. In the proposed system, the data owner stores large amounts of data in the cloud after encrypting it with a private key, and sends the public key to a third-party auditor (TPA) for auditing purposes; the TPA resides in the cloud and is maintained by the CSP. An Authorized Application (AA) holds the data owner's secret key (sk), manipulates the outsourced data, and updates the associated index hash table (IHT) stored at the TPA; cloud users access the services through the AA. The system provides secure auditing while the data owner outsources data to the cloud, and after the auditing operations, security solutions are enhanced to detect malicious users with the help of a Certificate Authority.
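The "public auditability without downloading raw data" property can be illustrated with a minimal challenge-response exchange. This is a generic spot-check sketch, not the paper's zero-knowledge interactive proof system: at upload time the owner precomputes answers to random nonces and hands only those to the TPA, so the auditor never holds the raw data.

```python
import hashlib
import secrets

def response(nonce: bytes, data: bytes) -> str:
    """What an honest server should answer to a given audit nonce."""
    return hashlib.sha256(nonce + data).hexdigest()

# Setup: the owner precomputes (nonce, expected-answer) pairs and gives
# only this table to the TPA; each pair is usable for one audit round.
data = b"outsourced archive contents"
nonces = [secrets.token_bytes(16) for _ in range(5)]
audit_table = [(n, response(n, data)) for n in nonces]

# Audit round: the TPA sends an unused nonce, the server answers from
# its stored copy, and the TPA compares against the expected value.
nonce, expected = audit_table.pop()
server_answer = response(nonce, data)            # honest server
assert server_answer == expected

corrupted_answer = response(nonce, b"corrupted copy")
assert corrupted_answer != expected              # corruption is caught
print("spot-check audit works")
```

The obvious limitation, and the reason real schemes use homomorphic tags or interactive proofs instead, is that the table supports only a fixed number of audits; the sketch is meant only to show that verification needs no raw-data transfer.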
Research Report on Preserving Data Confidentiality & Data Integrity in ...Manish Sahani
ABSTRACT : Currently, cloud-based application is so very famous, but preserving the confidentiality of the user’s data is a huge task to accomplish. Keeping this need in mind, here a solution is proposed which will preserve the data confidentiality & integrity in cloud environment. For providing data confidentiality we will use AES algorithms, by virtue of which the secret data will be converted to cipher text and it becomes very difficult for the user to get the meaningful plain text. Here the basic emphasis is also on the data integrity so that the user’s data can’t be duplicated or copied.Keywords:Data Confidentiality, Data Integrity, AES algorithm
https://jst.org.in/index.html
Our Journals has a act as vehicles for the dissemination of knowledge, making research findings accessible to a wide audience. This accessibility is vital for the progress of education, as it allows students, educators, and professionals to stay informed about the latest developments in their respective fields.
Enhanced security framework to ensure data security in cloud using security b...eSAT Journals
Abstract Data security and Access control is a challenging research work in Cloud Computing. Cloud service users upload there private and confidential data over the cloud. As the data is transferred among the server and client, the data is to be protected from unauthorized entries into the server, by authenticating the user’s and provide high secure priority to the data. So the Experts always recommend using different passwords for different logins. Any normal person cannot possibly follow that advice and memorize all their usernames and passwords. That is where password managers come in. The purpose of this paper is to secure data from unauthorized person using Security blanket algorithm.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage1crore projects
IEEE PROJECTS 2015
1 crore projects is a leading Guide for ieee Projects and real time projects Works Provider.
It has been provided Lot of Guidance for Thousands of Students & made them more beneficial in all Technology Training.
Dot Net
DOTNET Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
Java Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
ECE IEEE Projects 2015
1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
Eligibility
Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)
TECHNOLOGY USED AND FOR TRAINING IN
1. DOT NET
2. C sharp
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB dotNET
11. EMBEDDED
12. MAT LAB
13. LAB VIEW
14. Multi Sim
CONTACT US
1 CRORE PROJECTS
Door No: 214/215,2nd Floor,
No. 172, Raahat Plaza, (Shopping Mall) ,Arcot Road, Vadapalani, Chennai,
Tamin Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
website:1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536
Authenticated and unrestricted auditing of big data space on cloud through v...IJMER
Cloud unlocks a different era in Information technology where it has the capability of providing the customers with a variety of scalable and flexible services. Cloud provides these services through a prepaid system, which helps the customers cut down on large investments on IT hardware
and other infrastructure. Also according to the Cloud viewpoint, customers don’t have control on their
respective data. Hence security of data is a big issue of using a Cloud service. Present work shows that
the data auditing can be done by any third party agent who is trusted and known as auditor. The auditor can verify the integrity of the data without having the ownership of the actual data. There are many disadvantages for the above approach. One of them is the absence of a required verification procedure among the auditor and service provider which means any person can ask for the verification of the file which puts this auditing at certain risk. Also in the existing scheme the data updates can be
done only for coarse granular updates i.e. blocks with the uneven size. And hence resulting in repeated communication and updating of auditor for a whole file block causing higher communication costs and
requires more storage space. In this paper, the emphasis is to give a proper breakdown for types of
fixed granular updates and put forward a design that will be capable to maintain authenticated and unrestricted auditing. Based on this system, there is also an approach for remarkably decreasing the communication costs for auditing little updates
Public Key Encryption algorithms Enabling Efficiency Using SaaS in Cloud Comp...Editor IJMTER
The Most great challenging in Cloud computing is Security. Here Security plays key role
in this paper proposed concept mainly deals with security at the end user access. While coming to the
end user access that are connected through the public networks. Here the end user wants to access his
application or services protected by the unauthorized persons. In this area if we want to apply
encryption or decryption methods such as RSA, 3DES, MD5, Blow fish. Etc.,
Whereas we can utilize these services at the end user access in cloud computing. Here there is
problem of encryption and decryption of the messages, services and applications. They are is lot of
time to take encrypt as well as decrypt and more number of processing capabilities are needed to use
the mechanism. For that problem we are introducing to use of cloud computing in SaaS model. i.e.,
scalable is applicable in this area so whenever it requires we can utilize the SaaS model.
In Cloud computing use of computing resources (hardware and software) that are delivered as a
service over Internet network. In advance earlier there is problem of using key size in various
algorithm like 64 bit it take some long period to encrypt the data.
Breast Tissue Identification in Digital Mammogram Using Edge Detection Techni...IIRindia
With cloud computing, users can remotely store their data into the cloud and use on-demand high-quality applications. Data outsourcing: users are relieved from the burden of data storage and maintenance When users put their data (of large size) on the cloud, the data integrity protection is challenging enabling public audit for cloud data storage security is important Users can ask an external audit party to check the integrity of their outsourced data. Purpose of developing data security for data possession at un-trusted cloud storage servers we are often limited by the resources at the cloud server as well as at the client. Given that the data sizes are large and are stored at remote servers, accessing the entire file can be expensive in input output costs to the storage server. Also transmitting the file across the network to the client can consume heavy bandwidths. Since growth in storage capacity has far outpaced the growth in data access as well as network bandwidth, accessing and transmitting the entire archive even occasionally greatly limits the scalability of the network resources. Furthermore, the input output to establish the data proof interferes with the on-demand bandwidth of the server used for normal storage and retrieving purpose. The Third Party Auditor is a respective person to manage the remote data in a global manner.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Similar to Privacy preserving public auditing for secure cloud storage (20)
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
How to Split Bills in the Odoo 17 POS ModuleCeline George
Bills have a main role in point of sale procedure. It will help to track sales, handling payments and giving receipts to customers. Bill splitting also has an important role in POS. For example, If some friends come together for dinner and if they want to divide the bill then it is possible by POS bill splitting. This slide will show how to split bills in odoo 17 POS.
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxEduSkills OECD
Andreas Schleicher presents at the OECD webinar ‘Digital devices in schools: detrimental distraction or secret to success?’ on 27 May 2024. The presentation was based on findings from PISA 2022 results and the webinar helped launch the PISA in Focus ‘Managing screen time: How to protect and equip students against distraction’ https://www.oecd-ilibrary.org/education/managing-screen-time_7c225af4-en and the OECD Education Policy Perspective ‘Students, digital devices and success’ can be found here - https://oe.cd/il/5yV
This is a presentation by Dada Robert in a Your Skill Boost masterclass organised by the Excellence Foundation for South Sudan (EFSS) on Saturday, the 25th and Sunday, the 26th of May 2024.
He discussed the concept of quality improvement, emphasizing its applicability to various aspects of life, including personal, project, and program improvements. He defined quality as doing the right thing at the right time in the right way to achieve the best possible results and discussed the concept of the "gap" between what we know and what we do, and how this gap represents the areas we need to improve. He explained the scientific approach to quality improvement, which involves systematic performance analysis, testing and learning, and implementing change ideas. He also highlighted the importance of client focus and a team approach to quality improvement.
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
The Indian economy is classified into different sectors to simplify the analysis and understanding of economic activities. For Class 10, it's essential to grasp the sectors of the Indian economy, understand their characteristics, and recognize their importance. This guide will provide detailed notes on the Sectors of the Indian Economy Class 10, using specific long-tail keywords to enhance comprehension.
For more information, visit-www.vavaclasses.com
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Privacy preserving public auditing for secure cloud storage
Privacy-Preserving Public Auditing for
Secure Cloud Storage
Cong Wang, Student Member, IEEE, Sherman S.-M. Chow, Qian Wang, Student Member, IEEE,
Kui Ren, Member, IEEE, and Wenjing Lou, Member, IEEE
Abstract—Using Cloud Storage, users can remotely store their data and enjoy the on-demand high quality applications and
services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance.
However, the fact that users no longer have physical possession of the outsourced data makes the data integrity protection in
Cloud Computing a formidable task, especially for users with constrained computing resources. Moreover, users should be able
to just use the cloud storage as if it is local, without worrying about the need to verify its integrity. Thus, enabling public auditability
for cloud storage is of critical importance so that users can resort to a third party auditor (TPA) to check the integrity of outsourced
data and be worry-free. To securely introduce an effective TPA, the auditing process should bring in no new vulnerabilities towards
user data privacy, and introduce no additional online burden to the user. In this paper, we propose a secure cloud storage system
supporting privacy-preserving public auditing. We further extend our result to enable the TPA to perform audits for multiple users
simultaneously and efficiently. Extensive security and performance analysis shows that the proposed schemes are provably secure
and highly efficient.
Index Terms—Data storage, privacy-preserving, public auditability, cryptographic protocols, cloud computing.
1 INTRODUCTION
CLOUD Computing has been envisioned as the
next-generation information technology (IT) ar-
chitecture for enterprises, due to its long list of un-
precedented advantages in the IT history: on-demand
self-service, ubiquitous network access, location in-
dependent resource pooling, rapid resource elasticity,
usage-based pricing and transference of risk [1]. As
a disruptive technology with profound implications,
Cloud Computing is transforming the very nature of
how businesses use information technology. One fun-
damental aspect of this paradigm shifting is that data
is being centralized or outsourced to the Cloud. From
users’ perspective, including both individuals and IT
enterprises, storing data remotely to the cloud in a
flexible on-demand manner brings appealing benefits:
relief of the burden for storage management, universal
data access with independent geographical locations,
and avoidance of capital expenditure on hardware,
software, and personnel maintenance, etc. [2].
While Cloud Computing makes these advantages
more appealing than ever, it also brings new and chal-
lenging security threats towards users’ outsourced
data. Since cloud service providers (CSP) are separate
• Cong Wang, Qian Wang, and Kui Ren are with the Department of
Electrical and Computer Engineering, Illinois Institute of Technology,
Chicago, IL 60616. E-mail: {cong,qian,kren}@ece.iit.edu.
• Sherman S.-M. Chow is with the Department of Computer Science,
Courant Institute of Mathematical Sciences, New York University,
New York, NY 10012. E-mail: schow@cs.nyu.edu.
• Wenjing Lou is with the Department of Electrical and Computer
Engineering, Worcester Polytechnic Institute, Worcester, MA 01609.
E-mail: wjlou@ece.wpi.edu.
administrative entities, data outsourcing is actually
relinquishing user’s ultimate control over the fate of
their data. As a result, the correctness of the data
in the cloud is being put at risk due to the follow-
ing reasons. First of all, although the infrastructures
under the cloud are much more powerful and reli-
able than personal computing devices, they are still
facing the broad range of both internal and external
threats for data integrity. Examples of outages and
security breaches of noteworthy cloud services appear
from time to time [3]–[7]. Secondly, there do exist
various motivations for CSP to behave unfaithfully
towards the cloud users regarding the status of their
outsourced data. For example, the CSP might reclaim
storage for monetary reasons by discarding data that
has not been or is rarely accessed, or even hide data
loss incidents so as to maintain a reputation [8]–[10].
In short, although outsourcing data to the cloud is
economically attractive for long-term large-scale data
storage, it does not immediately offer any guarantee
on data integrity and availability. This problem, if
not properly addressed, may impede the successful
deployment of the cloud architecture.
As users no longer physically possess the storage of
their data, traditional cryptographic primitives for the
purpose of data security protection cannot be directly
adopted [11]. In particular, simply downloading all
the data for its integrity verification is not a practical
solution due to the expensiveness in I/O and trans-
mission cost across the network. Besides, it is often
insufficient to detect the data corruption only when
accessing the data, as it does not give users correctness
assurance for those unaccessed data and might be too
late to recover the data loss or damage. Considering
the large size of the outsourced data and the user’s
constrained resource capability, the tasks of auditing
the data correctness in a cloud environment can be
formidable and expensive for the cloud users [10],
[12]. Moreover, the overhead of using cloud storage
should be minimized as much as possible, such that
user does not need to perform too many operations
to use the data (in addition to retrieving the data).
For example, it is desirable that users do not need to
worry about the need to verify the integrity of the data
before or after the data retrieval. Besides, there may be
more than one user accessing the same cloud storage,
say in an enterprise setting. For easier management,
it is desirable that the cloud server only entertains
verification request from a single designated party.
To fully ensure the data integrity and save the cloud
users’ computation resources as well as online burden,
it is of critical importance to enable public auditing
service for cloud data storage, so that users may
resort to an independent third party auditor (TPA)
to audit the outsourced data when needed. The TPA,
who has expertise and capabilities that users do not,
can periodically check the integrity of all the data
stored in the cloud on behalf of the users, which
provides a much easier and more affordable way for
the users to ensure their storage correctness in the
cloud. Moreover, in addition to helping users evaluate
the risk of their subscribed cloud data services, the
audit result from TPA would also be beneficial for the
cloud service providers to improve their cloud based
service platform, and even serve for independent
arbitration purposes [9]. In short, enabling public
auditing services will play an important role for this
nascent cloud economy to become fully established,
where users will need ways to assess risk and gain
trust in the cloud.
Recently, the notion of public auditability has been
proposed in the context of ensuring remotely stored
data integrity under different system and security
models [8], [10], [11], [13]. Public auditability allows
an external party, in addition to the user himself,
to verify the correctness of remotely stored data.
However, most of these schemes [8], [10], [13] do not
consider the privacy protection of users’ data against
external auditors. Indeed, they may potentially re-
veal user data information to the auditors, as will
be discussed in Section 3.4. This severe drawback
greatly affects the security of these protocols in Cloud
Computing. From the perspective of protecting data
privacy, the users, who own the data and rely on
TPA just for the storage security of their data, do not
want this auditing process to introduce new vulnerabilities of unauthorized information leakage towards
their data security [14]. Moreover, there are legal
regulations, such as the US Health Insurance Porta-
bility and Accountability Act (HIPAA) [15], further
demanding the outsourced data not to be leaked to
external parties [9]. Exploiting data encryption before
outsourcing [11] is one way to mitigate this privacy
concern, but it is only complementary to the privacy-
preserving public auditing scheme to be proposed
in this paper. Without a properly designed auditing
protocol, encryption itself cannot prevent data from
“flowing away” towards external parties during the
auditing process. Thus, it does not completely solve
the problem of protecting data privacy but just reduces it to the problem of key management. Unauthorized data
leakage still remains a problem due to the potential
exposure of decryption keys.
Therefore, how to enable a privacy-preserving
third-party auditing protocol, independent of data
encryption, is the problem we are going to tackle
in this paper. Our work is among the first to support privacy-preserving public auditing
in Cloud Computing, with a focus on data storage.
Besides, with the prevalence of Cloud Computing, a
foreseeable increase of auditing tasks from different
users may be delegated to TPA. As the individual
auditing of these growing tasks can be tedious and
cumbersome, a natural demand is then how to enable
the TPA to efficiently perform multiple auditing tasks
in a batch manner, i.e., simultaneously.
To address these problems, our work utilizes the
technique of public key based homomorphic linear
authenticator (or HLA for short) [8], [10], [13], which
enables TPA to perform the auditing without demand-
ing the local copy of data and thus drastically re-
duces the communication and computation overhead
as compared to the straightforward data auditing
approaches. By integrating the HLA with random
masking, our protocol guarantees that the TPA learns nothing about the data content
stored in the cloud server during the efficient auditing
process. The aggregation and algebraic properties of
the authenticator further benefit our design for the
batch auditing. Specifically, our contribution can be
summarized as the following three aspects:
1) We motivate the public auditing system of data
storage security in Cloud Computing and pro-
vide a privacy-preserving auditing protocol, i.e.,
our scheme enables an external auditor to audit
user’s outsourced data in the cloud without
learning the data content.
2) To the best of our knowledge, our scheme is the
first to support scalable and efficient public auditing in Cloud Computing. Specifically, our
scheme achieves batch auditing where multiple
delegated auditing tasks from different users can
be performed simultaneously by the TPA.
3) We prove the security and justify the perfor-
mance of our proposed schemes through con-
crete experiments and comparisons with the
state-of-the-art.
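The HLA aggregation idea can be illustrated with a deliberately simplified, secret-key toy (all names and parameters below are illustrative; the paper's actual construction is public-key and pairing-based): each block gets a tag that is linear in the block, so the server can combine blocks and tags with the same challenge coefficients, and the verifier checks one constant-size equation.

```python
import secrets

p = 2**61 - 1   # a large prime, standing in for the group order of Z_p

# User side: tag each block m_i with a linear authenticator under secret alpha.
alpha = secrets.randbelow(p)
blocks = [314159, 271828, 161803]            # toy "file" F = (m_1, ..., m_n)

def h(i):
    # Placeholder for a per-index hash H(i); any fixed public function works here.
    return (1000003 * i + 7) % p

tags = [(h(i) + alpha * m) % p for i, m in enumerate(blocks)]

# Server side: a challenge is a set {(i, nu_i)}; the proof aggregates data and
# tags with the same coefficients, so its size is constant in n.
challenge = [(0, 17), (2, 42)]
mu = sum(nu * blocks[i] for i, nu in challenge) % p    # linear combo of blocks
sigma = sum(nu * tags[i] for i, nu in challenge) % p   # matching combo of tags

# Verifier side: linearity of the tags lets the check succeed without ever
# seeing the individual blocks, only the two aggregated values.
expected = (sum(nu * h(i) for i, nu in challenge) + alpha * mu) % p
assert sigma == expected
print("audit passed")
```

In the real scheme the verification equation uses bilinear pairings so that the secret key is never revealed to the TPA, and the aggregated value μ is additionally blinded by random masking so that the TPA learns nothing about the blocks even across repeated audits.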
The rest of the paper is organized as follows. Section
2 introduces the system and threat model, and our design goals. Then we provide the detailed description
of our scheme in Section 3. Section 4 gives the security analysis and performance evaluation, followed by
Section 5, which overviews the related work. Finally,
Section 6 gives the concluding remarks of the whole
paper.
2 PROBLEM STATEMENT
2.1 The System and Threat Model
We consider a cloud data storage service involving
three different entities, as illustrated in Fig. 1: the cloud
user (U), who has a large amount of data files to be
stored in the cloud; the cloud server (CS), which is
managed by the cloud service provider (CSP) to provide
data storage service and has significant storage space
and computation resources (we will not differentiate
CS and CSP hereafter); the third party auditor (TPA),
who has expertise and capabilities that cloud users
do not have and is trusted to assess the cloud storage
service reliability on behalf of the user upon request.
Users rely on the CS for cloud data storage and
maintenance. They may also dynamically interact
with the CS to access and update their stored data for
various application purposes. To save computation resources and reduce their online burden, cloud users
may resort to TPA for ensuring the storage integrity
of their outsourced data, while hoping to keep their
data private from TPA.
We consider the existence of a semi-trusted CS
as [16] does. Namely, most of the time it behaves
properly and does not deviate from the prescribed
protocol execution. However, for their own benefits
the CS might neglect to keep or deliberately delete
rarely accessed data files which belong to ordinary
cloud users. Moreover, the CS may decide to hide the
data corruptions caused by server hacks or Byzantine
failures to maintain reputation. We assume the TPA,
who is in the business of auditing, is reliable and
independent, and thus has no incentive to collude
with either the CS or the users during the auditing
process. However, it harms the user if the TPA could
learn the outsourced data after the audit.
To authorize the CS to respond to audits delegated to the TPA, the user can sign a certificate granting
audit rights to the TPA’s public key, and all audits
from the TPA are authenticated against such a certifi-
cate. These authentication handshakes are omitted in
the following presentation.
2.2 Design Goals
To enable privacy-preserving public auditing for
cloud data storage under the aforementioned model,
our protocol design should achieve the following
security and performance guarantees.
1) Public auditability: to allow TPA to verify the
correctness of the cloud data on demand without
[Fig. 1: The architecture of cloud data storage service — users exchange a data flow with the cloud servers, delegate data auditing to the third party auditor, and the TPA performs public data auditing on the cloud servers via a security message flow]
retrieving a copy of the whole data or introduc-
ing additional online burden to the cloud users.
2) Storage correctness: to ensure that there exists
no cheating cloud server that can pass the TPA’s
audit without indeed storing users’ data intact.
3) Privacy-preserving: to ensure that the TPA can-
not derive users’ data content from the informa-
tion collected during the auditing process.
4) Batch auditing: to enable TPA with secure and
efficient auditing capability to cope with mul-
tiple auditing delegations from a possibly large
number of different users simultaneously.
5) Lightweight: to allow TPA to perform auditing
with minimum communication and computa-
tion overhead.
3 THE PROPOSED SCHEMES
This section presents our public auditing scheme
which provides a complete outsourcing solution of data
– not only the data itself, but also its integrity check-
ing. We start from an overview of our public auditing
system and discuss two straightforward schemes and
their demerits. Then we present our main scheme
and show how to extend our main scheme to support
batch auditing for the TPA upon delegations from
multiple users. Finally, we discuss how to generalize
our privacy-preserving public auditing scheme and its
support of data dynamics.
3.1 Definitions and Framework
We follow a similar definition of previously proposed
schemes in the context of remote data integrity check-
ing [8], [11], [13] and adapt the framework for our
privacy-preserving public auditing system.
A public auditing scheme consists of four
algorithms (KeyGen, SigGen, GenProof,
VerifyProof). KeyGen is a key generation
algorithm that is run by the user to setup the
scheme. SigGen is used by the user to generate
verification metadata, which may consist of MAC,
signatures, or other related information that will be
used for auditing. GenProof is run by the cloud
server to generate a proof of data storage correctness,
while VerifyProof is run by the TPA to audit the
proof from the cloud server.
Running a public auditing system consists of two
phases, Setup and Audit:
• Setup: The user initializes the public and secret
parameters of the system by executing KeyGen,
and pre-processes the data file F by using
SigGen to generate the verification metadata.
The user then stores the data file F and the
verification metadata at the cloud server, and
deletes its local copy.
As part of pre-processing, the user may alter
the data file F by expanding it or including
additional metadata to be stored at the server.
• Audit: The TPA issues an audit message or chal-
lenge to the cloud server to make sure that the
cloud server has retained the data file F properly
at the time of the audit. The cloud server will
derive a response message from a function of the
stored data file F and its verification metadata by
executing GenProof. The TPA then verifies the
response via VerifyProof.
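The two phases can be sketched end-to-end with the four algorithms. This is a minimal stand-in that uses HMACs as the verification metadata, not the paper's construction; it even exhibits the drawbacks (the verifier needs the secret key and the raw blocks) that Section 3.3 discusses.

```python
import hmac, hashlib, secrets

# KeyGen: user sets up keys (a single MAC key stands in for the real scheme's).
def key_gen():
    return secrets.token_bytes(32)

# SigGen: user derives per-block verification metadata before outsourcing.
def sig_gen(sk, blocks):
    return [hmac.new(sk, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

# GenProof: the cloud server answers a challenge from the stored file + metadata.
def gen_proof(blocks, metadata, challenged):
    return [(i, blocks[i], metadata[i]) for i in challenged]

# VerifyProof: the auditor checks the server's response.
def verify_proof(sk, proof):
    return all(hmac.compare_digest(
                   hmac.new(sk, str(i).encode() + b, hashlib.sha256).digest(), t)
               for i, b, t in proof)

# Setup phase: run KeyGen and SigGen, upload file + metadata, delete local copy.
blocks = [b"block-0", b"block-1", b"block-2"]
sk = key_gen()
metadata = sig_gen(sk, blocks)

# Audit phase: the TPA challenges some blocks, the server proves, TPA verifies.
proof = gen_proof(blocks, metadata, challenged=[0, 2])
print(verify_proof(sk, proof))   # True if the server stored the data intact
```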
Our framework assumes the TPA is stateless, which
is a desirable property achieved by our proposed
solution. It is easy to extend the framework above to
capture a stateful auditing system, essentially by splitting the verification metadata into two parts which are
stored by the TPA and the cloud server respectively.
Our design does not assume any additional prop-
erty on the data file. If the user wants to have more
error-resiliency, he/she can always first redundantly
encode the data file and then use our system with
the data file that has error-correcting codes integrated.
3.2 Notation and Preliminaries
• F – the data file to be outsourced, denoted as a
sequence of n blocks m1, . . . , mn ∈ Zp for some
large prime p.
• MAC(·)(·) – message authentication code (MAC)
function, defined as MAC : K × {0, 1}* → {0, 1}^l,
where K denotes the key space.
• H(·), h(·) – cryptographic hash functions.
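As a minimal illustration of the notation, a byte file can be mapped to a sequence of blocks m_1, . . . , m_n ∈ Z_p by cutting it into chunks smaller than p (the chunk size and prime below are illustrative, not the scheme's parameters):

```python
p = (1 << 127) - 1          # a Mersenne prime, standing in for the scheme's p
CHUNK = 15                  # 15 bytes = 120 bits < 127 bits, so each block < p

def to_blocks(data: bytes):
    """Split a byte string into integer blocks m_1, ..., m_n in Z_p."""
    return [int.from_bytes(data[i:i + CHUNK], "big")
            for i in range(0, len(data), CHUNK)]

blocks = to_blocks(b"the data file F to be outsourced")
assert all(0 <= m < p for m in blocks)
print(len(blocks))          # number of blocks n for this 32-byte input
```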
We now introduce some necessary cryptographic
background for our proposed scheme.
Bilinear Map. Let G1, G2 and GT be multiplicative
cyclic groups of prime order p. Let g1 and g2 be
generators of G1 and G2, respectively. A bilinear map
is a map e : G1 × G2 → GT such that for all u ∈ G1,
v ∈ G2 and a, b ∈ Zp, e(u^a, v^b) = e(u, v)^{ab}. This
bilinearity implies that for any u1, u2 ∈ G1, v ∈ G2,
e(u1 · u2, v) = e(u1, v) · e(u2, v). Of course, there exists
an efficiently computable algorithm for computing
e, and the map should be non-trivial, i.e., e is non-
degenerate: e(g1, g2) ≠ 1.
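The defining properties above can be checked concretely in a deliberately insecure toy model (entirely our own illustration, not part of the scheme): write all three groups additively as integers mod a small prime, so that "exponentiation" u^a becomes a·u, the group product becomes addition, and the pairing is ordinary multiplication mod p.

```python
# Toy "bilinear map" for intuition only -- NOT secure. Group elements are
# integers mod p written additively: u^a becomes a*u, u*v becomes u+v,
# and the pairing is e(u, v) = u*v mod p.
p = 101  # small prime modulus (real schemes use ~160-bit group orders)

def g_exp(u, a):          # u^a in the toy source groups
    return (a * u) % p

def g_mul(u, v):          # u * v in the toy source groups
    return (u + v) % p

def pair(u, v):           # bilinear map e: G1 x G2 -> GT
    return (u * v) % p

def gt_exp(w, a):         # exponentiation in the target group GT
    return (a * w) % p

u, v, a, b = 7, 13, 5, 9
# Bilinearity: e(u^a, v^b) = e(u, v)^(ab)
assert pair(g_exp(u, a), g_exp(v, b)) == gt_exp(pair(u, v), (a * b) % p)
# Multiplicativity: e(u1*u2, v) = e(u1, v) * e(u2, v)  (GT product is +)
u1, u2 = 4, 21
assert pair(g_mul(u1, u2), v) == (pair(u1, v) + pair(u2, v)) % p
# Non-degeneracy: e(g1, g2) != identity for the generators
assert pair(1, 1) % p != 0
print("toy bilinearity checks passed")
```

Real instantiations use pairing-friendly elliptic curves, e.g., via the PBC library mentioned in Section 4.2; this toy map is trivially invertible and offers no security.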
3.3 The Basic Schemes
Before giving our main result, we study two classes of
schemes as a warm-up. The first one is a MAC-based
solution which suffers from undesirable systematic
demerits – bounded usage and stateful verification,
which may pose an additional online burden to users in
a public auditing setting. This also shows
that the auditing problem is still not easy to solve
even after a TPA is introduced. The second one is a
system based on homomorphic linear authenticators
(HLA), which covers many recent proof of storage
systems. We will pinpoint the reason why all existing
HLA-based systems are not privacy-preserving. The
analysis of these basic schemes leads to our main
result, which overcomes all these drawbacks. Our
main scheme to be presented is based on a specific
HLA scheme.
MAC-based Solution. There are two possible ways to
make use of MACs to authenticate the data. A trivial
way is to upload the data blocks with their MACs
to the server, and send the corresponding secret key
sk to the TPA. Later, the TPA can randomly retrieve
blocks with their MACs and check their correctness via
sk. Apart from the high communication and computation
complexities (linear in the sampled data size),
the TPA requires knowledge of the data blocks
for verification.
To circumvent the requirement of the data in
TPA verification, one may restrict the verification
to just consist of equality checking. The idea is
as follows. Before data outsourcing, the cloud user
chooses s random message authentication code keys
{skτ }1≤τ≤s, pre-computes s (deterministic) MACs,
{MACskτ (F)}1≤τ≤s for the whole data file F, and
publishes these verification metadata (the keys and
the MACs) to TPA. The TPA can reveal a secret key
skτ to the cloud server and ask for a fresh keyed
MAC for comparison in each audit. This is privacy-
preserving as long as it is impossible to recover F in
full given MACskτ (F) and skτ . However, it suffers
from the following severe drawbacks: 1) the number
of times a particular data file can be audited is limited
by the number of secret keys that must be fixed a
priori. Once all possible secret keys are exhausted, the
user then has to retrieve data in full to re-compute
and re-publish new MACs to TPA; 2) The TPA also
has to maintain and update state between audits, i.e.,
keep track of the revealed MAC keys. Considering
the potentially large number of audit delegations from
multiple users, maintaining such states for TPA can
be difficult and error prone; 3) it can only support
static data, and cannot efficiently deal with dynamic
data at all. However, supporting data dynamics is also
of critical importance for cloud storage systems. For
the reason of brevity and clarity, our main protocol
will be presented based on static data. Section 3.6 will
describe how to adapt our protocol for dynamic data.
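To make the bounded-usage and stateful-verification drawbacks concrete, here is a minimal sketch of the s-key MAC solution (the helper names `precompute` and `audit` are our own, not part of any real API):

```python
# Sketch of the bounded-usage MAC solution. The user fixes s audit keys up
# front; each audit reveals (consumes) one key, so after s audits the user
# must retrieve F and re-key -- the "bounded usage" drawback.
import hmac, hashlib, os

def precompute(data: bytes, s: int):
    keys = [os.urandom(16) for _ in range(s)]
    macs = [hmac.new(k, data, hashlib.sha256).digest() for k in keys]
    return keys, macs          # verification metadata published to the TPA

def audit(data_on_server: bytes, key: bytes, expected_mac: bytes) -> bool:
    # TPA reveals `key` to the server; server returns a fresh MAC over F.
    fresh = hmac.new(key, data_on_server, hashlib.sha256).digest()
    return hmac.compare_digest(fresh, expected_mac)

F = b"outsourced file contents"
keys, macs = precompute(F, s=3)

used = 0                       # TPA state: which keys are already spent
for k, m in zip(keys, macs):
    assert audit(F, k, m)      # honest server passes
    used += 1
assert used == 3               # all keys exhausted -> user must re-key
assert not audit(b"corrupted", keys[0], macs[0])  # tampering detected
```

The loop makes the statefulness explicit: the TPA must remember which of the s keys have been revealed, and once they are exhausted the user incurs the online re-keying cost described above.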
HLA-based Solution. To effectively support public
auditability without having to retrieve the data blocks
themselves, the HLA technique [8], [10], [13] can be
Fig. 2: The privacy-preserving public auditing protocol.
TPA: 1. Retrieve the file tag t, verify its signature, and quit if it fails; 2. Generate a random challenge chal = {(i, νi)}i∈I and send it to the cloud server (challenge request).
Cloud Server: 3. Compute µ′ = Σ_{i∈I} νi·mi and σ = Π_{i∈I} σi^{νi}; 4. Randomly pick r ← Zp, and compute R = e(u, v)^r and γ = h(R); 5. Compute µ = r + γµ′ mod p, and send {µ, σ, R} to the TPA (storage correctness proof).
TPA: 6. Compute γ = h(R), and then verify {µ, σ, R} via Equation 1.
used. HLAs, like MACs, are also some unforgeable
verification metadata that authenticate the integrity
of a data block. The difference is that HLAs can be
aggregated. It is possible to compute an aggregated
HLA which authenticates a linear combination of the
individual data blocks.
At a high level, an HLA-based proof of storage
system works as follows. The user still authenticates
each element of F = (m1, · · · , mn) by a set of HLAs
Φ. The cloud server stores {F, Φ}. The TPA verifies the
cloud storage by sending a random set of challenge
coefficients {νi}. (More precisely, F, Φ and {νi} are all vectors,
so {νi} is an ordered set, or {(i, νi)} should be sent.)
The cloud server then returns µ = Σ_i νi·mi and an
aggregated authenticator σ (both computed from
F, Φ and {νi}) that is supposed to authenticate µ.
Though allowing efficient data auditing and con-
suming only constant bandwidth, the direct adoption
of these HLA-based techniques is still not suitable for
our purposes. This is because the linear combination
of blocks, µ = Σ_i νi·mi, may potentially reveal user
data information to the TPA, and violates the privacy-
preserving guarantee. Specifically, if enough
linear combinations of the same blocks are
collected, the TPA can simply derive the user's data
content by solving a system of linear equations.
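This linear-algebraic attack is easy to demonstrate. In the sketch below (our own illustration, with a 61-bit prime standing in for the scheme's 160-bit p), a curious TPA that has observed c unmasked responses over the same c blocks recovers every block by Gaussian elimination mod p:

```python
# Why unmasked HLA responses leak data: with c unknown blocks and c audits
# over the same positions, the TPA knows the coefficients nu_i and the sums
# mu, and solves the linear system mod p.
import random

p = 2**61 - 1  # prime modulus (stand-in for the scheme's |p| = 160 bits)

def solve_mod(A, b, p):
    """Solve A x = b (mod p) for square A, by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % p)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)        # Fermat inverse mod prime p
        M[col] = [x * inv % p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

blocks = [random.randrange(p) for _ in range(4)]      # the secret m_i
coeffs = [[random.randrange(1, p) for _ in range(4)] for _ in range(4)]
mus = [sum(c * m for c, m in zip(row, blocks)) % p for row in coeffs]

recovered = solve_mod(coeffs, mus, p)
assert recovered == blocks    # TPA fully recovers the user's blocks
```

With the random mask of Section 3.4, every response carries a fresh unknown r, so the TPA can never accumulate a solvable system of this form.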
3.4 Privacy-Preserving Public Auditing Scheme
Overview. To achieve privacy-preserving public au-
diting, we propose to uniquely integrate the homo-
morphic linear authenticator with random masking
technique. In our protocol, the linear combination of
sampled blocks in the server's response is masked
with randomness generated by the server. With random
masking, the TPA no longer has all the necessary
information to build up a correct group of linear
equations and therefore cannot derive the user’s data
content, no matter how many linear combinations of
the same set of file blocks are collected. On the
other hand, the correctness validation of the block-
authenticator pairs can still be carried out in a new
way, as will be shown shortly, even in the
presence of the randomness. Our design makes use
of a public key based HLA, to equip the auditing
protocol with public auditability. Specifically, we use
the HLA proposed in [13], which is based on the
short signature scheme proposed by Boneh, Lynn and
Shacham (hereinafter referred as BLS signature) [17].
Scheme Details. Let G1, G2 and GT be multiplicative
cyclic groups of prime order p, and e : G1 × G2 →
GT be a bilinear map as introduced in preliminaries.
Let g be a generator of G2. H(·) is a secure map-to-
point hash function {0, 1}* → G1, which maps strings
uniformly to G1. Another hash function h(·) : GT →
Zp maps group elements of GT uniformly to Zp. The
proposed scheme is as follows:
Setup Phase: The cloud user runs KeyGen to generate
the public and secret parameters. Specifically, the
user chooses a random signing key pair (spk, ssk),
a random x ← Zp, a random element u ← G1,
and computes v ← g^x. The secret parameter is
sk = (x, ssk) and the public parameters are pk =
(spk, v, g, u, e(u, v)).
Given a data file F = (m1, . . . , mn), the user runs
SigGen to compute an authenticator σi for each block
mi: σi ← (H(Wi) · u^{mi})^x ∈ G1. Here Wi = name||i,
and name is chosen by the user uniformly at random
from Zp as the identifier of file F. Denote the set of
authenticators by Φ = {σi}1≤i≤n.
The last part of SigGen is for ensuring the integrity
of the unique file identifier name. One simple way to
do this is to compute t = name||SSigssk(name) as the
file tag for F, where SSigssk(name) is the signature
on name under the private key ssk. For simplicity,
we assume the TPA knows the number of blocks n.
The user then sends F along with the verification
metadata (Φ, t) to the server and deletes them from
local storage.
Audit Phase: The TPA first retrieves the file tag
t. With respect to the mechanism we describe in
the Setup phase, the TPA verifies the signature
SSigssk(name) via spk, and quits by emitting FALSE
if the verification fails. Otherwise, the TPA recovers
name.
Now it comes to the “core” part of the auditing
process. To generate the challenge message for the
audit “chal”, the TPA picks a random c-element subset
I = {s1, . . . , sc} of set [1, n]. For each element i ∈ I,
the TPA also chooses a random value νi (of bit length
that can be shorter than |p|, as explained in [13]).
The message “chal” specifies the positions of the
blocks that are required to be checked. The TPA sends
chal = {(i, νi)}i∈I to the server.
Upon receiving the challenge chal = {(i, νi)}i∈I, the
server runs GenProof to generate a response proof
of data storage correctness. Specifically, the server
chooses a random element r ← Zp, and calculates R =
e(u, v)^r ∈ GT. Let µ′ denote the linear combination of
sampled blocks specified in chal: µ′ = Σ_{i∈I} νi·mi. To
blind µ′ with r, the server computes µ = r + γµ′ mod
p, where γ = h(R) ∈ Zp. Meanwhile, the server also
calculates an aggregated authenticator σ = Π_{i∈I} σi^{νi} ∈
G1. It then sends {µ, σ, R} as the response proof of
storage correctness to the TPA. With the response from
the server, the TPA runs VerifyProof to validate
the response by first computing γ = h(R) and then
checking the verification equation

R · e(σ^γ, g) ?= e((Π_{i=s1}^{sc} H(Wi)^{νi})^γ · u^µ, v)   (1)
The protocol is illustrated in Fig. 2. The correctness
of the above verification equation can be elaborated
as follows:

R · e(σ^γ, g)
= e(u, v)^r · e((Π_{i=s1}^{sc} (H(Wi) · u^{mi})^{x·νi})^γ, g)
= e(u^r, v) · e((Π_{i=s1}^{sc} H(Wi)^{νi} · u^{νi·mi})^γ, g^x)
= e(u^r, v) · e((Π_{i=s1}^{sc} H(Wi)^{νi})^γ · u^{µ′γ}, v)
= e((Π_{i=s1}^{sc} H(Wi)^{νi})^γ · u^{µ′γ+r}, v)
= e((Π_{i=s1}^{sc} H(Wi)^{νi})^γ · u^µ, v)
Properties of Our Protocol. It is easy to see that
our protocol achieves public auditability. There is
no secret keying material or states for the TPA to
keep or maintain between audits, and the auditing
protocol does not pose any potential online burden on
users. This approach ensures the privacy of user data
content during the auditing process by employing the
random mask r to hide µ′, the linear combination of
the data blocks. Note that the value R in our protocol,
which enables the privacy-preserving guarantee, will
not affect the validity of the equation, due to the
circular relationship between R and γ in γ = h(R)
and the verification equation. Storage correctness thus
follows from that of the underlying protocol [13]. The
security of this protocol will be formally proven in
Section 4. Besides, the HLA helps achieve the constant
communication overhead for server’s response during
the audit: the size of {σ, µ, R} is independent of the
number of sampled blocks c.
Previous work [8], [10] showed that if the server is
missing a fraction of the data, then the number of
blocks that need to be checked in order to detect
server misbehavior with high probability is on the
order of O(1). For example, if the server is missing
1% of the data F, then to detect this misbehavior with
probability larger than 95%, the TPA only needs to
audit c = 300 (up to c = 460 for 99%) randomly
chosen blocks of F. Given the huge volume of data
outsourced to the cloud, checking a portion of the
data file is more affordable and practical for both
the TPA and the cloud server than checking all the
data, as long as the sampling strategy provides high-
probability assurance. In Section 4, we will present
experiment results based on these sampling strategies.
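The sampling numbers quoted above follow from a one-line calculation: if a fraction f of the blocks is corrupted, an audit of c independently chosen blocks misses the corruption with probability about (1 − f)^c, independent of the file size n. A quick check (our own illustration):

```python
# Sampling-based detection: corrupting a fraction f of blocks is missed by
# c random samples with probability about (1 - f)^c, so the detection
# probability is 1 - (1 - f)^c -- independent of n.
def detect_prob(f: float, c: int) -> float:
    return 1.0 - (1.0 - f) ** c

assert detect_prob(0.01, 300) > 0.95   # 300 samples catch 1% corruption w.p. > 95%
assert detect_prob(0.01, 460) > 0.99   # 460 samples push this above 99%
assert detect_prob(0.01, 100) < 0.95   # too few samples fall short
```

This is why c can be fixed at 300 or 460 regardless of how large F grows.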
3.5 Support for Batch Auditing
With the establishment of privacy-preserving public
auditing, the TPA may concurrently handle multiple
auditing requests upon different users' delegations.
Individually auditing these tasks can be tedious
and very inefficient for the TPA. Given K auditing
delegations on K distinct data files from K different
users, it is more advantageous for the TPA to batch
these multiple tasks together and audit them at one time.
Keeping this natural demand in mind, we slightly
modify the protocol in the single-user case, and achieve
the aggregation of K verification equations (for K
auditing tasks) into a single one, as shown in Equation
2. As a result, a secure batch auditing protocol for
simultaneous auditing of multiple tasks is obtained.
The details are described as follows.
Setup Phase: Basically, the users just perform Setup
independently. Suppose there are K users in the
system, and each user k has a data file Fk =
(mk,1, . . . , mk,n) to be outsourced to the cloud server,
where k ∈ {1, . . . , K}. For simplicity, we assume
each file Fk has the same number of blocks n. For
a particular user k, denote his/her secret key as
(xk, sskk), and the corresponding public parameters
as (spkk, vk, g, uk, e(uk, vk)), where vk = g^{xk}. Similar
to the single-user case, each user k has already ran-
domly chosen a different (with overwhelming prob-
ability) name namek ∈ Zp for his/her file Fk, and
has correctly generated the corresponding file tag
tk = namek||SSig_{sskk}(namek). Then, each user k runs
SigGen and computes σk,i for block mk,i: σk,i ←
(H(namek||i) · uk^{mk,i})^{xk} = (H(Wk,i) · uk^{mk,i})^{xk} ∈ G1
Fig. 3: The batch auditing protocol.
TPA: 1. Verify the file tag tk for each user k, and quit if any fails; 2. Generate a random challenge chal = {(i, νi)}i∈I and send it to the cloud server (challenge request).
Cloud Server, for each user k (1 ≤ k ≤ K): 3. Compute µ′k, σk, Rk as in the single-user case; 4. Compute R = R1 · R2 · · · RK, L = v1||v2|| · · · ||vK, and γk = h(R||vk||L); 5. Compute µk = rk + γkµ′k mod p, and send {{σk, µk}1≤k≤K, R} to the TPA (storage correctness proof).
TPA: 6. Compute γk = h(R||vk||L) for each user k and perform batch auditing via Equation 2.
(i ∈ {1, . . . , n}), where Wk,i = namek||i. Finally, each
user k sends file Fk, set of authenticators Φk, and tag
tk to the server and deletes them from local storage.
Audit Phase: TPA first retrieves and verifies file tag
tk for each user k for later auditing. If the verification
fails, TPA quits by emitting FALSE; otherwise, TPA
recovers namek. Then TPA sends the audit challenge
chal = {(i, νi)}i∈I to the server for auditing data files
of all K users.
Upon receiving chal, for each user k ∈ {1, . . . , K},
the server randomly picks rk ∈ Zp and computes
Rk = e(uk, vk)^{rk}. Denoting R = R1 · R2 · · · RK and
L = v1||v2|| · · · ||vK, our protocol further requires
the server to compute γk = h(R||vk||L). Then, the ran-
domly masked responses can be generated as follows:

µk = γk · Σ_{i=s1}^{sc} νi·mk,i + rk mod p   and   σk = Π_{i=s1}^{sc} σk,i^{νi}.

The server then responds to the TPA with
{{σk, µk}1≤k≤K, R}.
To verify the response, the TPA can first compute
γk = h(R||vk||L) for 1 ≤ k ≤ K. Next, TPA checks if
the following equation holds:
R · e(Π_{k=1}^{K} σk^{γk}, g) ?= Π_{k=1}^{K} e((Π_{i=s1}^{sc} H(Wk,i)^{νi})^{γk} · uk^{µk}, vk)   (2)
The batch protocol is illustrated in Fig. 3. Here the
left-hand side (LHS) of Equation 2 expands as:

LHS = R1 · R2 · · · RK · Π_{k=1}^{K} e(σk^{γk}, g)
= Π_{k=1}^{K} Rk · e(σk^{γk}, g)
= Π_{k=1}^{K} e((Π_{i=s1}^{sc} H(Wk,i)^{νi})^{γk} · uk^{µk}, vk)

which is the right-hand side, as required. Note that
the last equality follows from Equation 1.
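The aggregation can also be exercised concretely. The sketch below, an insecure additive stand-in for the pairing groups (e(a, b) = a·b mod q, our own illustration, not the real BLS groups), runs K = 3 users through one shared challenge and checks the single aggregated equation:

```python
# Toy batch audit (Equation 2) in an insecure additive pairing model:
# K users, one challenge, one aggregated TPA check. Intuition only.
import hashlib, random

q = 2**61 - 1
g = 1
pair = lambda a, b: (a * b) % q
H = lambda s: int.from_bytes(hashlib.sha256(s.encode()).digest(), "big") % q
h = H  # reused for the challenge hash h(R||v_k||L)

K, n = 3, 6
users = []  # per user: secret x, public u, v, blocks m, authenticators sig
for k in range(K):
    x, u = random.randrange(1, q), random.randrange(1, q)
    v = x * g % q
    m = [random.randrange(q) for _ in range(n)]
    sig = [x * (H(f"f{k}||{i}") + m[i] * u) % q for i in range(n)]
    users.append(dict(x=x, u=u, v=v, m=m, sig=sig))

chal = [(i, random.randrange(1, 2**20)) for i in random.sample(range(n), 3)]

# Server: shared R = prod R_k, per-user gamma_k = h(R||v_k||L), mu_k, sigma_k
rs = [random.randrange(1, q) for _ in range(K)]
R = sum(pair(U["u"], U["v"]) * r for U, r in zip(users, rs)) % q
L = "||".join(str(U["v"]) for U in users)
gam = [h(f"{R}||{U['v']}||{L}") for U in users]
mus = [(r + gk * sum(nu * U["m"][i] for i, nu in chal)) % q
       for U, r, gk in zip(users, rs, gam)]
sigs = [sum(nu * U["sig"][i] for i, nu in chal) % q for U in users]

# TPA: one aggregated verification instead of K separate ones
lhs = (R + pair(sum(gk * sk for gk, sk in zip(gam, sigs)) % q, g)) % q
rhs = 0
for k, (U, gk, mk) in enumerate(zip(users, gam, mus)):
    hw = sum(nu * H(f"f{k}||{i}") for i, nu in chal) % q
    rhs = (rhs + pair((gk * hw + mk * U["u"]) % q, U["v"])) % q
assert lhs == rhs
print("batch audit verified")
```

Each term of the loop is one instance of Equation 1, so the sum over k reproduces the expansion of the LHS shown above.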
Efficiency Improvement. As shown in Equation 2,
batch auditing not only allows TPA to perform
the multiple auditing tasks simultaneously, but also
greatly reduces the computation cost on the TPA side.
This is because aggregating K verification equations
into one helps reduce the number of relatively expen-
sive pairing operations from 2K, as required in the
individual auditing, to K + 1. Thus, a considerable
amount of auditing time is expected to be saved.
Identification of Invalid Responses. The verifica-
tion equation (Equation 2) only holds when all the
responses are valid, and fails with high probability
when there is even one single invalid response in the
batch auditing, as we will show in Section 4. In many
situations, a response collection may contain invalid
responses, especially {µk}1≤k≤K, caused by accidental
data corruption, or possibly malicious activity by a
cloud server. The ratio of invalid responses to the
valid ones could be quite small, and yet a standard batch
auditor would reject the entire collection. To further sort
out these invalid responses in the batch auditing, we
can utilize a recursive divide-and-conquer approach
(binary search), as suggested by [18]. Specifically, if
the batch auditing fails, we can simply divide the
collection of responses into two halves, and recurse
the auditing on halves via Equation 2. TPA may now
require the server to send back all the {Rk}1≤k≤K,
as in individual auditing. In Section 4.2.2, we show
through carefully designed experiments that, using this
recursive binary search approach, even if up to 18%
of the responses are invalid, batch auditing still performs
faster than individual verification.
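The recursive divide-and-conquer strategy itself is independent of the pairing machinery and can be sketched in a few lines (`verify_batch` is a stand-in for evaluating Equation 2 on a sub-collection; the names are our own):

```python
# Recursive divide-and-conquer identification of invalid responses: if a
# batch check fails, split the batch in half and recurse; valid halves are
# accepted in a single aggregate check.
def find_invalid(responses, verify_batch):
    if verify_batch(responses):
        return []                       # whole sub-batch is fine
    if len(responses) == 1:
        return list(responses)          # isolated an invalid response
    mid = len(responses) // 2
    return (find_invalid(responses[:mid], verify_batch) +
            find_invalid(responses[mid:], verify_batch))

# Demo: responses are (id, valid?) pairs; a batch verifies iff all valid.
batch = [(k, k not in (5, 17)) for k in range(32)]   # two corrupted
bad = find_invalid(batch, lambda rs: all(ok for _, ok in rs))
assert [k for k, _ in bad] == [5, 17]
```

When the fraction of invalid responses is small, most sub-batches pass in one aggregate check, which is why batching can remain faster than K individual verifications.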
3.6 Support for Data Dynamics
In Cloud Computing, outsourced data might not
only be accessed but also updated frequently by
users for various application purposes [10], [19]–
[21]. Hence, supporting data dynamics for privacy-
preserving public auditing is also of paramount im-
portance. Now we show how to build upon the exist-
ing work [10] and adapt our main scheme to support
data dynamics, including block level operations of
modification, deletion and insertion.
In [10], data dynamics support is achieved by
replacing the index information i with mi in the
computation of block authenticators and by using the
classic data structure – the Merkle hash tree (MHT) [22] –
for the underlying block sequence enforcement. As
a result, the authenticator for each block is changed
to σi = (H(mi) · u^{mi})^x. We can adopt this tech-
nique in our design to achieve privacy-preserving
public auditing with support for data dynamics.
Specifically, in the Setup phase, the user has to
generate and send the tree root TR_MHT to the TPA as
additional metadata, where the leaf nodes of the MHT
are the values of H(mi). In the Audit phase, besides
{µ, σ, R}, the server's response should also include
{H(mi)}i∈I and their corresponding auxiliary authen-
tication information aux in the MHT. Upon receiving
the response, the TPA should first use TR_MHT and aux
to authenticate {H(mi)}i∈I computed by the server.
Once {H(mi)}i∈I are authenticated, the TPA can then per-
form the auditing on {µ, σ, R, {H(mi)}i∈I} via Equa-
tion 1, where Π_{i=s1}^{sc} H(Wi)^{νi} is now replaced by
Π_{i=s1}^{sc} H(mi)^{νi}. Data privacy is still preserved due
to the random mask. The details of handling dynamic
operations are similar to [10] and thus omitted.
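For readers unfamiliar with the MHT, the following minimal sketch (our own helper names, not the exact construction of [10] or [22]) shows how a verifier holding only the root TR_MHT can check a claimed H(mi) against its authentication path aux:

```python
# Minimal Merkle hash tree sketch: leaves are H(m_i); the TPA keeps only
# the root and checks each claimed leaf against its authentication path.
import hashlib

def Hb(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build(leaves):
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([Hb(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels           # levels[-1][0] is the root TR_MHT

def auth_path(levels, idx):
    path = []
    for lvl in levels[:-1]:
        sib = idx ^ 1
        path.append((lvl[sib], sib < idx))   # (sibling hash, sibling-is-left?)
        idx //= 2
    return path

def verify(root, leaf, path):
    h = leaf
    for sib, sib_is_left in path:
        h = Hb(sib + h) if sib_is_left else Hb(h + sib)
    return h == root

blocks = [f"m{i}".encode() for i in range(8)]      # power-of-two block count
levels = build([Hb(b) for b in blocks])
root = levels[-1][0]
path = auth_path(levels, 5)
assert verify(root, Hb(blocks[5]), path)            # genuine H(m_i) accepted
assert not verify(root, Hb(b"forged"), path)        # forged H(m_i) rejected
```

In the scheme above, `root` plays the role of TR_MHT held by the TPA and `path` the role of the auxiliary information aux returned by the server.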
3.7 Learning µ′ from σ
Though our scheme prevents the TPA from directly
deriving µ′ from µ, it does not rule out the possi-
bility of an offline guessing attack by the TPA using a
valid σ from the response. Specifically, the TPA can
always guess whether the stored data contains a certain
message m̃, by checking e(σ, g) ?= e((Π_{i=s1}^{sc} H(Wi)^{νi}) ·
u^{µ̃′}, v), where µ̃′ is constructed from the random coeffi-
cients chosen by the TPA in the challenge and the
guessed message m̃. Thus, our main scheme is not
semantically secure yet. However, we must note that
µ̃′ is chosen from Zp, and |p| is usually larger than
160 bits in practical security settings (see Section 4.2).
Therefore, the TPA has to test essentially all the values
of Zp in order to make a successful guess. Given
no background information, the success probability
of such an all-or-nothing guess launched by the TPA
is negligible. Nonetheless, for completeness, we will
give a provably zero-knowledge based public audit-
ing scheme, which further eliminates the possibility
of the above offline guessing attack. The details and cor-
responding proofs can be found in Appendix A.
3.8 Generalization
As mentioned before, our protocol is based on the
HLA in [13]. Recently, it has been shown in [23] that
HLA can be constructed by homomorphic identifi-
cation protocols. One may apply the random mask-
ing technique we used to construct the correspond-
ing zero knowledge proof for different homomorphic
identification protocols. Therefore, it follows that our
privacy-preserving public auditing system for secure
cloud storage can be generalized based on other com-
plexity assumptions, such as factoring [23].
4 EVALUATION
4.1 Security Analysis
We evaluate the security of the proposed scheme by
analyzing its fulfillment of the security guarantee de-
scribed in Section 2.2, namely, the storage correctness
and privacy-preserving property. We start from the
single user case, where our main result is originated.
Then we show the security guarantee of batch audit-
ing for the TPA in multi-user setting.
4.1.1 Storage Correctness Guarantee
We need to prove that the cloud server cannot gen-
erate valid response for the TPA without faithfully
storing the data, as captured by Theorem 1.
Theorem 1: If the cloud server passes the Audit
phase, then it must indeed possess the specified data
intact as it is.
Proof: The proof consists of two steps. First, we
show that there exists an extractor of µ′ in the random
oracle model. Once a valid response {σ, µ′} is ob-
tained, the correctness of this statement follows from
Theorem 4.2 in [13].
Now, the cloud server is treated as an adversary.
The extractor controls the random oracle h(·) and
answers the hash query issued by the cloud server.
For a challenge γ = h(R) returned by the extractor, the
cloud server outputs {σ, µ, R} such that the following
equation holds:

R · e(σ^γ, g) = e((Π_{i=s1}^{sc} H(Wi)^{νi})^γ · u^µ, v).   (3)

Suppose that the extractor can rewind the cloud server
in the protocol to the point just before the challenge
h(R) is given. Now the extractor sets h(R) to be γ* ≠
γ. The cloud server outputs {σ, µ*, R} such that the
following equation holds:

R · e(σ^{γ*}, g) = e((Π_{i=s1}^{sc} H(Wi)^{νi})^{γ*} · u^{µ*}, v).   (4)

The extractor then obtains {σ, µ′ = (µ − µ*)/(γ − γ*)}
as a valid response of the underlying proof of storage
system [13]. To see this, recall that σi = (H(Wi) · u^{mi})^x.
TABLE 1: Notation of cryptographic operations.
Hash^t_{G1}            hash t values into the group G1.
Mult^t_G               t multiplications in group G.
Exp^t_G(ℓ)             t exponentiations g^{a_i}, for g ∈ G, |a_i| = ℓ.
m-MultExp^t_G(ℓ)       t m-term exponentiations Π_{i=1}^{m} g^{a_i}.
Pair^t_{G1,G2}         t pairings e(u_i, g_i), where u_i ∈ G1, g_i ∈ G2.
m-MultPair^t_{G1,G2}   t m-term pairings Π_{i=1}^{m} e(u_i, g_i).
Dividing (3) by (4), we have:

e(σ^{γ−γ*}, g) = e((Π_{i=s1}^{sc} H(Wi)^{νi})^{γ−γ*} · u^{µ−µ*}, v)
e(σ^{γ−γ*}, g) = e((Π_{i=s1}^{sc} H(Wi)^{νi})^{γ−γ*}, g^x) · e(u^{µ−µ*}, g^x)
σ^{γ−γ*} = (Π_{i=s1}^{sc} H(Wi)^{νi})^{x(γ−γ*)} · u^{x(µ−µ*)}
(Π_{i=s1}^{sc} σi^{νi})^{γ−γ*} = (Π_{i=s1}^{sc} H(Wi)^{νi})^{x(γ−γ*)} · u^{x(µ−µ*)}
u^{x(µ−µ*)} = (Π_{i=s1}^{sc} (σi/H(Wi)^x)^{νi})^{γ−γ*}
u^{x(µ−µ*)} = (Π_{i=s1}^{sc} (u^{x·mi})^{νi})^{γ−γ*}
µ − µ* = (Σ_{i=s1}^{sc} mi·νi) · (γ − γ*)
Σ_{i=s1}^{sc} mi·νi = (µ − µ*)/(γ − γ*)
Finally, we remark that this extraction argument
and the random oracle paradigm are also used in the
proof of the underlying scheme [13].
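Stripped of the pairings, the heart of this extraction is ordinary arithmetic in Zp, which can be checked directly (illustrative values only, our own sketch):

```python
# The extraction step of Theorem 1 as arithmetic mod p: from two accepting
# responses for the same commitment R but different random-oracle answers
# gamma != gamma*, the extractor recovers mu' = sum nu_i * m_i.
p = 2**61 - 1
mu_prime = 123456789            # the hidden linear combination
r = 987654321                   # server's random mask (fixed by R)
gamma, gamma_star = 1111, 2222  # two distinct random-oracle answers

mu = (r + gamma * mu_prime) % p
mu_star = (r + gamma_star * mu_prime) % p

# (mu - mu*) = (gamma - gamma*) * mu'  (mod p), so divide by the difference
recovered = (mu - mu_star) * pow(gamma - gamma_star, p - 2, p) % p
assert recovered == mu_prime
```

The rewinding step in the proof exists precisely to obtain these two transcripts with the same r but different γ values.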
4.1.2 Privacy Preserving Guarantee
We want to make sure that the TPA cannot derive
users’ data content from the information collected
during auditing process.
Theorem 2: From the server’s response {σ, µ, R},
TPA cannot recover µ′
.
Proof: We show the existence of a simulator that
can produce a valid response even without the knowl-
edge of µ′
, in the random oracle model. Now, the TPA
is treated as an adversary. Given a valid σ from the
cloud server, the simulator first randomly picks γ, µ
from Zp and sets R ← e((Π_{i=s1}^{sc} H(Wi)^{νi})^γ · u^µ, v)/e(σ^γ, g).
Finally, it backpatches γ = h(R), since the simulator
controls the random oracle h(·). We remark that this
backpatching technique in the random oracle model
is also used in the proof of the underlying scheme [13].
4.1.3 Security Guarantee for Batch Auditing
Now we show that our way of extending our result to
a multi-user setting will not affect the aforementioned
TABLE 2: Performance under different numbers of
sampled blocks c for high-assurance (≥ 95%) auditing.
                            Our Scheme        [13]
Sampled blocks c            460     300     460     300
Server comp. time (ms)      411.00  270.20  407.66  265.87
TPA comp. time (ms)         507.79  476.81  504.25  472.55
Comm. cost (Byte)           160     40      160     40
security guarantees, as shown in Theorem 3.
Theorem 3: Our batch auditing protocol achieves
the same storage correctness and privacy preserving
guarantee as in the single-user case.
Proof: The privacy-preserving guarantee in the
multi-user setting is very similar to that of Theorem 2,
and thus omitted here. For the storage correctness
guarantee, we are going to reduce it to the single-user
case. We use the forking technique as in the proof
of Theorem 1. However, the verification equation
for the batch audits involves K challenges from the
random oracle. This time we need to ensure that all
the other K − 1 challenges are determined before the
forking of the concerned random oracle response. This
can be done using the idea in [24]. As soon as the
adversary issues the very first random oracle query
for γi = h(R||vi||L) for any i ∈ [1, K], the simulator
immediately determines the values γj = h(R||vj||L)
for all j ∈ [1, K]. This is possible since they are all
using the same R and L. Now, all but one of the γk’s
in Equation 2 are equal, so a valid response can be
extracted similar to the single-user case in the proof
of Theorem 1.
4.2 Performance Analysis
We now assess the performance of the proposed
privacy-preserving public auditing schemes to show
that they are indeed lightweight. We focus on the
efficiency of the privacy-preserving protocol and our
proposed batch auditing technique. The experiment
is conducted using C on a Linux system with an
Intel Core 2 processor running at 1.86 GHz, 2048 MB
of RAM, and a 7200 RPM Western Digital 250 GB
Serial ATA drive with an 8 MB buffer. Our code uses
the Pairing-Based Cryptography (PBC) library version
0.4.18. The elliptic curve utilized in the experiment is
an MNT curve, with a base field size of 159 bits and
embedding degree 6. The security level is chosen to be
80 bits, which means |νi| = 80 and |p| = 160. All
experimental results represent the mean of 20 trials.
4.2.1 Cost of Privacy-Preserving Protocol
We begin by estimating the cost in terms of basic cryp-
tographic operations, as notated in Table 1. Suppose
there are c random blocks specified in the challenge
[Figure: auditing time per task (ms) vs. number of auditing tasks, for individual auditing, batch auditing (c=460), and batch auditing (c=300).]
Fig. 4: Comparison of auditing time between batch
and individual auditing. Per-task auditing time de-
notes the total auditing time divided by the number
of tasks. For clarity, we omit the straight curve
for individual auditing when c=300.
message chal during the Audit phase. Under this set-
ting, we quantify the cost introduced by the privacy-
preserving auditing in terms of server computation,
auditor computation, and communication overhead.
On the server side, the generated response includes
an aggregated authenticator σ = Π_{i∈I} σi^{νi} ∈ G1, a
random factor R = e(u, v)^r ∈ GT, and a blinded linear
combination of sampled blocks µ = γ · Σ_{i∈I} νi·mi + r ∈
Zp, where γ = h(R) ∈ Zp. The corresponding com-
putation cost is c-MultExp^1_{G1}(|νi|), Exp^1_{GT}(|p|), and
Hash^1_{Zp} + Add^c_{Zp} + Mult^{c+1}_{Zp}, respectively. Compared
to the existing HLA-based solution for ensuring re-
mote data integrity [13]^1, the extra cost for protecting
user privacy, resulting from the random mask R,
is only a constant: Exp^1_{GT}(|p|) + Mult^1_{Zp} + Hash^1_{Zp} +
Add^1_{Zp}, which is independent of the number of
sampled blocks c. When c is set between 300 and 460 for
high-assurance auditing, as discussed in Section 3.4,
the extra cost for the privacy-preserving guarantee on the
server side is negligible compared to the total server
computation for response generation.
Similarly, on the auditor side, upon receiving
the response {σ, R, µ}, the corresponding compu-
tation cost for response validation is Hash^1_{Zp} + c-
MultExp^1_{G1}(|νi|) + Hash^c_{G1} + Mult^1_{G1} + Mult^1_{GT} +
Exp^3_{G1}(|p|) + Pair^2_{G1,G2}, among which only Hash^1_{Zp} +
Exp^2_{G1}(|p|) + Mult^1_{GT} accounts for the additional con-
stant computation cost. For c = 460 or 300, and con-
sidering the relatively expensive pairing operations,
this extra cost imposes little overhead on the overall
cost of response validation, and thus can be ignored.
For the sake of completeness, Table 2 gives the ex-
periment result on performance comparison between
our scheme and the state-of-the-art [13]. It can be
1. We refer readers to [13] for a detailed description of their HLA-
based solution.
[Figure: auditing time per task (ms) vs. fraction of invalid responses α, for individual auditing, batch auditing (c=460), and batch auditing (c=300).]
Fig. 5: Comparison of auditing time between batch
and individual auditing, when an α-fraction of 256 re-
sponses are invalid. Per-task auditing time denotes the
total auditing time divided by the number of tasks.
shown that the performance of our scheme is almost
the same as that of [13], even though our scheme provides
the privacy-preserving guarantee while [13] does not. As for
the extra communication cost of our scheme compared
to [13], the server's response {σ, R, µ} contains an ad-
ditional random element R, which is a group element
of GT and has a size close to 960 bits.
4.2.2 Batch Auditing Efficiency
The discussion in Section 3.5 gives an asymptotic effi-
ciency analysis of batch auditing, considering
only the total number of pairing operations. On the
practical side, however, batching requires additional less
expensive operations, such as modular exponentiations
and multiplications. Meanwhile, the sampling strategy,
i.e., the number of sampled blocks c, is also a variable
factor that affects batching efficiency. Thus, whether the
benefit of removing pairings significantly outweighs these
additional operations remains to be verified. To
get a complete view of batching efficiency, we conduct
a timed batch auditing test, where the number of
auditing tasks is increased from 1 to approximately
200 in intervals of 8. The performance of the cor-
responding non-batched (individual) auditing is pro-
vided as a baseline for the measurement. Following
the same experimental settings c = 300 and c = 460,
the average per-task auditing time, computed
by dividing the total auditing time by the number of
tasks, is given in Fig. 4 for both batch and individual
auditing. It can be seen that, compared to individual
auditing, batch auditing indeed helps reduce the
TPA's computation cost, as more than 11% and 14%
of per-task auditing time is saved when c is set to
460 and 300, respectively.
4.2.3 Sorting out Invalid Responses
We now use experiments to justify the efficiency of
our recursive binary search approach for the TPA to
sort out the invalid responses when batch auditing
fails, as discussed in Section 3.5. This experiment is
closely related to the work in [18], which evaluates
the batch verification efficiency of various short sig-
natures.
To evaluate the feasibility of the recursive approach,
we first generate a collection of 256 valid responses,
which implies the TPA may concurrently handle 256
different auditing delegations. We then conduct the
tests repeatedly while randomly corrupting an α-
fraction of the responses, ranging from 0 to 18%, by replacing
them with random values. The average auditing time per
task against the individual auditing approach is pre-
sented in Fig. 5. The result shows that even when the
number of invalid responses exceeds 15% of the total
batch size, batch auditing can still be safely concluded
to be preferable to straightforward individual auditing.
Note that the random distribution of invalid responses
within the collection is nearly the worst case for batch
auditing. If invalid responses are grouped together,
it is possible to achieve even better results.
5 RELATED WORK
Ateniese et al. [8] are the first to consider public au-
ditability in their defined "provable data possession"
(PDP) model for ensuring possession of data files on
untrusted storage. Their scheme utilizes RSA-
based homomorphic linear authenticators for auditing
outsourced data and suggests randomly sampling a
few blocks of the file. However, public auditability
in their scheme demands that the linear combination of
sampled blocks be exposed to the external auditor. When
used directly, their protocol is not provably privacy-
preserving, and thus may leak user data information
to the auditor. Juels et al. [11] describe a "proof
of retrievability” (PoR) model, where spot-checking
and error-correcting codes are used to ensure both
“possession” and “retrievability” of data files on re-
mote archive service systems. However, the number
of audit challenges a user can perform is fixed a
priori, and public auditability is not supported in their
main scheme. Although they describe a straightfor-
ward Merkle-tree construction for public PoRs, this
approach only works with encrypted data. Dodis et
al. [25] give a study on different variants of PoR
with private auditability. Shacham et al. [13] design an
improved PoR scheme built from BLS signatures [17]
with full proofs of security in the security model
defined in [11]. Similar to the construction in [8], they
use publicly verifiable homomorphic linear authenti-
cators that are built from provably secure BLS signa-
tures. Based on the elegant BLS construction, a compact
and publicly verifiable scheme is obtained. Again,
their approach does not support privacy-preserving
auditing for the same reason as [8]. Shah et al. [9], [14]
propose allowing a TPA to keep online storage honest
by first encrypting the data then sending a number
of pre-computed symmetric-keyed hashes over the
encrypted data to the auditor. The auditor verifies
both the integrity of the data file and the server’s
possession of a previously committed decryption key.
This scheme only works for encrypted files, and it
suffers from auditor statefulness and bounded usage,
which may impose an online burden on users when
the keyed hashes are used up.
In other related work, Ateniese et al. [19] propose a
partially dynamic version of the prior PDP scheme,
using only symmetric key cryptography but with
a bounded number of audits. In [20], Wang et al.
consider a similar support for partial dynamic data
storage in a distributed scenario with the additional
feature of data error localization. In a subsequent work,
Wang et al. [10] propose to combine BLS-based HLA
with MHT to support both public auditability and
full data dynamics. Almost simultaneously, Erway
et al. [21] developed a skip-list-based scheme to
enable provable data possession with full dynamics
support. However, the verification in these two
protocols requires the linear combination of sampled
blocks, just as in [8], [13], and thus does not support
privacy-preserving auditing. While all the above schemes
provide methods for efficient auditing and provable
assurance on the correctness of remotely stored data,
none of them meet all the requirements for privacy-
preserving public auditing in cloud computing. More
importantly, none of these schemes consider batch
auditing, which can greatly reduce the computation
cost on the TPA when coping with a large number of
audit delegations.
6 CONCLUSION
In this paper, we propose a privacy-preserving public
auditing system for data storage security in Cloud
Computing. We utilize the homomorphic linear au-
thenticator and random masking to guarantee that
the TPA would not learn any knowledge about the
data content stored on the cloud server during the
efficient auditing process, which not only relieves the
cloud user of the tedious and possibly expensive
auditing task, but also alleviates users’ fears of
outsourced data leakage. Considering that the
TPA may concurrently handle multiple audit sessions
from different users for their outsourced data files, we
further extend our privacy-preserving public auditing
protocol into a multi-user setting, where the TPA can
perform multiple auditing tasks in a batch manner
for better efficiency. Extensive analysis shows that our
schemes are provably secure and highly efficient.
ACKNOWLEDGMENTS
This work was supported in part by the US National
Science Foundation under grant CNS-0831963, CNS-
0626601, CNS-0716306, and CNS-0831628.
APPENDIX A
ZERO KNOWLEDGE PUBLIC AUDITING
Here we present a public auditing scheme with prov-
ably zero knowledge leakage. The setup phase is
similar to our main scheme presented in Section 3.4.
The secret parameters are sk = (x, ssk) and the public
parameters are pk = (spk, v, g, u, e(u, v), g1), where
g1 ∈ G1 is an additional public group element.
In the audit phase, upon receiving the challenge
chal = {(i, νi)}i∈I, the server chooses three random
elements rm, rσ, ρ ← Zp and calculates R = e(g1, g)^{rσ} ·
e(u, v)^{rm} ∈ GT and γ = h(R) ∈ Zp. Let µ′ denote the
linear combination of sampled blocks, µ′ = Σi∈I νi·mi,
and σ denote the aggregated authenticator, σ =
Πi∈I σi^{νi} ∈ G1. To achieve zero-knowledge leakage,
the server has to blind both µ′ and σ. Specifically, the
server computes µ = rm + γµ′ mod p and Σ = σ · g1^ρ.
It then sends {ς, µ, Σ, R} as the response proof of
storage correctness to the TPA, where ς = rσ + γρ
mod p. With the response from
the server, the TPA runs VerifyProof to validate
the response by first computing γ = h(R) and then
checking the verification equation
R · e(Σ^γ, g) ?= e((Π_{i=s1}^{sc} H(Wi)^{νi})^γ · u^µ, v) · e(g1, g)^ς   (5)
The correctness of the above verification equation
can be elaborated as follows:
R · e(Σ^γ, g) = e(g1, g)^{rσ} · e(u, v)^{rm} · e((σ · g1^ρ)^γ, g)
             = e(g1, g)^{rσ} · e(u, v)^{rm} · e(σ^γ, g) · e(g1^{ργ}, g)
             = e(u, v)^{rm} · e(σ^γ, g) · e(g1, g)^{rσ+ργ}
             = e((Π_{i=s1}^{sc} H(Wi)^{νi})^γ · u^µ, v) · e(g1, g)^ς
The last equality follows from the elaboration of
Equation 1.
Theorem 4: The above auditing protocol achieves
zero-knowledge information leakage to the TPA, and
it also ensures the storage correctness guarantee.
Proof: Zero-knowledge is easy to see. Randomly
pick γ, µ, ς from Zp and Σ from G1, set R ←
e((Π_{i=s1}^{sc} H(Wi)^{νi})^γ · u^µ, v) · e(g1, g)^ς / e(Σ^γ, g),
and backpatch γ = h(R). For proof of storage
correctness, we can extract ρ similarly to the
extraction of µ′ in the proof of Theorem 1. With ρ,
σ can be recovered from Σ. To conclude, a valid pair
of σ and µ′ can be extracted.
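The blinding µ = rm + γµ′ with γ = h(R) is an instance of the Schnorr proof-of-knowledge pattern. The following is a minimal sketch of that pattern over a plain prime-order subgroup; the toy parameters (P, Q, g) are chosen for illustration only, and it omits the pairings and the second mask (ρ, ς, Σ) of the full scheme.

```python
import hashlib
import secrets

# Toy Schnorr-style masking: prove knowledge of mu_prime behind
# y = g^mu_prime without revealing it. Toy parameters: P = 2Q + 1 with
# Q prime; g is a quadratic residue, so it generates the order-Q
# subgroup. These are NOT the paper's pairing groups.
Q = 1019
P = 2 * Q + 1          # 2039, also prime
g = 4                  # quadratic residue mod P, hence of order Q

def h(R):
    """Hash the commitment R into Z_Q (plays the role of gamma = h(R))."""
    return int.from_bytes(hashlib.sha256(str(R).encode()).digest(), "big") % Q

mu_prime = 123                         # secret linear combination of blocks
y = pow(g, mu_prime, P)                # public image of the secret

# Prover: commit with a fresh random mask, then blind the secret.
# The uniform mask r_m hides mu_prime in the response mu.
r_m = secrets.randbelow(Q)
R = pow(g, r_m, P)                     # commitment (analogue of R above)
gamma = h(R)
mu = (r_m + gamma * mu_prime) % Q      # blinded response

# Verifier: one equation, learning nothing beyond its validity,
# since g^mu = g^(r_m) * (g^mu_prime)^gamma = R * y^gamma.
assert pow(g, mu, P) == (R * pow(y, gamma, P)) % P
print("verified")
```

As in the proof of Theorem 4, a simulator could pick µ at random, solve for a consistent R, and backpatch γ = h(R), which is why the response reveals nothing about µ′.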
REFERENCES
[1] P. Mell and T. Grance, “Draft NIST working definition of
cloud computing,” referenced June 3rd, 2009, online
at http://csrc.nist.gov/groups/SNS/cloud-computing/index.
html, 2009.
[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz,
A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica,
and M. Zaharia, “Above the clouds: A Berkeley view of cloud
computing,” University of California, Berkeley, Tech. Rep.
UCB-EECS-2009-28, Feb 2009.
[3] M. Arrington, “Gmail disaster: Reports of mass email
deletions,” Online at http://www.techcrunch.com/2006/
12/28/gmail-disasterreports-of-mass-email-deletions/,
December 2006.
[4] J. Kincaid, “MediaMax/TheLinkup Closes Its Doors,”
Online at http://www.techcrunch.com/2008/07/10/
mediamaxthelinkup-closes-its-doors/, July 2008.
[5] Amazon.com, “Amazon s3 availability event: July 20, 2008,”
Online at http://status.aws.amazon.com/s3-20080720.html,
2008.
[6] S. Wilson, “Appengine outage,” Online at http://www.
cio-weblog.com/50226711/appengine outage.php, June 2008.
[7] B. Krebs, “Payment Processor Breach May Be Largest Ever,”
Online at http://voices.washingtonpost.com/securityfix/
2009/01/payment processor breach may b.html, Jan. 2009.
[8] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner,
Z. Peterson, and D. Song, “Provable data possession at un-
trusted stores,” in Proc. of CCS’07, Alexandria, VA, October
2007, pp. 598–609.
[9] M. A. Shah, R. Swaminathan, and M. Baker, “Privacy-
preserving audit and extraction of digital contents,” Cryptol-
ogy ePrint Archive, Report 2008/186, 2008.
[10] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, “Enabling public
verifiability and data dynamics for storage security in cloud
computing,” in Proc. of ESORICS’09, volume 5789 of LNCS.
Springer-Verlag, Sep. 2009, pp. 355–370.
[11] A. Juels and B. S. Kaliski Jr., “PORs: Proofs of retrievability
for large files,” in Proc. of CCS’07, Alexandria, VA, October
2007, pp. 584–597.
[12] Cloud Security Alliance, “Security guidance for critical
areas of focus in cloud computing,” 2009, http://www.
cloudsecurityalliance.org.
[13] H. Shacham and B. Waters, “Compact proofs of retrievability,”
in Proc. of Asiacrypt 2008, vol. 5350, Dec 2008, pp. 90–107.
[14] M. A. Shah, M. Baker, J. C. Mogul, and R. Swaminathan,
“Auditing to keep online storage services honest,” in Proc. of
HotOS’07. Berkeley, CA, USA: USENIX Association, 2007, pp.
1–6.
[15] 104th United States Congress, “Health Insurance Portability
and Accountability Act of 1996 (HIPAA),” Online at http://
aspe.hhs.gov/admnsimp/pl104191.htm, 1996.
[16] S. Yu, C. Wang, K. Ren, and W. Lou, “Achieving secure,
scalable, and fine-grained access control in cloud computing,”
in Proc. of IEEE INFOCOM’10, San Diego, CA, USA, March
2010.
[17] D. Boneh, B. Lynn, and H. Shacham, “Short signatures from
the Weil pairing,” J. Cryptology, vol. 17, no. 4, pp. 297–319,
2004.
[18] A. L. Ferrara, M. Green, S. Hohenberger, and M. Pedersen,
“Practical short signature batch verification,” in Proceedings of
CT-RSA, volume 5473 of LNCS. Springer-Verlag, 2009, pp. 309–
324.
[19] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik,
“Scalable and efficient provable data possession,” in Proc. of
SecureComm’08, 2008, pp. 1–10.
[20] C. Wang, Q. Wang, K. Ren, and W. Lou, “Ensuring data storage
security in cloud computing,” in Proc. of IWQoS’09, July 2009,
pp. 1–9.
[21] C. Erway, A. Kupcu, C. Papamanthou, and R. Tamassia,
“Dynamic provable data possession,” in Proc. of CCS’09, 2009,
pp. 213–222.
[22] R. C. Merkle, “Protocols for public key cryptosystems,” in Proc.
of IEEE Symposium on Security and Privacy, Los Alamitos, CA,
USA, 1980.
[23] G. Ateniese, S. Kamara, and J. Katz, “Proofs of storage from
homomorphic identification protocols,” in ASIACRYPT, 2009,
pp. 319–333.
[24] M. Bellare and G. Neven, “Multi-signatures in the plain public-
key model and a general forking lemma,” in ACM Conference
on Computer and Communications Security, 2006, pp. 390–399.
[25] Y. Dodis, S. P. Vadhan, and D. Wichs, “Proofs of retrievability
via hardness amplification,” in TCC, 2009, pp. 109–127.